The question is "why do people need fainting couches for this project, and why are they pretending that 3-year-old API features that already exist in thousands of projects are brand new innovations exclusive to this?"
The answer is: "the author is a celebrity and some people are delusional, screaming fanboys"
My response is: "that's bullshit. let's be adults"
I disagree with your Dropbox example. Dropbox is apparently easier to use than a self-hosted FTP site and is well maintained by a company, but this clawdbot is just a one-man project. There are many similar "click to fix" services.
Not exactly: clawdbot is an open-source project that has gained hundreds of contributors (including me!) in only 3 weeks of existence. Your characterization of it as "just a one-man project" is inaccurate.
I'm genuinely sorry you think that, and it's not my intention to offend you.
However, your comment reads exactly like saying to a Dropbox user: "This is just a user running rsync, setting up a folder sync in a cron job, running the cron job, and saying 'wow, isn't Dropbox great?'"
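(For anyone who hasn't lived that analogy, the DIY setup being described is roughly the sketch below; the paths, host, and interval are made-up placeholders, and a small Python loop stands in for the crontab entry.)

    # Roughly the DIY "Dropbox" the analogy refers to: rsync a local folder
    # to a remote host on a fixed interval. All values here are hypothetical.
    import subprocess
    import time

    LOCAL_DIR = "/home/me/Sync/"        # hypothetical local folder
    REMOTE = "me@myserver:/srv/sync/"   # hypothetical remote target
    INTERVAL_SECONDS = 300              # every 5 minutes, like a cron entry

    while True:
        # -a preserves metadata, -z compresses, --delete mirrors removals
        subprocess.run(["rsync", "-az", "--delete", LOCAL_DIR, REMOTE], check=False)
        time.sleep(INTERVAL_SECONDS)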
Sometimes the next paradigm of user interface is a tweak that re-contextualizes a tool, whether you agree with that or not.
This is a GitHub user on GitHub using a GitHub feature through the GitHub interface on the GitHub website that any GitHub user with a GitHub project can enable through GitHub features on GitHub.
And the person is saying "my stars! Thanks clawdbot"
There's obviously an irrational cult of personality around this programmer and people on this thread are acting like some JW person in a park.
First, those are completely different sentiments. One is a feature built into the product in question; the other is a hodgepodge of shit.
Second, and most importantly, Dropbox may as well not exist anymore. It’s a dead end product without direction. Because, and this is true, it was barely better than the hodgepodge of shit AND they ruined that. Literally everything can do what Dropbox does and do it better now.
What specific aspect of this is a GitHub feature? Can you link to the documentation for that feature?
The person you're replying to mentions a fairly large number of actions, here: "cloned the codebase, found the issue, wrote the fix, added tests. I asked it to code review its own fix. The AI debugged itself, then reviewed its own work, and then helped me submit the PR."
If GitHub really does have a feature I can turn on that just automatically fixes my code, I'd love to know about it.
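For what it's worth, written out as code the loop being described is roughly the sketch below. The ask_agent function, repo URL, branch name, and issue text are hypothetical placeholders for illustration; this is not a built-in GitHub feature or any specific project's API.

    # A rough sketch of the described loop: clone, let an agent fix and
    # self-review, then open a PR. Everything named here is hypothetical.
    import subprocess

    def sh(*args, cwd=None):
        # Run a command and fail loudly if it errors.
        subprocess.run(args, cwd=cwd, check=True)

    def ask_agent(prompt, workdir):
        # Placeholder for whatever model/agent actually does the work.
        print(f"[agent @ {workdir}] {prompt}")

    REPO = "https://github.com/example/project.git"        # hypothetical repo
    ISSUE = "Issue: crash when the config file is empty"   # hypothetical issue

    sh("git", "clone", REPO, "work")
    ask_agent(f"Find and fix this bug, and add tests:\n{ISSUE}", "work")
    ask_agent("Code-review the diff you just produced and fix any problems.", "work")
    sh("git", "checkout", "-b", "fix/empty-config", cwd="work")
    sh("git", "commit", "-am", "Fix crash on empty config file", cwd="work")
    sh("git", "push", "-u", "origin", "fix/empty-config", cwd="work")
    sh("gh", "pr", "create",
       "--title", "Fix crash on empty config file",
       "--body", "Automated fix, with tests and a self-review pass.",
       cwd="work")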
This is the whole message of the hype: that you can churn out 500 commits a day with relative confidence, the way you let clang churn out 500 assemblies without reading them. We might not be 100% there yet, but the hype is looking slightly into the future. Even though I don't see the difference from Claude Code, I tend to agree that this is the new way to do things; even if something breaks, on average it's safe enough.
I agree. It is basically Claude Code running dangerously all the time. That is actually how I use CC most of the time, but I do trust Anthropic more than a random GitHub repo.
(I have the same sentiment about Manifest V3 and ad blockers, but somehow the HN groupthink is very different there than it is here.)
Edit: imagine cowork was released like this. HN would go NUTS.
Yeah, but you're still using Anthropic's subscription and tokens. That's not really an alternative. That's why we're shipping our own model with cortex.build.
It's good at making new skills for itself, and the ability to hook into WhatsApp, Telegram, and Discord means you can share access to internal applications without needing users to get onto the VPN. That makes a great combination.
What systems are making new skills for themselves? Not being snarky, I find this sort of self-teaching incredibly interesting but have only ever seen this approach
Once an alternative to one of their things, like immich, becomes viable, people run as fast as they can.
The strategy of doing everything you can to make sure your customers truly and utterly despise you and want to spit in your face is probably not productive.
byterover has been doing something similar for a while. amp was initially doing a variation of this and then pivoted. I built a similar tool about 9 months ago and then abandoned it.
The approach seems tempting, but there's something off about it that I think I might have figured out.
For me, the only metric that matters is the wall-clock time between the initial idea and the point when it's solid enough that you don't have to think about it.
Agentic coding is very similar to frameworks in this regard:
1. If the alignment is right, you have saved time.
2. If it's not right, it might take longer.
3. You won't have clear evidence of which of these cases applies until changing course becomes too expensive.
4. Except, in some cases, this doesn't apply and it's obvious... Probably....
I have a (currently dormant) project, https://onolang.com/, that I need to get back to; it tries to balance these exact concerns. It's about half written. Go to the docs section to see the idea.