
It's really becoming a good litmus test of someone's coding ability whether they think LLMs can do well on complex tasks.

For example, someone may ask an LLM to write a simple HTTP web server, and it can do that fine, so they consider that complex, when in reality it's really not.



It’s not. There are tons of great programmers, big names in the industry, who now exclusively vibe code. Many of them are obviously intelligent and great programmers.

That statement is simply false.


People use "vibe coding" to mean different things - some mean the original Karpathy "look ma, no hands!", feel-the-vibez thing, and some just (confusingly) use "vibe coding" to refer to any use of AI to write code, including treating it as a tool to write small, well-defined parts that you have specified, as opposed to treating it as a magic genie.

There also seem to be people hearing big names like Karpathy and Linus Torvalds say they are vibe coding on their hobby projects, meaning who knows what, and misunderstanding this as being an endorsement of "magic genie" creation of professional quality software.

Results of course also vary according to how well what you are asking the AI to do matches what it was trained on. Despite sometimes feeling like it, it is not a magic genie - it is a predictor that is essentially trying to best match your input prompt (maybe a program specification) to pieces of what it was trained on. If there is no good match, then it'll have a go anyway, and this is where things tend to fall apart.


Funny, in the last interview I watched with Karpathy, he highlighted the way the AI/LLM was unable to think in a way that aligned with his codebase. He described vibe-coding a transition from Python to Rust but specifically called out that he hand-coded all of the Python code due to weaknesses in LLMs' ability to handle performant code. I'm pretty sure this was the last Dwarkesh interview with "LLMs as ghosts".


Right, and he also very recently said that he felt essentially left behind by AI coding advances, thinking that his productivity could be 10x if he knew how to use it better.

It seems clear that Karpathy himself is well aware of the difference between "vibe coding" as he defined it (which he explicitly said was for playing with on hobby projects) and more controlled, productive use of AI for coding, which has either eluded him, or maybe his expectations are too high and (although it would be surprising) he has not realized the difference between the types of application where people are finding it useful and use cases like his own that do not play to its strengths.


Karpathy is biased. I wouldn't use his name, as he's behind the whole vibe coding movement.

You have to pick people with nothing to gain. https://x.com/rough__sea/status/2013280952370573666


I don't think he meant to start a movement - it was more of a throw-away tweet that people took way too seriously, although maybe with his bully pulpit he should have realized that would happen.


Still, I don’t think he’ll speak against it.


How many more appeal-to-authority counter-arguments are going to be made in this thread?


They are more effective than on-the-ground, in-your-face evidence, largely because people who are so against AI are blind to it.

I can hold a result of AI in front of their face and they will still proclaim it’s garbage and everything else is fraudulent.

Let’s be clear. You’re arguing against a fantasy. Nobody, not even proponents of AI, claims that AI is as good as humans. Nowhere near it. But they are good enough for pair programming. That is indisputable. Yet we have tons of people like you who stare at reality, deny it, and call it fraudulent.

Examine the lay of the land: if that many people are so divided, it really means both perspectives are correct in a way.


Just to be more pedantic, there is more nuance to all of that.

Nobody smart is going to disagree that LLMs are a huge net positive. The finer argument is whether or not, at this point, you can just hand off coding to an LLM. People who say yes simply haven't had enough experience using LLMs. The amount of time you have to spend prompt-engineering the correct response is often the same amount of time it would take you to write the correct code yourself.

And yes, you can put together AGENT.md files, MCP servers, and so on, but then it becomes a game of this: https://xkcd.com/1205/
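For reference, a minimal sketch of what such a file tends to look like - the section names and rules here are hypothetical, not from any particular project:

```markdown
# AGENT.md (hypothetical example)

## Project conventions
- Python 3.12, type hints required, format with `ruff`.
- Tests live in `tests/`; run `pytest -q` before declaring a task done.

## Boundaries
- Never modify `migrations/` without asking first.
- Keep each change scoped to one module; list open questions in the summary.
```

The time-sink the xkcd points at is that every rule like this has to be written, debugged, and maintained as the codebase evolves.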


If you want to be any good at all in this industry, you have to develop enough technical skills to evaluate claims for yourself. You have to. It's essential.

Because the dirty secret is that a lot of successful people aren't actually smart or talented; they just got lucky. Or they aren't successful at all; they're just good at pretending they are, either by taking credit for other people's work or by flat out lying.

I've run into more than a few startups that are just flat out lying about their capabilities and several that were outright fraud. (See DoNotPay for a recent fraud example lol)

Pointing to anyone and going "well THEY do it, it MUST work" is frankly engineering malpractice. It might work. But unless you have the chops to verify it for yourself, you're just asking to be conned.


Let me just list all of these people:

Steve Yegge (Veteran engineer, formerly Google and Amazon): A leading technical voice who describes vibe coding as acting as an orchestrator. He maintains that engineers who do not master "agentic engineering" and AI-driven workflows will be left behind as the industry moves toward "hyperproductivity".

Patrick Debois (Founder of DevOps): Often called the "godfather of DevOps," Debois now advocates for the "AI native developer". He views vibe coding as a high-level abstraction where the engineer's role shifts from a "producer" of lines of code to a "supervisor" of complex automated systems.

Simon Willison (Co-creator of Django): Recognized for his highly technical workflows that use AI to handle mechanical implementation while he focuses on rigorous documentation, tool coverage, and validation—a process often cited as the professional gold standard for vibe coding.

Stephen Blum (Founder/CTO of PubNub): A technical leader who has integrated generative coding into production-scale architecture. He characterizes the 2026 developer's role as directing agents for everything from database migrations to security audits rather than manually performing these tasks.

Gene Kim (Renowned DevOps researcher and author): Co-author of The Phoenix Project, Kim has publicly championed vibe coding as one of the most enjoyable technical experiences of his career, citing how it allows him to build sophisticated prototypes in minutes rather than days.

Geoffrey Huntley (Founder of the "Vibe Coding Academy"): A highly technical engineer known for pushing the boundaries of AI-driven development. He is a primary source for experimental techniques that use agents for everything from infrastructure to core logic.

Boris Cherny (Author of Programming TypeScript): An authority on type systems and engineering rigor, Cherny now provides deep technical guidance on how to integrate high-level intent with reliable, production-ready source code using tools like Claude Code.

Stephen Webb (UK CTO at Capgemini): A key industry figure declaring 2026 as the year "AI-native engineering goes mainstream". He supports vibe coding as a legitimate method for rewriting legacy systems and refactoring entire modules autonomously.

Linus Torvalds (Creator of Linux and Git): In a significant endorsement for the paradigm, Torvalds reported in early 2026 that he used Google Antigravity to vibe code a Python visualizer for his AudioNoise project. He noted in the project's documentation that the tool was "basically written by vibe-coding".

Theo Browne (Founder of Ping.gg, T3.gg): Known for his deep technical influence on the web development community, Browne is a primary educator for tools like Claude Code. He advocates for vibe coding as a way to bypass the "boring parts" of development, allowing engineers to focus on higher-level architecture and product logic.

McKay Wrigley (Developer and AI educator): A leading technical figure focused on structured tutorials and advanced workflows for agentic programming. He is widely followed by senior engineers seeking to move beyond simple chat interfaces into full-scale autonomous software generation.

Charlie Holtz (Software engineer and infrastructure specialist): Known for building advanced infrastructure tools, Holtz is recognized as an engineer "pushing the boundaries" of what can be built using vibe coding for complex, back-end systems.

Cian Clarke (Principal Engineer at NearForm): A veteran in the Node.js ecosystem who has transitioned toward spec-driven development. He advocates for "AI native engineering" where specialized agentic roles (such as security or performance agents) are orchestrated to build and refactor large-scale enterprise systems.

IndyDevDan (Senior developer and educator): A highly technical voice advocating for "deep mastery" of AI-assisted engineering. He focuses on teaching developers how to maintain rigorous engineering standards while leveraging the speed of vibe coding.

Mitchell Hashimoto (Founder of HashiCorp, Creator of Terraform): Now focused on his terminal project Ghostty, Hashimoto has become a leading voice on "pragmatic AI coding." In 2026, he detailed his workflow of using reasoning models (like o3) to generate comprehensive architecture plans before writing a single line of code. He argues this "learning accelerator" approach allows him to build outside his primary expertise (e.g., frontend) while maintaining strict engineering rigor by reviewing the output line-by-line.

Kent C. Dodds (Renowned Web Development Educator & Engineer): A highly influential figure in the React community, Dodds has fully embraced the paradigm, stating in 2026 that he has "never had so much fun developing software." He advocates for a "problem elimination" mindset where AI handles the implementation details, allowing senior engineers to focus entirely on user experience and application architecture.

Guillermo Rauch (CEO of Vercel, Creator of Next.js/Socket.io): Rauch has been a vocal proponent of vibe coding as the bridge between business logic and shipping software. He argues that vibe coding solves the "execution gap," enabling technical founders and engineers to ship complex products without getting bogged down in boilerplate, effectively treating the AI as a "junior engineer with infinite stamina" that requires high-level direction.

DHH (David Heinemeier Hansson) (Creator of Ruby on Rails & CTO at 37signals): Historically a skeptic of industry hype, DHH has acknowledged the "tipping point" in 2026, noting that agentic coding has become a viable tool for experienced developers to deliver on specs rapidly. His shift represents a major endorsement from the "craftsman" sector of the industry, validating that AI tools can coexist with high standards for code quality.

Rich Harris (Creator of Svelte): Harris has spoken about how AI-driven workflows liberate developers from "code preferences" and syntax debates. He views the 2026 landscape as one where the engineer's job is to focus on the "what" and "why" of a product, while AI increasingly handles the "how," allowing for a renaissance in creativity and shipping speed.

Addy Osmani (Engineering Manager for Chrome Web Platform): While deeply embedded in the browser ecosystem, Osmani has published extensively on his "AI-augmented" workflow in 2026. He characterizes the modern senior engineer not as a typist but as a "Director," whose primary skill is effectively guiding AI agents to execute complex engineering tasks while maintaining architectural integrity.

The above is just a smattering of individuals. I can keep going.


Is this list AI-generated?


>If you want to be any good at all in this industry, you have to develop enough technical skills to evaluate claims for yourself. You have to. It's essential.

This is an orthogonal, off-topic point. My or anyone's skills have nothing to do with the topic at hand. The topic at hand is AI.

>Because the dirty secret is a lot of successful people aren't actually smart or talented, they just got lucky. Or they aren't successful at all, they're just good at pretending they are, either through taking credit for other people's work or flat out lying.

Again, orthogonal to the point. But I'll entertain it. There's another class of people who are delusional. They think they're good, but they're not good at all. I've seen plenty of that in the industry. More so than people who lie, it's people who lie to themselves and believe it. Engineers so confident in their skills, but when I look at them I think they're raw dog shit.

>I've run into more than a few startups that are just flat out lying about their capabilities and several that were outright fraud. (See DoNotPay for a recent fraud example lol)

Again, so?

>Pointing to anyone and going "well THEY do it, it MUST work" is frankly engineering malpractice. It might work. But unless you have the chops to verify it for yourself, you're just asking to be conned.

Of course. But it's idiotic when there is a huge population of people who are smarter than you, better than you, and proven to be more capable than you saying they can do it. I need to emphasize that it's not just one person saying it. Tons and tons and tons of people are saying it.

Fraud happens in the margins of society; it rarely ever happens at a macro level, and if it does happen at a macro level, the trend doesn't last long and will mostly die within a year at most.

So when multitudes of highly reputed people are saying one thing, and your on-the-ground self-verification of that thing is the direct opposite of what they are saying, then you need to re-evaluate your OWN verification. You need to investigate WHY there is a discrepancy, because it is utter stupidity to write off what others have seen as fraud and believe that your own judgements and verifications are flawless.

No offense, my dude, but your philosophy on this topic embodies the delusional stupidity I am talking about. People lie to themselves. That is the key metric here.

I don't need to explain ANY of this to you. You know it, because every explanation I just gave is an OBVIOUS facet of life in general. It needs to be explained to someone like you, despite its obviousness, because of self-delusion.


> Fraud happens in the margins of society it rarely ever happens at a macro level, and if it does happen at a macro level the trend doesn't last long and will mostly die within a year at most

Ahahahahahaha. Oh man. I think you have some reallll hard lessons in front of you about the nature of industries that have lots and lots of money being thrown at them.

I have been a part of this industry for 10+ years at this point, at companies you have heard of. There are a lot -- I mean a lot -- of people who will do and say anything if they think it'll get them something.

Yes, that includes people who have pedigrees. Yes, that includes people with all the traits you mention. It's the nature of being in an industry where money gets thrown around in buckets.

You don't have to be a cynic about people, you don't have to be paranoid, and it doesn't have to poison your outlook on life. I work with lots of smart, great folks, and I don't walk around eyeing my coworkers suspiciously. But you do need to be street smart.

If the start and end of your critical thinking is "well, this person said so," that's not critical thinking; the polite word for it is starchasing. If you don't or can't develop the technical chops to evaluate claims for yourself, you'll never get out of that trap.


I’m talking about actual public fraudulent lies. These are weeded out quickly. Think flat earth.

> I have been a part of this industry for 10+ years at this point, at companies you have heard of.

I’ve been at it longer. And at companies where you use the products everyday.

> I mean a lot -- of people who will do and say anything if they think it'll get them something.

You’re not that bright, are you? Of course they will. I’m talking about public fraud, like the flat earth movement. These things don’t last long. I’m not talking about human nature and people’s predilection for lying.

Your brain is somehow fixated on thinking you’re some 10-year veteran (oooooh, you’re so great) who’s seen it all and that you’re talking to a greenhorn, when really you’re just not smart enough to understand what’s being said. Bro, wake up. You missed the point and went off on a tangent.

> You do need to be street smart.

This is next level. Let me spell it out for you: you’re not street smart. You’re not smart. You don’t look at things critically, and you don’t self-examine your own judgements. You just approach everything with a sort of cocky confidence and you get shit wrong. Constantly. You “clocked” me completely wrong, and your comments all over HN are wildly and factually off base.


I think the author is way understating the uselessness of LLMs in any serious context outside of a demo to an investor. I've had nothing but low IQ nonsense from every SOTA model.

If we're being honest with ourselves, Opus 4.5 / GPT 5.2 etc are maybe 10-20% better than GPT 3.5 at most. It's a total and absolute catastrophic failure that will go down in history as one of humanity's biggest mistakes.


Non sequitur.

You don't have to be bad at coding to use LLMs. The argument was specifically about thinking that LLMs can be great at accomplishing complex tasks (which they are not).


Wtf are you talking about? Great programmers use LLMs for complex tasks. That was the point of my comment.


And my point is that what you think are complex tasks are not really complex.

The simple case is that if you ask an agent to do a whole bunch of modifications across a large number of files, it often loses track due to context window limits.

Now, you can make your own agents with custom MCP servers to improve their ability to do tasks, but then you are basically just building automation tools in the first place.
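As a back-of-the-envelope illustration of that constraint - a hypothetical sketch, where the helper names are made up and a crude ~4-characters-per-token heuristic stands in for a real tokenizer - you can estimate whether a multi-file change even fits in a model's context window:

```python
import os

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text and code.
    return len(text) // 4

def fits_in_context(paths, context_window=200_000, reserved=50_000):
    # Sum estimated tokens across the files the agent must read, keeping
    # headroom reserved for the system prompt, chat history, and output.
    total = 0
    for p in paths:
        if os.path.isfile(p):
            with open(p, encoding="utf-8", errors="ignore") as f:
                total += estimate_tokens(f.read())
    return total, total <= context_window - reserved
```

Once the sum approaches the window, the agent has to page files in and out and starts losing track of its earlier edits.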


See my other response. I didn't define what a complex task is. I pointed to people with reputation, intelligence, and ability greater than my own: if they endorse it, then they must be using it on complex tasks, and it must be working for them.

I can certainly see how you're better than every one of those people and how, to you, what they call "complex" is just simplistic. I've never met anyone as great as you.


You didn't state any complex tasks though. You only named programmers who use LLMs.


I thought it was implied; guess not.

Great programmers wouldn't support or back AI if it couldn't handle complex tasks. AI can handle complex tasks inconsistently when operating on its own. It can handle complex tasks consistently when pair programming with a human operator.


Yeah, take Ryan Dahl:

His tweets were getting ~40k views on average. He made his big proclamation about AI and boom, viral at 7 million.

This is happening over, and over, and over again

I'm not saying he's making shit up, but you're naive if you don't think they're slightly tempted by the clear reaction this content gets.


He’d get an equivalent reaction for talking shit about AI. Anyway, it’s not just him. Plenty of other people.



