
AI isn’t going away, but it’s also clear the much-promised impacts aren’t there and aren’t coming anytime soon. A bit like the claims a few years back that we’d all have self-driving cars by now.

The most likely outcome is an AI bubble correction that will be somewhat painful and wipe out many/most AI startups, followed by AI settling into day-to-day life in a way that’s useful and found in many places, but not world-as-we-know-it-ending like the AI bros predict.




If AI just means automation, then sure. We absolutely need more automation, and if LLMs are not the mechanism then something else better be. More automation is the lifeblood of our industry. But are LLMs a game changer or today's fuzzy logic? [1] Time will tell...

[1] https://www.electronicdesign.com/technologies/embedded/digit...

P.S. I'm not saying fuzzy logic doesn't have applications, I know rice cookers are a thing, but I think it's safe to say we have other options for controlling non-linear systems these days.
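For anyone who hasn't run into it, the fuzzy logic in those rice cookers boils down to a surprisingly small mechanism. Here's a minimal Mamdani-style sketch in Python; the membership breakpoints and power levels are hypothetical, invented only to show the fuzzify-rules-defuzzify shape, not taken from any real controller:

```python
# Minimal fuzzy controller sketch: map a temperature error (degrees below
# target) to a heater power percentage. All breakpoints are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(error):
    # Fuzzify: how "small", "medium", or "large" is the error?
    small = tri(error, -5, 0, 10)
    medium = tri(error, 5, 15, 25)
    large = tri(error, 20, 40, 60)
    # Each rule maps a fuzzy label to a crisp power level (percent);
    # defuzzify with a weighted average of the rule outputs.
    rules = [(small, 10.0), (medium, 50.0), (large, 95.0)]
    total = sum(w for w, _ in rules)
    if total == 0:
        return 0.0
    return sum(w * p for w, p in rules) / total
```

The appeal for non-linear systems is that the rules interpolate smoothly between operating points without an explicit plant model, which is also why PID tuning, gain scheduling, or model-predictive control have largely taken over where a model is available.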


> the much promised impacts aren’t there and aren’t coming anytime soon

At least according to industry analysts, the current thesis is that reasoning models (which loop over their own output and backtrack if necessary) will bring fidelity close to 100% and find novel solutions not present in the training data. But they consume more tokens and require more compute, and the infrastructure for that is still being built, so the outlook for those impacts is ~2030.
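The loop-and-backtrack idea reduces to a propose/verify/retry pattern. A toy sketch, where `generate` and `verify` are stand-ins I made up for the model's sampling and self-checking steps (the example "problem" is just finding an integer square root):

```python
# Toy propose/verify/backtrack loop. Each failed attempt is "backtracking":
# the candidate is discarded and a new one is sampled, which is why
# reasoning-style inference burns more tokens per answer.
import random

def generate(problem, rng):
    # Stand-in for the model proposing a candidate solution.
    return rng.randint(0, 20)

def verify(problem, candidate):
    # Stand-in for a self-check: does candidate^2 equal the problem?
    return candidate * candidate == problem

def solve_with_backtracking(problem, max_attempts=1000, seed=0):
    rng = random.Random(seed)
    attempts = 0
    for _ in range(max_attempts):
        candidate = generate(problem, rng)
        attempts += 1  # each retry costs extra compute
        if verify(problem, candidate):
            return candidate, attempts
    return None, attempts
```

The economics in the comment follow directly: answer quality goes up only by spending more attempts, so the compute bill scales with how hard the verifier is to satisfy.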


> fidelity close to 100%

What does "fidelity" mean here? Creating perfectly lifelike images and video, code that is "perfect" even with imperfect inputs, or something else?

We do have self-driving cars, with Waymo data showing it is clearly better than human drivers in certain markets like Phoenix. It is regulation, law, and general societal unease that are preventing a rapid, total change. In fact, a robotaxi-only urban area that is continuously mapped might be feasible today, and it could probably even reduce the number of cars the population needs, making them accessible to many more people.

As a counterpoint, Waymo conducted a pilot in NYC then abandoned the permit for it:

https://www.thecity.nyc/2026/04/06/waymo-driverless-cars-tes...

Phoenix is probably about as good a location as you could get for a self-driving car. It’s not yet clear how widely their success will extend beyond that niche.


> certain markets like Phoenix

So, basically the easiest robotaxi market on the planet? Call me when it works in Bucharest, Mumbai, Istanbul, Cairo, etc.

For software, finishing the last 20% of the remaining items takes 80% of the effort and is the hardest part, and hardware is even harder.


No, it’s actually the same issue with AI in a lot of cases. In perfect conditions it can work reliably, but outside of that it falls apart in a way humans don’t.

This has not been my experience with Waymo. I rode for roughly 3.5 hours in Waymos in LA when I was visiting, and their robustness to very unusual situations absolutely floored me.

I am sure you can find truly out-of-distribution cases where the car will make a mistake, but the data shows that this is more rare than a human driver making a mistake.


How many times did they need remote assistance? Those teams aren’t driving remotely, but Waymo doesn’t pay for entire groups to exist without need.


AI has the same problem. It’s not that it doesn’t work, but that folks just aren’t all that interested in adopting it at scale. Tech makes this “build it and they will come” error a lot. The tech is quite good, but it’s all the non tech aspects of this that are why it’s not getting impact at scale.

The tech is good but not as good as advertised: note how Microsoft is simultaneously running ads saying Copilot can run your business and claiming it’s only for entertainment purposes in the EULA? Self-driving vehicles have a similar struggle where the manufacturers talk about the capabilities but aren’t willing to sign a legal agreement accepting liability for errors except in the easiest situations (and in the case of Waymo, only with pliable governments and control so they could immediately halt operations in the event of a major problem).

That’s more “build part of it, say you built all of it, and wonder why they don’t come”.


You’re generalizing too much here. One of the biggest problems with LLMs today is in fact that they are not at the level being advertised. This is not solely a case of regulation standing in the way of a «revolution».

Ever driven in Bali?

> AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon.

Even if it doesn't result in increased productivity, AI can still take the fun out of the job (goodbye coding, hello code reviews all day).


Depends on whether, post-correction, it is worth anyone's money to keep training new frontier models. It could be that it isn't, so we'd be left with models that were trained during the bubble but grow increasingly out of date, or (open?) models trained much more cheaply somehow, with a consequent lack of utility.

Good point. At some point there will be a reality check for the giant pile of burning cash that is new model training.

Was there any recent technology that really delivered on its general promise?

Starlink is pretty darn good.



