Hacker News | not_ai's comments

And expensive, exactly the way a pay-per-use product would push its customers…

“It’s not working well enough!” we tell them. They respond with “Have you tried using it more?”


Back in 2024 I read a study saying: "Ask 4 LLMs the same question; if they all give you the same answer, there is a 95-99% chance it's correct."

Soooo... it's not just greed. There is something there.
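The agreement heuristic described above is essentially majority voting across models. A minimal sketch (the answer strings here are assumed to come from your own model-calling code, and real use would need semantic matching rather than exact string comparison):

```python
from collections import Counter

def consensus_answer(answers, min_agreement=4):
    """Return the majority answer if at least `min_agreement` models
    gave it (after trivial normalization), otherwise None."""
    normalized = [a.strip().lower() for a in answers]
    top, count = Counter(normalized).most_common(1)[0]
    return top if count >= min_agreement else None

# All four models agree -> accept the answer.
print(consensus_answer(["Paris", "paris", "Paris", "Paris"]))  # paris

# Models disagree -> no consensus, treat the answer as unreliable.
print(consensus_answer(["Paris", "Lyon", "Paris", "Berlin"]))  # None
```

The interesting part is what the 95-99% figure implies: each model can be individually unreliable, yet independent errors rarely coincide, so agreement is a strong signal even without any single trustworthy model.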


Yes, exactly. I talk about this in the article. I found that when Claude and Codex both review the same PR and both find the same issue, our team fixes it 100% of the time.

What's the point of pair programming then if they both have the same opinions?

They don't. And you would be surprised how a good model actually pushes back on some comments.

The point was: when they do agree, it is a very strong signal.


There are a number of different models out there.

Haha yeah... wait until they start jacking up the subscription prices.

They don't change the prices; they just modify the amount of compute allocated: slower speeds and fewer tokens. They can tune everything in the background to optimize costs and returns, and the user never realizes anything has changed.

Sometimes they'll announce the changes, and they'll even try to spin it as improving services or increasing value.

Local AI capabilities are improving at a rapid pace. At some point soon we'll have an RWKV or a 4B LLM that performs at GPT-5 level, with reasoning and all the bells and whistles, and hopefully that will shake out most of the deceptive and shady tactics the big platforms are using.


> They don't change the prices; they just modify the amount of compute allocated: slower speeds and fewer tokens. They can tune everything in the background to optimize costs and returns, and the user never realizes anything has changed.

I can't imagine that this is the way it will go... Tokens haven't been getting cheaper for flagship models, have they? You already see something closer to their real cost if you compare e.g. the Claude subscriptions to their actual token pricing.

> Local AI capabilities are improving at a rapid pace. At some point soon we'll have an RWKV or a 4B LLM that performs at GPT-5 level, with reasoning and all the bells and whistles, and hopefully that will shake out most of the deceptive and shady tactics the big platforms are using.

Maybe, but LLMs are a scale game, and a data center will always be more capable than your local device, so you will always be getting a worse version locally. Or do you think LLMs in data centers will stop getting better and local LLMs will somehow catch up?


I have a very different experience. The Claude Code TUI is the worst TUI I have ever used. How is it possible that an inactive TUI regularly eats 8 GB of RAM and has freezing and rendering issues?

If I wasn’t forced to use it I wouldn’t as there are better options available.


So many rendering issues...


Thanks for sharing; this is the first time I've seen this. I wish they had expanded on exactly what mid-level engineers might be missing rather than just saying “fundamentals” and “practical intuition.”


Preferably the C-Suite.


I understand the impulse in this direction, but I'm not sure it would serve as much of a disincentive, as there would likely just be a highly paid scapegoat. Why not something more lasting and less difficult to ignore, like compulsory disclosure of the model's source code (in addition to compensation for the victim(s))? Compulsory disclosure of the source would be a massive disadvantage.


The source code isn't where the money is, what you want is the training data. Force them to serve and make freely available all the data they stole to sell back to us. That way everyone and anyone can use it when training their own models. That might just be punitive enough.


> as there would likely just be a highly-paid scapegoat

The point of executives is that someone has to take responsibility. That's why they get paid. The buck has to stop somewhere.


Exactly. That's why they get the big bucks. They're ultimately responsible.


The C-suite is only responsible when the company does good or stonks go up. When they do something bad, it's either: external market forces, the laws of physics, an uncertain macroeconomic environment, unfair competition, or lone wolf individual employees way down the totem pole.


After spending all that money and firing a bunch of people? Is the new group doing anything at this point?


They are busy demonstrating that Mark Zuckerberg has no sense at all.


I’m happy to see that Flink is in this stack, I wish that Pulsar was as well instead of Kafka.


The monthly releases seem to indicate otherwise.


Something's deeply wrong here.


Things have changed quite a bit in the past 30 years!

I encourage you to peek at their changelog (https://www.sudo.ws/releases/changelog/) for more insight into why this project is still under active development.


I just learned about amathia (https://modernstoicism.com/there-is-nothing-banal-about-phil...), which seems to apply here.


Then fork it and finish it. I’m sure it will be a huge success.


You should look up "doas". It might enlighten you.


If you have a point to make then make it. I don’t accept anonymous homework assignments.


It's a kitchen sink tool that does way too many things.


At the company I work for, they locked down installing extensions through the marketplace. Some are available, but most are not, and there is a process to get them reviewed and approved. You might be able to sideload them still, but I haven't cared enough to try.

They did the same with Chrome extensions.


I did not know this, and it explains why I see so many Teslas with their blinkers on and not maneuvering despite having ample room and time. Ultimately this behavior makes them unsafe for their occupants as well as others around them.

Cars only work because we can predict driver behavior; when drivers break that prediction, bad things are likely to happen…

Lately I’ve started to ignore Tesla blinkers.


This is most likely because it is really bad at resetting the blinker once the steering wheel is straight-ish again. Extremely annoying, as any other car is much more sensitive (and sensible).

In a Tesla, an on-ramp onto a straight highway is rarely enough to cancel the blinker, something I've never experienced in any other car.

Couple this with what is, IMO, the best baseline speaker system of any manufacturer… I've been driving with the blinker on for several kilometers at times!


Autopilot doesn't turn on the turn signal or change lanes; what you are dealing with is humans.


Enhanced Autopilot and self-driving do.


Not really, it’s just the interface OpenAI gave for creating short videos with their AI. They push people to it hoping for engagement, but it’s not the sole reason people go — unlike TikTok.

