I genuinely don't understand why some people are so critical of LLMs. This is new tech; we don't really understand the emergent effects of attention and transformers within these models at all. It is very possible that, with further theoretical development, LLMs that are currently dismissed as just 'regurgitating and hallucinating' could become significantly more capable. In fact, reasoning models - especially combined with whatever Google is doing with 1M+ token context windows - are already much closer to that than most early LLM users expected.
The tech isn't there yet, clearly. And stock valuations are across the board way too high. But LLMs as a technology != the stock valuations of the companies. LLMs are here to stay; they will keep improving and integrating into everyday life - with massive impacts on education (particularly K-12), for example, as models get better at reasoning through and explaining concepts.