
I don't get the anti-LLM sentiment, because plenty of trends continue to show steady progress with LLMs over time. Sure, you can point at some dumb things LLMs do as evidence of a fundamental flaw, but the frontier capabilities continue to amaze people. I suspect the anti-LLM sentiment comes from people who haven't seriously tried for themselves to see what these models are capable of. I used to be skeptical, but I've changed my mind quite a bit over the past year, and many others have changed their stance toward LLMs as well.


Or, people who've actually trained and used models in domains where "stuff on the internet" has no relevance to what they're doing have realized the profound limitations of what these LLMs actually do. They are amazing, don't get me wrong, but not so amazing in many specific contexts.


People who think that "steady progress" will continue forever have no basis for their assumption.

You have an ad-hominem attack and your own personal anecdote, neither of which is an argument for LLMs.


It'll steadily continue the same way Moore's law continued for decades. People don't question the general trend of Moore's law except near the limits of physics. And the universal claim that LLMs don't work is much harder to defend than the claim that something is possible for LLMs, which only needs a single piece of evidence.


Yes, LLMs will continue to progress until they hit the limits of LLMs.

The idea that LLMs will reach AGI is entirely speculative, not least because AGI is undefined and speculative.


LeCun has already been proven wrong countless times over the years in his predictions about what LLMs can or cannot do. While LLMs continue to improve, he has yet to produce anything of practical value from his own research. The salt is palpable, and he's memed for a reason.



