
This seems like circular reasoning to me.

"Human's don't just parrot what other people say, therefore we must be doing something more than what LLMs do, because an LLM only parrots the words that it is trained on. But an LLM is not truly intelligent because it is not doing what humans are doing—and because the LLM is not intelligent, all it is doing is parroting the training data."

Where in this line of reasoning is it proved that the mechanisms are different?

What if, instead, the conscious intelligence that differentiates humans from other animals is an emergent quality of language learning, specifically? Isn't there evidence that children who don't have access to language—deaf children who are not taught sign language, or feral children, or severely neglected children—often cannot reason before they are taught language, and suffer long-term cognitive impairment from having lacked language during their early development? Wouldn't this point to a fundamental linkage between language and human intelligence?



> What if, instead, the conscious intelligence that differentiates humans from other animals is an emergent quality of language learning, specifically

Sure, you can argue that you have solved the human-exclusive part of intelligence; that is a possibility. But we have not yet solved the monkey part of intelligence, the part that all mammals possess that lets them act intelligently in this world. Without such animal intelligence, I believe it is impossible for the model to avoid the really stupid mistakes no human would make, because it lacks the intuitive understanding of the world that all animals have.

I don't think training on text will ever produce that level of understanding, no matter how hard you try; text just isn't the right medium to build the kind of intuitive understanding of reality that a dog has.


At least I used to think that, but LLMs are basically a very unexpected (to me, at least) counterexample to that theory:

Formulating plausible sounding sentences seems to be possible without having a much deeper world model.

> What if, instead, the conscious intelligence that differentiates humans from other animals is an emergent quality of language learning, specifically?

There are various linguistic theses claiming similar things (“human minds are wired for language”, i.e. Chomsky’s universal grammar, and conversely “language shapes general cognition”, i.e. the Sapir-Whorf hypothesis).

At least in the light of LLMs (if not long before), I think neither is still a serious possibility, or a useful model of the relationship between language and cognition.



