Hacker News

Reproductions of the product of thought, more like it.

I assume pretty much everyone here knows the gist of how LLMs work? "Based on these previous tokens, predict the next token, then recurse." The result is fascinating and often useful. I'm even willing to admit the possibility that human verbal output is the result of a somewhat similar process, though I doubt it.
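The "predict the next token, then recurse" loop can be sketched in a few lines. Everything below is illustrative: the `TOY_MODEL` lookup table stands in for a real neural network, and greedy decoding is just one sampling strategy. The shape of the loop, though, is the same.

```python
# Hypothetical toy "model": scores for the next token, keyed by the
# previous token only (a real LLM conditions on the whole context).
TOY_MODEL = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "</s>": 0.1},
    "sat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def generate(model, max_tokens=10):
    tokens = ["<s>"]
    for _ in range(max_tokens):
        scores = model.get(tokens[-1], {"</s>": 1.0})
        # Greedy decoding: pick the highest-scoring next token,
        # append it, and feed the extended sequence back in.
        next_token = max(scores, key=scores.get)
        if next_token == "</s>":
            break
        tokens.append(next_token)
    return tokens[1:]

print(generate(TOY_MODEL))
```

Real models sample from the score distribution rather than always taking the maximum, which is why the same prompt can yield different outputs.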

But somehow, even highly educated/accomplished people in the field start talking about consciousness and get all spun up about how the model output some text supposedly telling you about its feelings or how it's going to kill everyone or whatever. Even though some basic undergraduate-level[0] philosophy of mind, or just common human experience, feels like it should be enough to poke holes in this.

[0] Not that I care that much for academic philosophy, but it does feel like it gives you some basic shit-from-shinola filters useful here...



I'm a functionalist; to me, a complete "reproduction of the product of thought", as you beautifully put it, is enough to prove consciousness. LLMs are not there yet, though.

If you're interested: https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m...



