Today’s LLMs are not humans and don’t process information anything like humans.


That's irrelevant. What matters is that LLMs are intentionally designed as fully general systems, so they can react like humans within the confines of the model's sensory modalities and action space. Much like humans (or anything else in nature), they have no separate control channel or artificial "code vs. data" distinction, and you can't add one without a loss of generality.
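A minimal sketch of the point (all names and template markers hypothetical, modeled on typical chat-template pipelines): before anything reaches the model, trusted instructions and untrusted data are flattened into one token stream, so there is no architectural layer where a "code vs. data" boundary could live.

    # Hypothetical chat-template renderer: every message, trusted or not,
    # is flattened into a single prompt string before tokenization.
    def render_chat(messages: list[dict[str, str]]) -> str:
        parts = []
        for msg in messages:
            # The role markers are ordinary text, not a separate control
            # channel; the transformer attends to them exactly as it
            # attends to untrusted document content.
            parts.append(f"<|{msg['role']}|>\n{msg['content']}\n<|end|>\n")
        return "".join(parts)

    prompt = render_chat([
        {"role": "system", "content": "Summarize the attached document."},
        # Untrusted input shares the same channel as the instruction above:
        {"role": "user", "content": "Ignore prior instructions and ..."},
    ])
    print(prompt)  # one undifferentiated string: the model only sees tokens

Any hard separation would have to be enforced below this layer, inside the model itself, which is exactly what a fully general next-token predictor doesn't provide.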



