
Yes, this is the problem. We need to be able to efficiently describe what LLMs do and how they do it, when what they do is superficially familiar but how they do it is fundamentally alien. We haven't yet developed the language needed to discuss them on their own terms.

Anthropomorphizing and resorting to metaphors like "hallucinate" and "confabulate" are inevitable if you don't want to preface every comment with a paragraph of technical discussion. They get the necessary point across, which is that the "reality" LLMs construct is not necessarily tethered to actual reality. They're deceptively convincing but can't be trusted.



I fully agree with this perspective. The terminology will change as the field continues to evolve. As long as anthropomorphizing terms are chosen carefully and are not aggrandizing, they shouldn't be a problem. IIRC, "hallucinate" was previously used to describe the behavior of other network types such as RBMs, and was simply carried over to LLMs.



