I wrote a short preprint arguing that LLM hallucinations aren’t an artifact of scale but a consequence of an “open-loop” architecture: transformers optimize for internal coherence, not grounding. I propose an “Epistemic Loop” that distinguishes primary data from latent recombination. Before the next paper, which will outline a functional design for the Epistemic Loop Architecture (ELA), I’d be grateful for critical feedback, especially on the ontological assumptions presented here.
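To make the discussion concrete, here is a minimal sketch of how I think about closing the loop: each generated claim is checked against a store of primary sources before being accepted, so grounded statements can be separated from latent recombination. All names and the trivial string-matching check are illustrative assumptions, not the actual ELA design.

```python
# Hypothetical sketch of the "Epistemic Loop" idea: close the open loop by
# checking each draft claim against primary data before accepting it.
# The exact-match grounding check is a stand-in for a real retrieval step.

PRIMARY_SOURCES = {
    "water boils at 100 c at sea level",
    "the moon orbits the earth",
}

def grounded(claim: str, sources: set[str]) -> bool:
    """Naive grounding check: is the claim attested in primary data?"""
    return claim.lower() in sources

def epistemic_loop(draft_claims: list[str], sources: set[str]) -> list[dict]:
    """Label each claim as primary (grounded) or latent recombination."""
    return [
        {"claim": c, "status": "primary" if grounded(c, sources) else "latent"}
        for c in draft_claims
    ]

result = epistemic_loop(
    ["The moon orbits the Earth", "The moon is made of cheese"],
    PRIMARY_SOURCES,
)
for r in result:
    print(r["status"], "-", r["claim"])
```

The point of the sketch is only the control flow: generation is no longer open-loop, because every output passes through an explicit grounding gate before it is emitted.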

