LLMs definitely make "mistakes" — that's well-documented by both users and the providers themselves. Even if only 5-10% of questions get a hallucination that sends someone down a totally wrong path, that's too much. Near-zero hallucination is a really high bar, to be clear, but an important one imo.
