
I think the reality is that these AIs output the "average" of what was in their training set, and people receive it differently depending on whether they are below or above that average.

It's a bit like the "illusion of knowledge" or "illusion of understanding". When one knows the topic, one can correct the AI's output. When one doesn't, one tends to forget it can be inaccurate or plain wrong.


