
I'm currently working on a way to make an LLM emit any data-processing answer as code, which is then automatically executed and verified with additional context. This reduces hallucinations pretty much to zero, since whenever verification fails the wrapper reports that the model could not determine a real answer.
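A minimal sketch of what such a wrapper could look like, in Python. The ask_llm() helper is a hypothetical stand-in for the actual model call, and the verification step here is deliberately simple (the script must exit cleanly and print a non-empty answer); a real version would check the output against the additional context mentioned above:

    import subprocess
    import sys
    import tempfile

    def ask_llm(prompt: str) -> str:
        """Hypothetical placeholder for the real model call.
        Expected to return a self-contained Python script that
        prints only the final answer to stdout."""
        raise NotImplementedError("plug in your model client here")

    def answer_with_code(question: str, timeout_s: int = 10) -> str:
        # Ask the model for code instead of a free-form answer.
        prompt = (
            "Answer the following data-processing question by writing "
            "a self-contained Python script that prints ONLY the final "
            f"answer to stdout:\n\n{question}"
        )
        code = ask_llm(prompt)

        # Run the generated code in a separate interpreter so a crash
        # or an infinite loop cannot take down the wrapper itself.
        with tempfile.NamedTemporaryFile("w", suffix=".py",
                                         delete=False) as f:
            f.write(code)
            path = f.name

        try:
            result = subprocess.run(
                [sys.executable, path],
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return "The model could not determine a real answer."

        # Verify: the script must succeed and actually produce output.
        answer = result.stdout.strip()
        if result.returncode != 0 or not answer:
            return "The model could not determine a real answer."
        return answer

The key design choice is that the model's claim is never trusted directly: only the result of actually executing its code is surfaced, and any failure collapses to the explicit "could not determine" response rather than a guess.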

