Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Using prompts know to be problematic? Some sort of... Voight-Kampff test for LLMs?


I doubt it's that simple. What about memories running in prod? What about explicit user instructions? What about subtle changes in prompts? What happens when a bad release poisons memories?

The problem space is massive and is growing rapidly, people are finding new ways to talk to LLMs all the time




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: