> I am not sure if cloud-based LLMs even allow modifying assistant output.

In general they do. Chat-style APIs are stateless: for each request you send the complete conversation history as JSON, including previous assistant messages, and you are free to edit those messages before resending.
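
For example, with an OpenAI-style chat completions request the assistant turns are just entries in the messages array, so you can put whatever text you like there. A minimal Python sketch, assuming an API key in the OPENAI_API_KEY environment variable and the gpt-4o-mini model; other providers accept a very similar structure:

    # Resend a conversation where the previous assistant message has been
    # edited by us; the API treats it as ordinary conversation history.
    import os
    import requests

    messages = [
        {"role": "user", "content": "List three sorting algorithms."},
        # This "assistant" turn is text we chose, not necessarily what the
        # model originally produced.
        {"role": "assistant", "content": "1. Merge sort\n2. Quicksort\n3. Heapsort"},
        {"role": "user", "content": "Explain the second one."},
    ]

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini", "messages": messages},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

The model answers as if it had really said the edited text earlier in the conversation.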


