Hacker News

It's nevertheless interesting how LLMs seem to default to the 'fast thinking' mode of human interaction -- even CoT approaches seem to just be mimicking 'slow thinking' by forcing the LLM to iterate through different options. The failure modes I see are very often the sort of thing I would do if I were unfocused or uninterested in a problem.
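For illustration, the "forcing" in CoT is purely a matter of prompt framing, not a change to the model itself. A minimal sketch of the contrast (the prompt wording here is my own, not from any particular paper or API):

```python
# Chain-of-thought prompting restructures the prompt so the model is pushed
# to enumerate intermediate steps before answering, rather than changing
# anything about the model. Wording below is illustrative only.

def direct_prompt(question: str) -> str:
    # "Fast thinking": ask for the answer straight away.
    return f"{question}\nAnswer:"

def cot_prompt(question: str) -> str:
    # "Slow thinking" mimicry: instruct the model to walk through
    # candidate options step by step before committing to an answer.
    return (
        f"{question}\n"
        "Let's think step by step. List the candidate approaches, "
        "evaluate each one, then state the final answer.\n"
        "Reasoning:"
    )

q = ("A bat and a ball cost $1.10 in total; the bat costs $1.00 "
     "more than the ball. How much does the ball cost?")
print(direct_prompt(q))
print(cot_prompt(q))
```

The bat-and-ball question is the classic fast-thinking trap: prompted directly, models (like people) tend to blurt "$0.10", while the step-by-step framing makes the intermediate check more likely.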


