
At a $1,000/month price point, wouldn't the economics start favoring buying GPUs and running local LLMs? Even if they're weaker, local models can still cover enough use cases to justify the switch.
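A rough sketch of the break-even math (the hardware and power figures below are purely illustrative assumptions, not quotes; only the $1,000/month figure comes from the thread):

    # Back-of-envelope: months until a self-hosted GPU box beats a $1,000/month subscription.
    # All hardware and power numbers are assumed placeholders, not real prices.

    SUBSCRIPTION_PER_MONTH = 1_000     # the price point under discussion
    HARDWARE_UPFRONT = 8_000           # assumed: workstation plus a couple of used GPUs
    POWER_AND_MISC_PER_MONTH = 150     # assumed: electricity, cooling, spare parts

    def breakeven_months(upfront, local_monthly, sub_monthly):
        """Smallest whole number of months after which cumulative local cost
        is at or below cumulative subscription cost."""
        if local_monthly >= sub_monthly:
            raise ValueError("local running cost never undercuts the subscription")
        months = 0
        while upfront + months * local_monthly > months * sub_monthly:
            months += 1
        return months

    print(breakeven_months(HARDWARE_UPFRONT, POWER_AND_MISC_PER_MONTH, SUBSCRIPTION_PER_MONTH))
    # -> 10 months under these assumed numbers

The arithmetic only says the hardware could pay for itself fairly quickly at that price point; whether a weaker local model actually covers enough of the use cases is the real question.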

