
You could try Cerebras. It's still vastly cheaper than what many people and all companies pay for Opus, and it's absurdly fast. And GLM-4.7 is quite capable! https://www.cerebras.ai/blog/glm-4-7 https://news.ycombinator.com/item?id=46544047

You can definitely keep tweaking. It's also helpful just to ask the model about whatever concerns you have; it will explain what it did and why.

I spent a good chunk of 2025 being super careful & specific, using the very cheap DeepSeek, leading it by the leash at every moment and studying the output. It still felt like a huge win. But with more recent models, I trust that they're doing OK, and I've gotten better at asking questions once the code is written to hone my understanding. Mostly I just trust it now: I don't have to look carefully and tweak to exacting standards, because I've seen it do a ton of good work & I'm careful about what I ask.

There are other tactics that help. Rather than staring carefully at the code, make sure you and the AI are both running the program frequently: have a rig to test what's under development (ideally in an integration-test sort of way, which the AI can help set up! A sketch follows below.). Then lean on what good programmers have long had at their back: good observability, be that great logging or, ideally, sweet tracing. We have much better tools for seeing the high-level behavior of systems now, and an AI pointed there with a few prompts can be extremely good at enhancing that view.
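As a rough illustration, here's a minimal pytest rig of the kind I mean. Everything app-specific is an assumption: `myapp`, port 8000, and the /health endpoint are hypothetical stand-ins for whatever you're actually building.

    # test_smoke.py -- sketch of an integration-test rig the AI can rerun on
    # every change. `myapp` and its /health endpoint are hypothetical; swap in
    # your own service and a real user-facing flow.
    import subprocess
    import time
    import urllib.request

    import pytest

    @pytest.fixture(scope="session")
    def server():
        # Start the app under test as a real process, the way a user would.
        proc = subprocess.Popen(["python", "-m", "myapp"])
        try:
            # Poll until the service answers instead of sleeping blindly.
            for _ in range(50):
                try:
                    urllib.request.urlopen("http://localhost:8000/health", timeout=1)
                    break
                except OSError:
                    time.sleep(0.2)
            else:
                pytest.fail("service never came up")
            yield proc
        finally:
            proc.terminate()
            proc.wait()

    def test_health_endpoint(server):
        with urllib.request.urlopen("http://localhost:8000/health") as resp:
            assert resp.status == 200

The point isn't this particular test; it's that once a loop like this exists, you (and the AI) rerun it after every change instead of eyeballing diffs, and it's the same loop your logging and tracing output accumulates in.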

It is going to feel different. But there's a lot you can do to get much better loops.
