
I've wondered about this too. The LLM could just write machine code, but then a human can't easily review it. Perhaps TDD makes that OK. But then the tests need to be written in a human-readable language so they can be checked. Or do they? And if the LLM is always right, why does the code need to be tested at all?
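To make the TDD idea concrete, here's a minimal sketch (all names hypothetical): the implementation is treated as an opaque black box that no human reviews, and the only human-readable artifact is the test suite that specifies its behavior. A stand-in lambda plays the role of machine code the LLM might emit.

```python
def load_opaque_sort():
    # Stand-in for an unreviewable, machine-generated implementation
    # (imagine machine code loaded via ctypes instead of this lambda).
    return lambda xs: sorted(xs)

def test_opaque_sort():
    # The readable tests ARE the spec a human can check.
    sort = load_opaque_sort()
    assert sort([3, 1, 2]) == [1, 2, 3]   # orders elements
    assert sort([]) == []                 # handles empty input
    assert sort([5, 5, 1]) == [1, 5, 5]   # keeps duplicates

test_opaque_sort()
print("all spec tests passed")
```

The catch the thread is circling: trusting this scheme just moves the review burden from the code to the tests.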


The LLM might be terrible at writing machine code directly. The kinds of mistakes I see GPT-4 making in Python, PostScript, or JS would be a much bigger problem in machine code. It "gets confused" and "makes mistakes" in ways very similar to humans. I haven't had a chance to try DeepSeek R1 yet.


At a certain point I don't see why a human needs to be in the loop at all. But I suppose that's the most dystopian part of it all.


Maybe the human has the money.



