
I wish. I have just witnessed an engineer on our (small) team push a 4k-line change to prod in the middle of the night. His message was: "lets merge and check it after". AI can help a good team become better, but it will surely make bad teams worse.

Sorry friends, I think I'm gonna quit and take up farming :$



I don’t really see how this is an AI issue. We use AI all the time for code generation but if you put this on my desk with specific instructions to be light on review and it’s not a joke, I’m probably checking to see if you’re still on probation because that’s an attitude that’s incompatible with making good software.

People with this kind of attitude existed long before AI and will continue to exist.


Totally, and I'm not saying otherwise. I'm saying that it takes the same amount of work to become a good engineering team even with AI, but exponentially less work to become a worse one. They say C++ makes it much easier to shoot yourself in the foot; in a similar way, LLMs are hard to aim. If your team can aim properly, you'll hit more targets more quickly, but if and when you miss, the entire team is in wheelchairs.


Making good software isn’t what matters in most workplaces - making software that works (even if you have taped over the cracks) is.

It’s always been this way in toxic workplaces - LLMs amplify this.


Try complying with an infosec standard. Typically one of the many compliance controls is "every change must be reviewed and approved by another person", so no one can push on their own.

I know folks tend to frown on security compliance, but if you honestly implement and maintain most of the controls in there, not just to get a certificate, it really makes a lot of sense and improves security, clarity, and risk management.
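The "no one can push on their own" control is usually enforced mechanically (branch protection, server-side hooks) rather than by policy alone. A minimal sketch of the gating logic, as a hypothetical pre-receive check in Python - the branch names, the `may_push` helper, and the in-memory approval set are all illustrative, not any real system's API:

```python
# Hypothetical push gate: a change to a protected branch is allowed
# only if someone OTHER than the author has approved it.
PROTECTED = {"main", "release"}

def may_push(branch: str, author: str, approvals: set[str]) -> bool:
    """Return True if the push satisfies the peer-review control."""
    if branch not in PROTECTED:
        return True  # feature branches are unrestricted
    # Self-approval doesn't count: there must be at least one other approver.
    return bool(approvals - {author})

# A self-approved change to main is blocked; a peer-reviewed one goes through.
print(may_push("main", "alice", {"alice"}))          # False
print(may_push("main", "alice", {"alice", "bob"}))   # True
```

Real setups implement the same check via GitHub/GitLab protected-branch settings or a server-side `pre-receive` hook; the point is that the control lives in the infrastructure, not in anyone's good intentions.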


One should not be able to push to prod on their own, especially in the middle of the night? Unless it's a critical fix


> Unless it's a critical fix

The bar for human approval and testing should be even higher for critical fixes.


Exactly. Wake someone up to review.


Who cares, AI has lowered the bar. If AI can produce rubbish 20+% of the time, so can we.


If I could at all help it, I would simply not work somewhere with that sort of engineering culture. Huge red flag.


There’s a weird thing going on - I can see value in using LLMs to put something together so you can see it, rather than investing the time to do it properly initially.

But to just copy, paste and move on… terrible.


That's the gist of it. I've been trying to tell the founders that if we invest 2x more time on proper planning, we'll get 20x the outcomes in return. It's as simple as that: it's not just about writing stuff and pushing, it's about understanding the boundaries of what you build, how it talks to other systems, and what compromises you're willing to make in return for speed.



