> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.
It's the opposite: code quality is becoming more and more relevant. Until now, you could only neglect quality for so long before the time to implement any change grew long enough to completely stall a project.
That's still true; the only thing AI has changed is that it lets you charge further and further into technical debt before you see the problems. But now, instead of the problems being a gradual ramp-up, they're a cliff: the moment you hit the point where the current crop of models can't operate on the codebase effectively any more, you're completely lost.
> We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving.
We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase in the money poured into it.
> It's only trending in one direction. And it isn't going to stop.
Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.
Think about it this way: If your job survived the popularity of offshoring to engineers paid 10% of your salary, why would AI tooling kill it?
> That's still true; the only thing AI has changed is that it lets you charge further and further into technical debt before you see the problems. But now, instead of the problems being a gradual ramp-up, they're a cliff: the moment you hit the point where the current crop of models can't operate on the codebase effectively any more, you're completely lost.
What you're missing is that fewer and fewer projects are going to need a ton of technical depth.
I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.
> We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase in the money poured into it.
The genie is out of the bottle. Humanity is not going to stop pouring more and more money into AI.
> Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.
The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999. Maybe you will be right about short term economic trends, but the underlying technology is here to stay and will only trend in one direction: better, cheaper, faster, more available, more widely adopted, etc.
> What you're missing is that fewer and fewer projects are going to need a ton of technical depth.
> I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.
Again, it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you buy more than the functionality: you buy the assurance that it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to pay millions of dollars a year for vibe-coded slop, and apps that command that kind of money are what keep the tech industry afloat.
> Humanity is not going to stop pouring more and more money into AI.
There's no more money to pour into it. Even if there were, we're out of GPU capacity, we're running low on the power and infrastructure to run these giant data centres, and it takes decades to bring new fabs or power plants online. It is physically impossible to continue this level of growth in AI investment. Every company that's invested in AI has done so on the promise of continued improvement, and the moment that stops being true everything shifts.
> The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999.
The internet bubble did pop. What happened after was a reassessment of how much the tech was actually worth, and the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?
Once the hype fades, the long-term unsuitability for large projects becomes obvious, and token costs increase by ten or one hundred times, are businesses really going to pay thousands of dollars a month for agent subscriptions to vibe code little apps here and there?
> Again, it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you buy more than the functionality: you buy the assurance that it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to pay millions of dollars a year for vibe-coded slop, and apps that command that kind of money are what keep the tech industry afloat.
This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.
When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.
When the web was new, the news media complained about the same thing. A landscape of poorly researched error-ridden microblogs with spelling mistakes and inaccurate information. And you know what? They were right. That's exactly what the internet led to. And now that's the world we live in, and 90% of those news media companies are dead or irrelevant.
And here you are, continuing the tradition by complaining about a new landscape of buggy, vulnerable products. And the same thing will happen, and is already happening. People don't care. When you democratize technology and give people the ability to do something useful that they never could before, without having to spend years becoming an expert, they do it en masse, and they accept the tradeoffs. This has happened time and time again.
> The internet bubble did pop... the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?
You cut out the part where I said it only popped economically, but the technology continued to improve. And the situation we have now is even better than the hype in 1999:
They predicted video on demand over the internet. They predicted the expansion of broadband. They predicted the dominance of e-commerce. They predicted incumbents being disrupted. All of this happened. Look at the most valuable companies on earth right now.
If anything, their predictions were understated. They didn't predict mobile, or social media. They thought that people would never trust SaaS because it's insecure. They didn't predict Netflix dominating Hollywood. The internet ate MORE than they thought it would.
Your whole argument is based on 'the technology improves'.
OK, so another fundamental proposition is that monetary resources are needed to fund said technology improvement.
What's wrong with LLMs? They require immense monetary resources.
Is that a problem right now? No, because lots of private money is flowing in, and Google et al. have the blessing of their shareholders to keep pumping cash into LLM-based projects.
Could all this stop? Absolutely; many already fear the returns will not come. What happens then? No more huge technology leaps.
This has literally never happened in the history of humanity. Name one technology where development permanently stopped due to lack of funding, despite there being...
1. lots of room for progress, i.e. the theoretical ceiling dwarfed the current capabilities
2. strong incentives to continue development, i.e. monetary or military success
3. no obviously better competitors/alternatives
4. social/cultural tolerance from the public
Literally hasn't happened. Even if you can find one or two examples, they are dwarfed by the hundreds of counterexamples. But more than likely you won't find any examples, or you'll just find something recent where progress is ongoing.
Useful technology with room to improve almost always improves, as people find ways to make it better and cheaper. AI costs have already fallen dramatically since LLMs first burst on the scene a few years back, yet demand is higher than ever, as consumers and businesses are willing to pay top dollar for smarter and better models.
1. As I said before, we've long since reached diminishing returns on models. We simply don't have enough compute or training data left to make them dramatically better.
2. This is only true if it actually pans out, which is still an open question.
3. Just... not using it? It has to justify its existence. If the benefit doesn't justify the cost, then why bother?
4. The public hates AI. The proliferation of "AI slop" makes people despise the technology wholesale.
> This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.
What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.
Over the past 20 years software engineering has become something that just about anyone can do with little more than a shitty laptop, the time and effort, and an internet connection. How is a world where that ability is rented out to only those that can pay "democratic"?
> When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.
A bad book is just a bad book. If a novel is $10 at the airport and it's complete garbage, I'm out $10 and a couple of hours. As you say, who cares. With a bad vibe-coded app, you've leaked your email inbox and bank account, and you're out way more than $10. The risk profile from AI is way higher.
The same is even more true for businesses. The cost of a cyberattack or an outage is measured in the millions of dollars. It's simple maths: the cost of the risk of compromise far outweighs the savings from cheaper upfront software.
> You cut out the part where I said it only popped economically, but the technology continued to improve.
The improvement in AI models requires billions of dollars a year in hardware, infrastructure, and energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.
> What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.
"Renting your ability to do your job"?
I think you're misunderstanding the definition of democratization. This has nothing to do with programmers. It has nothing to do with people's jobs. Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."
In other words, democratizing is not about people who have jobs as programmers. It's about the people who don't know how to code, who are not software engineers, who are suddenly gaining the ability to produce software.
Three years ago, you couldn't just pay a fee and produce software yourself. You either had to learn and develop the expertise yourself, or hire someone else. Today, any random person can sit down and build a custom to-do list app for herself, for free, almost instantly, with no experience.
> The improvement in AI models requires billions of dollars a year in hardware, infrastructure, and energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.
10-15 year payouts? Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW. Many tens of thousands of people have already gotten insanely rich: three years ago, and two years ago, and last year, and this year. If you think investors won't be motivated, and that there aren't people currently in line to throw their money into the ring, you're extremely uninformed about investor sentiment and returns lol.
You can predict that the music will stop. That's fair. But to say that investors are worried about long payout times is factually inaccurate. The money is coming in faster and harder than ever.
> Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."
Your definition only supports my point. Turning a skill from something you learn into something you pay for is the exact and complete opposite of your stated definition. It changes the activity from one anyone can learn into one that only those who can afford to pay can do.
It is quite literally making this technology, information, and power available to only the elite.
> Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW.
What payout? Zero AI companies are profitable. If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless. There are plenty of investors who stand to make a lot of money if these big companies exit, but there's no guarantee that will happen.
The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock. Neither of which has much to do with the tech itself and everything to do with the hype.
Don't waste your time on him. He reminds me of people who are so concentrated on one part of the picture, they can't see the whole damn thing and how all the pieces fit and interact with each other.
You're describing yourself imo. Your point ignores hundreds of years of history and says zero about the forces that shape technological development and progress, which have been studied fairly exhaustively.
"Thousands of dollars of GPU" as a one-time expense (not ongoing token spend) is dirt cheap if it meaningfully improves productivity for a dev. And your shitty laptop can probably run local AI that's good enough for Q&A chat.
Why not? Once the true cost of token generation is passed on to the end user and costs go up by 10 or 100 times, and once the honeymoon delusion of "oh wow I can just prompt the AI to write code" fades, there's a big question as to whether what's left is worth it. If it isn't, agents will most certainly go away and all of this will be consigned to the "failed hype" bin along with cryptocurrency and the "metaverse".
I worry people are lacking context about how SaaS products are purchased if they think LLMs and "vibe coding" are going to replace them. It's almost never the feature set. Often it's capex vs opex budgeting (i.e., it's easier to get approval for a monthly cost than an upfront capital cost), but the biggest one is liability.
Companies buy these contracts for support and to have a throat to choke if things go wrong. It doesn't matter how much you pay your AI vendor: if you use their product to "vibe code" a SaaS replacement and it fails in some way and you lose a bunch of money/time/customers/reputation/whatever, then that's on you.
This is as much a political consideration as a financial one. If you're a C-suite and you let your staff make something (LLM generated or not) and it gets compromised then you're the one who signed off on the risky project and it's your ass on the line. If you buy a big established SaaS, do your compliance due-diligence (SOC2, ISO27001, etc.), and they get compromised then you were just following best practice. Coding agents don't change this.
The truth is that the people making the choice about what to buy or build are usually not the people using the end result. If someone down the food chain has to spend a bunch of time on "brittle hacks" to make their workflow work, the decision makers are not going to care at all. All they want is the minimum that meets the requirement and won't come back to bite them later.
SaaS isn't about software, it's about shifting blame.
There's little to no evidence that companies are actually doing layoffs to focus on "AI-enabled" work.
All we're seeing are layoffs driven by interest rates and concerns about the economic outlook. Companies are using "AI" as a fig-leaf justification, and people are apparently falling for it.
> Reviews are billed on token usage and generally average $15–25, scaling with PR size and complexity.
You've got to be completely insane to use AI coding tools at this point.
This is the subsidised cost to get users on board; it could trivially end up at ten times this amount. Plus, you've got the ultimate perverse incentive: the company selling you the model time to create the PR is also selling you the review of that same PR.
The bet is that compute gets cheap enough before the crunch that it won't matter. You should model it at 10x - but you also need to factor in NPV and opportunity cost. Even if pricing spikes later, the value extracted at today's rates might still put you ahead overall.
The relevant comparison for most enterprise isn't whether $15/PR is subsidised - it's whether it beats the alternative. For most shops that's cheap offshore labour plus the principal engineer time spent reviewing it, managing it, and fixing what got merged anyway. Most enterprise code is trivial CRUD - if the LLM generates it and reviews it to an equivalent standard, you're already ahead.
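To make that concrete, here's a minimal back-of-envelope sketch. Every figure in it (rates, hours, the 10x repricing) is an assumption invented for illustration, not data from either side of this thread:

```python
# Back-of-envelope: subsidised LLM review vs cheap offshore labour plus
# principal-engineer review. All numbers are illustrative assumptions.

llm_cost_today = 20.0                    # $/PR at current (possibly subsidised) rates
llm_cost_repriced = llm_cost_today * 10  # the pessimistic 10x scenario

offshore_rate = 25.0    # $/hour, cheap offshore labour (assumed)
offshore_hours = 2.0    # hours to produce an equivalent PR (assumed)
principal_rate = 120.0  # $/hour, principal engineer (assumed)
review_hours = 0.75     # principal time reviewing/fixing per PR (assumed)

human_cost = offshore_rate * offshore_hours + principal_rate * review_hours

print(f"human pipeline per PR: ${human_cost:.2f}")        # $140.00
print(f"LLM today per PR:      ${llm_cost_today:.2f}")    # $20.00
print(f"LLM at 10x per PR:     ${llm_cost_repriced:.2f}") # $200.00
```

Under these made-up numbers the LLM wins comfortably today and loses after a 10x repricing, which is why the NPV point matters: value extracted at today's rates is already banked even if pricing spikes later.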
> At work, all that matters is that value is delivered to the business. Code needs to be maintainable so that new requirements can be met. Code follows design patterns, when appropriate, because they are known solutions to common problems, and thus are easy to talk about with others. Code has type systems and static analysis so that programmers make fewer mistakes.
This is a narrow view of software engineering. Thinking that your role is "code that works" is hardly better than thinking you're a "(human) resource that produces code". Your job is to provide value. You do that by building knowledge, not only of the system you're developing but of the problem space you're exploring, the customers you're serving, the innovations you can do that your competitors can't.
It's like saying that a soccer player's purpose is "to kick a ball" and therefore a machine that launches balls faster and further than any human will replace all soccer players, and soon all professional teams will be made up of robots.
I think your view is sentimental. For businesses the code usually IS the value, and devs ARE human resources that produce code. It sounds cynical, but it's basically how most orgs operate.

From the company's POV, employees function as cogs in a larger system whose purpose is to generate value, given that businesses are structured to optimize outcomes, i.e. profit. If tech appears that can produce the same output more cheaply or efficiently, companies will, as we've already seen, explore replacing people with it. Just look at corporate posture around LLMs.

I do get the point you're making about knowledge, domain understanding, and solving real problems, because those things clearly matter in practice. But from the company's POV they matter only because they help produce better code/systems, which are still the concrete artifact that embodies the business logic and operations: a symbolic model of the business itself, encoded in software.

So the framing of devs as human resources that produce code, and code as the primary value, correctly describes how many businesses see the relationship. And I don't really see the equivalence between SWE-ing in a business context and sports.
> From the company's POV, employees function as cogs in a larger system whose purpose is to generate value, given that businesses are structured to optimize outcomes, i.e. profit. If tech appears that can produce the same output more cheaply or efficiently, companies will, as we've already seen, explore replacing people with it.
Businesses wish this were the case, and many will even say it or start to believe it. But it doesn't bear out in practice.
Think about it this way: engineers are expensive, so a company is going to want as few of them as possible doing as much work as possible. Long before LLMs came along, there were many rounds of "replace expensive engineers" fads.
Visual programming was going to destroy the industry: any idiot could drag and drop a few boxes and put together software. Turns out that didn't work out, and now visual programming is all but dead. Then we had consultants and software consultancies. Why keep engineers on staff and deal with benefits and HR functions when you can hire consultants for just long enough to get the job done and end their contracts? Then we had offshoring. Why hire expensive developers in markets like California when you can hire far cheaper engineers abroad, in countries with lower wages and laxer employment law? (It's not a quality thing either; many of these engineers are unquestionably excellent.)
Or, think about what happens when software companies get acquired. It's almost unheard of for the acquiring company to lay off all of the engineering staff from the acquired company right away; if anything, it's the opposite, with vesting incentives to convince engineers to stay.
If all that mattered was the code and the systems, and people were cogs that produced code that businesses wanted to optimise, then none of these actions make sense. You'd see companies offshore everything and hire whichever consultancy does "good enough" work as cheaply as possible. You'd see engineers from acquisitions laid off immediately and replaced with cheaper staff as fast as possible.
There are businesses that operate like this; it happens all the time. But the most successful and profitable tech companies in the world don't. Why?
>If all that mattered was the code and the systems, and people were cogs that produced code that businesses wanted to optimise, then none of these actions make sense.
No, no... Of course code isn't all that matters. My framing was about how organizations model, economically, the work SWEs do.
>Visual programming was going to destroy the industry: any idiot could drag and drop a few boxes and put together software. Turns out that didn't work out, and now visual programming is all but dead. Then we had consultants and software consultancies. Why keep engineers on staff and deal with benefits and HR functions when you can hire consultants for just long enough to get the job done and end their contracts? Then we had offshoring. Why hire expensive developers in markets like California when you can hire far cheaper engineers abroad, in countries with lower wages and laxer employment law? (It's not a quality thing either; many of these engineers are unquestionably excellent.)
It seems like we're agreeing along the same lines. With this argument, you're admitting that businesses do see SWEs as cogs in a wheel and seasonally try to replace them... The seasonality of 'make the engineer replaceable' fads really does point to businesses trying to simplify what devs actually do, since most of what they measure is working code output, because it's a tangible artifact (this is what the OP meant by being a working code producer at work). Knowledge, judgment, architectural intuition, and domain understanding are harder to quantify, so they disappear from the model even though they ARE the real constraint. So for the record, I do agree with you that code isn't everything, but I maintain that SWEs are modelled on the working code they produce, even in the more successful companies that invest in domain knowledge and long-term system understanding.
Metrics, performance reviews, sprint velocity, delivery timelines: all orbit around observable artifacts, because those are what management systems can actually track objectively and equitably. It's a handy abstraction, just like looking only at the inputs/outputs of a logic gate instead of at the implementation and wiring. Of course a NOT gate might get upset at being called a 'bit flipper', and it's not all that physically exists, but from our POV it doesn't exactly matter. This applies to human labor too, even if it's a leaky abstraction.
> you're admitting that businesses do see SWEs as cogs in a wheel and seasonally try to replace them...
Not quite. I agree that companies will try to do this, but every company that has tried to treat engineering staff as replaceable units of person-hours has failed.
> Metrics, performance reviews, sprint velocity, delivery timelines: all orbit around observable artifacts, because those are what management systems can actually track objectively and equitably. It's a handy abstraction, just like looking only at the inputs/outputs of a logic gate instead of at the implementation and wiring.
Yes, and these metrics are, usually, worthless.
It's not that companies and managers will not try to replace engineers with AI. I'm sure they will. I'm sure many will be laid off because "AI does it cheaper now".
My point is that companies that have gone down this route in the past have failed, and AI is no different. Companies that lean strongly into AI as a workforce replacement will fail too.
lol but you have to first 'view' something as replaceable before you try to replace it, no? So companies DO see SWEs as cogs and try, but fail, to actually make them replaceable, yes?
It's not even as simple as "views as replaceable". It's pure economics. It's someone looking at a spreadsheet going "We spend a lot of money on SWE salaries; our financial results would look better if we fired some of them. Is there a cheaper option?"
From that perspective, yes, some management view SWEs as replaceable. My argument is that all attempts to actually implement that have failed to date, and the most financially successful companies are staffed by upper management who know that removing much of the SWE staff would doom the company in the medium term.
It's a move of either desperation ("we'll go bankrupt if we don't do this"), or short-sightedness ("if I cut 40% of headcount, our P&L will look better, which will mean better quarterly results, which is likely to boost the share price, which gives me a bigger performance bonus. Who cares what happens after that."), or a lack of experience managing software companies and seeing this play out before.
AI, even if it lives up to the hype, is no different.
Listen, if you truly want help, you've made the first step by realising what's wrong. But you won't get help here.
This community is obsessively pro-AI. Asking here is the equivalent of asking the guy who has sat at the slot machine next to you for the past three hours if he thinks you have a gambling problem. Of course he's going to say "no" or try to justify it, to do otherwise would be to admit to himself that he has a problem.
I don't have advice for you, other than to look up what gambling, drug, or alcohol addicts do. The path to recovery for all addiction is long and painful, but it can be done. Good luck.
> AI is actually better getting those built as long as you clean it up afterwards
I've never seen a quick PoC get cleaned up. Not once.
I'm sure it happens sometimes, but it's very rare in the industry. The reality is that a PoC usually becomes "good enough" and gets moved into production with only the most perfunctory of cleanup.
One trick for avoiding this is to use artifacts in the PoC that no self-respecting developer would ever allow in production. I use HTML tables in PoCs because front-end devs hate them, with old-school properties like cellpadding that I know will get replaced.
I also name everything DEMO__ so at least they'll have to go through the exercise of renaming it. Although I've had cases where they don't even do that lol. But at least then you know who's totally worthless.
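For what it's worth, a quick sketch of that trick might look like this. The DEMO__ prefix and the cellpadding property come from the comments above; the function and its structure are hypothetical:

```python
# Minimal sketch: render PoC output as a deliberately old-school HTML table
# that no front-end dev would ship as-is. Names here are made up.

def demo__render_table(rows: list[dict]) -> str:
    """Render rows as an ugly, obviously-temporary HTML table."""
    if not rows:
        return "<p>DEMO__ no data</p>"
    headers = list(rows[0].keys())
    head = "".join(f"<th>{h}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{row[h]}</td>" for h in headers) + "</tr>"
        for row in rows
    )
    # border/cellpadding scream "temporary" and are easy to grep for later.
    return f'<table border="1" cellpadding="4"><tr>{head}</tr>{body}</table>'

print(demo__render_table([{"id": 1, "status": "DEMO__ ok"}]))
```

The nice property is that both markers (the DEMO__ names and the deprecated table attributes) are trivially greppable, so anything that sneaks into production unrenamed can be found in seconds.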
I've been in this game a long time and I've seen a lot, but this AI hype cycle is exhausting. Like no technology before it I've watched extremely smart and capable engineers fall into AI like it's a cult. I've had colleagues and friends I've known for years drop head first into this shit.
At first I was interested in the tech, and I deep-dived into it. I understood as much as I could. I understand how an LLM works and what it can and can't do. So I realised pretty quickly that their use is limited. I figured it would blow over in a few years, the real use cases would be weeded out, and we'd all move on to the next thing like normal.
What I didn't account for is how addictive this technology is. The moment something "feels" like a person it's ascribed magical qualities, and people fall for it. Anyone can, doesn't matter how smart you are.
For the past six months I've felt nothing beyond a deep melancholic sadness. Not that my industry is changing, it isn't. Not really. These models will not replace people, and anyone who thinks they can is either trying to sell you something or is delusional. The readjustment and the end of the hype cycle will come eventually. But, I fear many people will never be able to let it go. I'm saddened that we're going to lose a generation of brilliant people to fiddling with token predictors, and many of them will never recover from it.
AI will set the industry back twenty years. Not because we will be replaced, but because so many people will be dragged into psychosis and addiction, or will waste decades chasing a future built on a lie.
And there's nothing any of us can do about it now.
No. Software is being centralized. If the snake oil the AI companies are selling about the coming agentic age were true, then the end result is not "anyone can produce software", it's "anyone stupid enough to rent the ability to run their business from an AI vendor can produce software".