>The current market doesn’t value those skills particularly highly, but instead prioritizes a different set of skills: working in the details, pushing pace, and navigating the technology transition to foundational models / LLMs.
This depends on the assumption that technology must "transition" to "foundational models / LLMs". The author doesn't seem to interrogate this assumption. In fact, most of the career malaise I've seen in my work is based on the assumption that, for one reason or another, technologists "must transition" to this new world of LLMs. I wish people would start by interrogating this bizarre backwards assumption (i.e., damn the end product! Damn the users! It must contain AI!) before framing career discussions around it.
However,
>decision-makers can remain irrational longer than you can remain solvent
is unfortunately painfully true.
It’s important to understand how AI will affect your field and recalibrate your position or contribution accordingly
It is a big enough change for this to be a valid question for anyone in the world today
Leaving what you’re doing and going into “AI” will likely set you up for a crypto-level disaster
Vibe coding is a thing, but vibe business-building or vibe job-hunting isn’t! So beware of the hype, and know that in the end money is made by serving people. It will be equally hard with vibe coding, because the bar is higher
AI will certainly create new opportunities, but the sentiment here, I guess, is: follow the opportunity, not the AI
>It’s important to understand how AI will affect your field and recalibrate your position or contribution accordingly
In the case of my industry (middling cybersecurity) we're seeing the following "advances"
- When you ask someone a question, they vomit your question into co-pilot, paste the result, and presume that they have helped somehow.
- All meetings now have not-useful meeting notes and no one reads these.
- People are considering implementing security co-pilot, which will introduce useful advances such as spending much more time building promptbooks so co-pilot can understand our logs.
- A lot more people think they're engineers, and vomit out scripts which do things the "authors" do not anticipate.
>- When you ask someone a question, they vomit your question into co-pilot, paste the result, and presume that they have helped somehow.
We have been dealing with this at my job also. It's really concerning how this is becoming normalized and how often we've had to deal with it. Somehow there are people that have "Engineer" in their title that think this is acceptable workplace behavior and work product for a professional making $XXX,XXX/year.
We had a person join our team recently who doesn't know our stack at all (which is fine, we were happy to teach them). When another engineer reviewed their pull request and asked a question, they pasted the question into Copilot and responded to the pull request with the answer (which was wrong!), even going so far as to say "Copilot thinks it's this: ...". I almost lost it. Your job is to learn, understand, and apply that knowledge, not paste incorrect model responses back and forth between web forms!
It's baffling and enraging. Are people _trying_ to demonstrate to management and their teammates that they're actually worthless? Are our expectations as a profession really this low, that we don't expect people to understand the code that they push?
Our senior leaders have also been completely captured by this crap. Recently our CTO (public company in the US you've heard of) announced in chat that engineers with an aversion to relying on LLMs have an attitude problem that is incompatible with our company direction. I was blown away.
Only in the short term. In the medium to long term, false assumptions will kill a company. As an employee, you would be better off recognizing it before the crunch hits.
Nah, we have enough captured business that I doubt it’d make a difference. It’s also not actively terrible for the customer, it just doesn’t bring anything to the table for the use cases we’ve used it for.
Then again, maybe it’s good to give people some experience with it even if there’s no real reason to use it right at this moment.
Think about what you're asking when you tell these people to interrogate the assumption that LLM based AIs are going to be the dominant technology going forward. Hundreds of billions, the growth of the technology industry, the entire US stock market, and the global economy has been wagered on this technology. Imagine the turmoil when those in power realize the reality of what they're betting the farm on.
The next time I am angrily typing to claude 3.7 in all caps because he overengineered a bunch of code I didn't even ask him to write in the first place, I'll be sure to let him know his continued failures are risking the entire world economy.
I think SWEs have a serious blind spot here. I use the (rough) analogy of bowling to help illustrate this.
People need to knock over pins in the bowling lane. SWEs are the pro bowlers who can (usually) throw pretty clean shots to knock over pins. Now bumpers have been invented (LLMs), and regular folks who have only the faintest idea of how to roll a ball are knocking over pins. To the pros, these bumpers are all manner of bad and useless. To the laymen, they are an absolute revolution.
I can tell you, with a straight face, that my (non-tech) company has already forgone hiring a pro bowler in at least four instances now because of these bumpers. Just last week we skipped on a $1k/mo CAD SaaS because Claude was able to build the needed narrow-scope tooling in 10 minutes.
I'm sure a pro could come in and make that python program 3x as fast and use 60% less memory. But the fact of the matter is that we paid Anthropic $20, and spent 10 minutes to get a working niche manufacturing file interpreter/editor/converter.
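For a sense of scale, the kind of "narrow-scope tooling" being described is often little more than a small parser. Here is a minimal sketch in that spirit; the "PTS" point-list format, the field layout, and the function name are all invented for illustration, not details of the actual tool:

```python
# Hypothetical sketch of a niche manufacturing file converter, the kind of
# tool described above. The "PTS" input format is invented for illustration.

def convert_pts_to_csv(pts_text: str) -> str:
    """Convert a fictional 'PTS' point-list format to CSV.

    Input lines look like:  P <id> X<val> Y<val> Z<val>
    Lines starting with ';' are comments and are skipped.
    """
    rows = ["id,x,y,z"]
    for line in pts_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):
            continue  # skip blanks and comment lines
        parts = line.split()  # ["P", "<id>", "X<val>", "Y<val>", "Z<val>"]
        point_id = parts[1]
        x, y, z = (p[1:] for p in parts[2:5])  # drop the leading axis letter
        rows.append(f"{point_id},{x},{y},{z}")
    return "\n".join(rows)

print(convert_pts_to_csv("; header\nP 1 X1.5 Y2.0 Z0.0\nP 2 X3.0 Y4.5 Z1.0"))
```

Trivial for a professional, but exactly the sort of glue code a non-programmer can now get working in minutes.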
LLMs are finally bridging the language barrier between computers and humans. The tech already exists to make this even more widespread; it's just a matter of time before someone creates a tech-illiterate IDE where users can paste AI-generated code in and functioning programs come out the other side. No need to ever even see a terminal or command line. I wouldn't be surprised if this is already in the works.
"Hey Google, create an app that allows me to take a picture of a house plant, and then allows me to verbally make entries into a diary about that plant" Sure thing! Give me 3 minutes and the app will be on your homescreen and shareable .apk in your documents folder! I'll also cancel the $9.99/mo app that does the same thing for you. (ok probably not this part but you get the idea.)
You're just describing programming without realizing that having a stochastic and imprecise programming language is a leaky abstraction.
The problem isn't that a "pro bowler" could come in and do something irrelevant for more money. Professional programmers understand that you shouldn't prematurely optimize things like memory or performance in contexts where that doesn't matter. You're ignoring the most important metric which is correctness. When we write a program, we make it correct then we can optimize it if that's actually important.
What's going to happen when you have your non-expert write a program using these tools and it inevitably gets something wrong? Are you prepared to build business process on top of a program whose author can't tell you how it works which might contain bugs? Do you even realize that this is just a recipe for producing mountains and mountains of technical debt?
You seem to believe this is going to put programmers out of a job but you're just explaining why they'll need even more of us in the future when these piles of garbage inevitably cost the company money.
You can be as snarky as you want but the reality is we're years deep into a market cycle that has seen a tremendous amount of capex with very little visible return.
How much more productive do you think Claude makes you compared to Google or Stack Overflow? 15%? 50%? 200%? Do you think that's enough to satisfy the market, or are we all trading on unrealistic expectations? Do you think shareholders are going to like losing billions a quarter so Anthropic can run a service that helps you write web dev projects marginally faster? Do you even understand the amount of value that's tied up in these questions having a good answer right now?
You don't stop the car; you accelerate toward the wall. That has been the strategy for the last decade or so. In other words, you invent a new AI-BLOCK-WEB-4.X thing and move the bubble one stage further.
I think bubbles form and pop all the time. I would add (in the US) home price bubble, biotech bubble (of mid 2010s), cybersecurity bubble and more.
Pops are less noticeable when a lot of money is sloshing around, since deflating one bubble immediately starts inflating something else. Worker bees can switch to the next thing in the same "building the next great thing, work is plentiful, money no object" environment.
So the bubbliness, IMO, is a function of the macroeconomic state, specifically the amount of money in the economy. Things get sober (and very ugly) when the money printing cycle ends, as it eventually must to avoid sliding into hyperinflation. My 2c.
I think 2008 set the stage: the government will bail out reckless economic activity at the top at the expense of the taxpayer. Actions no longer have consequences, and you'd better be big enough to hold the economy hostage to your scheme.
I'm entirely in agreement with you, I just have a dark sense of humor.
I do actually use Claude quite a bit recently, and I suspect (completely anecdotal, so take it with a boatload of salt) it speeds up my development time by about 25% on average. But it's a very lumpy sort of speed-up that is sometimes a slowdown, depending on how wacky the LLM's answers are. And I find Claude 3.7 to be worse than Claude 3.5 in many ways, which makes me confused about how hyped up it is, and further disillusioned about the "common sense" idea people around these parts have that the technology is just going to keep improving significantly year over year.
I use it entirely through the web interface and using the lowest price basic monthly subscription fee. I am probably a cost center for Anthropic rather than a profit center, but I don't really see that as my problem to worry about. I'll enjoy the thing while it lasts and then not cry too hard when the house of cards collapses.
Seeing more and more of this now. Even in "these parts", anything short of "AGI IS HERE" would, just a few months ago, have you labelled a "Luddite", a "skeptic", and "left behind".
When is the general public going to start asking questions? It's really not a game: people's 401(k)s have been propped up by the "Magnificent Seven" (disproportionately by one company) for some time now.
What happens "if" these proclamations fail to manifest? How is this all supposed to work out?
I don't think companies like Meta, Alphabet, or even Amazon are extremely overvalued. Their growth in revenues and earnings is still very solid, so even if AI is a no-show, their earnings should continue to do fine (I'd even add Microsoft to the list).
Sure, the stock price might slow down significantly, but it can easily add 10% yearly for the foreseeable future. It's much less than what we're used to, but it's solid growth that has not much to do with AI.
And I totally agree people should forget about the S&P 500 returns we've seen in the last 15 years; going forward it will probably be less (though I have no crystal ball, regression to the mean just seems likely).
Actually, all stocks are dead, and the tech-heavy Nasdaq has entered correction territory this year. It will be a real while before stocks return to their pre-correction values. The AI hype drivers have lost massive value, hence the desperation will get immense; expect the "AI is ready to replace expensive SWEs/lawyers/entertainers/creatives" narrative to start any moment now! Of course, everything is smoke and mirrors, so those offerings will be charged at such a premium that the naysayers won't be able to afford them, while irrational leaders will buy them, cite immense success to maximize their C-level bonuses, and retire early before the tower comes falling down.
Correction is one thing but saying "stocks are dead" is hyperbole.
Just my poor-ass startup is paying Google Cloud around 1 million USD a year to keep our operations going. We're quite a cheap company, but we're totally dependent on them.
And Cloud is a minor part of Google's earnings; they have YouTube, Search, Android, etc. This isn't a bubble; these are real earnings that are probably not going away any time soon. Does that mean Google is bulletproof? No. They may experience a big correction at some point, but that has always been the case for all companies.
I can make the same case for Meta and Microsoft, and to a lesser extent Amazon. I'll leave Tesla aside; it's indeed too richly valued.
I have no analogy for this except the railroads of the Gilded Age. Did railroads become a pretty big deal? Yeah. They were also a giant vortex that slurped up endless investment, far more than the real demand could possibly justify. And it ends, well, we know how it ends.
Fiber optic and data center build out in the nineties is similar. Overinvestment led to a bust for a period, but the infrastructure was useful and provided the foundation for the next wave of Internet growth. LLMs could be similar.
We've become so accustomed to the rip-roaring growth that came from widespread Internet adoption that, now that it has died down, we're desperate to find the next big boom. VR, crypto, blockchain, generative AI. And each time, like degenerate gamblers, we're feeling it: this must be it, the next Big 'Un, the bet that redeems all the bets that went wrong, bigger, riskier, bigger, riskier.
But it just won't be, nothing in our lifetime will ever come close to what the Internet boom was. The window for becoming a Jeff Bezos or Mark Zuckerberg as easily as they did is closed now, and you just need to live with it. The title of this chapter till the end of our days will remain "After the Internet Boom" and it will chronicle this pathetic desperation.
You're right of course, but nobody ever gave me a free VR headset or a few thousand dollars of Monero. OpenAI and Google will both give you, for free, access to an LLM whenever you want. I can do a Google search and help with LLM adoption right now; it will be at the top of the page.
That's an overblown claim. AI companies failing won't mean technology stops advancing, nor that companies betting against AI, or operating independently of it, would have to retreat.
Exactly my thoughts reading this article.
Luckily, if in the next few years we have thousands of projects written using 'AI', there will be a need for someone to debug and fix all of that broken software.
Or maybe not, maybe it will be cheaper to just slap another ten k8s pods to mitigate poor performance...
I believe we passed the "bad software written, bad software deployed, business as usual" point long ago, when AWS/GCP/Azure became an important requirement in job descriptions.
A bad piece of software can be decently hidden by burning more money on cloud bills, which gives leadership the inflated sense that their products are doing global-scale, ground-breaking work.
With AI, I would not be surprised if quality actually improves and cost comes down (or stays the same). Of course, more bad software will be written, now by the many aspiring entrepreneurs realizing their dream idea of a Spotify clone, sacrificing their life savings on complex cloud bills, while the ever more profitable rise in cloud-service revenue is cited as a benefit of AI, alongside some more layoffs to jack up stock prices.
The real revelations will come (they always do; nature and the economy work in cycles) when the damage from excessive layoffs comes due, and everyone will scramble to rehire people in a few years. Unlike Ford's innovation of replacing horse carts, software is prevalent in every aspect of our lives, just like doctors, lawyers, and the civil service; hence we need to honestly play the game until the wave turns, and then cash in by making a 200x killing, just as the businesses are cashing in right now.
> I believe we passed the "bad software written, bad software deployed, business as usual" point long ago, when AWS/GCP/Azure became an important requirement in job descriptions.
> A bad piece of software can be decently hidden by burning more money on cloud bills, which gives leadership the inflated sense that their products are doing global-scale, ground-breaking work.
Doesn't this apply to almost all software out there nowadays?
Bloated enterprise frameworks (lots of reflection and dynamic class loading on the back end, wasteful memory usage; large bundles and very complicated SPAs on the front end); sub-optimal DB querying; bad architectures; inefficient desktop and mobile apps built on web technologies for the sake of faster iteration speed; OS package managers that don't make it easy to halt updates and don't integrate well with the rest of the system (e.g. snap packages); the messy situation with operating systems, where you get things like ads in the start menu or multiple conflicting UI styles within it (Windows); game engines that are hard to use well, to the point where people scoff just hearing UE5; and so on.
Essentially just Wirth's law, taken to the maximum of companies and individuals optimizing for shipping quickly and things that catch attention, instead of having good engineering underneath it all: https://en.wikipedia.org/wiki/Wirth%27s_law
Not the end of the world, but definitely a lot of churn and I don't see things improving anytime soon. If anything, I fear that our craft will be cheapened a lot due to prevalence of LLMs and possible over-saturation of the field. I do use them as any other tool when it makes sense to do so... but so does everyone else.
> With AI, I would not be surprised if quality actually improves and cost comes down (or stays the same). Of course, more bad software will be written, now by the many aspiring entrepreneurs realizing their dream idea of a Spotify clone, sacrificing their life savings on complex cloud bills, while the ever more profitable rise in cloud-service revenue is cited as a benefit of AI, alongside some more layoffs to jack up stock prices.
At this point we're all speculating, really. But from a logical point of view, LLMs are trained on code written by humans. As more and more code is written by LLMs instead, models will be trained on content written by other models. It will be very hard to distinguish which code on GitHub was written by a human and which by a model (unless the quality differs substantially). If that's the case, I'd say the quality of the code they write will drop. Or the quality of the models will drop. Or model-written code will keep using pre-LLM patterns, because model-written code won't be part of the training data.
It may be that LLM-written code will work but be hardly comprehensible to humans.
For now, models don't have the negative feedback loop that humans have ('oh, the code doesn't compile', or 'the code compiles but throws an exception', or 'the code compiles and works but performs poorly').
Anyway, I am sure there will be an impact on the whole industry, but I doubt models will become the primary source of source code. A helpful tool for sure, but not a drop-in replacement for developers.
No one knows what this "current market" thing really is; people are making wild guesses while looking at the hands of other people who are also making wild guesses.
I do think some transitions are inevitable, not because AI must be used, but because once enough companies figure out where it genuinely improves efficiency, the competitive pressure to follow suit becomes real.
The same holds true for software developers, imo. If you can't figure out how to use LLMs to improve efficiency, you're likely soon a dinosaur of the past (unless you work on something __very__ specific where LLMs don't help much).
I can barely think of any real application where they would help. I have a weekend project that's already too much context: I asked Claude to change some Tailwind styles for me and it just shat up the whole file. And that was a toy!
Even if allowed, how is Claude going to help me at work, where a single file in a large project, one of many, is tens of thousands of lines long?
The best use I've gotten out of LLMs is as an autocomplete. I use Cursor at work, and it's pretty good at consistently calculating the next 10-20 characters I want to type out. Anything longer, save for some situations where the changes I'm making are super repetitive, the quality dives off a cliff.
I've yet to coax out good/working code of significant complexity from these models without putting an amount of effort into prompting that would be greater than just working it through myself without any LLM assistance.
The use I do get out of Cursor can save a lot of time for me, so I do think it's a productivity boost as is.
This guy probably doesn't do any actual work, just reads people like Dario Amodei who are out there claiming 90% of coding will be done by LLMs in the next 3-6 months.
I just read this for the first time and was totally underwhelmed. What is the takeaway? "Derisk, Enable, Finish"? This is not insightful or even interesting.
> I’m a software engineering leader and writer, currently serving as Carta’s CTO. I’ve worked at Calm, Stripe, Uber, Digg, a few other places, and cofounded a defunct iOS gaming startup
Oh look, another out-of-touch 'leader'. So sick of these people.
His career advice should be how to get your work done while appeasing "leaders" like him at work.
My company now has a mandate that 'all coding work must be done by AI', and only manually if that's not possible. They bought licenses to all the AI coding tools.
Which would've been great if these things actually worked. I've never felt more like a stupid cog in my whole career than I do now.