I’ll bite - I’ve been a dev at a new company for about a year and a half. I had mostly done front end work before this, so my SQL knowledge was almost nonexistent.
I’m now working in the backend, and SQL is a major requirement. Writing what I would call “normal” queries. I’ve been reaching for AI to handle this, pretty much the whole time - because it’s faster than I am.
I am picking up tidbits along the way, so I am learning - but there’s a huge caveat: I notice I’m learning extremely slowly. I can now write a query of “simple” complexity by hand with no assistance, and grabbing small chunks of data is getting easier for me.
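(For a concrete sense of what I mean by “simple” complexity: a single-table filter/group/aggregate like the one below is about where I’m at. Table and column names here are made up for illustration, run against a throwaway in-memory SQLite DB.)

```python
import sqlite3

# Throwaway in-memory DB with a made-up "orders" table,
# just to illustrate the kind of query I mean.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "alice", 30.0), (2, "bob", 45.0), (3, "alice", 25.0)],
)

# A "simple"-complexity query: group and aggregate over one table.
rows = conn.execute(
    """
    SELECT customer, SUM(total) AS spent
    FROM orders
    GROUP BY customer
    ORDER BY spent DESC
    """
).fetchall()
print(rows)  # [('alice', 55.0), ('bob', 45.0)]
```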
I am “reading, debugging, and maintaining” the queries, but LLMs bring the effort on that task down to pretty much zero.
I guarantee if I spent even 1 week just taking an actual SQL class and just… doing the learning, I would be MUCH further along, and wouldn’t need the AI at all. It’s now my “query tool”. Yeah, it’s faster than I am, but I’m reliant on it at this point. I will SLOWLY improve, but I’ll still continue to just use AI for it.
All that to say, I don’t know where the future goes - our company doesn’t have time to slow down for me to learn SQL, and the tool does a fine job - it’s been 1.5 years and the world hasn’t ended, I can READ queries rather quickly - but writing them is outsourced to the model.
In the past, if a query was written on stack overflow, I would have to modify it (sometimes significantly) to achieve my goal, so maybe the learning was “baked in” to the translation process.
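(A sketch of what that “translation” step looked like, with a made-up schema: a Stack Overflow answer would give you the general shape - say, finding duplicated values - and you still had to rework it to pull the rows you actually needed.)

```python
import sqlite3

# Made-up table standing in for whatever schema you actually had.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "a@x.com"), (2, "b@x.com"), (3, "a@x.com")],
)

# The kind of snippet a Stack Overflow answer gives you:
# find which values in a column are duplicated.
so_query = """
    SELECT email, COUNT(*) AS n
    FROM users
    GROUP BY email
    HAVING COUNT(*) > 1
"""

# The "translation" work you did yourself: adapt it to return
# the actual duplicate rows, not just the duplicated values.
adapted = """
    SELECT id, email FROM users
    WHERE email IN (
        SELECT email FROM users GROUP BY email HAVING COUNT(*) > 1
    )
    ORDER BY id
"""
dupes = conn.execute(adapted).fetchall()
print(dupes)  # [(1, 'a@x.com'), (3, 'a@x.com')]
```

That adaptation step is exactly the friction that forced a bit of learning, and exactly what the LLM now skips for me.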
Now, the LLM gives me exactly what I need, no extra “reinforcement” work done on my end.
I do think these tools can be used for learning, but that effort needs to be dedicated. In many cases I’m sure other juniors are in a similar position. I have a higher output, but I’m not quickly increasing my understanding. There’s no incentive for me to slow down, and my manager would scoff at the idea, really. It’s a tough spot to be in.
I can corroborate this. I coached mechanical engineers who had to learn some programming to conduct research by analyzing factory machine data I provided them (them being the domain experts). The ones who learned Python and SQL using AI had hardly learned anything after half a year; the ones I pointed at the API docs and a beginner tutorial weren’t just much further along, they were also on a faster trajectory for the future. I think AI is a beginner trap because it allows them to throw shit at the wall and see what sticks. It is much more useful in the hands of an expert in the long term.
I think this has been shown for the vast majority with homework. You just don't learn much by copying homework from somewhere else. Actual effort is needed for the learning process. Unless you are some weird, most likely rare, genius...
Also makes me think of all the incidental learning that can go on - like noticing other things while looking at API docs. Might not be useful now, but could very well be later.
Maybe I’m missing the point but we have some of these implemented without the tool - the only one that needs an API key is the log scraping. It’s been surprisingly cheap and if we want to swap models we can.
I suppose at that point I’m wondering if it would have just been faster for… you, (I’m assuming) the developer to make that change and deploy it? Is the AI really faster on small changes like that, if you understand the platform/code/CI/CD enough???
Maybe for a non-dev it would be nice to submit a ticket and have it auto-fixed by an agent. But in the devs case, it feels like it would be faster to just do it manually.
Interesting read. Though I’m not sure I loved the way the graphs were laid out. I didn’t do a deep dive on them, but some felt like they were displaying the data in a way that wasn’t rooted in the facts.
Either way, a whole lot of words and sentiments that could have been inferred. People use the new tools, we feel they make us more productive (but by how much?), and it’s scary because C-suites are bearing down with FOMO.
I would have LOVED if they got some form of stats in here as to how much performance people are getting out of these. I’ve heard 100X, actually. I’ve heard 5X frequently. Some people think it slows us down. Nobody really knows, and I guess it depends on how you’re using it. I personally have said to my CEO that I feel 30-40% faster, though I hate to have numbers associated with it… these tools have been around for years now. 5X faster than… what? It’s just expected to learn the tools and use them where they help. I would love a consensus on actual, regular folk and how much more productive it makes them. I’m doubtful it’s north of 10X. 4-5X seems optimistic? Not sure.
At my company, it’s being essentially shoved down our throats - “be 5X faster, tomorrow, this guy on the AI podcast said this is possible!!” And if you aren’t using the tools to build some useless internal application, you’re looked at as a non-adopter.
It’ll be interesting to see where things go in the next year.
I’ve heard MANGA suggested as the acronym with Facebook’s new name. Though maybe it should be MAAAN, said in a tone of exasperation at tech company activities
I’m a newer full stack engineer, previously did mostly web dev. It’s been useful in the areas that I’m not super interested in. We’re working on a 700KLOC legacy monolithic CRUD app with 0 documentation; it’s essentially the Wild West. We’ve found it very difficult to apply AI in a meaningful way (not just code output, but reviews, documentation writing, automation). For a small team with lots to do on what is essentially a “keep the lights on” project, we’re in an interesting place, as it feels like the infrastructure / codebase isn’t set up to handle newer tools.
I use the code generation heavily in my day to day, though verification is a priority for me, as is gaining an understanding of the business logic + improving my skills as a developer. There’s a healthy balance between deploying 100% generated code and not using the tools at all.
It’s useful for research tasks, identifying areas I’ll be working in when developing a feature. However, this team has a gigantic backlog and there are TONS of things we are behind on, so it does feel like AI isn’t moving the needle for us, though it is helpful. I’d like to apply it in different areas, but my senior engineer is very anti-AI, so he doesn’t find the tools useful and is actively against using them. Like I said, there’s surely a balance…
I see us using / relying on them more in the future, due to pressure from above, along with the general usefulness of them.
How many pages of architecture / constraints did you write? I guess I’m curious what type of text input renders 200K lines of code output. It must be a similar level of tokens in just docs / prompting. Have you verified all of that? Was that AI generated?
Would be very interested to see whether it’s not just… regular LLM snowballing a paragraph into 12 pages of “technical design documents” and 10K lines of code. Not sure what kind of niche you’re in or what the business logic is, but it sounds to me like you’ve built a machine that… generates code you don’t need to look at??
There was a 200-word architecture doc that lasted about 3 weeks before it drifted, so it got deleted. I no longer keep architecture docs - tests and code are enough for the agent to answer questions when we have them.
Probably wrote 2000+ words of prompts per day to the agent, Monday to Friday, for like 9 months. Dozens to hundreds of prompts a day back and forth with anywhere from 1-7 concurrent agents at a time.
This is not something anyone would ever one-shot. There are thousands of commits. My commit log looks like a normal squash-merge-to-main-and-deploy workflow.
Speculating here, but I don't believe that the government would have the time or organization to do this. Widespread political unrest caused by job losses would be the first step. Almost as soon as there is some type of AI that can replace mass amounts of workers, people will be out on the streets - most people don't have 1-2 months of living expenses saved up. At that point, the government would realize that SHTF - but it's too late, people would be protesting / rioting in droves - doesn't matter how many drones you can produce, or whether or not you can psychologically manipulate people when all they want is... food.
I could be entirely wrong, but it feels like if AI were to get THAT good, the government would be affected just as much as the working class. We'd more likely see total societal collapse rather than the government maintaining power and manipulating / suppressing the people.
That is a lot of assumption right there. Starving masses can't logically or physically fight AI or a government for long - they'd become weak after weeks or months. At that point the government would be smaller and probably controlled by the AI owners.
If they don't have 1-2 months of living expenses saved, they die. They can't be a big threat even in the millions - they don't have the organizational capacity or anything that matches.