I'm confused about why there are so many managers/CEOs who mythologize AI.
Some of them use AI as an excuse for layoffs, but many of them do believe that AI (or ChatGPT, specifically) is some kind of magic that can make (1 employee + ChatGPT) equal to 3 employees.
I mean, they must have used ChatGPT themselves, right? How does ChatGPT lead them to that conclusion?
Non-technical management is often completely unable to understand technologies. "AI" has long since moved beyond naming a specific technology; it is a general buzzword for an arbitrary thing that solves whatever problem you throw at it.
I am absolutely convinced that none of these companies have done any benchmarking or trials.
> I mean, they must have used ChatGPT themselves, right? How does ChatGPT lead them to that conclusion?
Ask ChatGPT to generate a program for you. Imagine what the result would look like to you if you had never read a single line of code in your life. It is pretty obvious that the output of ChatGPT is indistinguishable from what your developers produce, so they obviously are superfluous.
You may think I am exaggerating, but there are many people in very large companies who think exactly like that. Often their ambitions are more modest, but that usually just exposes their lack of understanding even more: problems a mediocre software engineer could solve in a week suddenly become game-changing AI technologies.
Their investors are heavily invested in AI and are applying pressure / guidance to have their other companies use it to boost the value of their AI investments.
I keep seeing great coders losing their jobs and I'm wondering why you would fire someone who has been given superpowers (allegedly). AI is a force multiplier. If you have 1,000 workers, they should now be able to produce at the rate of 10,000. (OK, maybe more like 1,500.) If you have a clear vision from the top, you will be able to hit your goals ten times sooner.
Let's say company A is planning on releasing product X to the market in FY25, product Y in FY27, and product Z in FY30. Well, now you should be able to marshal your resources and release all three products on an expedited schedule.
Obviously this is reductive, but it seems like the best companies are going to use this new tool in their toolkit to dominate, and bad companies are going to get crushed. AI is not a panacea, just yet. But it sounds so confusing to me to hear "We invented jet engines which are superior to prop engines, so we're firing a bunch of our pilots."
Because they are selling to the market. AI is the hot new thing, so they sell it. It is all about the short term now: make the next few quarters sound hot and the line goes up.
What I mean is that they really believe it; it's not just marketing.
For example, I know a boss who added AI as a factor in performance reviews; one person was rated 'not AI-capable' for not using ChatGPT.
He also asked for 3x output from the teams and said, 'If you feel it's too hard to complete, go learn how to work with AI.'
Management calls out employees for not using LLMs because they believe that once LLM use becomes prevalent throughout the company, they'll finally see the productivity gains they are betting on. Only once those gains materialize can they reduce costs and headcount, so until then they chastise employees for not using "AI", convinced that the holdouts are the missing pieces keeping the productivity gains from arriving.
They are the "boss", however, so you most likely would not if you were working for them, so there's probably just nobody to point out the emperor's naked.
The clear answer is that a GPT can bullshit convincingly, and the nature of these managers' jobs involves a lot of convincing bullshitting. Since everyone assumes they are a representative model of others, they assume a GPT will perform as well at other people's core skills as it does at theirs.
Confirmation bias in my experience. “ChatGPT can write this email to investors for me!” cognitively balloons into “ChatGPT can replace my engineering team!” Quite possibly a dash of Dunning-Kruger Effect too.
Executives (and managers in general) are used to delegating tasks to subordinates. I think they just don't perceive any substantial difference between delegating a task to a person and delegating the same task to an LLM.
Most arguments that technical people try to put forth against LLMs fail here because they apply just as well to humans. "LLMs sometimes make weird errors? Well, so do human employees!"