I love this. The more people who say "I don't get it" or "it's a stochastic parrot", the more time I get to build products rapidly without the competition there would be if everyone were using AI effectively. Effectively is the key.
It's cliché at this point to say "you're using it wrong", but damn... it really is a thing. It's kind of like how some people can find something online in one Google query while others somehow manage to phrase things just wrong enough that they struggle. It really is two worlds. I can have AI pump out 100k tokens with a nearly 0% error rate, while my friends with equally strong engineering skills struggle to get AI to edit two classes in their codebase.
There are a lot of critical skills and a lot of fluff out there. I think the fluff confuses things further. The variety of models and model versions confuses things EVEN MORE! When someone says "I tried LLMs and they failed at task xyz"... what version was it? How long was the session? How did they prompt it? Did they provide sufficient context around what they wanted done or answered? Did they have the LLM use tools where appropriate (web search, deep research)?
It's never a like-for-like comparison. Today's cutting-edge models are nothing like those from even six months ago.
Honestly, with models like Claude 3.7 Sonnet (thinking mode) and OpenAI o3-mini-high, I'm not sure how people fail so hard at prompting and getting quality answers. The models practically predict your thoughts.
Maybe that's the problem: poor specifications in (the prompt), paired with the expectation of magic out that conforms to their every unstated specification.
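To make the "specifications in" point concrete, here's a minimal sketch of the difference, assuming the OpenAI Python SDK; the model name and both prompts are illustrative, not anything specific from this thread:

```python
# Sketch: underspecified vs. well-specified prompting, assuming the
# OpenAI Python SDK. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Underspecified: the model has to guess scope, constraints, and output format.
vague_prompt = "Make my code faster."

# Specified: concrete goal, the relevant code as context, and explicit
# constraints on what may change and what to return.
specified_prompt = (
    "The Python function below is O(n*m) because `x in b` scans a list. "
    "Rewrite it to use a set for membership tests, keep the same signature "
    "and behavior, and return only the updated function.\n\n"
    "def common_items(a: list[int], b: list[int]) -> list[int]:\n"
    "    return [x for x in a if x in b]\n"
)

for prompt in (vague_prompt, specified_prompt):
    response = client.chat.completions.create(
        model="o3-mini",  # illustrative choice; any capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

The first prompt leaves everything to the model's imagination; the second pins down the goal, the context, and the acceptable output, which is usually the whole difference between the two worlds described above.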
I genuinely don't understand why some people are still pessimistic about LLMs.
Great points. I think much of the pessimism is based on fear of inadequacy. There's also the fact that these things raise truly base-level epistemological quandaries that question human perception and reality fundamentally. The average Joe doesn't want to think about how we don't know whether consciousness is a real thing, let alone determine whether the robot is.
We are going through a societal change. There will always be people who reject AI no matter the capabilities. I'm at the point where if ANYTHING tells me that it's conscious... I just have to believe it and act according to my own morals.