You can't write software code without the ability to think. You can't tactfully respond to emotional expression without the ability to think. If that is snake oil, then we are all walking, talking snake oil.
The way we do it is not the same way they do it. They literally only predict the next most probable token. The way they do it is amazing, and the fact that they can do it as well as they do is amazing, but human thinking is a lot more nuanced than just predicting.
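In toy form, that prediction step is something like the following sketch (the vocabulary and probabilities here are made up purely for illustration; a real model computes them from billions of learned weights):

    import random

    # Toy next-token prediction: the model assigns a probability to each
    # candidate token and one is sampled. These numbers are invented.
    next_token_probs = {"mat": 0.62, "floor": 0.21, "roof": 0.09, "moon": 0.08}

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Continue the prompt "The cat sat on the"
    choice = random.choices(tokens, weights=weights, k=1)[0]
    print("The cat sat on the", choice)

Repeat that step token by token and you get fluent text, but each step is just a weighted dice roll over a probability table.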
The fact that AI seems to be so reasoned is not because they are doing reasoning, but because there is a phenomenal amount of reasoning inherently embedded in their training data.
AI actually thinks in the same way that the figures on a movie screen actually move. It's a trick, and the difference may be pedantic, but it's very important in order to have a real discussion about the ramifications of it.
As far as I know, we don't know how we do it. We have very little clue how our higher-level behaviours emerge. So you can't claim we don't do it the same way.
Of course I can: humans learn faster from far less data and don't hallucinate to the same extent. What they do is very likely similar to a part of what we do, but they're missing critical components, and my feeling is that not all of them (empathy and creativity, for example) are even possible to replicate outside of a human experience.
You are extrapolating from a result to the implementation and making a judgment call that it's therefore not the same. That is not valid to do. You can come up with countless examples of the same underlying technical principle being used where the results are now dramatically better. Lithography, for example.
You could also look at a koala and make the argument that they function totally differently from us, since they can barely learn anything and are extremely stupid.
You can clearly see behavioural patterns in people and in their parents.
For example, the boy who brushes his teeth the same way his father does.
I'm really lost on what you think your brain is doing. Have you never thought through things but acted differently? Like procrastination? Spouting out something and thinking afterwards, "ah man, I should have just done x instead of y"?
If there are 8 blue beads and 2 red beads in a jar, and I ask the computer to draw a bead out of the jar and it's a blue one that it has drawn, did it really think about giving me the bead?
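To put the same question in code (a trivial sketch; the bead colours and counts are from the example above):

    import random

    # The jar: 8 blue beads, 2 red. The "draw" is nothing more than a
    # uniform random pick from the list; no deliberation is involved.
    jar = ["blue"] * 8 + ["red"] * 2
    bead = random.choice(jar)
    print("drew a", bead, "bead")  # comes up blue about 80% of the time

The draw is just sampling from a distribution; nobody would say the program pondered its choice.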
They’re not responding “tactfully”; you’re projecting emotion onto a bunch of words written coldly.
It’s like writing a program that has a number of fixed strings like “I feel sad” or “I’m depressed”, and when it sees those it outputs “I’m sorry to hear that. I’m here for you and love you”. The words may be comforting and come at the right time, but there’s no feeling or thought put into them.
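Spelled out, the program described here is only a few lines (a deliberately crude sketch; the trigger phrases and the canned reply are the ones from the example above):

    # A fixed-string responder: no model, no feeling, just string matching.
    CANNED_REPLIES = {
        "i feel sad": "I'm sorry to hear that. I'm here for you and love you.",
        "i'm depressed": "I'm sorry to hear that. I'm here for you and love you.",
    }

    def respond(message: str) -> str:
        for trigger, reply in CANNED_REPLIES.items():
            if trigger in message.lower():
                return reply
        return "Tell me more."

    print(respond("Honestly, I feel sad today."))
    # -> I'm sorry to hear that. I'm here for you and love you.

Nobody would claim this program empathises, yet at the right moment its output can still read as comforting.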
Humans can measure feelings; computers can't. Therefore I can say whether ChatGPT has enough feeling, but it can never do the inverse to me.
That feels simplistic, but we're dealing with fundamentally human concepts. I see absolutely no reason to work under the assumption that computer programs are somehow in the same domain as human thought, which is what a lot of people (you) are saying.
The goal should not be to demonstrate that ChatGPT and humans are different, because to me that is obvious and should be the starting point. Rather, we should do the inverse: show that ChatGPT is indistinguishable from a person, as measured by humans. And then, maybe, we can consider granting this computer program human rights, like the right to use copyrightable media in a transformative way.
Ah, but that is really hard to do. So the AI tech bros don't do it, and instead work in the opposite direction.