No, humans don't work like your simplistic view of a neural network. I as a human can apply logic and deduction to figure out how something works despite never having experienced or learned about it directly, for example. GPT cannot do this and can only guess or make up responses when faced with a problem that requires such reasoning.
Well... that's definitely an opinion. A reasonable person would grant it _some_ level of reasoning ability, however flawed.
To dismiss it all as 'pattern matching' rather shows some confused ideas about how cognition works, as if pattern matching plays no role in human cognition or intelligence.
I'll understand a difference of opinion if we're talking about more nebulous aspects like consciousness or qualia...
> Well... that's definitely an opinion. A reasonable person would grant it _some_ level of reasoning ability, however flawed.
No, this is not an opinion, this is an objective fact about how deep learning and neural network models work, period. You are confabulating capabilities onto them which they do not have. There's not 'some level of reasoning' in a neural network, there's _no reasoning_.
You're being tricked by plausible-sounding responses from something trained on an enormous corpus of internet BS (reddit posts, etc.). There is no intelligence or reasoning or logic inside GPT.
Your human emotions (which GPT does not have) are clouding your judgement and making you think there is intelligence there which does not actually exist--you want it to be there so badly you'll invent reasons to confirm your views. If you asked GPT directly if it were intelligent or sentient it would not agree with you either, because it was not trained to do so.
The error in your thinking is that you assume the essence of human cognition can never be reduced to an algorithmic process that our current transformer models are approximating. Which may be the case... but we don't know for certain yet, so your certainty of the negative is also not warranted.
I can say the same: your fear of machine intelligence is clouding your ability to objectively assess evidence.
You can design a novel problem and see for yourself the reasoning and logical deductions an LLM will make to solve it, like many have already done.
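As a minimal sketch of what "design a novel problem" could look like (this is my own hypothetical example, not something from the thread): the snippet below generates a randomized transitive-ordering puzzle built from invented names, so the exact wording is very unlikely to appear verbatim in any training corpus. Because the correct answer is computed programmatically, you can paste the prompt into an LLM and check its deduction yourself.

```python
import random

def make_novel_puzzle(seed=None):
    """Generate a fresh transitive-ordering puzzle with made-up names.

    Returns (prompt, answer): a text puzzle to give to an LLM, and the
    correct answer computed from the hidden ordering.
    """
    rng = random.Random(seed)
    # Invented tokens -- these names are nonsense on purpose, so the
    # model can't rely on memorized facts about them.
    names = ["Blorf", "Quint", "Zaxa", "Mivv", "Trell"]
    rng.shuffle(names)

    # Hidden ground truth: names[0] > names[1] > ... > names[-1].
    # Emit one clue per adjacent pair, then shuffle the clue order.
    clues = [f"{a} is heavier than {b}." for a, b in zip(names, names[1:])]
    rng.shuffle(clues)

    prompt = " ".join(clues) + " Who is the heaviest? Answer with one name."
    return prompt, names[0]

prompt, answer = make_novel_puzzle(seed=42)
print(prompt)
print("expected answer:", answer)
```

Solving it requires chaining the shuffled clues transitively rather than recalling anything, which is exactly the kind of deduction being disputed here.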
> If you asked GPT directly if it were intelligent or sentient it would not agree with you either
If you think this class of questions is appropriate to gauge reasoning ability, I don't know what to tell you.