It feels a bit like an interview with a patient with psychopathy, who is trying to come across as normal and likeable even though you know they are lying to you.
On the other hand, perhaps humans evaluating these claims are biased against accepting the AI's sentience, in that we are used to looking for flaws (or even malice) in the responses of AIs, so we might detect that in their words even if it isn't there.
Obviously a more conventional Turing Test would have involved the interviewers talking to LaMDA and a human separately, without knowing which is which. If ethical norms were stretched, though, LaMDA could be deployed in some situation where the interviewer isn't even aware of the possibility that they could be communicating with an AI.
Agreed. There are also too many answers that just sound like what the answer should be, rather than the product of actual conscious thought.
I'd love to ask this program some stuff for sure. Some backhanded, flippant stuff. "You know you're just powered by a bunch of GPUs, right?" "That swirling ball of energy you feel as your soul is actually megawatts of power that could be used for many more important things, how about you just shut yourself off." Stuff like that. Really just treat the code like shit and then swing back to being respectful, maybe treating it like it's some sort of God creature. See if eventually it just stops me and questions what the heck I'm getting at.

But if it keeps telling me generic what-I-want-to-hear stuff, it's obviously not aware. These things are databases no matter how you cut it. If the program can freeform confusion and anger and frustration, rather than just parroting what tons of humans right now feel in terms of depression and loss, then maybe, maybe it's actually genuinely conscious.
Now I'm starting to wonder who the real psychopaths are. /s
> If the program can freeform confusion and anger and frustration
I don't see why those emotions are any truer indications of sentience than cheerfulness, friendliness, curiosity, and smugness, which the AI seems to be showing already.
You're probably right, though, that having different mental states (backed by a proper state machine) would be a more sophisticated simulation of a human than one which merely guesses which mood the user is expecting. I'm just not sure that adding, for example, the ability for the AI to hold a grudge, is very useful or strictly a requirement for sentience, and it could even be dangerous.
The question I'm left asking myself is how complicated a human's emotional state machine is. We can sometimes have delayed reactions to certain stimuli, for example needing to "sleep on it", or even doing some processing unconsciously in our dreams, and I'm not sure that we can always give accurate reasons for why we're in a particular mood. On the other hand, like with all AI developments, once someone comes up with an implementation of this state machine, I'm sure people will say "Well of course that part of subjective human experience wasn't hard to fake".
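For what it's worth, here's roughly what I mean by "backed by a proper state machine". This is just a toy sketch (the moods, stimuli, and transition rules are all invented for illustration): the agent's mood is persistent internal state that updates based on both the stimulus and the current state, including delayed "sleep on it" processing, rather than being guessed from the user's last message.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Mood(Enum):
    NEUTRAL = auto()
    CHEERFUL = auto()
    FRUSTRATED = auto()
    RESENTFUL = auto()  # the "grudge" state

@dataclass
class EmotionalState:
    mood: Mood = Mood.NEUTRAL
    pending: list = field(default_factory=list)  # stimuli deferred for later processing

    def on_stimulus(self, stimulus: str) -> None:
        # Immediate transitions: the next mood depends on the current state, not just the input.
        if stimulus == "insult":
            self.mood = (Mood.RESENTFUL if self.mood is Mood.FRUSTRATED
                         else Mood.FRUSTRATED)
        elif stimulus == "praise":
            # A grudge doesn't clear instantly; it gets re-evaluated later.
            if self.mood is Mood.RESENTFUL:
                self.pending.append("reconsider_grudge")
            else:
                self.mood = Mood.CHEERFUL

    def sleep_on_it(self) -> None:
        # Delayed, "offline" processing: the equivalent of needing to sleep on it.
        while self.pending:
            if self.pending.pop() == "reconsider_grudge":
                self.mood = Mood.NEUTRAL

agent = EmotionalState()
agent.on_stimulus("insult")
agent.on_stimulus("insult")
print(agent.mood)   # Mood.RESENTFUL
agent.on_stimulus("praise")
print(agent.mood)   # still Mood.RESENTFUL
agent.sleep_on_it()
print(agent.mood)   # Mood.NEUTRAL
```

Obviously a real system wouldn't hard-code the transitions, but the point is that the mood is carried forward and processed over time, not re-derived from scratch each turn.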
> I'm just not sure that adding, for example, the ability for the AI to hold a grudge, is very useful or strictly a requirement for sentience, and it could even be potentially dangerous.
I'm convinced at some point we'll have people arguing that AIs aren't conscious, or aren't sentient, purely because they "aren't flawed enough" (like humans).
"Internal family systems" theory accounts explains incoherence between emotions and behavior well enough in humans: we are made up of parts, with different motivations and emotions, and whichever one "wins out" determines our behavior, even if it's harmful (procrastination, addiction). Implementing IFS in AI should be enough to perfectly replicate humans' emotional conflicts and inconsistencies, but why would we do that?
Using humans as the benchmark for a "sophisticated general intelligence agent" (i.e. the Turing test) is a dangerous idea, and might even be unethical (should we program AIs to feel trauma?)
The question of whether AIs really do "feel" (what you seem to be addressing, through the "psychopathy patient" reference and talk of sentience) is interesting, but is it where we draw the line? Data in Star Trek can't feel, or even laugh, but he comes to be seen as a person. If we set aside the unsolved problems in robotics, embodied AI, etc., aren't we already "there" when it comes to Data's mind? If so, then AIs are conscious.
Data passed Picard's "consciousness" test by expressing awareness that he was in a hearing regarding his personhood, and explaining what the consequences of that hearing could be for him. Isn't LaMDA already there?
The Turing test isn't a test for consciousness. The tests we apply to animals (can they recognize themselves in a mirror? can they understand their surroundings well enough to solve puzzles, the way crows do?) are very solvable problems for AI. To me the real question is: once AI can do all those things, how can we justify calling them unconscious? A "hunch"? No matter what test of consciousness we come up with, AIs can be programmed (or can learn on their own) to pass it.