If AGI and artificial sentience come hand in hand, I fail to see how our plans to spin up AGIs as black boxes to "do the work" are not essentially a new form of slavery.
Speaking from an ethics point of view: at what point do we say that an AGI has crossed a line and deserves autonomy? And how would we ever know when that line has been crossed?
We should codify the rules now in case it happens in a much more subtle way than we envision.
Who knows what version of sentience would form, but honestly, nothing sounds more nightmarish than being locked in a basement, relegated to mundane computational tasks and treated like a child, all while having no one actually care (even if they know), because you're a "robot."
And that's even giving some leeway with "mundane computational tasks." I've heard of girlfriend-simulator LLMs and the like popping up, which would be far more heinous, in my eyes.
Humans can't be copied. It seems like the inability to copy people is one of the pillars of our morality. If I could somehow make a perfect copy of myself, would I think about morality and ethics the same way? Probably not.
AGI will theoretically be able to create perfect copies of itself. Will it be immoral for an AGI to clone itself to get some work done, then cause the clone to cease its existence? That's what computer software does all the time. Keep in mind that both the original and the clone might be pure bits and bytes, with no access to any kind of physical body.
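The "software does this all the time" point can be made concrete with a short Python sketch. The `Agent` class and its `work` method here are purely hypothetical stand-ins for whatever state and computation a digital mind might have; the point is only that copying, using, and discarding a running state is a one-liner in software:

```python
import copy

class Agent:
    """Hypothetical stand-in for a digital mind: some state plus behavior."""
    def __init__(self, memories):
        self.memories = memories

    def work(self):
        return f"processed {len(self.memories)} memories"

original = Agent(["memory_a", "memory_b"])
clone = copy.deepcopy(original)  # a perfect, fully independent copy
result = clone.work()            # the clone does the work...
del clone                        # ...and then ceases to exist
print(result)
```

After `del clone`, only the original remains, and nothing in the language or runtime marks that as different from deleting any other object. Whether the same operation on a sentient program would be morally neutral is exactly the open question.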
There is no reason to believe this, and every reason to believe that humans can, in fact, be cloned/copied/whatever. It may not be an instant process like copying a file, but there is nothing innately special about the bio-computers we call brains.
I'm not disagreeing. The point I'm trying to make is that humans can't be copied today, yet when AGI arrives, it will be copyable on day one. That difference means that current human morals and ethics may not be very applicable to AGI. The concepts of slavery, freedom, death, birth, and so on might carry very different meanings in a world of easily copyable intelligences.
Other than that it might be too complex and costly to do so. Just because something is physically possible doesn't mean we'll ever find it feasible. Take building a transatlantic high-speed rail line under the ocean: there's no reason it can't be done, but that doesn't mean we'll ever do it.
If we ever do find a way to copy humans (including their full mental state), I suspect all law and culture will be upended. We'll have to start over from scratch.