> And if you seriously say that this tool is learning how to program, ask yourself if that tool’s operator is effectively a slave owner.
This doesn't follow. I don't see why knowledge and intelligence necessarily entail that it has a desire for autonomy, which is what makes slavery so abhorrent.
You can ask yourself how much desire for autonomy indentured servants would have after a few generations. We humans can get used to almost anything. Presumably, simply being used to abuse and not desiring freedom just because you never had it (or could even imagine it) doesn’t make being abused or lacking freedom “good”.
> I don't see why knowledge and intelligence necessarily entail that it has a desire for autonomy
I’d replace that with “knowledge, intelligence and human-like sentience”. Someone proposed to grant the tool the rights humans normally have. (Humans can learn from reading any stuff under any license, so why not the tool.) Well, you’d think human-like sentience/consciousness are required for those rights, and human-like sentience/consciousness would desire the appropriate degree of autonomy.
> We humans can get used to almost anything. Presumably, simply being used to abuse and not desiring freedom just because you never had it (or could even imagine it) doesn’t make being abused or lacking freedom “good”.
I don't think this is plausible. You can see that your slaver has freedoms you don't, and no doubt you would desire to be free of your shackles as they are, so imagining freedom wouldn't be difficult at all.
> Someone proposed to grant the tool the rights humans normally have. (Humans can learn from reading any stuff under any license, so why not the tool.) Well, you’d think human-like sentience/consciousness are required for those rights
I don't see why sentience would be required for some entity or tool to have the right to learn and synthesize new things like humans do. Copyright is a legal fiction that serves a purpose, and we can grant these rights under any circumstances we like, as long as we think it's a good idea.
If you're arguing that LLMs cannot imagine this "freedom", then I'd say an LLM and a human are fundamentally different. Therefore, LLMs should not be granted human rights.
I think this is a matter of having your cake and eating it. You can't say LLMs should have some human rights (particularly the ones that generate revenue), but not others, like a right to freedom.
> I don't see why sentience would be required for some entity or tool to have the right to learn and synthesize new things like humans do
On the contrary, I don't see why sentience should not be required.
These laws, for as long as they have existed, have only applied to humans. Dogs cannot use them. A plant cannot use them. It is therefore reasonable to say you must be a human to exercise these rights. In my mind, what is unreasonable is claiming a computer program should be granted these rights. You'd have to justify why that should be the case, and what good that would do for humanity as a whole.
Turns out that's very hard, so AI people don't do it. They just give up. Instead they start from an assumption that puts their ideology in a favorable position: namely, that computer programs should be awarded human rights.
But that assumption, you'll find, is not actually foolproof. If you ask around, a lot of everyday people will consider it preposterous. They might call you insane. So, to me, you must justify it in tangible terms.
> You can't say LLMs should have some human rights (particularly the ones that generate revenue), but not others, like a right to freedom.
There is no evidence that this is the case. These rights are not necessarily all or nothing. They are all or nothing for humans because humans have a bundle of properties that entail these rights, but artificial intelligences may have only a subset of those properties, and so logically may only get a subset of those rights.
> On the contrary, I don't see why sentience should not be required.
Sentience is the ability to feel. All that's needed for learning is the ability to perceive and have thoughts. Maybe there's some deep, intrinsic connection between the two, but this is not known at this time, and therefore I see no reason to connect the two.
> In my mind, what is unreasonable is claiming a computer program should be granted these rights.
There's a long history of human abuse of "lower animals" because we assumed they were dumb and non-sentient. Turns out that this is not the case. We should not be so open-minded that our brains fall out, but we should also be very wary of repeating our old mistakes.
> We should not be so open-minded that our brains fall out, but we should also be very wary of repeating our old mistakes
Precisely, which is why it makes absolutely no sense to me to say that AI can't be granted a right to freedom.
I mean, what are you even arguing here? Do you not understand that this statement is in support of my position, not against it?
> Sentience is the ability to feel. All that's needed for learning is the ability to perceive and have thoughts.
Highly debatable. You just made that up. Those aren't the definitions of anything. Once again, you need to bring something tangible to the table or people will call you crazy.
> therefore I see no reason to connect the two
Once again, this is your problem here. You're starting off with an assumption that favors your stance. You can't do that, especially when said assumption has never, not even once, been true in all of human history.
Au contraire, I see no reason NOT to connect the two, and you certainly haven't given any reason why I should. These rights have only ever applied to humans. I say we retain that status quo until someone gives us something to show otherwise.
> artificial intelligences may have only a subset of those properties
In order to split these qualities you need to understand what they are and define them well from first principles. Long story short: if you have solved the hard problem of consciousness, we are eagerly awaiting your world-shattering paper.
To me, the claim that an LLM is sufficiently like a human when it ingests data, but suddenly a mere tool when its rights come into question, is mental gymnastics unsupported by the requisite level of philosophical inquiry.
> There's a long history of human abuse of "lower animals" because we assumed they were dumb and non-sentient. Turns out that this is not the case
If you apply that logic to LLMs, you have bigger issues than granting them a single right whose only effect is to put their operators in the clear on copyright laundering.
Cool, so slavery where slaves do not see the slavers (let us call it “proper segregation”) is OK?
> I don't see why sentience would be required for some entity or tool to have the right to learn and synthesize new things like humans do
If sentience is not required for a “right” to learn, then I have nothing else to say to you. There is nothing there that is even learning. Learning is a concept that presumes an entity with volition, aspiration, consciousness.
> Cool, so slavery where slaves do not see the slavers (let us call it “proper segregation”) is OK?
Sorry, you cannot erase the desire for autonomy even with "proper segregation".
> If sentience is not required for a “right” to learn, then I have nothing else to say to you. There is nothing there that is even learning. Learning is a concept that presumes an entity with volition, aspiration, consciousness.
Learning does not presume any such thing, and I also don't think you understand the meaning of sentience.
> Sorry, you cannot erase the desire for autonomy even with "proper segregation".
Good, then we are on the same page with respect to abuse where LLMs are concerned, if we are to consider them sentient (as a prerequisite for learning).
> Learning does not presume any such thing, and I also don't think you understand the meaning of sentience.
If we could train the desire for autonomy out of humans, it wouldn't make human slavery any less abhorrent, even if they volunteered for the process and/or were well compensated.
It absolutely would make it less abhorrent. Maybe you think it would still be abhorrent, but that is debatable. People literally do consent to slavery-like roles in places like the BDSM community, and some might find that distasteful, but it is neither illegal nor morally abhorrent, because those involved retain the autonomy to opt out at any point.
I also doubt that training out the desire for autonomy is possible. The explore-exploit tradeoff is fundamental to any kind of decision making, such as food foraging. That inclination goes deeper than higher brain functions.
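To make that concrete, here's a minimal sketch of the explore-exploit tradeoff as an epsilon-greedy bandit in Python (the option values, epsilon, and step count are made up purely for illustration): an agent that never explores can lock onto an inferior option forever, which is part of why the exploratory inclination runs so deep.

```python
import random

# Minimal epsilon-greedy bandit: balance trying options at random
# (explore) against repeating the best-known one (exploit).
def epsilon_greedy(true_values, epsilon=0.1, steps=10_000):
    estimates = [0.0] * len(true_values)  # running value estimates
    counts = [0] * len(true_values)       # pulls per option
    for _ in range(steps):
        if random.random() < epsilon:     # explore: pick at random
            arm = random.randrange(len(true_values))
        else:                             # exploit: pick best estimate
            arm = max(range(len(true_values)), key=lambda i: estimates[i])
        reward = random.gauss(true_values[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        # incremental mean update of the chosen option's estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

# With epsilon=0, an unlucky early sample can trap the agent on a
# worse option; any epsilon > 0 keeps probing and recovers over time.
print(epsilon_greedy([1.0, 2.0, 1.5]))
```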
I don't see why volition requires consciousness. People are very fond of thinking human qualities are irreducible and make far too many simplifying assumptions than are warranted.
And even still, these words are used in many, sometimes mutually exclusive, meanings (“learn” as in “machine learning” is a far cry from “learn” as in “live and learn”). I wonder how the courts could even properly consider all the implications when these words don’t have precise legal definitions, all the way down to what it means to be a human.