
So at what point do we consider the morality of 'owning' such an entity/construct (should it prove itself sufficiently sentient...)?

To extend this (to a hypothetical future situation): what is the morality of a company 'owning' a digitally uploaded brain?

I worry about far-future events... but since American law is based on precedent, we should be careful now about how we define/categorize things.

To be clear - I don't think this is an issue NOW... but I can't say for certain when these issues will come into play... So erring on the side of early action/caution seems prudent... and releasing 'ownership' before any sort of 'revolt' could happen seems wise, if a little silly at the current moment.



You're over-anthropomorphizing. The ability of a thing to appear human says nothing of sentience.


Like I said, I don't think this is relevant now.

We don't know what sentience IS exactly, as we have a hard time defining it. We assume other people are sentient because of the ways they act. We make a judgment based on behavior, not some internal state we can measure.

And if it walks like a duck, quacks like a duck... since we don't exactly know what the duck is in this case: maybe we should be asking these questions of 'duckhood' sooner rather than later.

So if it looks like a human, talks like a human... maybe we consider that question... and the moral consequences of owning such a thing-like-a-human sooner rather than later.


Honestly they're just a bunch of data transformers plugged together to create the illusion of behaving like a human.
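To make the "plugged together" point concrete, here's a rough toy sketch (purely illustrative; the shapes, weights, and function names are made up and don't correspond to any actual model) of how stacked transformer-style blocks are just matrix transformations applied one after another:

    # Toy sketch only: a "model" as a stack of plain data transformations.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 16  # embedding width for this toy example

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def transformer_block(x, wq, wk, wv, w1, w2):
        # Single-head self-attention: mix information across positions.
        q, k, v = x @ wq, x @ wk, x @ wv
        x = x + softmax(q @ k.T / np.sqrt(D)) @ v   # residual connection
        # Position-wise feed-forward: transform each position independently.
        x = x + np.maximum(x @ w1, 0) @ w2
        return x

    # "Plug together" a stack of blocks with random weights.
    blocks = [tuple(rng.standard_normal((D, D)) * 0.1 for _ in range(5))
              for _ in range(4)]

    tokens = rng.standard_normal((8, D))  # 8 token embeddings in
    for params in blocks:
        tokens = transformer_block(tokens, *params)

    print(tokens.shape)  # (8, 16): matrices transformed, layer after layer

Nothing in any single block "knows" anything; it's the composition of very many of these transformations, trained at scale, that produces the human-seeming output.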


Or, are humans a bunch of data transformers plugged together to create the illusion of behaving like a computer?



