
Please point out even the slightest indication that entropy is beatable.

Otherwise, make an ethical argument that covering every inch of the planet in people is a good idea.

Otherwise, explain how you'll get the proper sterilization procedures working to control population growth and get resource consumption down to sustainable levels.



The problem of population growth exists regardless of life extension technology. The solution we are headed for already is that a lot of people will simply starve to death.


When such issues come into discussion I always think about this island off the coast of Alaska, called St. Matthew Island:

> In 1944, 29 reindeer were introduced to the island by the United States Coast Guard to provide an emergency food source. The coast guard abandoned the island a few years later, leaving the reindeer. Subsequently, the reindeer population rose to about 6,000 by 1963[5] and then died off in the next two years to 42 animals.[6] A scientific study attributed the population crash to the limited food supply in interaction with climatic factors (the winter of 1963–64 was exceptionally severe in the region).[1] By the 1980s, the reindeer population had completely died out.[2] Environmentalists see this as an issue of overpopulation.

(from here: http://en.wikipedia.org/wiki/St._Matthew_Island)


Isn't that lovely? The poor can starve to death and the rich can live forever!



So a prediction exists, with some evidence towards it, that it's possible to do computation that doesn't consume negentropy?

Well, I'll grant you one thing: not quite sure what it is you're computing, but assuming your plan works, that was a surprisingly short path to conserving the universe's resources until your computational substrate suffers proton decay.

I still have only hunches of what computation has to do with actual lives. Please, explain your Evil Plan out loud.


Evil Plan, first draft: Build a Matrix, scan and emulate everyone, recycle the meat.


At least you're openly admitting that's your plan. So, you know, we can drag you behind the chemical sheds and shoot you for Criminally Irresponsible Use of Applied Phlebotinum.


In other words, make a doomsday robot that remembers people before it kills them.

As they die, some will take solace in a religious belief that numbers in the machine represent everything they were and ever will be. Others will just die.

A digital tombstone to a dead race.


In his defense, if you're dying anyway, you might as well leave a "ghost" behind. The ghost might not be you, and it will certainly have some psychological issues to deal with due to knowing that it's one ontological level "down" from a real, flesh-and-blood person, but you were going to die anyway.


Why do you assume the implementation hardware matters?

If it does, why assume brain-meat is better, as opposed to worse?


I assume that ontological security matters. If I know my consciousness runs on meat, I know that I have my own personal substrate. If I know I'm in the Matrix, I know that whoever has `root` access can alter or deceive me as they please.

The one thing nobody ever specifies about these crazy schemes, which would otherwise be a great way for humanity to get the hell off of Earth and leave the natural ecosystem to itself in our absence, is who will be root, and how he's going to forcibly round up everyone who doesn't like your crazy futurist take-over-everyone's-minds scheme. Hell, what's going to stop him from rampaging across the real Earth and universe, destroying everything in sight, while everyone else fucks around having fun in VR?

I'm really wondering why this nasty, insane idea has been cropping up more frequently lately in geek circles.

And that's not even starting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!


> And that's not even starting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!

That's a bug to fix in implementation accuracy. I'd obviously prefer more accuracy, but if it comes down to a choice between less-than-perfect available implementation accuracy or dying of old age, I'll happily take a less accurate implementation, especially one that preserves enough information to fix that issue later.

The much more serious bug I am concerned about is the continuity flaw: a copy of me does not make the original me immortal. I'd like the original me to keep thinking forever. Many proposals exist for how to ensure that. The scary problem that needs careful evaluation before implementing any solution: if you do it wrong, the copy will never know the difference, but the original will die.


No human should ever be root. But we might just trust a Friendly AI. Well, provided we manage to make the AI actually Friendly (as in, does exactly what's good for us instead of whatever imperfect idea of what's good for us we might be tempted to program).


And if we don't, we all die (at best), but that's nothing new. Nor is it avoidable by other means than FAI.

The route to unfriendly AI is revenue-positive right up until it kills us.


The question is not really which implementation is best. The question is: does changing implementations preserve subjective identity?

I bet many people here would not doubt the moral value of the emulation of a human (feelings and such are simulated to the point of being real), but would highly doubt that it would be, well, the "same" person as the original.


That's actually a good point, if a confusing one. I'd like to know the answer as well, though I believe there's a chance the answer will be "mu".


When the robot points the flamethrower at you, and announces using the Siri voice, "Fear not, a backup has been made", you will no longer be confused.


Yeah, by that point I'll know the AI is an Unfriendly AI, and I'll be deeply sorrowful and scared for the future.


Use the Dyson computation. If we uploaded to a matrix that ran on some Dyson-computation approach, then as time went by we'd run slower and slower in real time, but that wouldn't matter to us (and if our population continued to grow that would slow the time factor even further, as the simulation would have to run slower to compute us all - but again, who cares?). But we'd still be able to perform an unbounded amount of computation, so we'd be fine.
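The "who cares if we run slower?" point is the core of Dyson's argument: even if the real-time computation rate decays toward zero, the *total* subjective time can still be unbounded, provided the rate decays slowly enough. A minimal numerical sketch (the 1/t decay law here is an illustrative assumption, not a physical model):

```python
# Sketch: if a simulation's real-time computation rate decays like 1/t,
# the accumulated subjective time is the harmonic series, which diverges.
# So the rate at any late tick is tiny, yet the total grows without bound.

def subjective_time(real_time_steps):
    """Total subjective time accumulated after `real_time_steps` real-time
    ticks, assuming the rate at tick t is 1/t (harmonic decay)."""
    return sum(1.0 / t for t in range(1, real_time_steps + 1))

print(subjective_time(10**3))   # ~7.49
print(subjective_time(10**6))   # ~14.39  -- still climbing, forever
```

A rate that decays faster (say 1/t²) would converge to a finite total instead, so the whole scheme hinges on how slowly the substrate is forced to wind down.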


What if the hard drive goes on the fritz?


There's no particular reason that everyone has to be on Earth.



