Q6: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Shane Legg: It's my number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).
Shane Legg is known for being a co-founder of DeepMind, in a business role (as I understand it). He's a complete nobody as a researcher (is he even an AI researcher? I would be surprised).
I'm afraid of making a middlebrow dismissal, but I'm going to post it anyway, in hopes that someone just skimming would not be misled.
The question put to Michael Jordan is what he thinks of the "concept of the singularity", and he dismisses it out of hand.
Crucially, he does this after confessing that no one in his social circle has talked about this issue with him, and without saying anything about what form of Singularity he is dismissing.
I mention this because I often see people appeal to authority by quoting them on the issue, when the authority in question is not even talking about the same issue!
I worry that my credence in all this superintelligence stuff stems only from familiarity with the arguments and from people's complete inability to engage with the actual argument. Some of the "rebuttals" in this comments section have answers in Sam's article, for crying out loud!
The only times I've heard him mentioned, the impression was negative: that he didn't understand any of the actual science.
People hear "machine learning" and they think it is about machines that know how to think. Machine learning is actually just optimization of high-dimensional functions. If this language were used it wouldn't sound as sexy, but no one would think machines are going to take over the world.
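To make that concrete, here's a minimal sketch (a toy example in Python, not any real system's code) of what that framing means: "learning" a linear model is nothing but gradient descent on a loss function of the weights.

```python
import random

# Toy "machine learning": fit weights w to minimize a squared-error loss.
# The whole procedure is just numerical optimization of a function of w.

def loss(w, data):
    # Mean squared error of a linear model with weights w.
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
               for x, y in data) / len(data)

def grad(w, data):
    # Analytic gradient of the loss with respect to each weight.
    g = [0.0] * len(w)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += 2 * err * xi / len(data)
    return g

# Synthetic data generated by y = 3*x0 - 2*x1, which the optimizer should recover.
data = [((x0, x1), 3 * x0 - 2 * x1)
        for x0, x1 in [(random.random(), random.random()) for _ in range(100)]]

w = [0.0, 0.0]
for _ in range(1000):
    # Plain gradient descent: step downhill on the loss surface.
    w = [wi - 0.1 * gi for wi, gi in zip(w, grad(w, data))]

print(w, loss(w, data))  # w approaches [3.0, -2.0], loss approaches 0
```

Nothing here "knows" anything; it is curve fitting, just in many dimensions at once.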
AI isn't magic. It's really just clever search techniques and mathematical optimization.
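In the same spirit, here's a minimal sketch of "clever search": breadth-first search for a shortest path on a hypothetical toy grid. The machinery is systematic enumeration plus bookkeeping, not cognition.

```python
from collections import deque

# Toy "AI search": breadth-first search for a shortest path on a grid.
# 0 = open cell, 1 = wall.

GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    frontier = deque([(start, [start])])  # (cell, path taken so far)
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

print(shortest_path((0, 0), (3, 3)))
```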
There are still many, many things that we don't understand about the brain. Even the things we think we understand, we're not always 100% sure of. Recreating an actual intelligence will be difficult.
What's your point? Nobody said it's magic. The fact that it isn't magic (and that its tremendous complexity far surpasses our current ability to understand it) supports the notion that it won't suddenly spring into existence. If we placed some primordial sludge in a petri dish overnight, we wouldn't worry that a sentient creature will have materialized. And if we program a computer to optimize numerical functions, there is just as little evidence (perhaps less) to suggest that the computer will somehow gain sentience.
There's a very fine line between AI futurist and best-guess scifi writer. Most "AI thought leaders" are scifi writers, not technical researchers. They take preconditions, generate a story, think about how it could happen given plausible technology, then market it as soon-to-be fact.
It's entertaining and endlessly fun to read, but still complete fiction, based on the internal brain states of individuals and not necessarily on real-world interactions.
Also see: Eliezer Yudkowsky — great writer, fun to read, but largely scifi thought experiments masquerading as research.
Source of the Shane Legg quote above: http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from...