
How about Shane Legg (One of the cofounders of DeepMind)?

http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from...

Quote:

Q6: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

Shane Legg: It's my number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).



Shane Legg is known for being a co-founder of DeepMind, in a business role (as I understand). He's a complete nobody as a researcher (is he even an AI researcher? I would be surprised).

The big names of deep learning have all taken a vocal stance against the recent end-of-the-world punditry (most notably Yann LeCun and Andrew Ng). Also notable: roboticist Rodney Brooks http://www.rethinkrobotics.com/artificial-intelligence-tool-...


> Shane Legg is known for being a co-founder of DeepMind, in a business role (as I understand)

You are quite mistaken. He leads the applied AI team there, and has significant history in research.

http://www.vetta.org/publications/


Shane Legg isn't just a business guy - his career has been pretty much focused on AI research since uni - http://www.vetta.org/about-me/


Here's machine learning expert Michael Jordan on the issue: http://spectrum.ieee.org/robotics/artificial-intelligence/ma...


I'm afraid of making a middlebrow dismissal, but I'm going to post it anyway, in hopes that someone just skimming would not be misled.

The question put to Michael Jordan is what he thinks of the "concept of the singularity", and he dismisses it out of hand.

Crucially, he does this after confessing that no one in his social circle has talked about this issue with him, and without saying anything about what form of Singularity he is dismissing.

I mention this because oftentimes I see people appealing to authority and quoting them on the issue, when the authority in question is not even talking about the same issue!

I worry that my credence in all this superintelligence stuff stems only from familiarity with the arguments and from the complete inability of critics to engage with the actual argument. Some of the 'rebuttals' in this comment section have answers in Sam's article, for crying out loud!


Since you seem to be well-versed in this world, do you know what reputation Nick Bostrom has in these circles?


The only times I've heard him mentioned the impression was negative and that he didn't understand any of the actual science.

People hear "machine learning" and they think it is about machines that know how to think. Machine learning is actually just optimization of high dimensional functions. If this language were used it wouldn't sound as sexy, but no one would think machines are going to take over the world.

AI isn't magic. It's really just clever search techniques and mathematical optimization.
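To make the "just optimization" point concrete, here is a minimal sketch (my own illustration, not from the thread): "learning" a linear model is nothing more than gradient descent on a loss function, here in two dimensions rather than the millions a real system would use.

```python
# Toy illustration: "machine learning" as plain numerical optimization.
# We fit y = a*x + b by running gradient descent on the mean squared error.

def fit_line(points, lr=0.01, steps=5000):
    """Minimize the mean squared error over (a, b) by gradient descent."""
    a, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Partial derivatives of the mean squared error w.r.t. a and b.
        grad_a = sum(2 * (a * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in points) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

points = [(0, 1), (1, 3), (2, 5), (3, 7)]  # generated from y = 2x + 1
a, b = fit_line(points)
```

Nothing in this loop "knows" anything; it just follows the slope of a function downhill. Scaling the same idea up to huge parameter counts is, by the parent's argument, still optimization rather than thought.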


Yes, but intelligence isn't magic either.


There are still many, many things that we don't understand about the brain. Even the things we think we understand, we're not always 100% sure of. Recreating an actual intelligence will be difficult.


> Yes, but intelligence isn't magic either.

What's your point? Nobody said it's magic. The fact that it isn't magic (and that its tremendous complexity far surpasses our current ability to understand it) supports the notion that it won't suddenly spring into existence. If we placed some primordial sludge in a petri dish overnight, we wouldn't worry that a sentient creature would have materialized. And if we program a computer to optimize numerical functions, there is just as little evidence (perhaps less) to suggest that the computer will somehow gain sentience.


There's a very fine line between AI futurist and best-guess scifi writer. Most "AI thought leaders" are scifi writers, not technical researchers. They take preconditions, generate a story, think how it could happen given plausible technology, then market that as soon-to-be-fact.

It's an entertaining genre and endlessly fun to read, but still complete fiction based on the internal brain states of individuals, not necessarily on real-world interactions.

Also see: Eliezer Yudkowsky — great writer, fun to read, but largely scifi thought experiments masquerading as research.




