
Some of my personal skepticism boils down to: well, what are we going to do about it? There are only really two options:

(1) The methods to create strong AI will become known to us before we actually build something dangerous. At that point, since we will better understand the nature of the potential threat, it will actually be feasible to put safety restrictions in place.

(2) Someone will stumble upon strong AI in secret or by accident. I don't see how this is preventable, short of a moratorium on AI-related research, which just isn't going to happen outside of scenario 1.

And so the answer becomes: let's wait and see.

That said, I don't believe there's anything especially harmful about the current level of speculation and "fear-mongering".


