I'm seriously so sick of hearing about how machine intelligence is going to spell the end of humanity. The number of gears that would have to fall into place is never mentioned. We aren't close to SMI. It's much more likely that we humans are excelling at dreaming up apocalyptic scenarios, much like we have always done...
The "sloppy, dangerous thinking" is the aversion these types of articles create within the general population to artificial intelligence. We don't need to fear AI, we need to understand and control it...
I don't consider myself particularly alarmist about too many things, but I have to admit I'm a little worried about machine intelligence on one front:
What happens when most people have no salable skills due to the combination of robotics and AI? We're essentially going to have to live with income supports for the 90+% of Americans, and worse for the countries to which we've exported, e.g., electronic device construction and clothing manufacture. I think there's a nonzero chance society essentially tears itself apart during the transition period. It is now the Republican party position that not all people deserve healthcare, housing, or enough food to eat. What happens when their hated segment of the populace gets much bigger in a job market that doesn't need cashiers, janitors, gardeners, cooks, taxi drivers, car washers, many farmers, or most menial labor?
Also, I would note that creating AI that requires less control makes it more useful. So in some sense the development of AI itself fights against controls.
I think this is a more legitimate concern than the fear of the "Matrix outcome" that some people seem to have.
But what you're describing is the process of people being replaced by technology. Generally speaking, this probably won't be a problem for a free-market economy, although it will certainly result in some unemployment.
The key point is that replacing humans with machines not only causes unemployment but also reduces the cost of production, which stimulates capital investment and/or reduces prices in the industry in question. In the general economy, this reduced cost and increased production should offset the lost "purchasing power" of the now-unemployed parties. The stimulus reduces costs in other industries, which promotes job growth.
The end result is likely to be that the same number of people are employed, but in a more efficient manner of production. The cost of labor will decrease relative to the amount of production; however, because this is tied to a decrease in the cost of production, the real value of labor (in terms of things you can buy) should not decrease.
Of course there will be some disenfranchised individuals, especially those whose particular skills are replaced by mechanization. However, this is more likely to affect skilled laborers (like cooks) than those who are not paid for their skills (like janitors or cashiers).
In the end, I guess my point is that a free-market economy naturally balances these factors due to the relationship between supply and demand. However, it's possible that we will reach a point, especially if we truly do hit a Singularity, where we will have to reconsider the use of a scarcity-based economy at all, as production becomes completely divorced from any human action. Hopefully though, at that point the cost of goods will have naturally fallen to such a degree that the transition can be performed peacefully.
I'm not an economist, but my understanding is that the massive social impact of the Industrial Revolution stemmed not so much from the fact that it happened at all, but from the rapidity with which it happened. We ultimately reached a new, stable equilibrium, but until various social forces and trends, government policy, etc. caught up, there was massive disruption.
People like Jeremy Howard believe that we are in for a similar wave of disruption. I have no doubt that there is a new, stable equilibrium which we _could_ eventually reach, but if the change is so sudden and the shock strong enough, perhaps there could be permanent or semi-permanent negative consequences before the new equilibrium is reached.
I think you're right, that does seem like a possibility. It seems unlikely to me that our wage-paying jobs will be phased out by automation that rapidly, especially if you consider the whole global economy. Of course, I could be completely wrong -- I guess a true Singularity could invalidate almost all labor in a matter of years or even months, depending on what form the AI takes and what it invents.
Just for starters, consider how an unemployed Walmart cashier, strawberry picker, or janitor becomes a doctor. They don't. Replacing humans with machines does indeed cause unemployment, as can easily be seen in, e.g., the last 40 years of economic history. Unemployed people can't buy anything.
You could also consider comparing the approaching wave of robotic mechanization with the first wave. Again, there was a lot of violence and it took many decades for increased standards of living to reach the working class.
I have an experience to share with you: I taught English at a car factory for a year once, and the director of the paint shop was one of my students. He described the process of painting a car, and told me that the hardest part was painting the inside, as you had to open the doors so that the robot paint head could get inside and do its job. So the body rolls into position, two guys open the doors, the robot painter goes in and does its thing, comes out, and the guys close the doors again. Rinse and repeat for 20 hours a day.
I asked him why there isn't a robot to open and close the doors. He said that there is, but in this country it's much cheaper to pay people to do it. (I think he said the cost of maintaining the door-opening robots was approximately $1m a year.)
So until we have a world where every country has the same salaries as USA/Western Europe/Japan, I think you will always be able to find work.
Even if that work should really be done by a robot.
That's only true if the price of robots doesn't decline rapidly. Which (I believe) it is doing, and, like virtually any other technology, will continue to do.
Even if not, we don't need that many people to open doors. So perhaps an office building will have 2 human janitors and 10 robot janitors; it doesn't really change the problem caused by souped up roombas putting the vast majority of janitors out of work.
Exactly. A lot of these doomsday scenarios involve people remaining ignorant, stupid, helpless bags of meat with no ability to improve themselves or contain potential threats.
It's like people freaking out that "superhuman strength machines" would spell the end of the world, asking why, if an electric motor is so powerful, you would need manual labor for anything.
SMI is another tool and the interplay between "machine intelligence" and "human intelligence" will be complicated and nuanced.
For example, biotech is filled with ferociously complicated problems that may take machine intelligence to solve. Once solved, these could lead to genetically engineered humans that are intrinsically smarter or better able to deal with the machines.
This doesn't even touch on the fact that the distinction between machine intelligence and human intelligence might become quite blurred.
Already I've noticed that people are "stupider" without their phones; they've offloaded a lot of cognitive functions onto a device that's pretty much omnipresent. A person with a smartphone today could be considered of superhuman intelligence, since they're able to draw on significant resources a person without one doesn't have. A seven-year-old kid can tell you the capital of Tajikistan and the last ten presidents of Micronesia without breaking a sweat.
The concerns about AGI are very real, and none of these comments address any of the arguments made about them. I feel like someone in the 1930s trying to warn people about nuclear weapons. Everyone automatically assumes it's absurd and can't happen, that it's fear mongering, etc.
Fortunately nuclear weapons didn't destroy the world, but AI almost certainly will. No amount of smartphone apps or genetic engineering is going to bring humans anywhere near the level of superintelligent machines.
I'm confused. You mention nuclear weapons, which everyone was convinced would destroy the world and didn't, then go and claim that AGI, with the same potential, will assuredly do it.
Just as nuclear weapons radically transformed the world, dramatically reducing the amount of armed conflict, AGI may have a similar transformative effect.
I see no signs that this is going to lead to destruction. Is it really the sign of an intelligent machine to go all Skynet on us?
Even that doomsday scenario had machine intelligences fighting for us. I think your pessimism is confusing the relative probability of the outcomes.
I'm not being pessimistic, I'm being realistic. I absolutely want a positive outcome, where we build machines millions of times smarter than us, and they magically develop human values and morality and decide to help us.
But making that happen is very, very hard, and it's far more likely they will be paperclip maximizers. There's no reason they would care about us any more than we care about ants.
This is an ugly situation because usually the geeks defend technology from the luddites, but in these cases the geeks are the luddites. I find the more someone sees themselves as an intellectual the more they are afraid of AI. I guess a cynical explanation is that AI will knock them off that intellectual pedestal. Personally, I welcome something smarter than us. We've just been tip-toeing through endless warfare, poor economics, poor social policy, and occasionally skirting with nuclear destruction.
Give the AIs a chance to contribute, especially if the problems we can't crack are due to human cognitive limits. This situation reminds me of how the Apollo landings couldn't have been done without computers. There's just no way a person can do those calculations on paper. AI as a contributor to economic, technological, or social policy seems like a similar step.
It has been my experience that the more a particular person attempts to understand and control machine intelligence, the more she grows to fear it and its potential.
The only people who claim that machine intelligence is dangerous are the ones on the outside looking in. Everyone who actually works on AI and understands it (hint: it's just search and mathematical optimization) thinks the fear surrounding it is absurd.
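To make that claim concrete: much of what gets labeled "AI" today boils down to routines like the toy gradient-descent sketch below (the function and numbers are mine, purely illustrative), which just searches numerically for the minimum of a function.

```python
# Toy illustration of "AI as mathematical optimization":
# plain gradient descent searching for the minimum of f(x) = (x - 3)^2.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient -- nothing more mysterious."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # converges toward 3.0
```

The loop has no goals or understanding; it just follows the slope downhill, which is the point the comment is making.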
> Everyone who actually works on AI and understands it thinks the fear surrounding it is absurd.
This isn't true. Please don't state falsehoods. Stuart Russell, Michael Jordan, Shane Legg. Those are just the ones mentioned elsewhere in this thread.
How many of those AI researchers are actually working on AGI, though? As you mentioned, most of them are in fact just developing search and optimisation algorithms. Personally, I believe the fields of neuroscience/biology are more likely to produce the first AGI. People who claim machine intelligence is dangerous are not scared of k-means clustering or neural networks; they are scared of a hypothetical general intelligence algorithm which hasn't been discovered yet. One could argue that the fear is absurd because AGI is not likely to happen within our lifetime, but it's hard to argue that it will not happen eventually and be a potential threat.
Militaries are developing AI-controlled guns and mobile gun platforms. They have already accidentally killed humans: http://www.wired.com/2007/10/robot-cannon-ki/ Another incident like this, with more intelligence and mobility, could kill a lot more people.