
Should People Fear Artificial Intelligence?

Computers may someday outthink and control people, but not soon.

Stephen Hawking and Elon Musk have recently described artificial intelligence as a major threat to humanity. Their concern is that rapid improvements in the intelligent performance of computers will make them as intelligent as humans. Human-level machine intelligence could then quickly lead to computers that are much more intelligent than us. That leap is plausible because computers have advantages over us with respect to speed of processing, storage, access to huge amounts of information, and ease of transfer between computers. Once this kind of superintelligence exists, it may turn out to have interests and actions that run counter to those of humans, to our detriment and possibly even our demise. How concerned should people be about this problem?

I am currently teaching a course that systematically compares intelligence in machines, humans, and other animals. For humans, I’m using what I think is the best current theory of intelligence: Chris Eliasmith’s semantic pointer architecture. For machines, the class is looking at leading examples of current artificial intelligence programs, including IBM's Watson, Google’s driverless cars, CYC, Apple’s Siri, and Google Translate.

This comparison shows that there are still huge gaps between human intelligence and artificial intelligence. IBM's Watson is very impressive in answering questions well enough to beat excellent human players on the TV game show Jeopardy. It is even beginning to show some ability for creative problem solving, as when Chef Watson generates new recipes. Moreover, Watson looks likely to make valuable contributions in many other areas such as business and medicine. Nevertheless, for the foreseeable future Watson will remain far inferior to human abilities for dealing with perceptual representations, imagery, emotions, consciousness, learning, language, and the full range of creative problem solving that humans can accomplish. Other current AI programs share similar limitations.

Therefore, I think that human-level artificial intelligence is more distant in the future than many people suppose. The idea that machine intelligence can result from simply downloading people’s neural connections into a computer is extremely naïve about the complexities of the human brain, which include not just electrical connections but also a vast array of chemical processes involving neurotransmitters, hormones, and glial cells. Artificial intelligence has made impressive advances in the last 60 years, producing machines that can play chess and navigate the surface of Mars. But I bet that it will be at least another 60 to 100 years before machine intelligence begins to approximate human intelligence, making AI a far less pressing threat to humanity than global warming, pandemics, and mounting inequality leading to social conflicts.

A more immediate concern about AI is to ensure that the kinds of artificial intelligence being adopted by groups like the US military, Google, and Facebook are used to benefit human beings. A recent open letter signed by Hawking, Musk, and leading AI researchers makes a strong and sensible plea that artificial intelligence be put to use for human benefit.

Although I'm not concerned about machines supplanting humanity in the near future, there is a lot of plausibility to the claim that, once achieved, human-level artificial intelligence could quickly produce superintelligence, which may in fact be a threat to humanity. The jump from human-level intelligence to superintelligence could happen rapidly because of machines' likely ability to expand at a rate much faster than human intelligence can. Computers can avoid our limitations with respect to processing speed, learning rate, and transmissibility of information. Superintelligence really is scary, because there is no reason at all to believe that it would operate in accord with human ethical principles.

You might think that you could program ethical principles into the computer, but any sufficiently smart program could reprogram itself to eliminate the rules that were provided to it. I doubt that a superintelligence would have the drive toward ethical thinking that comes to almost all human beings through our emotional capacity to care about each other. The most influential views in philosophy have tried to make ethics a matter of reason, for example through Kantian rights and duties or through utilitarian calculations of the greatest good for the greatest number. I find more plausible the view of Hume and some feminist ethicists that emotions and caring are the basis of our ethical judgments. Machine intelligence cannot be expected to have the same ethical basis, because emotions are partly the result of physiology, not just cognitive appraisal of situations. John Haugeland once said that the trouble with computers is that they just don't give a damn. From the perspective of the long-term benefit of AI for humanity, the problem is that they just won't give a damn about us.
