Can AI Be Evil?

Artificial Intelligence may be misdirected; it is also prone to error.

Domestic appliances may serve the ulterior motives of manufacturers and marketers, not to mention stalkers. Can electronic devices follow independent objectives that are hostile to all human interests? Can they pursue malevolent agendas?

Darwinian Notions of Evil Intent

To a Darwinian, the root cause of the desire to harm others lies in competition to survive and reproduce. This matters in practice because most violent crime in modern life derives from reproductive competition: violent criminals are mostly young men in their peak dating years, and their victims crowd into the same category.

Electronic devices don't experience reproductive competition. Yet it is not hard to imagine scenarios in which AI-enabled machines seek to destroy humans, or one another. Perhaps the most obvious scenario is that of robot warriors designed to kill humans while evading detection and neutralization.

Robot Warriors

Developers of robotic warriors must ignore the usual prescription that their products should avoid harming people. That moral imperative is taken very seriously by most AI developers, who foresee a time when intelligent systems can easily outwit humans. They are even willing to delay development in the interest of safety.

Killer robots are nevertheless under development in several countries, including the U.S., China, Iran, Israel, and South Korea. Critics fear their use by pariah regimes and terrorists.

Electronic glitches could be an even bigger threat. In the worst-case scenario, the code designating specific enemies gets corrupted and the robot warriors go after all humans.

Error has always been the nightmare scenario for nuclear weapons and is a powerful argument for getting rid of them. The same reasoning suggests that autonomous weapons systems should be halted on humanitarian grounds, given the devastating potential of electronic glitches and software bugs. As it is, we are at the mercy of numerous AI systems that could malfunction to disastrous effect.

Benign Devices Gone Awry

Keeping track of scores of planes circling a busy airport at the same time is beyond human intelligence. For that reason, most of the hard work of air traffic control is done by computers, with humans looking on to spot anomalies. Similarly, much of the labor of controlling an aircraft in flight is performed by autopilots.

These AI systems are tried and tested and generally highly reliable, performing to a much higher standard than human operators. If they were to develop errors, many planes might collide in midair, raining debris and corpses over busy cities. So far, this disaster has played out only in fiction.

However, in one real case, Malaysia Airlines Flight 370, a plane carrying hundreds of passengers veered off course as it transitioned from one flight control zone to another. Neither the plane nor its passengers have ever been recovered. All we know for sure is that the transponders identifying the plane's location either were turned off or failed.

This horror story is often attributed to nefarious actions, whether by the flight crew, terrorists, or governments. Yet it could reflect some low-level glitch in intelligent flight-control systems, a boring explanation that does nothing for conspiracy theorists.

Benign systems that exercise a great deal of control over our well-being and survival can malfunction with consequences every bit as deadly as those produced by a hostile agent. That is why we need to be careful about how much control we cede to AI, whether in military systems, autonomous driving, or industrial processes.

The Great Human Wipe-Out

While AI may threaten our livelihoods and even come for our creativity, we live at a time when machine intelligence promises to liberate us from a great deal of drudgery, whether that means writing emails at work or driving children to sports events. These prospects have inspired an investment craze.

Even as businesspeople tout the potential of AI to improve our lives, many are afraid for the future. This theme was the subject of an art exhibition in San Francisco, the epicenter of digital innovation. The exhibition was framed as an AI's apology for wiping out most of the human population.

One statue of two people is made of paperclips, a nod to the thought experiment in which an AI given the mission of making as many paperclips as possible gets carried away and converts people into paperclips.
