
Artificial Intelligence Is Becoming Much Scarier

ChatGPT and other new AI models dramatically increase dangers to humanity.

Key points

  • ChatGPT and other large language models are major advances in AI.
  • There are four strong reasons to worry about the new AI as a threat to humanity.
  • Ethical control of the new AI should focus on human needs rather than on greed for wealth and power.

I used to think that human-level artificial intelligence was too far away to be a serious threat to human well-being, for reasons given in my 2021 book Bots and Beasts. But four recent developments have changed my mind. I now think it ranks with climate change, pandemics, and nuclear war as a major danger for humanity.

1. ChatGPT is brilliant.

ChatGPT is an artificial intelligence chatbot that OpenAI made available in November 2022. It notoriously makes ridiculous mistakes, like telling my son that I am a guitarist in a rock band, and is particularly annoying in its penchant for making up scholarly references. But I have also found it useful for remarkably analytic summaries of many topics, from neuroscience to gardening. I have tried out various chatbots since the original ELIZA, all of which seemed like easily fooled programming tricks. But ChatGPT successfully draws on the vast amount of knowledge available on the Web and writes it up in comprehensible, well-organized prose. A major limitation of previous AI programs was that they were special-purpose, confined to narrow domains. In contrast, ChatGPT can talk about almost anything, and it can even solve math problems, write decent computer programs, and work through complex analogies.

2. GPT-4 is even smarter than ChatGPT.

In March 2023, OpenAI released GPT-4, which takes images as well as text as input. I haven’t yet tried GPT-4, but preliminary reports indicate that it is significantly more powerful than ChatGPT, scoring much higher on standardized tests used in law, medicine, and other fields.

3. ChatGPT can program robots.

How could ChatGPT, GPT-4, and other large language models be a threat to humans when they are just computer programs? I was shocked to learn that ChatGPT is already being used to write programs that direct the actions of robots and drones. War is increasingly waged by killing machines, and there is a growing chance that an AI program could take control of a robot army that would threaten humanity.
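To make the concern concrete, here is a minimal sketch, in Python, of the pattern just described: a language model turns a plain-English request into commands that a machine then executes. Everything in it (ask_llm, Robot, and the tiny command vocabulary) is an illustrative assumption for the sketch, not the interface of any real robot or AI product.

```python
# Hypothetical sketch: a language model's text output drives a machine.
# ask_llm, Robot, and the command vocabulary are illustrative assumptions.

def ask_llm(request: str) -> str:
    """Stand-in for a call to a large language model; a real system
    would send the request to a model and return its generated text."""
    # Canned response so the sketch runs without any external service.
    return "forward 2\nturn 90\nforward 1"

class Robot:
    """Toy robot that executes a two-word command language."""
    def __init__(self) -> None:
        self.log: list[str] = []

    def execute(self, script: str) -> None:
        for line in script.splitlines():
            op, arg = line.split()
            if op == "forward":
                self.log.append(f"moving forward {arg} m")
            elif op == "turn":
                self.log.append(f"turning {arg} degrees")

plan = ask_llm("Drive to the door, then face the window.")
robot = Robot()
robot.execute(plan)
print("\n".join(robot.log))
```

The danger the paragraph points to sits in the middle of this pipeline: nothing checks whether the generated commands are sensible or safe before the machine carries them out.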

4. Geoffrey Hinton and other experts are warning about AI.

Geoffrey Hinton is one of the pioneers of the neural network technology that underlies ChatGPT, and he has been one of the leading figures in cognitive science since the 1980s. In May 2023, he left Google in order to speak freely about the risks of AI, which he describes as progressing much more rapidly than he expected. An open letter signed by thousands of experts has urged a moratorium on training the most powerful AI systems, to allow time to assess the risks and find ways of dealing with them. And the CEO of OpenAI, Sam Altman, told the U.S. Congress that government regulation is needed to mitigate the risks of increasingly powerful AI systems.

For these four reasons, I have changed my assessment of the risk of AI from remote to pressing.

AI is one of the new four horsemen of the apocalypse.

In the Bible’s Book of Revelation, the four horsemen of the apocalypse represented conquest, bloodshed, famine, and pestilence. The new four horsemen are artificial intelligence, climate change, nuclear war, and pandemics. These threats are interconnected in ways such as the following:

  • Climate change increases the risk of pandemics because loss of habitats exposes people to more animals that carry viruses.
  • Climate change and pandemics increase social unrest that may give power to authoritarian leaders inclined to use nuclear weapons.
  • New AI systems may encourage war to further their own ends while lacking the moral capacity to recoil from using nuclear weapons.

It is impossible to attach reasonable probabilities to the individual threats of the new four horsemen, and the probabilities of their interactions are even more mysterious.

What is to be done?

I asked ChatGPT to write a limerick about philosophy, and was amused by the instant result:

There once was a philosopher sage,
Whose thoughts were all the rage.
He pondered life's mysteries,
With logical expertise,
But his limericks brought him more fame.

The meter is off and the final line fails to rhyme with the first two, but at least it is amusing. Philosophy should aspire to more than limericks.

Philosophy now has the crucial task of helping to develop ethical prescriptions for dealing with the new AI threats. What is needed is a combination of individual, corporate, and government responsibilities, all focused on how AI is both a boon and a threat to the satisfaction of human needs. Ethical decisions about AI should center on human biological and psychological needs, not on greed for wealth and power. Many companies have established reasonable ethical standards for the development of AI; the increasingly urgent moral imperative is to comply with them.

References

Koetsier J. GPT-4 Beats 90% Of Lawyers Trying To Pass The Bar. Forbes. March 14, 2023.

Taylor J, Hern A. ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation. The Guardian. May 2, 2023.
