The Great Danger With Advances in Artificial Intelligence

A Personal Perspective: Understanding where the necessary antidote starts.

The recent dramatic appearance of large language models has further driven conversations about the dangers and benefits that come with artificial intelligence (AI). The potential benefits are considerable, and I don’t wish to minimize them. But along with many of the best AI thinkers, I believe the dangers are very real. As a futurist and psychiatrist, I also think that we have tended to miss, or at least failed to fully appreciate, the most fundamental of those dangers.

Some of those who are now speaking up emphasize the dangers of disinformation and bias. Others express concern that bad actors on the world stage could wage machine learning-based attacks. Still others are even more cataclysmic in their warnings, pointing toward how a kind of artificial general intelligence could out-compete us and essentially take over the world. I think the danger that puts us most at risk is more basic: more psychological and societal.

We see its initial manifestations today in compulsive use of electronic devices. Increasingly, our devices are designed to capture our attention, using pretty much whatever it takes to do so, and machine learning plays a growing role in how they accomplish this. The mechanisms of compulsive device use are similar to those that make addictive drugs so attractive. Our devices create artificial stimulation that substitutes for the bodily feedback that normally tells us something matters. Today, machine learning algorithms often compound those mechanisms, supporting the creation of ever more powerful digital designer “drugs,” with increasingly destructive results.

Perhaps surprisingly, this outcome requires no ill intent. Simply give a program the instruction to maximize “eyeballs,” as we naturally do, and with time it will create the most distracting and addictive content possible. This would be a problem at any time, but it is of particular concern in ours. I’ve written extensively about how we confront a crisis of purpose, something we see manifest in today’s growing prevalence of depression, suicide, degenerative diseases, and gun violence. Distraction and addiction only take us further from what engaging real purpose requires of us.

And we confront the critical fact that this mechanism is inherently self-amplifying. Once it has started, there is really nothing, at least of a technical sort, to stop it. It may well be that there is no way to stop it. Of all the dangers that could be our ultimate undoing (nuclear annihilation, pandemic, climate change, and environmental destruction), it is this one that I think is most likely to succeed.

In my most recent book, Intelligence’s Creative Multiplicity, I argue that any possibility of avoiding the most cataclysmic of potential outcomes with AI lies with us. I also argue that it must start with understanding how fundamentally different machine intelligence is from human intelligence. In fact, the two have little to do with one another.

A first recognition is both basic and radical. There is an important sense in which human intelligence is not just more complex than machine learning in its considerations. It is inherently purposeful. Human intelligence is “designed” to engage us in questions of value and meaning. This in itself takes us a long way toward an antidote. Engage human intelligence deeply, and anything that distances us from our felt sense of human purpose (as addictive dynamics directly do) is experienced as a violation.

The book draws on Creative Systems Theory’s picture of intelligence’s multiplicity to take this kind of recognition further. Creative Systems Theory describes how human intelligence, with its multiple aspects, is specifically structured to support and drive our toolmaking, meaning-making natures. It goes further to delineate how effectively engaging all aspects of intelligence is essential to thinking with the sophistication the future will require. What the theory calls culturally mature understanding requires that we draw not just on our rationality (in which we take appropriate pride) but also on the world of feelings and emotions (which informs human relations), on the language of imagination (which inspires art and myth), and on the intelligence of the body (which provides a foundation for all the rest). We are, by nature, reflective, creative beings.

In contrast, while artificial intelligence can be almost infinitely complex, at best it mimics one aspect of human intelligence, the rational, and even that only imperfectly. (Our rationality works in much more nuanced ways than we tend to appreciate.) In the end, it is machine intelligence. This recognition is critical if we care to avoid calamity. Certainly, it is essential when it comes to confronting today’s crisis of purpose. Machine learning is a tool, and one with great potential for good. But in contrast with human intelligence, there is nothing in it that makes it inherently purposeful, or even simply good.

Today, we easily miss these critical distinctions. Indeed, because we so readily idealize the technological (in effect, making it our god), we can get things turned around completely. Caught in techno-utopian bliss, we can make machine learning what we celebrate. And that is just a start. In an odd way, machine learning becomes what we emulate. As attention spans grow shorter and shorter and we give over more and more of our attention to our devices, cognitive changes follow. Arguably, today, it is less that our machines are coming to think like us than that we are coming to think more and more like our machines.

We let this happen at our peril. Our ultimate task as toolmakers is to be sure that we use our ever more amazing tools intelligently and wisely. That starts with being able to clearly distinguish ourselves from our tools. Machine learning, along with the ever more complex and often amazing forms it will surely take in times ahead, will provide a particularly defining test of this essential ability, one on which our survival may depend.
