
The Psychology of Artificial Intelligence

Why are we so fascinated by (and worried about) AI?

Key points

  • Artificial intelligence (AI) has deep roots in the history of the future.
  • Both utopian and dystopian visions have been a staple of scenarios related to AI.
  • People's feelings about emerging technologies have always been intense and ambivalent.

While it’s making headlines today, and deservedly so, artificial intelligence—the theory and development of computer systems able to perform tasks that normally require human intelligence—has deep roots in the history of the future. In fact, the ability of machines to mimic and possibly surpass human capabilities has long served as a major trope within futurism. And just like today’s predictions about where AI may take us, both utopian and dystopian visions have been a staple of those scenarios.

What are the psychological underpinnings of thinking in general about the future, i.e., that which is yet to be? For David Remnick, as he explained in 1997, the future consists of “stories we tell to amaze ourselves, to give hope to the desperate, to jolt the complacent,” implying that thinking about tomorrow really serves the needs of today. The future is indeed “always about the present,” Remnick continued, a catharsis for “what confuses us, what we desire, what we fear.” Likewise, “prophecies and predictions tell us little or nothing about what will happen,” David A. Wilson argued in his 2000 History of the Future, but rather “tell us a great deal about the fears, hopes, desires, and circumstances of the people who peer into their own future and imagine what it will be like.”

Much of this psychology is embedded in our ambivalent feelings toward what would be called artificial intelligence. The journey of AI has certainly been an interesting one, beginning in the 1920s when robots emerged as a ubiquitous symbol of the future and served as that era’s definitive expression of the cross-pollination between humans and machines. Our tense relationship with automatons could be detected even before Czech playwright Karel Čapek coined the word “robot” in 1921, however, raising the persistent question of whether these anthropomorphic machines would be our slaves or our masters.

A generation later, some of the best minds of the day were obsessed with the idea of ever-advancing automation. Marshall McLuhan, arguably the leading post-World War II authority when it came to the dynamic between humankind and machine, proposed that in an automated world, people in the future would have abundant leisure time, so much so that it would be a challenge to fill our days productively.

Meanwhile, Buckminster Fuller believed that more automation, specifically made possible by the new vending machine-sized computers being used, was nothing less than a key turning point in the history of mankind. “That the machine is to replace man as a specialist, either in craft, muscle, or brain work, is an epochal event,” he thought, something that would redirect the trajectory of our species.

The conviction that the merging of humankind and machines would lead us to an uncertain future continued to gain traction through the latter 20th century. In the 1980s, with a new century and millennium in sight, our love-hate relationship with technology became more complex as the basic concept of AI materialized.

It was that decade’s computer revolution that encouraged the sense that artificial intelligence represented something entirely new and unknown. “In contrast with earlier decades of invention, man stands at the dawn of the Age of Insight,” Gene Bylinsky observed in 1988, defining this weighty concept as “a new era of understanding how things work and how to make them work better.” The Age of Insight would be a fertile breeding ground for everything from artificially intelligent computers to diagnostic machines, much like Dr. McCoy’s in Star Trek, he predicted, representing a quantum leap in science and technology because of the evolution of “smart” capabilities.

As we drew closer to and then crossed over the new century and millennium, visionaries pointed out both the potential benefits and dangers of emerging artificial intelligence. Steven Levy, author of the 1992 book Artificial Life, for example, claimed that in the next century, “we’ll relate to our machines as we now relate to domestic animals.” He envisioned a self-replicating, mobile robot that could find its own sources of energy—a rather sanguine view of the merging of humankind and machine.

Soon after that, however, innovator Ray Kurzweil warned that we were on the cusp of an era so radical that we couldn’t really grasp its implications. Since the 1960s, Kurzweil had been traipsing through the new frontier of artificial intelligence, inventing such things as the flatbed scanner, electric piano, and large-vocabulary speech recognition software.

In his 2005 bestseller The Singularity Is Near, Kurzweil presented his concept of the “Law of Accelerating Returns,” in which he argued that the social effects of technology were expanding at an exponential rate. By 2029, computers would be smarter than humans, he predicted, and, in another 20 or so years, the point of “Singularity” would be reached—a critical juncture, because people would no longer be able to understand technology that had become so much more intelligent than we are.

As AI continues to move from theory to reality, history tells us that both our faith in and fears of the humanization of machines (and the mechanization of humans) are likely to intensify.

References

Wilson, David A. (2000). History of the Future. Toronto, CA: McArthur and Company.

Levy, Steven. (1992). Artificial Life: A Report from the Frontier Where Computers Meet Biology. New York, NY: Pantheon.

Lawrence R. Samuel Ph.D.