
"Hallucinate" Wins! Do Humans Lose?

An AI-linked use of "hallucinate" is a word of the year—but it defines a danger.

Key points

  • "Hallucinate" is Dictionary.com's Word of the Year after a 46% jump in look-ups in 2023.
  • When AI technology hallucinates, it produces false information while presenting it as if true and factual.
  • There have been adorable, hilarious, and disturbing examples of chatbots and other AI tech hallucinating.
  • View all info with a critical scientific eye, without being suspicious and fearful of everything.
The growth of materials generated by artificial intelligence (AI) may have helped Dictionary.com lookups of "hallucinate" to jump by 46% from 2022 to 2023.
Source: Francesco Carta fotografo/Getty

Chatbots and other types of artificial intelligence (AI) are supposed to mimic, in some manner, what the human brain can do. And like human brains, many AI methods and tools can hallucinate, too. But AI hallucinations can be a bit different from what may happen after you've taken a mind-altering drug or eaten a particularly good piece of cake. That's why Dictionary.com has an AI-specific definition of the word "hallucinate": to produce false information contrary to the intent of the user and present it as if true and factual.

And this definition apparently helped crown "hallucinate" as Dictionary.com's 2023 Word of the Year, edging out finalists such as "strike," "rizz," "wokeism," "wildfire," and "indicted." "Hallucinate" earned the honor in large part because online lookups of the word on Dictionary.com jumped by 46% from 2022 to 2023. Yep, when it comes to attracting attention, it looks like the word "hallucinate" has had even more rizz than the word "rizz."

The online dictionary also found an 85% increase in the use of "hallucinate" in digital media from 2022 to 2023. These jumps are probably not due simply to more people eating cake in 2023. There's also been a 62% increase in Dictionary.com searches for other AI-related words, such as "chatbot," "GPT," "generative AI," and "LLM" over the same time frame.

More and more people are likely to be looking up the more computerish definition of the word:

hallucinate [ huh-loo-suh-neyt ] verb. (of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.

Compare the AI-ish definition with the more traditional human definition of "hallucinate" offered by Dictionary.com: "to see or hear things that do not exist outside the mind; have hallucinations." If you were to have a hallucination, that hallucination would likely stay in your head and not be seen by others. When an AI tool hallucinates, though, it doesn't necessarily simply sit there giggling to itself.

With people increasingly using AI for daily activities, an AI hallucination can affect anyone using whatever is generated by that AI tool. For example, when reality-bending AI information is shared on social media or the rest of the internet, it can affect dozens, thousands, or even millions of people. People can use AI to create and spread misinformation and disinformation, such as propaganda and conspiracy theories masquerading as news or research papers. But even when there is no intent to mislead, AI-generated stuff can be mistake-filled and flat-out wrong.

Artificial intelligence (AI) encompasses many different methods, approaches, and tools; it is basically any computer-based method that can perform tasks that would otherwise require a human brain.
Source: Photo by Markus Spiske from Pexels

Take, for example, the 2016 Twitter debut of Tay, a seemingly innocent AI chatbot from Microsoft. Microsoft had to shut Tay down within 24 hours of its arrival on social media after the chatbot turned racist, misogynistic, and dishonest. Tay churned out more than 96,000 tweets, many of which were mean-spirited and/or factually wrong. As James Vincent reported for The Verge, Tay equated feminism with the word "cult" and posted falsehoods such as "WE'RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT" in all caps and "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

There have also been accounts of AI tools seeing things in images that don't actually exist, such as labeling pictures of bicycles and giraffes as pandas. This may sound adorable but could be problematic if, say, an AI-driven missile warning system were to mistakenly report, "There is a panda arriving."

And what about the experience of Kevin Roose, a New York Times reporter, with Bing's chatbot? He described how the chatbot declared its love for him during a two-hour conversation that left him having "trouble sleeping afterward." Reading that account can be quite disturbing, especially if you've been under the impression that you were the one Bing's chatbot truly loved.

All of this is a reminder that while AI can help humanity in many different ways, there is always the risk that an AI tool is making stuff up or even being a bleeping racist. That's why you've got to maintain a critical, scientific eye toward everything that you hear and see, whether it comes from actual people or from AI.

Don't automatically accept what you see and hear. Of course, this doesn't mean that you should be constantly suspicious and fearful, repeatedly asking yourself, "Is anything even real anymore?" while hoarding toilet paper and camping out in your basement. You don't have to question every single thing out there. There are lots of established facts that are already supported by mounds of scientific evidence: The Earth is not flat, air pollution is not harmless, and fruitcakes are not the best holiday presents.

And while AI can greatly expand what humans can accomplish when the technology is applied in the right manner, it's also important not to become overly reliant on AI for everything. Remember, technology in and of itself is neither inherently good nor inherently bad. It's all in how you use it. A blender, for example, can be great for some vegetables but not so much for your underwear.

As is the case with people, try to discern when an information-generating AI method or tool is grounded in reality and telling the truth, and when it is hallucinating a little, or maybe even a lot.
