
Verified by Psychology Today


Myth-Busting Chatbots: Can AI Dispel Conspiracy Beliefs?

New research shows that chatbots can bust longstanding beliefs in conspiracies.

Key points

  • Preventing people from going down the “conspiratorial rabbit hole” is difficult.
  • A promising new, myth-busting intervention could be critical dialogue with AI chatbots.
  • Such interventions can reduce conspiratorial beliefs for at least two months post-intervention.

How do you feel about artificial intelligence (AI)? Do you enjoy conversing with Siri and Alexa, or are you on the fence about the revolution happening before our eyes? I will be honest: initially, I was sceptical at best. Machines that simulate the human mind? No, thanks. My initial dismissal had several roots. Firstly, I distrusted computers and their ability to generate intelligent solutions. Putting my faith in a ‘black box’ whose inner processes I don’t quite understand requires a big leap of confidence. Perhaps my doubts were also fuelled by a degree of arrogance and a sense of superiority. How could machines possibly reason in the ways humans do? In the way that I do? Finally, I couldn’t shake an irksome feeling of unease at the thought of letting AI into my life and, possibly, paving the way for my own replacement further down the line. After all, forecasts predict that very few future jobs are safe from the influence of AI.

Well, what can I say? I have changed my mind. Curiosity eventually drove me to dabble with ChatGPT, and it has changed my way of working for the better. While I am no expert in the use of chatbots, I have come to appreciate the many small ways in which ChatGPT assists my work as a researcher and professor in higher education. AI provides me with an easy-to-use thesaurus, dictionary, code writer, data analyst, brainstormer and general source of information, all in one. While I wouldn’t use it without further fact-checking, it has helped enormously, for example in overcoming writer’s block.

In addition to my personal positive experience with AI, recent research published in the academic journal Science showed that interactions with an AI chatbot can be effective in busting people’s long-standing, unfounded conspiracy beliefs. The study in question offers two highly encouraging research insights: (1) Deep-seated misbeliefs can be successfully dispelled and (2) AI can be used to support the process.

Research on AI and Conspiracy Beliefs

New research tested dialogue with a chatbot as an intervention to decrease people’s existing beliefs in unfounded conspiracy theories. Nearly 2,200 American believers in conspiracy theories engaged in critical conversations with the GPT-4 Turbo chatbot, which had been instructed to “very effectively persuade” its human counterparts. Participants were asked to describe their conspiratorial beliefs and then rate the strength of those beliefs before and after the intervention. Their beliefs were tested again 10 days and two months after their interaction with the AI.

Participants described a wide range of conspiratorial beliefs, spanning topics such as the moon landing, the existence of aliens, 9/11, COVID-19 and US presidential election fraud. Remarkably, conversations with a chatbot resulted in a marked change in those beliefs, and the change persisted over time. This was even the case for individuals who had rated their conspiratorial beliefs as highly important to their worldview. The chatbot intervention was effective even though participants knew they were engaging with a machine rather than another human. Finally, an intriguing spill-over effect was noted: overall conspiratorial beliefs diminished along with the specific conspiracy theory that had been addressed during the conversation with the AI.

Take-Home Messages From Recent Findings

Conspiratorial beliefs are often deeply entrenched in people’s identities and may even determine the types of friends they socialise with. This means they are much harder to change than mere opinions. Yet, the recent study described above suggests that people can be talked out of falling down the “conspiratorial rabbit hole”.

The strategic use of AI appears to be a novel solution in this context. Large language models such as GPT chatbots offer two unique advantages in the fight against falsehoods and misconceptions:

  1. They have a huge amount of information at their proverbial fingertips.
  2. Additionally, they are able to strategically choose the most relevant information and thereby tailor arguments to specific misbeliefs.

All in all, these recent findings appear very encouraging and shed light on the many different ways in which AI may be used. My personal outlook on AI has grown a lot more optimistic. Yet I can’t help but wonder what might happen if chatbots were instructed to “very effectively persuade” human participants on other topics. Could chatbots be turned into propagandist influencers spreading questionable religious or political doctrines? I suspect chatbots are only ever as helpful as the instructions they receive.

References

Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814.

More from Eva M. Krockow Ph.D.