
Could Psychology Make ChatGPT Irresistibly Persuasive?

ChatGPT explains its power and creates references that don't exist.

Key points

  • ChatGPT and other AIs produce well-written and convincing prose, but we have yet to determine their full potential or dangers.
  • ChatGPT generated a persuasive essay on using psychological principles to produce arguments readers couldn't resist.
  • ChatGPT supported its arguments with "hallucinated" (non-existent) references.

Like many professors, I've been bemused by the potential of AI to produce clear, coherent text.

AI is everywhere. I first noticed it when I started researching bonsai. Casual internet searches produced many sites with great opening paragraphs followed by repetitive information and poor logical flow. Aha! I thought. Bots.

Then ChatGPT exploded on the scene. Colleges ran workshops warning faculty that AI's ability to produce high-quality text on any topic could undermine academic integrity and university grading schemes.

Again, I wasn't concerned. I started thinking about how to use ChatGPT in class and how to teach students to harness its potential ethically. Its usefulness as a research assistant seemed enormous.

Then I read an interesting parable in a New York Times op-ed by Ross Douthat. Taken from Nick Bostrom's 2014 book, Superintelligence, it describes a flock of sparrows trying to harness an owl to do heavy, repetitive labor. One sparrow thinks the flock should learn how to tame the owl before bringing it into their home. Its fellows disagree. You can imagine the results. Douthat argues that we are at a cusp far more dangerous than the one those sparrows faced awaiting the arrival of an owlet. At least the sparrows knew what an owl was and what it could do. With AI, we don't. It is hard to envision how a future AI will transform our public and private lives when we barely know what it can do.

ChatGPT and Psychology: A Frightening Mix

That got me thinking. All good writers use psychology to make their work more convincing. But we human writers are uneven in our application of psychological principles, and we have only limited time and information. Following Douthat, I thought: ChatGPT does not have these limitations. Could it write articles so persuasive that its arguments would be impossible for readers to resist?

I decided to ask ChatGPT.

My prompt: "Write a convincing article about how ChatGPT could use psychological principles so its arguments are so convincing people would have a hard time resisting them."

The response by ChatGPT begins:

ChatGPT is already a powerful language model that can generate text, answer questions, and hold a conversation with human users. But what if it could be even more persuasive? By incorporating psychological principles into its arguments, ChatGPT could become an even more effective tool for influencing people's beliefs and actions.

One key psychological principle that ChatGPT could use to make its arguments more persuasive is social proof. Social proof refers to the tendency of people to conform to the actions and beliefs of others in their social group. ChatGPT could leverage this principle by presenting evidence and arguments that are supported by a large number of people or trusted authorities. . . .

It goes on to list three more principles and concludes:

. . . Of course, the use of psychological principles in persuasive communication raises ethical concerns. ChatGPT should be careful to use these principles in a responsible and transparent manner and ensure that its users are aware of the techniques being used to persuade them. Additionally, ChatGPT should always prioritize accuracy, honesty, and fairness in its arguments and avoid manipulating or deceiving its users.

. . . ChatGPT must use these techniques responsibly and transparently to ensure that it is not manipulating or deceiving its users. With careful use of psychological principles, ChatGPT could become a powerful force for positive change in the world.

(A full copy of the ChatGPT text is available here.)

Reading the text, I found it interesting that ChatGPT said it should be ethical and transparent and should prioritize accuracy, honesty, and fairness. It never said that it does those things. As I always do when using ChatGPT, I refined my query to see if I could get a better response. Specifically, I asked ChatGPT to "regenerate [its] response, making it more persuasive, writing for a college audience, and relying more heavily on references." I found the new text even more powerful and persuasive.

One of the scariest things about ChatGPT is that it writes like it is human, but it is not. No ethical code guides it. It does not evaluate its sources. It spits back what it learned from the web, for better or worse. And it "hallucinates," combining information in nonsensical ways. That became apparent as I was writing this post. Looking for a citation to support the idea that there is a great deal of evidence that propaganda is effective, I asked ChatGPT to summarize that literature for me, citing its sources. What I got follows.

[Image: ChatGPT's summary of the literature on propaganda. Source: Nancy Darling]

I then asked for the Journal of Social and Political Psychology reference that ChatGPT based its argument on. It supplied:

Jost, J. T., Langer, M., & Badaan, V. (2017). Affect, ambivalence, and persuasion: The case of Nazi propaganda in “conflicted” societies. Journal of Social and Political Psychology, 5(1), 189-222. https://doi.org/10.5964/jspp.v5i1.612

[Image: The third source ChatGPT used to support its summary. Again, I could not locate this paper. Source: Nancy Darling]

Sounds good. Except there is no such paper. ChatGPT had seemingly created a reference that looked plausible but did not exist, a phenomenon technically called a "hallucination." When I told ChatGPT that the reference was incorrect, it apologized and provided another reference, with another author, that was also non-existent. And then a third (pictured above). ChatGPT provided me with summaries of each article it had cited. As far as I can tell, none of the references were valid.

It is possible that the three articles ChatGPT referred me to are out there somewhere (although a search of the internet and the alleged authors' vitae did not reveal them). More likely, ChatGPT fabricated them. And if I hadn't checked, I would never have known.

That is what makes me nervous: The text was convincing. Well written. Able to accurately apply psychological principles. But also unbound by ethics or judgment. And wrong.

Afterword

I was interested in how ChatGPT would title this article. I provided my title and asked it to make it more attractive to readers. It suggested:

  • Could This One Psychological Tactic Make ChatGPT Impossible to Resist?
  • What Does Psychology Have to Do With ChatGPT's Ability to Persuade?
  • Unlocking the Secrets of ChatGPT's Persuasiveness: A Must-Read for Marketers and Communicators
  • Boost Your Own Persuasion Skills with These Insights From ChatGPT and Psychology
  • The Surprising Way Psychology Could Make ChatGPT More Convincing Than Ever Before
  • The Psychology Trick That Could Boost ChatGPT's Persuasiveness by 50%

Does that look like your social media feed? It sure looks like mine.

