
Are the Critics of Cognitive Immunology in Denial?

The new science of misinformation is generating intemperate pushback. Why?

Key points

  • The emerging science of mental immunity points to real solutions to our misinformation problem.
  • Critics argue that misinformation is indistinguishable from real news; science shows otherwise.
  • Detractors fault the disease model, but empirical research shows that misinformation spreads like a virus.
  • Misinformation has distinct patterns we can learn to spot, thereby boosting our "misinformation immunity."
Image: Challenging ideas have always faced criticism. Source: geralt / Pixabay

by Andy Norman and Lee McIntyre

A promising new paradigm is taking hold in cognitive science. It employs concepts borrowed from immunology to illuminate the way our minds handle information—especially misinformation. We call it “cognitive immunology” and think it can transform our understanding of extremism, polarization, and ideological rigidity. Already, the approach is helping democratic nations around the world combat digital influence operations.[1]

Unfortunately, some are reacting to this work in ways that are not constructive. Take University of Sussex philosophy lecturer Dan Williams’s review of Sander van der Linden’s book Foolproof in Boston Review.[2]

In this review, Williams purports to dismantle the central tenets of van der Linden’s framework for fighting misinformation.[3] A careful look, though, reveals this critique to be specious. Williams spins a narrative in which today's misinformation researchers are in the grip of an irrational "panic" that "took off in 2016," when Brexit passed and Donald Trump was elected president. According to Williams, this led to a "frenzied search for solutions" and a hasty embrace of the idea that misinformation is akin to a mental virus.

The real story, though, is quite different, for the spread of misinformation really is relevantly similar to the spread of disease.[4] And as far back as 1961, William McGuire showed that minds can be “inoculated” against unwanted persuasion.[5] In van der Linden’s book, we encounter a rigorous empirical account that updates these insights for the digital era.

Van der Linden argues that misinformation and online manipulation have discernible “fingerprints” that people can learn to spot—just as harmful pathogens have biochemical markers that our immune systems can learn to spot. For example, a fake news outlet might use the technique of discrediting to induce distrust of the “mainstream media.” (Van der Linden highlights six tactics: Discrediting, Emotion, Polarization, Impersonation, Conspiracy, and Trolling, and he offers the acronym DEPICT as a mnemonic.) Learn to spot such tactics, suggests van der Linden, and your mind becomes more resistant to manipulation.

Williams replies, reasonably enough, that it is not just unreliable sources that employ discrediting. Reliable ones use it, too—to put unreliable sources in their place. For example, The Economist might run an exposé of irresponsible reporting at Fox News. Put differently, discrediting isn’t just used to push falsehoods; it’s also used to serve truth. Hence, concludes Williams, it is not a reliable marker of misinformation.

It’s true that discrediting is not a perfect indicator of misinformation. Nor is it a sure-fire indicator of manipulative intent. But Williams wants us to accept something far more radical: that misinformation has no “fingerprints” at all. So he tacks on a sweeping philosophical argument: “There are no intrinsic differences between truth claims and misinformation for the simple reason that a claim’s truth depends not on identifiable features of the claim but on (features of) the world.”[6]

This can look like a telling argument; in fact, it is deeply flawed. For starters, it erects a straw man. Van der Linden is under no illusion that truth is “intrinsic.” Nor does he claim that discrediting is an infallible sign of falsehood. The idea is rather that discrediting is a flag that information manipulation might be happening. If your guru tries to persuade you that your family is not to be trusted, you’d be wise to discount his words rather than theirs: He's probably playing you. The same goes for sources that try to discredit “the mainstream media.” “Intrinsic” falsehood is a red herring of Williams’s devising, as any charitable reading of Foolproof will confirm.

Second, Williams employs a false dichotomy. When he states that “a claim’s truth depends not on identifiable features of the claim but on the world,” he implies that it must be either/or. But the truth of an empirical claim depends both on identifiable features of the claim and on the state of the world. For example, the truth of “Our solar system has eight planets” depends on the meaning of “planet” and on the configuration of matter near our sun.

Sometimes, we need to examine the world to settle a truth claim; other times, it pays to examine the claim itself. Either approach can reveal the claim to be more problematic than it appears. The same goes for information more generally: scrutinizing it can bring latent problems to light.

Ironically, the title of Williams's review—“The Fake News About Fake News”—uses a manipulation technique that van der Linden treats at length in Foolproof. He calls it the “You are fake news effect.” Here’s the idea: Scholars and responsible fact-checkers tend to employ careful analysis, judicious reasoning, and neutral language to call out mistakes, as these are signals of objectivity. By contrast, “You are fake news!” has become a cheap way for bad actors to dismiss inconvenient points of view. Williams should know better: Serious scholars shouldn’t stoop to calling one another—or any serious scholarly work—"fake news.”

Another example: “Most Republicans (or Democrats) are evil” employs what van der Linden calls the polarization technique. You don’t need to know anything about the state of the world to understand that such a claim is polarizing. The fact that it evokes strong negative emotions is another sign that it’s manipulative rather than factual.

Consider the claim: “Trump’s racist policies horribly devastated our country.” Although the underlying facts may support the case, you can make the same claim in a more neutral and factual manner: “Trump’s policies have negatively impacted U.S. race relations.” Because the former attempts to play on our emotions, we should assign it less weight.

Williams also objects to the inclusion of “conspiracy” on van der Linden’s list of manipulation techniques. His grounds? Some conspiracies are real, hence “the mere presence of conspiracy theorizing—however we define it—cannot be a distinguishing mark of misinformation.” But real conspiracies discovered through responsible investigation are quite different from the “conspiracy cognition” that van der Linden warns against. The latter, it turns out, involves a rich cocktail of “overriding suspicion,” “incoherence,” “nefarious intent,” and the like.[7] Again, we find distinct markers that can help differentiate reliable from unreliable content.

Van der Linden’s view that misinformation has distinctive “fingerprints” is solidly based on empirical evidence.[8] A study published in a Nature journal, for example, found that misinformation draws on negative emotions at roughly 20 times the rate of accurate information.[9] Its conclusion? “Deceptive content differs from reliable sources in terms of cognitive effort and the appeal to emotions.” The point is that close examination of a claim can reveal it to be problematic even before one tries to fact-check it.

Examine “All amphibians are slimy, so lizards are slimy” and you’re apt to notice that it assumes—falsely—that lizards are amphibians. Spotting this can neutralize the argument’s power to deceive, and you needn't touch any lizards in the process. Being mindfully attentive to the properties of the information you consume is fundamental to wisdom. Isn't that the point of the Socratic Method? And philosophical inquiry more generally? Surely it makes sense to be alert to manipulative rhetorical tactics.
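To make the lizard example fully explicit (our gloss, rendered in standard predicate logic), the argument owes its surface plausibility to a suppressed premise:

\[
\underbrace{\forall x\,\bigl(\mathrm{Amphibian}(x)\rightarrow \mathrm{Slimy}(x)\bigr)}_{\text{stated premise}}
\;,\;
\underbrace{\forall x\,\bigl(\mathrm{Lizard}(x)\rightarrow \mathrm{Amphibian}(x)\bigr)}_{\text{suppressed premise, false}}
\;\vdash\;
\forall x\,\bigl(\mathrm{Lizard}(x)\rightarrow \mathrm{Slimy}(x)\bigr)
\]

The inference is formally valid; it misleads only because the suppressed premise is false, and exposing that premise requires examining the argument, not the lizards.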

Bad actors use bits of truth to construct false narratives. To get from one to the other, though, they almost always employ fear-mongering, discrediting, polarizing language, trolling, or the like. Van der Linden offers a practical guide to spotting such techniques—a way to free ourselves from much information manipulation.

Williams does cite one study showing that, sometimes, psychological inoculation doesn’t improve people’s discernment between true and false news.[10] He cites another that seems to indicate that (contrary to van der Linden’s claims) debunking is superior to prebunking.[11] But these results are cherry-picked. The latter didn’t test inoculation theory as described by van der Linden, and a systematic review of the literature shows that prebunking is superior to debunking.[12] Indeed, dozens of studies show that inoculation and prebunking work.[13] Many such findings have been replicated in the lab, and a field study with millions of people on YouTube shows that inoculation can improve people’s “real-world” capacity to distinguish real and fake news.[14]

Williams dismisses one of van der Linden’s findings as an “artifact of experimental design” on the grounds that the “stories used in the study were common knowledge” to test subjects in the U.S. and the U.K. But the very same findings were replicated by independent studies using different headlines about local news from India.[15] Our advice? If you’re going to challenge one of the world's top scientists on an empirical question, you’d better have the receipts.

Finally, Williams dismisses as “hype” the "viral" analogy that runs through Foolproof. In this, he fails to show serious engagement with a remarkably fruitful idea. Mathematical models show that misinformation literally does spread like a virus.[16] Indeed, no serious computational scientist would dispute that epidemiological models also work to describe information diffusion. None of this means that we all have simple, easily infected minds. On the contrary, van der Linden carefully dissects the psychological literature to distinguish when people are more likely versus less likely to be fooled.
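For readers who want to see what “spreads like a virus” means mathematically, here is a minimal sketch of the standard SIR model from epidemiology, reinterpreted for information diffusion. This is our own illustration with made-up parameter values, not a model taken from the cited study.

```python
# Minimal SIR (susceptible-infected-recovered) sketch, reinterpreted for
# information diffusion: S = users who haven't seen a claim, I = users
# actively sharing it, R = users who have stopped sharing. The parameter
# values below are made up for illustration.

def sir_step(s, i, r, beta, gamma, dt=0.1):
    """Advance the dynamics one Euler step:
    ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i, dr/dt = gamma*i."""
    new_shares = beta * s * i * dt   # exposures convert susceptible users
    dropouts = gamma * i * dt        # sharers lose interest at rate gamma
    return s - new_shares, i + new_shares - dropouts, r + dropouts

s, i, r = 0.999, 0.001, 0.0          # one initial sharer per thousand users
beta, gamma = 0.4, 0.1               # assumed spread and drop-off rates

for _ in range(1000):                # simulate 100 time units
    s, i, r = sir_step(s, i, r, beta, gamma)

print(f"Fraction of users ever 'infected': {1 - s:.0%}")
```

The qualitative point such models capture is threshold behavior: when the effective reproduction number (beta divided by gamma) exceeds 1, a claim can cascade through most of a network from a handful of initial sharers. On this picture, inoculation works by lowering that number.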

In fact, van der Linden devotes an entire chapter to the claim that only a minority of people are affected by fake news, carefully taking the reader through the limitations of the studies behind it. And even if it were true that few people are influenced by misinformation, it’s clear that disinformation can swing elections decided by small margins. Fake news doesn’t have to be widely believed to undermine democracy.

Weaponized information is as old as time. Now, though, bad actors can “micro-target” their messaging and populate millions of social media feeds with content designed to be triggering. They can exploit algorithms that amplify “viral” content, deploy armies of bots, and—coming soon to an election near you—leverage artificial intelligence. Yet Williams would have us believe that “misinformation is not widespread”—that “its causal role in social events is either unsubstantiated or greatly overstated.” No cause for alarm here, folks: Just go about your business.

We are deeply disappointed by Williams's one-sided review. Foolproof is an astonishingly well-researched overview of a persistent societal problem and a highly readable guide to keeping your mind relatively misinformation-free. Cognitive immunology can help us adapt to our brave new digital world, but not if we stick our heads in the sand.

Andy Norman, Ph.D., is the Executive Director of CIRCE and the author of Mental Immunity. Lee McIntyre, Ph.D., is a Research Fellow at the Center for Philosophy and History of Science at Boston University. His book On Disinformation will be released in August 2023. Both McIntyre and van der Linden are CIRCE affiliates, along with dozens of other academic researchers in the field of mental immunity.

References

[1] https://stratcomcoe.org/publications/inoculation-theory-and-misinformation/217

[2] https://www.bostonreview.net/articles/the-fake-news-about-fake-news/

[3] https://wwnorton.com/books/9780393881448

[4] Not only do epidemics and infodemics obey the same mathematical laws; the evolutionary algorithm that drives biological evolution also operates at the cultural level: https://link.springer.com/article/10.1007/s11406-013-9415-8.

[5] https://doi.org/10.1037/h0048344

[6] From a Twitter thread where Williams summarizes his critique of Foolproof: https://twitter.com/danwilliamsphil/status/1666503931294351361

[7] https://www.argumenta.org/wp-content/uploads/2018/05/2-Argumenta-Stephan-Lewandowsky-Elisabeth-A.-Lloyd-Scott-Brophy-When-THUNCing-Trumps-Thinking.pdf

[8] https://www.nature.com/articles/s41562-019-0632-4

[9] https://www.nature.com/articles/s41599-022-01174-9

[10] https://psyarxiv.com/4bgkd/

[11] https://www.pnas.org/doi/full/10.1073/pnas.2020043118

[12] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0280902

[13] https://www.tandfonline.com/doi/full/10.1080/03637751003758193

[14] https://www.science.org/doi/10.1126/sciadv.abo6254 ; https://medium.com/jigsaw/defanging-disinformations-threat-to-ukrainian-refugees-b164dbbc1c60

[15] https://onlinelibrary.wiley.com/doi/10.1002/acp.3995

[16] https://www.nature.com/articles/s41562-022-01388-6
