The Intent Behind a Lie: Mis-, Dis-, and Malinformation

Can you differentiate among the varieties of untruths you encounter online?

Key points

  • Misinformation has long been used as an umbrella term for online untruths.
  • Researchers argue that language should be more specific about untruths, differentiating based on intention and impact.
  • More specific language can help us to target the most dangerous online inaccuracies.
Salma Hayek has little in common with purposeful creators of false information. She shared a myth, but with no ill intention.
Source: Gage Skidmore/Wikimedia Commons

Examples of false information are everywhere. Here are three:

  • In a recent interview, when asked why superhero movies are so popular, Salma Hayek suggested that “because we only use from 3% to 10% of our brain, we know there’s a lot more to a human being that we have not discovered. So I think the superhero story makes us wonder if there’s more that we can do.” But in one of our previous posts, we debunked this neuromyth, noting that there actually is activity throughout our brains, even at rest.
  • In our last post, we described the Book of Veles, Jonas Bendiksen’s fake compendium of photos and essays, intended to prank the photojournalist community.
  • We also previously wrote about deepfakes—videos or photos that are so expertly created that it’s almost impossible to uncover them. We noted that deepfakes are sometimes used for revenge porn, the sharing of nonconsensual sexually explicit videos to harass someone, often but not always an ex-partner. For example, Patrick Carey, a Long Island 20-year-old, was recently indicted for reportedly creating sexually explicit deepfakes from social media photos of multiple women, posting them on pornographic websites along with identifying information, and encouraging people to threaten the women.

Salma Hayek, Jonas Bendiksen, and Patrick Carey clearly had different intentions with regard to the false information they shared. Their varied intentions can help us sort these falsehoods into categories.

Categorizing Falsehoods by Intent

Researchers have developed definitions of the three primary categories of false information: misinformation, disinformation, and malinformation (Santos-D’Amorim & Miranda, 2021). Misinformation is simply inaccurate information and is classified as unintentional. It’s often used as a descriptor for all kinds of falsehoods and may result from an error, cognitive bias, or laziness in fact-checking.

Disinformation is classified as inaccurate information conveyed deliberately—with the intention to deceive. Examples might include a misleading, clickbait headline or a fake review on Yelp. The goal of sharing the inaccurate information might even be a prosocial one—say, a restaurant review meant to help a friend’s new venue succeed, or the myth of Santa Claus, who is said to deliver Christmas presents in many cultures and traditions. Indeed, researchers Oberiri Destiny Apuke and Bahiyah Omar (2021) found that altruistic reasons were the most commonly cited motives for sharing false information related to the COVID-19 pandemic.

Malinformation is classified as both intentional and harmful to others. Phishing to steal a person’s identity and catfishing to develop a fake romance in order to steal money are prime examples.

Applying the Categories

Think back to our earlier examples: Salma Hayek, Jonas Bendiksen, and Patrick Carey. How would you categorize their falsehoods? Hayek’s inaccuracy is likely misinformation. She was sharing a widely believed neuromyth without checking its veracity. She likely didn’t mean to deceive anyone and was using the myth to explain the popularity of a film genre. Although Hayek’s inaccuracy is not all that dangerous, some unintentional misinformation can cause harm. Much of the misinformation surrounding the COVID-19 pandemic is unintentional, perhaps even shared with positive intentions, but myths surrounding vaccines and other protective measures have likely led to many thousands of deaths.

Bendiksen, however, did intend to deceive with his fake Book of Veles, so his work would be categorized as disinformation. He did not want to cause harm, though, and he even planted hints about his deception in order to get caught once his falsehoods became more widely known. As we wrote in our previous post, “Bendiksen hopes that his foray into intentional disinformation might illuminate the actual hocus-pocus that has infiltrated photography and film, news and social media, and help us to all be a bit more skeptical.” Bendiksen’s disinformation makes for a charming story, but not all disinformation is benign. The satirical news sites The Onion and its conservative counterpart The Babylon Bee have had some of their intentional falsehoods, meant to amuse and to offer perspective, taken literally, which some see as a “problem for democracy.”

Carey reportedly not only intended his falsehoods but also wanted to cause harm, allegedly encouraging those who viewed his deepfakes to harass and even threaten his victims. Malinformation, therefore, can be dangerous and even criminal, although the law often lags behind the technology. For example, Massachusetts Governor Charlie Baker recently lamented, after hearing testimony from numerous victims of revenge porn, that his state’s laws weren’t protective enough. He said, “I know that, in other states, you have a framework that provides you with the support and the protection that you don’t just deserve, you’re entitled to. … I sometimes wonder whose side we’re on.”

Clearly, we must fight all types of inaccuracies, but it’s also important to differentiate among the three types. Darrin Baines and Robert Elliott (2020) argue that “‘misinformation,’ as an umbrella term, can be confusing and should be dropped from use.” Instead, they encourage the use of the three categories as described here. It might be a first step toward deciding how to deploy our resources—legal and otherwise—to fight back against online inaccuracies.

References

Apuke, O. D., & Omar, B. (2021). Fake news and COVID-19: Modelling the predictors of fake news sharing among social media users. Telematics and Informatics, 56, 101475. https://doi.org/10.1016/j.tele.2020.101475

Baines, D., & Elliott, R. J. (2020). Defining misinformation, disinformation and malinformation: An urgent need for clarity during the COVID-19 infodemic. Discussion Papers, 20(06), 20-06.

Santos-D’Amorim, K., & de Miranda, M. K. F. d. O. (2021). Misinformation, disinformation, and malinformation: Clarifying the definitions and examples in disinfodemic times. Encontros Bibli: revista eletrônica de biblioteconomia e ciência da informação, 26, 1-23. https://doi.org/10.5007/1518-2924.2021.e76900
