


Psychology’s Dirty Data Secret and Why You Need to Know It

New research shows the surprising limits on psychology's knowledge about people.

Key points

  • The problem of careless responding makes it hard to know how reliable online research studies are.
  • New methodologies are examining ways to separate people who lie from those who tell the truth in online surveys.
  • Before you follow the advice of any research-based study, it’s worth finding out whether careless responders were eliminated.

Knowledge about psychology comes from many sources, ranging from experiments to clinical case studies. Whether the author of a published study is an established researcher or a new contributor to the field, there are standards that must be upheld for the findings to be considered valid. You should therefore feel confident that what you’re reading has met rigorous criteria of acceptability.

Or should you? Much of psychological research is based on evidence from questionnaires, and increasingly, these questionnaires are administered online. Respondents typically sign consent forms indicating that they are aware of the risks and benefits of the research and that their participation is voluntary. After that point, they begin answering questions, usually in a rating-scale format, in a session that can take anywhere from 10 to 45 minutes.

Researchers administer these questionnaires in the hope that the questions will engage respondents enough to prompt honest and thoughtful answers. But how confident can they be that their respondents are taking the whole enterprise seriously? In part, investigators depend on the goodwill of respondents who, after all, signed up to help out. That goodwill, however, may only go so far.

Research participants can be motivated by a variety of extraneous factors, such as the promise of payment or, for university students, extra credit in a course. Neither of these incentives requires that participants answer honestly, or even that they complete the questionnaires in their entirety. As a result, respondents can put in minimal effort and still be counted as having “participated.”

Getting into the Mindset of the Psychology Test Faker

For decades, researchers have tried to figure out why people fail to respond thoughtfully when they complete personality tests. Indeed, the authors of the original Minnesota Multiphasic Personality Inventory (MMPI), published in 1943, built into its very architecture a set of scales designed to trap liars. Such controls were particularly important given the MMPI’s widespread use as a diagnostic assessment tool. That its authors anticipated this kind of compromise shows that careless responding and deliberate faking are neither new problems nor ones confined to online measures.

Similarly, the Marlowe-Crowne Social Desirability Scale (MCSDS), published in 1960, was widely adopted by personality researchers as a control for participants’ desire to create a favorable impression. With items such as “I always try to practice what I preach,” the MCSDS can catch fakers who, trying to portray themselves in a favorable light, agree with statements that are improbably virtuous. After all, who “always” practices what they preach? Do you?
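
To see the logic in miniature, here is a toy sketch (in Python) of how a “lie scale” of this kind could be scored. The items beyond the one quoted above, and the cutoff, are invented for illustration; they are not the MCSDS’s actual item set or its published scoring key.

```python
# Toy illustration of a social-desirability ("lie") scale: count how many
# improbably virtuous statements a respondent endorses. Items and cutoff
# are invented for illustration, not the real MCSDS content or scoring.
IMPROBABLE_VIRTUES = [
    "I always try to practice what I preach.",
    "I have never disliked anyone.",
    "I am always courteous, even to people who are disagreeable.",
]

def social_desirability_score(endorsements: dict) -> int:
    """Count the improbably virtuous statements this respondent agreed with."""
    return sum(bool(endorsements.get(item, False)) for item in IMPROBABLE_VIRTUES)

respondent = {item: True for item in IMPROBABLE_VIRTUES}  # agrees with everything
if social_desirability_score(respondent) >= 2:  # arbitrary cutoff
    print("Possible impression management; interpret other answers cautiously.")
```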

Unfortunately, researchers who have moved their data gathering to online survey platforms, such as Qualtrics, may not take that extra step of controlling for careless or deceptive participants. This is why, when you turn to information based on these instruments, you need to check whether the researchers eliminated participants who rushed or gave invalid responses.

According to the University of Navarra’s Austin Lee Nichols and Rochester Institute of Technology’s John Edlund (2020), “Careless responding remains a significant issue in social scientific research.” Furthermore, as Wood et al. (2017) note, online samples are more likely to be contaminated by “dirty data” than non-online samples.

Qualtrics does make it possible for investigators to see how long a participant took to complete the survey, and researchers always have the option of inserting a few validity-check items. These could include asking the same question twice with different wording, or simply asking participants whether their responses reflected their best effort. The issue is whether researchers take advantage of this online detective work to cut invalid responders from the dataset, or at least to investigate their characteristics.
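
To make this concrete, here is a minimal sketch, using Python and pandas, of what such screening might look like once responses are exported. Every column name, data value, and cutoff below is a hypothetical illustration, not Qualtrics’ actual export format or any study’s real criteria.

```python
import pandas as pd

# Hypothetical export of survey responses (column names are illustrative).
df = pd.DataFrame({
    "respondent":           ["r1", "r2", "r3", "r4"],
    "duration_seconds":     [850, 95, 1200, 640],       # total completion time
    "q7_worried":           [4, 2, 5, 3],               # original item, 1-5 scale
    "q22_worried_reworded": [4, 5, 5, 3],               # same question, reworded
    "best_effort":          [True, True, True, False],  # "Did you give your best effort?"
})

MIN_SECONDS = 180      # flag anyone finishing in under ~3 minutes (arbitrary cutoff)
MAX_DISCREPANCY = 1    # the reworded item should land within 1 scale point

flagged = (
    (df["duration_seconds"] < MIN_SECONDS)                                       # rushed
    | ((df["q7_worried"] - df["q22_worried_reworded"]).abs() > MAX_DISCREPANCY)  # inconsistent
    | (~df["best_effort"])                                                       # admitted low effort
)

print("Flagged for review:", df.loc[flagged, "respondent"].tolist())
# Flagged for review: ['r2', 'r4']
```

Whether flagged respondents are then dropped or merely examined further is exactly the methodological choice a careful reader should look for.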

If you really care about a study’s findings, your best option is to track down the original article and see what the authors reported in the results. Alternatively, if the research appeared in a reputable scientific journal, you have more assurance that the scientists who reviewed the article before publication scrutinized the methods section before giving it the green light. As Hong et al. (2020) put it, “In order to increase the confidence of a single study, it would be wise for a researcher to provide evidence of sufficient data quality to the general research community.”

Who’s Most Likely to Fake Their Answers?

All of this raises the question of why people bother being deceptive when, in reality, there’s no gain in it for them. Sure, it’s nice to impress a researcher, but given the anonymity of most studies, why would you care? If you’re being diagnosed, though, you have a higher stake in the outcome, which might tempt you to fake your answers, however counterproductive that seems.

A subset of psychological research has grown up around the question of not only how to handle careless responders statistically but also who is most likely to provide that “dirty data.” The latest team to tackle the problem is HEC Montréal’s Melanie Robinson and Concordia University’s Kathleen Boies (2021). The authors note that previous research based on the Five-Factor Model revealed higher rates of carelessness among participants low in conscientiousness, agreeableness, extraversion, and emotional stability. People high in openness to experience were more likely to put effort into their responses, perhaps reflecting a greater willingness to spend time considering their thoughts and feelings.

Using the six-factor HEXACO personality model as their basis, the Canadian researchers believed they could gain insight beyond the prior research, given that Honesty-Humility is one of the HEXACO dimensions. The facets of this trait include sincerity, fairness, greed avoidance, and modesty. As you might imagine, people high in this quality should take their role as research participants more seriously and be less likely to rush or lie.

To test their predictions, Robinson and Boies built a set of validity checks into their online survey: time to completion, self-reported effort and perceived importance of the survey, items containing embedded instructions (such as “select a specific answer choice”), and the number of missing responses.
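
As a rough sketch of how indicators like these might be combined into a single carelessness flag, here is one way to do it in Python with pandas. The data, cutoffs, and two-strike decision rule are all assumptions for illustration, not Robinson and Boies’s actual analysis.

```python
import pandas as pd

# Respondent-level indicators loosely modeled on the checks described above.
df = pd.DataFrame({
    "respondent":              ["r1", "r2", "r3"],
    "duration_seconds":        [900, 110, 760],
    "failed_instructed_items": [0, 2, 0],   # missed "select this answer" items
    "missing_responses":       [1, 14, 0],  # unanswered questions
    "self_rated_effort":       [5, 2, 4],   # 1-5 self-report
})

# Convert each indicator into a pass/fail check (all cutoffs are arbitrary).
checks = pd.DataFrame({
    "too_fast":       df["duration_seconds"] < 240,
    "missed_checks":  df["failed_instructed_items"] >= 1,
    "too_many_blank": df["missing_responses"] > 5,
    "low_effort":     df["self_rated_effort"] <= 2,
})

# Requiring failures on two or more independent checks avoids dropping a
# respondent over a single slip.
df["careless"] = checks.sum(axis=1) >= 2

print(df[["respondent", "careless"]])  # only r2 is flagged
```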

Among their samples of undergraduates and adults recruited through Qualtrics, the research team reported having removed 15 percent of the college sample and 20 percent of the online adult sample. It was necessary to take this step because, ironically, if they didn't, the authors wouldn’t be able to trust their own findings on the untrustworthiness of their participants.

As the authors predicted, participants high in conscientiousness, agreeableness, openness, and extraversion were less likely to respond carelessly than their more lackadaisical counterparts. Only among the online adult sample, though, was honesty-humility related to more effortful responding. From a methodological standpoint, the researchers were also able to contribute to the literature by showing that their subjective effort measures (asking people how hard they tried) aligned with the objective indicators of time taken, extent of following instructions, and answering most if not all questions on the survey.

Which Studies Can You Trust?

These findings show, then, that not everyone cheats or tries to slide through online surveys with the least possible effort. In some ways, this is good news for personality researchers. Furthermore, it seems remarkably easy to detect a careless responder by building some simple controls into the study, including asking people how seriously they took the whole enterprise. The bad news is that if you’re a researcher studying qualities such as the Dark Triad, which includes undesirable personality attributes such as manipulativeness, grandiosity, and psychopathy, you may not get the most reliable data about your participants.

To sum up, armed with this knowledge about psychology’s “dirty data secret,” you can be more discerning about the takeaway messages you draw from the latest psychology research. It may take more effort on your part, but considering the problem of the careless responder can help give you an understanding of human behavior based on facts, not fakes.

References

Bowling, N. A., & Huang, J. L. (2018). Your attention please! Toward a better understanding of research participant carelessness. Applied Psychology: An International Review, 67(2), 227–230. https://doi.org/10.1111/apps.12143

Bowling, N. A., Huang, J. L., Bragg, C. B., Khazon, S., Liu, M., & Blackmore, C. E. (2016). Who cares and who is careless? Insufficient effort responding as a reflection of respondent personality. Journal of Personality and Social Psychology, 111(2), 218–229. https://doi.org/10.1037/pspp0000085

Hong, M., Steedle, J. T., & Cheng, Y. (2020). Methods of detecting insufficient effort responding: Comparisons and practical recommendations. Educational and Psychological Measurement, 80(2), 312–345. https://doi.org/10.1177/0013164419865316

Robinson, M. A., & Boies, K. (2021). On the quest for quality self-report data: HEXACO and indicators of careless responding. Canadian Journal of Behavioural Science / Revue canadienne des sciences du comportement, 53(3), 377–380. https://doi.org/10.1037/cbs0000251
