What Findings Do Skeptical Psychologists Still Believe In?
Scientists share what they consider solid insights on the mind and behavior.
Posted May 31, 2019
How do our minds operate? What are the hidden patterns in our behavior? What can we do to live happier or more meaningful lives? Research psychologists seek answers to questions like these, and much of their work easily commands our attention—which may be useful when what they have to tell us is true.
In recent years, however, high-profile tests of findings in psychology have sent shivers of doubt through the ranks of psychologists. One effort after another to replicate past results has found that many of the studies that replication teams choose to repeat do not yield consistent outcomes.
The resulting “replication crisis”—with its implication that many of psychology’s published findings might not actually reflect real phenomena—has had an impact on scientists who study human behavior. Many are taking steps to change how research is conducted, aiming in part to make findings more likely to hold up under scrutiny. (In the latest issue of Psychology Today, four of these scientists share their views of how the drive for change started and where it stands today.) Some are now hard-pressed, in the absence of thorough testing, to say which findings are reliable and which are not.
But there are results that they do trust—many of them. We asked scientists who have taken part in efforts to reform psychological research to talk about some examples of important findings (or sets of findings) in which they have faith.
The finding: Personality traits are largely stable in adults.
There are various ways of assessing personality, but many psychologists focus on the so-called Big Five traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. The finding that, over time, individual adults are fairly (though not totally) consistent in how high or low they rate on these traits “is one of the largest and most robust effects in all of psychology,” according to Sanjay Srivastava, a personality psychologist at the University of Oregon and executive committee chair of the Society to Improve Psychological Science.
As he explains, “We can connect this finding with what we know about how personality relates to outcomes in health, education, and economics. Understanding personality stability helps us make sense of those domains—why we might observe a lot of consistency over time, and why there is nevertheless room for change.”
What, exactly, does personality suggest about other aspects of our lives? Quite a lot. Psychologist Christopher Soto recently published the results of a project to replicate a variety of reported links between personality traits and life outcomes. He found evidence for more than 60 of them—including, for example, a positive correlation between extraversion and one’s sense of well-being; an inverse relationship between neuroticism and occupational commitment; and an association between agreeableness and religious beliefs and behavior. (These links don't necessarily indicate cause-and-effect, and they only show that traits correspond with certain outcomes on average—it’s not the case that every extravert is happier than every introvert.)
The overall finding “gives researchers good reason to be skeptical that narrowly crafted psychosocial interventions might have long-lasting, transformative effects,” Srivastava adds. “If doing a writing exercise or watching a couple of videos could produce enduring change, we would be constantly buffeted around by experiences in our daily lives, and we would be far less consistent than we are. Instead, promoting human development at scale probably means working on broad-based, systemic changes to social structures. I don't think there are any shortcuts around that.”
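To make the “on average” caveat above concrete: in a minimal simulation (the numbers here are invented for illustration, not taken from Soto’s data), a trait can correlate positively with an outcome across a whole sample even while many individuals run against the trend.

```python
import random

random.seed(1)

# Simulate a modest positive link between extraversion and well-being.
# The 0.3 slope and the noise term are invented for illustration only.
extraversion = [random.gauss(0, 1) for _ in range(1000)]
well_being = [0.3 * e + random.gauss(0, 1) for e in extraversion]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Positive on average across the sample...
print(round(corr(extraversion, well_being), 2))   # roughly 0.3

# ...yet many individual introverts still outscore individual extraverts.
happiest_introvert = max(w for e, w in zip(extraversion, well_being) if e < -1)
gloomiest_extravert = min(w for e, w in zip(extraversion, well_being) if e > 1)
print(happiest_introvert > gloomiest_extravert)   # True
```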
The finding: People are swayed by what they think most of the group thinks.
Humans are social creatures, and when we make decisions, we often watch others carefully. Researchers have captured this in the form of consensus effects: “For example, people are often more favorable toward an issue when they believe 80 percent of their group supports it (versus opposes it),” explains Alison Ledgerwood, a psychologist at the University of California, Davis, who studies biases and preferences.
“These effects are at the heart of social psychology, in the sense of reflecting some of the most basic ways in which an individual is influenced by other people,” she notes. They also suggest real-world consequences: The more acceptable prejudiced comments seem to others, for instance, the more inclined someone will likely be to make them.
Importantly, however, research also suggests that what we imagine others think is related to what we ourselves think. A replication test recently affirmed two iterations of a classic finding called the false consensus effect. Both involved hypothetical stories: In one, the participant could give permission (or not) for video footage of her to be used in a commercial; in another, she could either pay or contest what seemed like a bogus speeding ticket. Participants rated how they thought their peers would respond, then answered themselves. In each case, participants who chose the first option tended to imagine much greater peer support for the first option than did participants who chose the second option.
The findings: People seek, in subtle ways, to confirm their preexisting beliefs. And in hindsight, they overestimate how predictable an event was.
Though distinct, both confirmation bias and hindsight bias are related in that they are “centrally relevant to the reform movement in science,” notes Brian Nosek, a psychologist at the University of Virginia and executive director of the Center for Open Science. “With confirmation bias, we are more likely to seek and interpret information in ways that reinforce our existing beliefs rather than challenge them. This is evident in everyday human activity (see politics, all of it), and in scientific research.” As we read the news or watch political shows, we may unwittingly filter what we’re seeing and hearing based on a belief that a particular idea is brilliant or stupid or that a particular person is innocent or guilty. Scientists are people, too—and when they already believe a scientific hypothesis is true, they are at risk of approaching the evidence in a way that backs up that belief without testing it as rigorously as they could.
Hindsight bias poses a different kind of challenge: “Once we observe an outcome, it is easy to reconstruct our belief that we knew it all along,” Nosek explains. “This occurs in daily reasoning, and in science, it is particularly problematic for reinterpreting exploratory outcomes as if they were predicted, confirmatory tests of a phenomenon.” That is, scientists might observe intriguing results that turned up unexpectedly—possibly due to chance—and then convince themselves and others that those results were predictable.
This capacity we have to fool ourselves is part of the rationale for increasing transparency in psychology research. Greater clarity in the reporting of initial hypotheses, research processes, and data makes it easier for outside researchers to weigh in on what a study shows or doesn’t show.
The finding: Choices are influenced by how the options are framed.
“Marketers don’t say, ‘this meat contains 10% fat’; they say ‘90% fat-free.’ It’s the same, but obviously, it’s not the same,” says Joseph Simmons, professor of Operations, Information, and Decisions at the Wharton School of the University of Pennsylvania. A heightened preference for the same option framed in positive rather than negative terms is one of a suite of phenomena outlined in research by psychologists Daniel Kahneman and Amos Tversky. “Every restaurant menu in a decent restaurant is using tons of this psychology,” Simmons says.
This kind of finding can have significant policy applications, too. Researchers have argued for years that describing fuel efficiency in terms of gallons per mile (as opposed to the standard miles per gallon) would give consumers a better sense of how much a particular car saves in gas and emissions. Framing effects, along with the other phenomena Kahneman and Tversky detailed, collectively suggested that humans were less straightforwardly rational decision-makers than they had been presumed to be, even when costs and benefits were quantified.
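The arithmetic behind the gallons-per-mile argument is simple but easy to miss: fuel consumed is the reciprocal of miles per gallon, so equal-looking MPG gains yield very different fuel savings. A quick sketch (the mileage figures are illustrative, not drawn from any study cited here):

```python
def gallons_used(mpg, miles=10_000):
    """Fuel consumed over a fixed distance at a given fuel economy."""
    return miles / mpg

# A small MPG bump on a gas guzzler saves more fuel than a much
# larger MPG bump on an already-efficient car:
print(gallons_used(10) - gallons_used(12))   # ~166.7 gallons saved
print(gallons_used(30) - gallons_used(50))   # ~133.3 gallons saved
```

Gallons per mile makes that comparison directly visible; miles per gallon obscures it.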
“I still think so much of what’s being published in our field is not true,” says Simmons, who co-authored an influential 2011 paper on the danger of false findings inherent in common research practices. “I think psychology has a very big problem. But we’ve been around long enough that we have a very big body of literature that is true.”
The finding: Some people will defer to authority even if it means harming a stranger.
In 1963, psychologist Stanley Milgram published the results of an experiment that would become famous. Forty men were instructed to deliver increasingly severe electric shocks to a “learner” in response to wrong answers or non-answers on a memory task. The situation was set up to convince the participants that the shocks could be very painful; after a certain voltage level was passed, the “learner,” sitting in another room, started banging on the wall. More than half of the men—who were firmly prodded by an experimenter to continue—went along with it, continuing to the maximum voltage.
The electric shocks that participants were told they were sending were not real. But the fact that the men kept pressing the switches appeared to be evidence of a startling degree of obedience to authority, even in a situation where there were no major repercussions for walking away. “This is a case of social psychology showing us something we don’t know about ourselves, and where, if we don’t know it, we’re at risk of violating our principles,” says Alexa Tullett, a social psychologist at the University of Alabama.
Reanalysis of the research has indicated that only some participants fully believed Milgram’s set-up, so it may be that fewer participants than it seemed were genuinely complying with instructions to harm. But Tullett thinks the essential descriptive finding—a greater level of compliance than we might anticipate—is likely to be true. It has yet to be seriously undermined by a high-quality replication study, she says, and more recent efforts to repeat it have offered some supporting evidence. A study in Poland published in 2017 found that 72 of 80 participants continued delivering “shocks” up to the highest possible intensity level, though it was a lower level than that used in Milgram’s work.
The finding: People may recall seeing something they didn’t.
Memory is far from perfect, and psychologists have supplied evidence that people can be induced to recall invented details of past observations or experiences. Such findings have implications for how we interpret highly consequential memories, such as those of eyewitnesses or those that have supposedly been repressed and recovered.
There are multiple false memory effects “that you can bet your tenure on,” offers Stephen Lindsay, a cognitive psychologist at the University of Victoria and editor of the journal Psychological Science. Though it may reflect a different kind of “false memory” than that involved in inaccurate personal memories, a simple version of false recall is illustrated by the Deese-Roediger-McDermott (DRM) Task. “In one of my standard general-audience talks, I do a DRM demo in which my PowerPoint asks people to raise their hands if they remember each of several words from a previously presented list,” Lindsay explains. That list includes some words that the audience saw earlier (such as nurse and hospital), a totally unrelated word, and a related “lure” word that the audience did not actually see previously (such as doctor).
Many people report that they have already seen the thematically related word, even though they haven’t. “After people in the audience raise their hands,” Lindsay reports, “I hit the spacebar to show my prediction of the percentage who would raise their hands for each of those words, and it is always pretty close.”
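For readers who want to see the task’s structure, here is a toy sketch of a DRM-style recognition test (the word list is abbreviated and invented for illustration; published DRM lists are longer and carefully normed):

```python
# Words the audience actually studies, all related to one theme...
study_list = ["nurse", "hospital", "sick", "medicine", "patient"]

# ...and the later recognition test, mixing a studied word, an
# unrelated word, and a "critical lure" that fits the theme but
# was never shown.
test_items = {
    "hospital": "studied word",
    "bread": "unrelated word",
    "doctor": "critical lure",
}

for word, role in test_items.items():
    presented = word in study_list
    print(f"{word} ({role}): actually presented? {presented}")

# Audiences reliably "remember" the lure (doctor) even though it
# never appeared -- the false memory effect Lindsay demonstrates.
```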
The finding: The scientific community has some ability to anticipate which findings will hold up.
Another insight from recent tests of psychology results is that researchers, in aggregate, are better than chance at predicting which findings will successfully replicate.
Using prediction markets—in which participants bet points on certain outcomes—Stockholm School of Economics researcher Anna Dreber Almenberg and colleagues have shown that the resulting market values perform fairly well as indicators of which findings will probably hold when studies are repeated. In a recent replication project, such a market correctly predicted 75 percent of the replication outcomes. An example of a finding that both received the market’s vote of confidence and passed a replication test was one originally described in a 2014 paper: Participants were more inclined to donate to a charity when they knew that costs for administration and fundraising were already covered.
Markets have been used in this way several times now, “evidence of some wisdom of the crowd, and also that there is something systematic about results that replicate versus those that do not,” Dreber Almenberg explains.
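The article doesn’t spell out the trading mechanism, but a common way to run such a market is a logarithmic market scoring rule (LMSR), in which the price of a “this study will replicate” share acts as the crowd’s probability estimate. A minimal sketch, assuming an LMSR rather than the exact setup Dreber Almenberg’s team used:

```python
import math

class BinaryLMSRMarket:
    """Toy prediction market for a yes/no question such as
    "will this finding replicate?", using a logarithmic market
    scoring rule. Illustrative only; actual replication markets
    may use different rules and budgets."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity: higher b = prices move more slowly
        self.q = [0.0, 0.0]   # outstanding shares: [no, yes]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current share price, interpretable as the crowd's probability."""
        z = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Buy shares of an outcome (0 = no, 1 = yes); returns cost in points."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = BinaryLMSRMarket()
print(f"{market.price(1):.2f}")   # 0.50 -- no bets yet
market.buy(1, 60)                 # a trader stakes points on "yes, it replicates"
print(f"{market.price(1):.2f}")   # 0.65 -- the price shifts toward the bet
```

If the market resolves “yes,” each yes share pays out one point, so traders profit to the extent that their probability estimates beat the going price; that is what lets the final prices serve as forecasts.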
For more: Psychologists recently weighed in on the question of robust findings on Twitter.