
The Crisis in Squishy Science and Trouble for Journalists

What should we make of conflicting findings?

I should be embarrassed. I’m a social psychologist and my field seems to be in a heap of trouble these days. All of the squishy sciences are getting battered.

“Squishy” isn’t an insult. To me, it is more of a term of endearment. I use it to refer to all of the sciences that try in some way to study humans. The first time I taught a course in introductory psychology, a group of chemistry majors sat in the second row and lobbed disdainful questions at me. I wanted to tell them that they had chosen an easy discipline. If they wanted a real challenge, they should try studying the kinds of subjects who think and plan and scheme and push back on people who try to figure them out.

The squishy sciences are great at generating findings that are exciting and counter-intuitive. Journalists love those results. The trouble comes from the growing realization that those innovative discoveries can turn out to be all too wobbly: other scientists fail to replicate them or, even worse, produce results that contradict them.

At the Chronicle of Higher Education, Tom Bartlett’s article, “Power of Suggestion,” led with this tease: “The amazing influence of unconscious cues is among the most fascinating discoveries of our time – that is, if it is true.” David Freedman told a similar cautionary tale at the Columbia Journalism Review in “Survival of the Wrongest.” His story explained “how personal-health journalism ignores the fundamental pitfalls baked into all scientific research and serves up a daily diet of unreliable information.”

As ominous as all this sounds, I am not embarrassed. In fact, I am relieved. It is about time that all of us – squishy scientists, the journalists who write about squishy science, and all of our readers – face up to our formidable assignment. What should we make of the messiness of our enterprise?

I. The Cacophony of Conflicting Findings

A thicket of contradictory findings can be exasperating. It can tempt suspicions of evil, self-serving, or at least careless scientists. Sometimes, those nefarious explanations are true. But it is possible to amass a messy-looking pile of scientific papers even when everyone involved is a competent scientist conducting good research in good faith. Here are just two of the reasons why that can happen.

#1 Humans are more complicated than cupcakes; studying them is, too.

Think about the process of preparing your favorite sweets – brownies, for example. They are just brownies, but they can be fickle. Leave them in the oven for just a few extra minutes, and instead of getting yummy, gooey treats, you have brownie rocks.

People are even more temperamental than brownies. Two experimenters in different labs may think they are replicating the essence of the experiment in question, but perhaps something else is different: something they didn’t think much about, something with unanticipated implications for the psychology of the participants. It could be something subtle about the way the experimenter interacts with the participants, or the way certain questions are asked, or differences in participants’ expectations about who is observing them or who could learn about their responses, or… well, the possibilities are endless.

Freedman made a similar point in a discussion of confounding factors. Happily, many potential confounds can be eliminated or minimized by good research practices. For example, experimenters should remain unaware of which participants are in which conditions. In a study of dieting, for instance, experimenters can’t be more supportive of the participants on the diet they believe in if they don’t know which particular diet a participant is on.

#2 Even if a finding is true, it will not necessarily show up in every study.

Consider a fact from the world of baseball that we know to be true: Joe DiMaggio was a better hitter than his teammate, Jerry Coleman. DiMaggio’s lifetime batting average was .325; Coleman’s, .263. On any given day when both men were in the line-up, though, DiMaggio would not always get more hits than Coleman. Do those particular games undermine the conclusion that DiMaggio was the better hitter? I’d say no, because you need to consider the totality of their careers – all of the games they played.

I like to think of each baseball game as akin to an individual social science study. Studies comparing, say, one particular diet to another may sometimes show one diet winning, other times show the other diet winning, and still other times show no difference at all. What matters (if all of the studies are equally sound, methodologically) is the cumulative effect. If one diet really is superior to another, then the weight of the evidence – when the evidence derives from rigorous research – will support the superior diet. Freedman recognized this when he advised journalists, “Look at the preponderance of the evidence.”
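To make the DiMaggio point concrete, here is a minimal simulation sketch in Python. The four-at-bats-per-game figure and the treatment of each at-bat as an independent coin flip are simplifying assumptions for the sake of illustration, not real baseball data.

```python
import random

random.seed(1)  # so the sketch is reproducible

def game_hits(batting_avg, at_bats=4):
    """Hits in one game, treating each at-bat as an independent coin flip."""
    return sum(random.random() < batting_avg for _ in range(at_bats))

games = 1500
dimaggio_wins = coleman_wins = ties = 0
for _ in range(games):
    d = game_hits(0.325)  # DiMaggio's lifetime average
    c = game_hits(0.263)  # Coleman's lifetime average
    if d > c:
        dimaggio_wins += 1
    elif c > d:
        coleman_wins += 1
    else:
        ties += 1

print(f"DiMaggio out-hit Coleman in {dimaggio_wins} of {games} games;")
print(f"Coleman out-hit DiMaggio in {coleman_wins}; they tied in {ties}.")
```

In a typical run, the weaker hitter “wins” more than a quarter of the individual games outright, even though the long-run averages are not remotely in doubt. Any single game, like any single study, is a noisy sample; the career totals, like the cumulative literature, are where the truth shows up.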

But what about the publication bias that Freedman mentions? Won’t some studies get preferential treatment at the hands of eager editors? That may be so. Plus, studies that show no statistically significant differences rarely see the light of the published day. That raises the potential problem that readers learn only about the studies that did show differences, while remaining oblivious to the stacks of unpublished studies showing no differences.

There is, though, a statistical way of addressing this “file drawer problem.” Though the procedure is not flawless, it is possible to calculate the number of limp studies that would need to be lurking in people’s file drawers in order to wipe out the cumulative effect of the known and published studies.
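The best-known version of that calculation is Rosenthal’s (1979) fail-safe N. Here is a minimal sketch of the reasoning in Python; the five-study example at the end is hypothetical.

```python
def fail_safe_n(z_scores, critical_z=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies averaging
    z = 0 would have to be sitting in file drawers before the combined
    result of the published studies stopped being significant
    (one-tailed p < .05 corresponds to critical_z = 1.645)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Stouffer's combined z across k + x studies is z_sum / sqrt(k + x).
    # Setting that equal to critical_z and solving for x:
    return (z_sum ** 2) / (critical_z ** 2) - k

# Hypothetical example: five published studies, each with z = 2.0.
print(round(fail_safe_n([2.0] * 5)))  # about 32 hidden null studies
```

If that number is large relative to the size of the published literature, the file drawer problem is unlikely to explain the effect away; if it is small, more caution is warranted.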

The need to establish the replicability of findings is becoming more widely recognized. One consequence is that there may also be more opportunities than there were in the past to pull those studies out of their dusty file drawers and make them readily available to others. Online sites, without the same costs as print journals, are especially promising. Some, such as the Open Science Framework and Psych File Drawer, are already in the works.

II. Compounding the Problems: Emotional Investments and Thumbs on the Scale

In a telling passage, Freedman expresses his exasperation with the many sets of sharply contradictory findings in personal-health research:

“To cite just a few examples out of thousands, studies have found that hormone-replacement therapy is safe and effective, and also that it is dangerous and ineffective; that virtually every vitamin supplement lowers the risk of various diseases, and also that they do nothing for these diseases; that low-carb, high-fat diets are the most effective way to lose weight, and that high-carb, low-fat diets are the most effective way to lose weight…”

As frustrating as these inconsistencies are, they are also graced by an appealing symmetry. Some scientists, and perhaps some science writers, are invested in the low-carb, high-fat diets, whereas others would like to rest their thumbs on the scales of the high-carb, low-fat diets. The two sides get to fight it out.

For more than a decade, I have been doing research and writing on a topic in which just about all of the emotional investment is on one side of the argument. I study marital status, with an emphasis on the single side of the equation. Just about all thumbs are coaxing the scales to tip toward marriage.

Are married scientists invested in demonstrating their own superiority? I’m not sure. But many other groups, including religious and political organizations, have a stake in the supposed advantages of marriage over single life, and some are well-funded and politically active.

The scientists, together with the activists, have been insisting that getting married results in lasting improvements in happiness and health and many other important outcomes. They have been at it for so long, and with so little critical scrutiny, that their conclusions have become part of the conventional wisdom.

Back when I was just practicing single life, rather than also studying it, I assumed that the empirical support for such conclusions was probably about as strong as it could be, considering the challenges of trying to investigate phenomena (staying single, getting married, getting unmarried) that cannot be controlled or manipulated in a laboratory.

When I first started reading the original research reports that were the basis of so many of the claims in the media, I was stunned. By their very design, the studies could not possibly support the headlines I was reading in the press. There was a causality implied in many of the claims (get married, be happier) that no study could ever demonstrate. Even if the methodological problems were not so glaring and the results could be believed, the findings were not nearly as strong or as consistent as cultural conversations would lead us to believe.

I spelled out the problems in detail in Singled Out: How Singles Are Stereotyped, Stigmatized, and Ignored, and Still Live Happily Ever After and in much of the writing I have done since then. But the marriage apologists are dug in, and they have emotional fervor and organizational clout behind them.

The replication problem in research on marital status is distinct: There is a widespread perception that “getting married makes people happier” (or healthier or sexier or more successful parents or any other positive outcome you want to posit) is a finding that has been successfully replicated many times over. In fact, such a finding has never been definitively demonstrated – not even once – and it never will be. We cannot randomly assign people to get married, stay single, or get unmarried.

Because this very basic point is so often missed, even – sadly – by seasoned social scientists, I will write more about it in a separate piece. There, I will offer some advice to fellow researchers for describing their results more accurately, both in their academic writings and in their popular writings and conversations with journalists.

Here, I’ll end with some advice for journalists. I’m not one, so I don’t know if my suggestions are reasonable or if they would help. I’d love to hear from people in the know.

III. What’s a Journalist to Do?

I think there are some steps journalists can take to improve the accuracy of their social science and personal-health reporting.

#1 Become a Rigorous Methodological Thinker

With so many pressures on journalists to do good work with ever-diminishing resources and opportunities, I wish I did not have to suggest something that entails even more time and effort. Still, I’ll say it: I think top-notch methodological training is a must. If it is not already a requirement in journalism programs, I think anyone who writes about social science or health research should take a graduate-level course in research methodology or its equivalent. Research courses in psychology are most likely to cover the kinds of methodological issues that arise in studies of human behavior; or maybe I just think that because I am a social psychologist. In any case, journalists need to be astute methodological thinkers who know how to assess research themselves, so they are not left merely to take the word of the people they interview.

#2 Read the Original Research Yourself Before Reading Other Descriptions of the Research and Before Conducting Any Interviews

Has a press release come across your screen announcing some intriguing research? Don’t finish reading it. Go to the original study being touted, read it, and critique it. Then go back and finish reading the press release.

#3 Don’t Print Press Releases

The most disheartening line in David Freedman’s piece is the report of a finding I’ve long suspected. Referring to a study of 500 stories about health in major newspapers, Freedman noted:

“In the survey, 44 percent of the 256 staff journalists who responded said that their organizations at times base stories almost entirely on press releases. Studies by other researchers have come to similar conclusions.”

Don’t be part of the 44 percent. Read the study for yourself. Ask your own questions. Find your own sources.

#4 Ask How the Results of the New Study Fit into the Existing Literature

All too often, media reports seem to treat individual studies as entities existing apart from all of the other research. Ask questions such as: Are the results consistent with previous studies on the topic? If not, why should we believe these results instead? There may be a good reason. Find out.

#5 Keep Tabs on Original Sources

If you aren’t already doing so, sign up for content alerts from the relevant journals. See what’s really out there, as opposed to just what’s getting promoted with press releases.

#6 Read Contrarians

Dip into the writings of social and health scientists who are not echoing the same lines as everyone else. Contrarians such as the Freakonomics guys won’t always be right, but maybe they will stir things up a bit. Then, perhaps, the stories you write will sound less like everyone else’s.

If you follow all of these guidelines, maybe your stories will also be more accurate than everyone else’s.

