
John Oliver Hilariously Explains Bad Science Media

Bad science plus bad media reporting equals hilarious news outcomes

Source: Screenshot from John Oliver's YouTube video

Most news shows today are chock-full of BRAND NEW SCIENTIFIC FINDINGS. Should you pay attention to these reports?

John Oliver recently did a terrific (and stunningly hilarious) job of explaining the tricks of the trade, so you can evaluate what you're being told about those brilliant new findings. You can find Oliver's video clip on scientific studies here.

The overarching theme is this: Whiz-bang results get you noticed, so scientists are under pressure not just to publish their work but to get it covered by the popular media. And that sometimes means reporting sloppy but SENSATIONAL research findings.

Here's what to watch out for:

The Persuasive Power of Press Releases: To draw attention to their work, science labs and scientific journals now routinely send out press releases of new findings. But the process works like the proverbial game of telephone: by the time the study is summarized for the press release and then summarized again by the media, the results can be seriously (and sometimes hilariously) distorted.

Oliver's examples include a study that found no difference in preeclampsia rates between pregnant women who ate chocolate with different flavonoid contents. That very same study was reported on TV news networks as "Eating chocolate while pregnant benefits baby!" He also describes a Time magazine article claiming that a study found smelling farts prevents cancer…except that the study in question mentioned neither cancer nor farts.

Not All Scientific Studies Are High Quality: To get published, scientific studies have to pass peer review; that is, the hypotheses, methods, analyses, and results are scrutinized by experts in the field to ensure the work was done properly. But journals differ in how rigorously they hold work to those standards. This means that a paper rejected by a rigorous journal may simply be resubmitted to one with less rigorous publication criteria.

P-hacking: This means mining data to uncover patterns that are statistically significant but meaningless. Instead of first devising a specific hypothesis about underlying causality, researchers test many variables at once, and chance alone guarantees that some comparisons will look "significant." Oliver describes how one such data-mining exercise turned up a correlation between eating cabbage and having an "innie" belly button.
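
Here is a minimal sketch of the trap in Python. The foods, the belly-button variable, and all the numbers are invented for illustration; this is not Oliver's example or any real study's data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_people, n_foods = 200, 100

    # Purely random data: nobody's diet has anything to do with belly buttons.
    innie = rng.integers(0, 2, size=n_people)              # 1 = innie belly button
    foods = rng.integers(0, 2, size=(n_foods, n_people))   # 1 = person eats food i

    hits = 0
    for i in range(n_foods):
        # Compare innie rates between eaters and non-eaters of food i.
        eaters = innie[foods[i] == 1]
        others = innie[foods[i] == 0]
        _, p = stats.ttest_ind(eaters, others)
        if p < 0.05:
            hits += 1

    # With 100 independent tests at alpha = 0.05, about 5 false positives
    # are expected even though every variable here is pure noise.
    print(f"{hits} of {n_foods} random 'foods' significantly predict belly buttons")

On a typical run, roughly five of the hundred foods come up "significant" purely by chance, and any one of them could be written up as a finding if the hypothesis were invented after the fact.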

Single Experiments and Small Samples Are Seriously Misleading: People are actually pretty smart when it comes to distrusting the results of single experiments, particularly those based on small samples. We intuitively embrace the Law of Large Numbers. Put simply, the Law of Large Numbers states that repeated experiments (or experiments based on large samples) yield more accurate estimates than single experiments (or those based on small samples).

Suppose someone tells you people on the West Coast are taller than people from the Midwest. Which would produce more reliable findings: a study comparing 100 Californians to 100 Iowans, or a study comparing 100,000 Californians to 100,000 Iowans? Most people intuitively trust the latter more than the former, just as the Law of Large Numbers says we should. We suspect that single studies with small samples are more likely to turn up fluke results that can't be replicated, and we'd justifiably have more confidence in the finding if it held up when the study was repeated with other samples from California and Iowa.
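
Here is a minimal simulation of that intuition, with invented height figures rather than real data: each simulated "study" estimates the average height of a population whose true mean is 176 cm, once with 100 people and once with 100,000.

    import numpy as np

    rng = np.random.default_rng(1)
    true_mean, sd = 176.0, 7.0   # invented population: mean height 176 cm

    for n in (100, 100_000):
        # Run the "study" 1,000 times at this sample size and see how far
        # the estimated averages scatter around the true mean.
        means = np.array([rng.normal(true_mean, sd, n).mean() for _ in range(1000)])
        print(f"n = {n:>7,}: study averages scatter with SD = {means.std():.3f} cm")

The 100-person studies wander from the truth by roughly ±0.7 cm, while the 100,000-person studies stay within a few hundredths of a centimeter. A fluke "Californians are taller" headline is therefore far more likely to come out of the small study.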

The problem is that there is no glory (and no funding) in doing replications. It is extraordinarily difficult to get replications published, so doing them is a surefire way of making sure you never get tenure. So EXCITING UNEXPECTED RESULTS FROM BRAND NEW STUDY gets reported as scientific fact.

Scientists know not to place too much confidence in individual studies until the results are evaluated in the broader context of all the research taking place in that field. So when you hear a newscaster reporting EXCITING UNEXPECTED RESULTS FROM BRAND NEW STUDY, you, too, should think to yourself, "I'd like to see that replicated, many times and in many independent labs."

Putting It All Together

A final example demonstrates what happens when all of these mistakes combine. A researcher managed to publish results purportedly showing that dehydration impairs driving as much as alcohol does. The study was riddled with problems (including a sample size of 12) and was later retracted, but not before it made the rounds on TV, radio, the internet, and print media.

After all of this, you might be thinking that you should never pay attention to any scientific findings. Wrong! Watch Oliver's hilarious video about science studies and become smarter about evaluating scientific news.

Copyright Dr. Denise Cummins May 11, 2016

Dr. Cummins is a research psychologist, an elected Fellow of the Association for Psychological Science, and the author of Good Thinking: Seven Powerful Ideas That Influence the Way We Think.

More information about me can be found on my homepage.

My books can be found here.

Follow me on Twitter.

And on Google+.

And on LinkedIn.
