Making Sense of Experts

When scientists disagree, how do we decide what to believe?

You may have noticed this while following hot topics in the news: experts often disagree, citing scientific studies that seem to prove the exact opposite of each other’s theories. Can contradictory studies both be right? And how do we, non-experts on the subject, form an opinion?

Our reliance on experts is real. “In a recent experiment, a group of adults had their brains scanned in an MRI machine as they were listening to experts speak. As they listened to the experts’ voices, the independent decision-making parts of their brains switched off. It literally flat-lined,” explained Noreena Hertz in a 2010 TED Talk.

The media actively exploits this phenomenon: advertisements and news outlets routinely cite scientists, studies, and experiments to sound more credible. But if the decision-making part of our brain is switched off, who will fact-check whether “4 out of 5 dentists” do in fact recommend a product?

What’s fascinating is that on those hot topics, most studies are well conducted, the authors are credible and the experts are smart. The flip-flopping between the arguments seems to come from elsewhere. Maybe it’s us.

“This statement is false.”

In mathematics, there is a proof that some statements can never be proven true or false. Kurt Gödel’s incompleteness theorem tells us why we can’t decide whether “this statement is false.”

It’s a revolutionary proof that shows the power of mathematics. Gödel demonstrated that however we change mathematics (we could, for example, demand that statements never say anything about themselves), the new version of math will inevitably contain undecidable statements. Using only mathematics as his toolbox, Gödel made math describe its own limitations.

The scientific method offers something similarly important to the science community.

To ensure scientists could review and build upon each other’s work, natural philosophers of the 17th century set out to create an objective and replicable framework: the scientific method. It’s what all modern science is built upon, and it will be what takes humans to Mars.

But what good is an experiment if you can’t replicate it?

In the early 2010s, many classic studies turned out to be impossible to reproduce, calling the reliability of important scientific results into question. Yet while individual studies may be subject to error, the scientific method itself is still going strong. Science can and should change its mind, reassessing old views when new evidence comes to light.

Separate the signal from the noise

Part of the appeal of science is that it’s rigorous and methodical. We can assume that studies are generally right about what they claim. But most studies focus on a narrow scope in order to control all input variables, so the details matter a great deal.

Real life, on the other hand, is complicated. It has variables that can never be controlled, and the topics the public cares about are far too broad for a single study. What a study leaves out is often just as interesting as what it states.

Whenever you read about a study, keep asking questions such as:

  1. Does it apply? Cars get their safety ratings through crash tests — a relatively small set of controlled “accidents” that manufacturers can design cars around. How much does a crash test score tell us about a real-life crash? As you might have guessed, most models don’t do well in accidents they weren’t optimized for.
  2. How does it fit the bigger picture? We might be interested in the spread of a pandemic, which encompasses findings from all sorts of fields, from virology to network analysis to the social sciences. A perfectly credible expert might tell us whether a single virus particle can slip through a face mask’s thin fabric, but we would still need to know how many virus particles it takes to cause an infection, or the size of the saliva droplets those particles travel on.
  3. A mathematical model is just an opinion with a spreadsheet. Whenever life gets too complicated, we use models to estimate different outcomes. A city’s mayor might ask, “If we improve our roads and lower bus prices, how many people move into the suburbs?” — and the computer will give its best answer.
    Prediction models are an amazing tool for discussing different options, but they are more a visualization aid than a proven fact: they leave a lot to human judgment. Only believe the model as much as you believe the human presenting it (a toy sketch follows this list).
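
To make that last point concrete, here is a minimal sketch of such a model. The linear form and every number in it are invented assumptions for illustration, not figures from any real transit study:

```python
# A toy "opinion with a spreadsheet": a made-up linear model of how many
# households move to the suburbs after a transit change. Every coefficient
# below is an assumption a human chose, not a measured fact.

def households_moving(road_improvement_pct, bus_fare_cut_pct,
                      pull_per_road_pct=120, pull_per_fare_pct=80,
                      baseline=500):
    """Estimate movers as a weighted sum of the two policy levers."""
    return (baseline
            + pull_per_road_pct * road_improvement_pct
            + pull_per_fare_pct * bus_fare_cut_pct)

# The same question answered under two different sets of human judgment:
optimistic = households_moving(10, 20)  # default weights
skeptical = households_moving(10, 20, pull_per_road_pct=30, pull_per_fare_pct=10)
print(optimistic, skeptical)  # 3300 vs. 1000 -- the assumptions decide
```

Two analysts can feed the same inputs into the same spreadsheet and defensibly report 3,300 or 1,000 new suburban households; the gap comes from their judgment calls, not from the data.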

It’s all statistics with storytelling

As it happens, many experiments and studies include a step where analysts collect empirical data and another step where that data is analyzed. Analyzing the data and making sense of the results draws on a wide range of statistical techniques.

Much of medicine is statistics as well. One group of people gets a treatment, another gets a placebo (or nothing at all), and we see who gets better. If we collect enough data, we can predict with some confidence how the next patient will react to the treatment.
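
Here is a rough sketch of that logic, with invented numbers and a simple permutation test standing in for the many techniques real trials use:

```python
import random

# Invented example: 200 patients, 100 get the treatment, 100 get a placebo.
# 70 of the treated recover versus 55 in the placebo group (1 = recovered).
treated = [1] * 70 + [0] * 30
placebo = [1] * 55 + [0] * 45
observed_diff = sum(treated) / 100 - sum(placebo) / 100

# Permutation test: if the treatment did nothing, reshuffling who carries
# which label should produce a difference this large fairly often.
pooled = treated + placebo
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if sum(pooled[:100]) / 100 - sum(pooled[100:]) / 100 >= observed_diff:
        extreme += 1

print(f"observed difference in recovery rates: {observed_diff:.2f}")
print(f"chance of a gap this big by luck alone: about {extreme / trials:.3f}")
```

The smaller that last number, the more confident we can be that the difference reflects the treatment rather than luck; and the more patients we enroll, the tighter that confidence becomes.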

Unfortunately for us, there is no sixth sense that helps humans understand statistics. We keep buying when prices are high and selling when we shouldn’t. Living through a pandemic shows just how badly we estimate probabilities and exponential growth.
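
A tiny worked example of the exponential part, with made-up numbers, shows why our straight-line intuition fails:

```python
# Illustrative only: 100 cases growing 10 percent per day.
initial_cases = 100
daily_growth = 1.10

for day in (0, 30, 60, 90):
    cases = initial_cases * daily_growth ** day
    print(f"day {day:2d}: roughly {cases:,.0f} cases")

# day  0: roughly 100 cases
# day 30: roughly 1,745 cases
# day 60: roughly 30,448 cases
# day 90: roughly 531,302 cases
```

Most of us read the first two rows and extrapolate a gentle slope; the last row is what the math actually does.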

This is why data science projects usually involve a lot of storytelling. However rigorously analysts evaluate the results, if people don’t understand them, they might as well have come up with the wrong ones.

Keep your brain switched on

To be a better reader of science journalism, the solution is simple: just keep your decision-making brain switched on.

Remember that experts can be wrong, science often changes its mind, and rational thinking is still the best we can all do. Being more critical about what you read will eventually help science journalism improve too.
