Pattie Thomas, Ph.D.
A Lesson in Cause and Effect

The lively debate that comes from critically reading research leads to truth.

This post is in response to
Health, Not Weight: On Shifting the Conversation

After spending the day reading and writing in the comments at Harriet Brown's Brave Girl Eating, I've come to realize that it is time for me to post a blog entry that I've been contemplating. It is time to discuss causality.

Inevitably, in discussions about the war on obesity, both sides start quoting studies, and a "my study is better than your study" exchange soon devolves into arguments over evidence.

Critiques of Obesity Research

A number of issues have been raised regarding existing obesity research. I'd like to raise three basic points that I think almost all critiques agree upon regarding the comorbidities and costs of obesity:

Ecological Fallacy: Many of the studies that have been done are population studies comparing adults at different weights, not studies following adults as they gain or lose weight. It is then assumed that if the health of lower-weight people is better than that of higher-weight people (or some other combination of lower, middle, and higher weights), then losing (or gaining) weight will bring everyone into the same state of health. This is a big assumption, and it is not supported by these studies. Taking population-level data and applying it to individual members of the population is called an ecological fallacy. Many people would be surprised to find out how lacking the literature is when it comes to studying the effects of weight loss on individuals.
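
For readers who like to see this in numbers rather than words, here is a minimal toy simulation in Python. The numbers are entirely invented and have nothing to do with actual obesity data; the point is only to show that the relationship across group averages can run in the opposite direction from the relationship among the individuals within each group. That gap is the ecological fallacy.

import numpy as np

rng = np.random.default_rng(1)
within_rs = []     # correlation among individuals within each group
group_means = []   # one (x, y) average per group

for group_center in (0.0, 2.0, 4.0):   # three hypothetical populations
    x = group_center + rng.normal(0, 1, 500)
    # Within each group, y actually FALLS as x rises...
    y = 3 * group_center - (x - group_center) + rng.normal(0, 1, 500)
    within_rs.append(np.corrcoef(x, y)[0, 1])
    group_means.append((x.mean(), y.mean()))

gx, gy = zip(*group_means)

# ...but across the group averages, y RISES with x.
print("average within-group r:", round(float(np.mean(within_rs)), 2))             # about -0.7
print("group-level (ecological) r:", round(float(np.corrcoef(gx, gy)[0, 1]), 2))  # about +1.0

Population-level comparisons answer population-level questions; they do not, by themselves, tell us what happens to an individual who moves from one group to another.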

Confounding Data Interpretation: Many of the comorbidities correlated with weight can be explained by other factors, and/or those other factors have not been considered or ruled out in the studies. A lot of studies assume that all fat people do not exercise and that all thin people do. As a result, the factors are confounded: studies of weight end up making claims about activity levels, and studies of activity levels end up making claims about weight control. Diet gets confounded in the same way. BMI has become a shortcut not only for assessing someone's health, but for assessing someone's health practices. Yet in studies where factors such as activity level, consumption of certain kinds of food, and social factors such as socioeconomic status and stress have been accounted for, weight becomes an almost non-existent factor.
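
Here, too, a small illustrative sketch in Python may help. Everything in it is made up: a hidden factor, labeled "activity" purely for the sake of the example, drives both "weight" and a "risk" score, so the two appear correlated until the hidden factor is adjusted for. This is what confounding looks like in miniature, under assumed numbers, not real data.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

activity = rng.normal(0, 1, n)                      # the hidden common cause
weight   = 80 - 5 * activity + rng.normal(0, 5, n)  # more activity -> lower weight (assumed)
risk     = 50 - 4 * activity + rng.normal(0, 4, n)  # more activity -> lower risk score (assumed)

# Unadjusted: weight and risk look clearly related.
crude_r = np.corrcoef(weight, risk)[0, 1]

# Adjust for activity by regressing it out of both variables and
# correlating the residuals (a simple partial correlation).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

adjusted_r = np.corrcoef(residuals(weight, activity),
                         residuals(risk, activity))[0, 1]

print(f"crude correlation:    {crude_r:.2f}")    # clearly positive
print(f"adjusted correlation: {adjusted_r:.2f}") # near zero

The crude number is real, in the sense that it is correctly computed, yet it says nothing about what weight itself does once the shared cause is taken into account.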


Biased Funding: So why, in the face of the two points above, does the science get so misreported and misunderstood? Money. That is the third contention. Much of what the media reports is not science at all, but it is reported as if it were. "Studies have shown..." are magic words in our public discourse. But much of what is reported comes from press releases by people with vested interests in the public believing certain things. Knowing who funded what is an important component in judging the accuracy of findings. Biases exist in all research. That does not mean that all research is bad; it means that an informed reader needs to know the biases in order to judge the usefulness of the information. This is especially true of the so-called "cost analyses" that have been done. Dig into these studies about how much obesity is costing the United States and you will find companies like Allergan, which doubled its market based on that panic alone.


So why are these points important? Are scholars who raise these points just ignoring important correlations by repeating their own magic words "correlation is not causation"?

Establishing Causation

An education in cause and effect might help put this in perspective. Very few things are proven to cause other things. We take some things for granted as causes, but in science one makes a case for cause; one does not prove it (except in very limited ways, in the laws of physics, for example). Issues of reliability and validity are important in making these cases. Reliability means that the study is replicable: it can be conducted repeatedly in the same manner as before, preferably by other people, to reduce bias. Validity means that the study is actually measuring what it assumes it is measuring.

Reliability and validity are extremely difficult to achieve in human studies. Unlike chemical and biological processes that can be controlled within laboratories, studying humans has the added complication that the humans can figure out they are being studied and shift the results. Yes, studying cells and chemical reactions in human bodies is easier than studying behavior, but there are still problems, given the extent to which contact with the environment and the aging process constantly change those chemical and biological processes.

But even if a strong case can be made for reliability and validity, three conditions must be satisfied to demonstrate cause and effect (essentially to strengthen the case for it). These conditions are all necessary but no one of them is sufficient:

  1. The cause has to occur in time before the effect.
  2. Changes in the cause have to create a corresponding change in the effect.
  3. NO OTHER EXPLANATION for the relationship can be present.

Timing

This sounds basic and easy to demonstrate, but if you think about it, especially with regard to humans, timing is difficult. For example, if fatness were to cause these comorbidities, the fatness has to occur in time before the diabetes, high blood pressure, or heart disease. But when exactly did these medical conditions occur? Not at the point of diagnosis, because symptoms are usually present before a diagnosis is sought. Not at the point of symptoms, because people often realize they have been sick longer than they knew. What if the case can be made that there is a genetic component? Can the disease be said to have started in the womb? What if the person loses and gains weight multiple times? When in time was the weight a factor? This complexity is frequently ignored, making almost every study done problematic for building a case for cause and effect.

Correlation

This is the darling of the media, mostly because it comes with numbers that lend a false sense of precision. I remember, as a reporter, keeping several calculations in a file to be used when discussing taxes or other such topics, because it was important to report the numbers in a specific way that would tantalize rather than bore. It's tricky in reporting: sensational numbers are better than numbers too small and hard to understand, or too large and beyond comprehension. Percentages work better than totals. Statistical assessments of correlation are easily reported as percentages and thus often make the first paragraph, or even the headline.

Correlations are a necessary part of demonstrating cause and effect, but they are not sufficient, and so it is important to review correlations seriously to understand what they do and do not mean. I know of no one in HAES who denies that correlations between weight and certain medical conditions exist. No one is denying correlation or ignoring it. On the contrary, it is important to understand exactly what these correlations mean. Were they arrived at with good data? Are they reliable? Do they measure what they suggest they are measuring? These are the questions that other scientists and scholars must ask when confronted with such data. Journalists do not ask these questions. Journalists report sensational numbers and rely upon the researcher who produced the number to tell them what it means. Thus, the reporting of correlations is instantly biased in two ways -- toward the sensational and toward the producer of the research. In peer-reviewed journals, it is not the researcher who interprets or reviews the data; it is his or her colleagues. This reduces bias.

Alternative Explanations

This is the biggest point of contention, and it should be. This is where the lively debate that leads to truth takes place. It is up to anyone who reads an assertion of cause and effect to dissect it and come up with an alternative explanation. One great movie moment that demonstrates this principle is in the movie Contact, when Ellie (Jodie Foster) first hears the alien transmission and runs into the telescope control room screaming, "make me a liar." Everyone who reads research critically starts with skepticism. Is there a problem with the data? Is this just a coincidence? Is there an important factor that makes the difference? Is there a factor that was misused or invalid? Is there other research that sheds light on the findings of this study? What unanswered questions need to be addressed? Are there more primary causes that explain the relationship? Are these factors being controlled by another factor that explains everything? Does research bias affect the results? How do the funding, design, and publication of the findings affect their interpretation?

No Conclusive "Proof" That Fatness "Causes" Anything

The three main arguments regarding obesity research are much more complex than "correlation does not mean causation." The ecological fallacy, misinterpretation of data, and biased research funding call these correlations into question by suggesting that incorrect conclusions are drawn from the data, that factors are often missing from the equation, and that the data and/or its interpretation is often tainted by corrupting influences. These arguments do not "ignore" the correlations; they critique them.

One More Way Obesity Research Fails

I often hear obesity research compared to smoking research. But there is a problem with the general mainstream reporting of smoking research as well. Smoking is implicated in many cancers and other health complications, and there are thousands of studies now that lend strength to these connections. But NO ONE has PROVEN that smoking CAUSES cancer. Some people who smoke never get cancer. Some people who never smoke do get cancer. These two facts weaken the otherwise strong case for cause and effect.

The difference between obesity research and smoking research is that there are hundreds of studies demonstrating a strong connection between quitting smoking and improving health. Again, a strong case, not proof. No such parallel exists with weight loss. The vast majority of people cannot keep off more than a moderate weight loss for more than five years. There are hundreds of thousands, if not millions, of smokers who have quit for well past the five-year mark. Quitting is hard to do, but it can be done successfully, and most people who quit report health improvement. It is easy to study the effects of quitting because it is easy to know whether someone smokes or not.

Weight loss studies rarely go past six months, and even the best follow people for only two years. Many of the people who lose weight have complications, either from the attempt to lose weight or from the weight loss itself, so the claim that weight loss improves health rests on mixed results, not strong ones. Most of the studies that supposedly demonstrate that weight loss works do so from assumptions, not direct data: if the smaller population is healthier, it is assumed that making the larger people smaller will give them the same health results.

In short, weight may have correlations with health conditions similar to smoking's, but a whole body of research is missing that would demonstrate weight loss improves health the way that quitting smoking improves health. Missing research is just as important in understanding phenomena as critiquing existing research.


Final Thoughts

The purpose of making strong cases for causation in research is, obviously, to create effective solutions to real problems. If the data are problematic, the treatment will be ineffective and sometimes harmful. This lively debate is necessary to tease out and strengthen existing bodies of literature. Emotionally charged topics often miss these basics, but they are important nonetheless.

I will conclude by stating that I do not hold that this is the only measure of information. Personal experiences, observations, social contexts, empathy, and theory do count. But building upon a foundation this solid will help in seeking the truth. Critically assessing research is an important step in understanding our world and our bodies.

About the Author
Pattie Thomas, Ph.D.

Pattie Thomas, Ph.D., is a medical sociologist and author of Taking Up Space: How Eating Well and Exercising Regularly Changed My Life, a sociological memoir.
