
The Blind Leading the Blind: Medications, Gluten, & Violins

How controlled experiments avoid the bias of expected results

[Image: The Parable of the Blind, by Pieter Bruegel]

“Can the blind lead the blind? Shall they not both fall into the pit?” Luke 6:39

While “the blind leading the blind” has become an oft-used idiom for incompetence and ineffectiveness, the “randomized, double-blind, controlled trial” is, on the contrary, regarded as the gold standard for clinical studies.

Two recent studies, which have nothing to do with psychiatry in any direct way, help to illustrate why we value blinded observations in scientific studies involving human subjects and investigators. But before we talk about those experiments, let’s review some basics of study design.

A randomized, controlled trial compares the effects of one condition (e.g. a medication) to another “control” condition (e.g. a different medication, or a placebo), by randomly assigning subjects to one of two (or more) treatment groups, observing the responses of those subjects, and then comparing the average responses between the groups. A “double-blind” study means that neither the subjects nor the experimenters are aware of a subject’s group assignment while the observations about response are made. Only later, during data analysis, does the experimenter learn the details of group assignment in order to determine whether a statistically significant difference has been found.
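To make that concrete, here is a minimal sketch (my own illustration in Python, not data or code from any actual trial) of how such a design plays out: subjects are randomly assigned under coded labels, everyone shows some placebo response, and only at analysis time are the groups unblinded and compared. All of the numbers, including the size of the drug effect, are invented.

```python
# Minimal, hypothetical sketch of a randomized, controlled trial's data.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # subjects per arm

# Random assignment: each subject gets a coded group label, so neither
# subject nor rater knows who receives active medication until the
# blind is broken.
assignment = rng.permutation(["active"] * n + ["placebo"] * n)

# Everyone shows some "placebo response" (hope, clinical attention, a
# good day); the active arm gets a small drug effect on top of it.
placebo_response = rng.normal(loc=5.0, scale=2.0, size=2 * n)
drug_effect = np.where(assignment == "active", 1.0, 0.0)
symptom_improvement = placebo_response + drug_effect

# Only at analysis time are the groups unblinded and compared.
active = symptom_improvement[assignment == "active"]
placebo = symptom_improvement[assignment == "placebo"]
t_stat, p_value = stats.ttest_ind(active, placebo)
print(f"mean improvement: active {active.mean():.2f}, "
      f"placebo {placebo.mean():.2f}, p = {p_value:.3f}")
```

The point is structural rather than statistical: the group labels stay coded until after all of the outcome measurements have been made.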

I remember learning the importance of blinding firsthand when I was conducting a clinical trial exploring whether a medication approved for the treatment of excessive daytime sleepiness might also help patients with schizophrenia. In the study, half of the subjects were randomly assigned to treatment with active medication, while the other half were assigned to receive a placebo. Each week, these patients would come in to report how they were feeling and I, as the investigator, would administer questionnaires (“rating scales”) to translate their verbal reports into quantifiable data.

Sometimes patients would report that they were feeling better, leading them to believe that they had been assigned to active medication. I would remind them that they might actually be taking placebo and might be feeling better for other reasons not related to the study medication. Maybe they were just having a good day. Maybe they were just feeling hopeful that the study medication might help. Or maybe they were just trying to please me.

As I rated patients’ responses, I would then have to remind myself of the very same thing, tempering my own enthusiasm and desire for a positive end result when patients reported feeling better. I therefore took great care not to confuse those subjective reports with actual improvements in specific symptoms as measured by the rating scales. For example, a patient might say they felt more energetic, but then report no change in the amount of sleep or activity level when specifically asked about such things. At the end of my study, when the blind was removed and the data were analyzed, there were few meaningful differences between the treatment groups and barely a budge in the level of objectively measured symptoms.

While my study results were “negative” with little change in either treatment group, clinical trials are often negative because of significant changes in both the active treatment and the placebo condition, but no differences between the two. Likewise, patients taking placebos often report side effects, even though they aren’t receiving any active medication (modern placebos are inert substances, not “sugar pills,” which might actually have an effect). Such commonly observed effects remind us that in a clinical trial, being assigned to a placebo isn’t the same as receiving “nothing.” Rather, the placebo condition includes everything that might produce an effect other than the active medication itself, including the clinical care a subject receives in the study and the hope and expectation of a response.

With that background, let’s talk about two recently published studies that reveal how blinding helps to reduce the bias of our expectations.

The first study is reminiscent of the famous Paris Wine Tasting of 1976, in which French judges participating in a blind taste test of California and French wines gave the highest ratings to bottles from Napa Valley for both red and white wines (needless to say, this was not the finding the event’s host or its judges expected, and it has kept Napa “on the map” as a top winemaking region ever since).1 In this case, however, the subject of investigation was violins, examining the commonly held beliefs that antique Italian violins made by the likes of Stradivari and Guarneri del Gesù in the 17th and 18th centuries are “tonally superior” to newly made instruments and that any experienced player can tell the difference.

In a study published in the Proceedings of the National Academy of Sciences, Claudia Fritz and colleagues used a blinded design to compare six antique Italian violins to six control violins crafted anywhere from 20 years to just a few days prior.2 Seasoned, award-winning violin soloists were allowed to use their own bows to play the instruments in a rehearsal room and again later in a concert hall, but were required to wear welding goggles in both settings, making it impossible to identify the instruments by sight. The soloists then rated different aspects of each instrument (e.g. loudness, projection, tone quality, playability, clarity, and overall preference/quality) in order to select their favorite.

When the data were analyzed, the top two most-preferred violins were newly made instruments; one of the Stradivari placed third (this third-place violin was also rejected as unsuitable on four occasions). When soloists were asked to guess whether an instrument was old or new, their wrong guesses slightly outnumbered their correct ones, suggesting that the ability to tell the difference was no better than chance. Overall, in terms of preference, new violins outscored old violins by almost 6 to 1.

As with the Judgment of Paris, these results were highly unexpected by the soloists and seem to give the lie to, as the authors put it, the “near-canonical beliefs about [the superiority of] Old Italian violins.” Still, despite the objectivity of this study, in a recent interview for the podcast Planet Money, the modern-day virtuoso Joshua Bell stated that he refuses to believe that, if blindfolded, he couldn’t tell the difference between a Stradivarius (his own instrument of choice) and a new violin.3

Indeed, it is that very conviction that necessitates blinding in controlled studies. If I had known which of my study patients were on active medication, it is likely that I would have been inclined, whether consciously or unconsciously, to rate them as improved. If violinists who have spent the kind of money it takes to own a Stradivarius were aware of what instrument they were playing, it would be hard for them to say that they preferred a relatively cheap replica.

The second recent blinded experiment with provocative findings was a study of the effects of gluten and gluten-free diets on people with self-reported non-celiac gluten sensitivity (NCGS). The apparent spike in “food allergies” such as gluten intolerance in recent years, accompanied by the now ubiquitous availability of gluten-free products in the supermarket, has raised questions about whether this represents a new medical epidemic, a simple variant of irritable bowel syndrome, or a kind of mass hysteria.

Last year, Jessica Biesiekierski and colleagues published the results of a trial that compared the effects of a high-gluten diet, a low-gluten diet, and a gluten-free diet containing whey protein on individuals with symptoms of irritable bowel syndrome (IBS) and NCGS, based on their own reports of previous improvement on gluten-free diets.4 Study subjects were first fed a gluten-free diet low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs, which are thought to worsen IBS) during a 2-week “wash-out” period and then randomized to one of the study diets for a week. After that, the subjects were put back on the low-FODMAP diet for 2 weeks before being put on a second study diet for another week (this “cross-over” design allows subjects to try more than one experimental condition and serve as their own controls; see the sketch below). On average, during the wash-out low-FODMAP diet, gastrointestinal symptoms such as pain, bloating, nausea, fatigue, and flatulence all improved for study subjects. Then, during the blinded experimental diets, symptoms worsened, but there was no significant difference between the gluten-containing and gluten-free diets.
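For readers who like to see the moving parts, here is a simplified, hypothetical sketch of how a cross-over schedule like this might be generated. It assumes, purely for illustration, that each subject cycles through all three diets in a randomized order, which is not exactly how the trial sequenced its diets.

```python
# Hypothetical sketch of a cross-over schedule like the one described
# above: diet order is randomized per subject so that order effects
# average out, and a low-FODMAP wash-out separates the diet periods.
import random

DIETS = ["high-gluten", "low-gluten", "gluten-free (whey)"]
WASHOUT = "wash-out: low-FODMAP, gluten-free (2 weeks)"

def crossover_schedule(subject_id: int) -> list[str]:
    order = DIETS[:]
    # Seeding with the subject ID gives a reproducible per-subject order.
    random.Random(subject_id).shuffle(order)
    schedule = [WASHOUT]
    for diet in order:
        schedule.append(f"study diet: {diet} (1 week)")
        schedule.append(WASHOUT)
    return schedule

for step in crossover_schedule(subject_id=7):
    print(step)
```

Because each subject serves as their own control, differences between diets show up within the same person rather than between different people.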

In a subsequent rechallenge trial, some subjects returned to follow a diet low in FODMAPs that was also dairy-free and low in food additives for 3 days, followed by sequential cross-over exposure to 3-day study diets: the high-gluten diet and the gluten-free whey-protein diet from the first part of the study, as well as a new diet with no gluten or additional protein. Once again, there was no significant difference in reported symptoms between the three diets. Furthermore, the responses to diets in the 7-day and 3-day phases of the study were not consistent for individual subjects. In other words, those who reported symptom worsening on gluten during the 7-day exposure didn’t always report the same experience during the subsequent 3-day exposure (and vice versa).

What does this study tell us? Given the lack of differences in symptoms between the various diets, it indicates that some people with self-reported NCGS may not actually experience improvement on gluten-free diets when those diets are administered under blinded conditions. This is important because people often self-diagnose NCGS by performing a “gluten challenge” in which they follow a gluten-free diet for a week, followed by another week of a gluten-laden diet. If they feel worse during the gluten week, this is taken as evidence of having NCGS. At the very least then, this study suggests that performing a gluten challenge under blinded conditions (see the sketch below) would probably yield more reliable results. More provocatively, it also suggests that many people who have self-diagnosed NCGS may not actually be sensitive to gluten at all.
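As a thought experiment, here is one way a blinded “n-of-1” gluten challenge might be scored, with a helper randomizing which periods contain gluten and revealing the sequence only after all the symptom ratings are in. The period lengths, scores, and design details are all invented for illustration.

```python
# Hypothetical sketch of a blinded "n-of-1" gluten challenge. A helper
# (not the person being tested) randomizes which periods contain gluten;
# the subject just logs symptom scores without knowing the sequence.
import random
from statistics import mean

periods = ["gluten", "placebo"] * 3  # six periods, three of each
random.shuffle(periods)              # the helper keeps this order secret

# Subject's per-period symptom scores (0 = none, 10 = severe), logged
# blind. These are example values, not data from the study.
scores = [4, 5, 3, 6, 4, 5]

# Only after all periods are logged is the sequence revealed and compared.
gluten_scores = [s for p, s in zip(periods, scores) if p == "gluten"]
placebo_scores = [s for p, s in zip(periods, scores) if p == "placebo"]
print(f"mean symptom score on gluten:  {mean(gluten_scores):.1f}")
print(f"mean symptom score on placebo: {mean(placebo_scores):.1f}")
```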

I recently told a friend, who has diagnosed himself with intolerance to dairy, sugar, and gluten, about this study. Echoing Joshua Bell, he replied, “I don’t care about studies. I care about my own experience.”

And therein lies the point about what all of this has to do with the psychiatry of everyday life. Patients who suffer from psychotic disorders like schizophrenia often have delusions: unsubstantiated beliefs that are maintained despite conflicting evidence, such as paranoid concerns that someone is trying to harm them or that they have magical powers. But all of us, delusional and otherwise, are guilty of holding beliefs based less on evidence and more on intuition, on how something feels to us. And for many issues, when we look to prevailing opinion for guidance, we’re surrounded by conflicting viewpoints, whether about brand preferences, nutrition, climate change, or politics. In that sense, we’re all blind to a certain degree.

So, how can we make sense of the world? To start with, we have to acknowledge that our intuitions and expectations, no matter how right they feel, are often wrong. And in some cases, a blindfold can help us see the truth.

1. Taber GM. Judgment of Paris. New York: Scribner; 2005.

2. Fritz C et al. Soloist evaluations of six Old Italian and six new violins. PNAS 2014; 111:7224-7229.

3. Episode 538: Is a Stradivarius just a violin? Planet Money, May 9, 2014. http://www.npr.org/blogs/money/2014/05/09/310447054/episode-538-is-a-st…

4. Biesiekierski JR et al. No effects of gluten in patients with self-reported non-celiac gluten sensitivity after dietary reduction of fermentable, poorly absorbed, short-chain carbohydrates. Gastroenterology 2013; 145:320-328.
