
Bamboozled by Bad Science

The first myth about "evidence-based" therapy

Media coverage of psychotherapy often advises people to seek "evidence-based" therapy.

Few outside the mental health professions realize that the term “evidence-based” has become a form of marketing or branding (see my previous blog). It refers to therapies conducted by following instruction manuals, originally developed to create standardized treatments for research trials. These pre-scripted or "manualized" therapies are typically brief, highly structured, and almost exclusively identified with cognitive behavioral therapy (CBT).

Academic researchers routinely extol the “evidence-based” therapies studied in research settings and denigrate psychotherapy as actually practiced by most therapists. Their comments range from the hysterical (“The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment.”–Professor Walter Mischel, quoted in Newsweek) to the seemingly cautious and sober (“Evidence-based therapies work a little faster, a little better, and for more problematic situations, more powerfully.”–Professor Steven Hollon, quoted in the Los Angeles Times). Even former American Psychological Association president Alan Kazdin jumped on the bandwagon, telling Time magazine that psychotherapy is “overrated and outdated,” and lamenting that it is hard to find referrals for “evidence-based treatments like cognitive-behavioral therapy.”

One might assume from such comments that strong scientific evidence shows that “evidence-based” (read: manualized) therapy is superior to psychotherapy as practiced by most clinicians in the real world.

Does scientific evidence really show this?

Myth #1: “Evidence-based” therapy is more effective than other psychotherapy

Nearly all the evidence supporting “evidence-based” therapy comes from studies comparing “evidence-based” therapy to no therapy, or to control groups that receive sham therapies, which serve as foils and are not intended to be serious alternatives.

This research tells us only that “evidence-based” therapy is better than doing nothing (or doing something not meant to be a serious alternative). It does not tell us how "evidence-based" therapy compares to real-world psychotherapy that a person would receive from a qualified mental health professional.

What about studies comparing “evidence-based” therapies to legitimate alternative therapies? Such studies are scarce, but their results are clear and consistent: they show no advantage for “evidence-based” therapies. An analysis published in the prestigious Clinical Psychology Review explored the topic in depth. It found that as control groups more closely approximate legitimate psychotherapy provided by qualified mental health professionals (any kind of legitimate therapy), any apparent advantage for “evidence-based” therapy vanishes. Writing in careful academic language, the authors conclude: “There is insufficient evidence to suggest that transporting an evidence-based therapy to routine care that already involves psychotherapy will improve the quality of services.”1

The same article offers a truly disturbing glimpse into how psychotherapy research trials are typically conducted. Interventions provided to control groups and labeled “Treatment As Usual” by the researchers “were predominantly ‘treatments’ that did not include any psychotherapy.” In other cases, so-called “Treatment As Usual” meant no treatment at all, or a hobbled pseudo-therapy in which therapists were prevented from providing the treatment they would normally provide. The authors expressed their frustration with these research practices in, again, understated academic tones: “Training therapists to prevent them from using certain therapeutic actions that are typically employed in their practice cannot logically be classified as a Treatment As Usual.”

Another way to evaluate how “evidence-based” therapies compare to real-world therapy is through naturalistic studies, which follow patients treated by ordinary clinicians in day-to-day practice. The patients are evaluated before and after treatment, and their improvement is expressed as an effect size. That effect size can then be compared to the effect sizes reported for “evidence-based” therapies in published research trials.
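(For readers unfamiliar with the statistic: an effect size expresses how much patients improved, in standardized units. One common measure, Cohen's d, divides the average change in symptom scores by the standard deviation of those scores. The exact variant differs from study to study, so the formula below is a generic sketch for orientation, not the specific computation used in any study cited here:

$$ d = \frac{M_{\text{before}} - M_{\text{after}}}{SD} $$

By convention, a d of roughly 0.2 counts as a small effect, 0.5 as medium, and 0.8 as large.)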

An especially rigorous naturalistic study, reported in the Journal of Consulting and Clinical Psychology, followed 5,704 depressed patients who received real-world therapy from licensed clinicians covered by their health insurance plans.2 The clinicians were not specially trained or qualified; they were ordinary practitioners with master’s degrees or higher in psychology, marriage and family therapy, clinical social work, psychiatry, or psychiatric nursing—not a “high power” group by any means. The results obtained by the real-world clinicians did not differ from those for “evidence-based” therapies in controlled research trials. Five published studies used similar methods to evaluate real-world therapy. Not a single one showed an advantage for “evidence-based” therapy.

Even these studies overestimate the benefits of “evidence-based” therapy, because published effect sizes are skewed by publication bias: favorable research findings tend to get published and unfavorable findings tend to be suppressed. Publication bias is not unique to psychology. It plagues many areas of research and creates the impression that treatments work better than they really do.

In research on “evidence-based” therapy, the level of publication bias is shocking: an analysis in the British Journal of Psychiatry calculated that published effect sizes for CBT are exaggerated by 60% to 75% due to publication bias.3 In other words, the real benefits are just a fraction of what the research literature portrays. If “evidence-based” and real-world therapy are compared on a level playing field by adjusting for publication bias, real-world therapy appears more effective.
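To see what that means in concrete numbers, suppose the published trials average an effect size of 0.64 and correcting for unpublished, unfavorable studies brings it down to 0.40 (round hypothetical figures chosen only to illustrate the arithmetic, not values taken from the study cited above):

$$ \frac{0.64}{0.40} = 1.60 $$

The published literature would then overstate the true benefit by 60%, and the real effect would be under two-thirds of what the journals report.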

Reality:

Claims that “evidence-based” therapy is more effective than real-world therapy lack scientific basis. Academic researchers are selling a myth, one that enhances their own careers but not necessarily the well-being of patients.

It is not just my conclusion that the therapies promoted and marketed as “evidence-based” confer no special benefits. It is the official scientific conclusion of the American Psychological Association, based on a comprehensive review of psychotherapy research by a blue-ribbon expert panel and spelled out in an official APA policy resolution.

Jonathan Shedler, PhD practices psychotherapy in Denver, CO and online by videoconference. He is a Clinical Associate Professor at the University of Colorado School of Medicine. Dr. Shedler lectures to professional audiences nationally and internationally and provides clinical supervision and consultation by videoconference to mental health professionals worldwide.

Visit and "like" my Facebook page to hear about new posts or ask about this one. If you know others interested in this topic, please share it. See my other blog posts here.

Note: For readers who want more in-depth information about misunderstandings surrounding “evidence-based” therapy, I am providing a list of key scholarly articles, below. They provide the background to evaluate the research literature for yourself:

Wachtel, P.L. (2010). Beyond “ESTs”: Problematic assumptions in the pursuit of evidence-based practice. Psychoanalytic Psychology, 27, 251-272.

Parker, G. & Fletcher, K. (2007). Treating depression with the evidence-based psychotherapies: A critique of the evidence. Acta Psychiatrica Scandinavica, 115, 352–359.

Westen, D., Novotny, C.M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631–663.

Beutler, L.E. (2009). Making science matter in clinical practice: Redefining psychotherapy. Clinical Psychology: Science and Practice, 16, 301-317.

American Psychological Association (2013). Recognition of Psychotherapy Effectiveness. Psychotherapy, 50, 102-109.

Duncan, B.L. & Miller, S.D. (2006). Treatment manuals do not improve outcomes. In J.C. Norcross, L.E. Beutler, R.F. Levant (Eds.), Evidence-based practices in mental health: Debate and dialogue on the fundamental questions (pp. 140-149). Washington, DC: American Psychological Association.

______________
1Wampold, B.E., Budge, S.L., Laska, K.M., Del Re, A.C., Baardseth, T.P., Fluckiger, C., Minami, T., Kivlighan, D.M., & Gunn, W. (2011). Evidence-based treatments for depression and anxiety versus treatment-as-usual: A meta-analysis of direct comparisons. Clinical Psychology Review, 31, 1304–1312.

2Minami, T., Wampold, B.E., Serlin, R.C., Hamilton, E.G., Brown, G.S., & Kircher, J.C. (2008). Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology, 76, 116–124.

3Cuijpers, P., Smit, F., Bohlmeijer, E., Hollon, S.D., & Andersson, G. (2010). Efficacy of cognitive-behavioural therapy and other psychological treatments for adult depression: Meta-analytic study of publication bias. British Journal of Psychiatry, 196, 173–178.

© 2013 by Jonathan Shedler
