Michael F. Steger, Ph.D.

How Deep Is the Divide Between Therapy and Science?

What is the role of science in directing psychotherapy practice?

Is there a Mackenzie Phillips-sized family secret lurking in the heart of psychology?

In an inflammatory new opinion piece for Newsweek, Sharon Begley says, 'Hell yeah!' - "It's a good thing couches are too heavy to throw, because the fight brewing among therapists is getting ugly. For years, psychologists who conduct research have lamented what they see as an antiscience bias among clinicians, who treat patients. But now the gloves have come off."

I for one have begun pumping iron to improve my couch-hurling abilities in preparation for the upcoming sofa melee!


Ms. Begley is talking about a new article, set to appear in the journal Psychological Science in the Public Interest, in which Timothy Baker, Richard McFall, and Varda Shoham argue that too many practicing clinical and counseling psychologists ignore the huge amount of research identifying successful and effective ways to do therapy. This debate has been around for a while, and it raises hackles on both sides. I think it's a matter of values, and as is the case in most battles over values, there is probably no easy solution. At the same time, this debate is absolutely critical for the health of the field and for the economic and psychological health of our populace!

Here is how Baker and colleagues begin their article (currently available only in an 'in press' form here): "The principal goals of clinical psychology are to generate knowledge based on scientifically valid evidence and to apply this knowledge to the optimal improvement of mental and behavioral health." This highlights the first values conflict:

♦ Is the goal of clinical and counseling psychology to create knowledge through research and translate that into helping people? OR is the goal to try to help people and later use research to understand how (or indeed whether) that therapy works?

People whose values lie in the first half of that question say, given that we can verify that several empirically effective treatments exist, why choose an untested product?

People whose values lie in the second half of that question say, given that the therapies that are being tested were generally drawn from the experience and experimentation of clinicians in the first place, why should we wait for research to ratify each and every approach (when it gets around to it)?

There is a second value at work as well:

♦ Can therapy be dismantled, with critical elements isolated, delivered in calibrated doses, and their effects reliably measured against meaningful comparisons with other approaches, then reconstructed and implemented by clinicians? OR is the interplay of myriad client and therapist characteristics and behaviors, across countless discrete interactions, in attempts to address multiple co-existing disorders and concerns, too dynamic and complex to be replicated within a laboratory setting?

Simply put, some people think that empirical approaches to testing therapy can inform us about what works best, and some people don't.

Why do I think this is a values issue? Well, in addition to the war metaphors (a fight brewing, gloves coming off), it is common to see people tackle this issue by creating "straw men" - absurd and extreme caricatures of their opponents - to attack. For example, Ms. Begley insinuates that millions of psychologists use ridiculous-sounding approaches like dolphin-assisted therapy with their clients. As awesome as that sounds, it is absurdly untrue (where am I going to get a dolphin in Fort Collins, Colorado?). On the other side of the "battle," I frequently hear people assert that researchers are trying to turn therapy into a cookbook-driven series of tricks that a monkey, robot, or child could perform. That is obviously absurd as well.

All of the rhetorical histrionics that this issue attracts distract from the real issue: How can we show that we give our clients effective services? Following closely on the heels of this question is: How can our clients and consumers assure themselves that they are getting effective services?

This reminds me of a fascinating Malcolm Gladwell article, in which he describes "The Quarterback Problem." Indulge me, if you would, in a football sidebar (after all, my 2009 Minnesota Vikings team is clearly the greatest team since the 1972 Dolphins!). Essentially, the quarterback problem is that it is insanely hard to figure out which college quarterback will be a great NFL quarterback. Scouts, coaches, personnel directors, managers, and media figures spend thousands of hours poring over statistics, videotapes, and game performances for practically every eligible college quarterback. The result of this mind-boggling time investment is that no one has any idea who will be an NFL MVP and who will be a pitiable bust. As Mr. Gladwell frames it, the rub lies in the incredible increase in complexity and speed of the pro game compared to the college game. Although they're both playing football, it's not really the same game.

In some ways, although scientists are "doing" therapy in their research, it's not necessarily the same game that practitioners are playing with their clients.

Some people take this notion and run with it, maintaining that research can't tell us which therapies are effective and which are not. That's ridiculous, of course. Even the worst scout for the worst NFL team doesn't tell the team to draft a punter, offensive tackle, or unicorn. Scouts are pretty good at ruling out awful, and even mediocre, talent. Occasionally, an undrafted QB makes a big splash (Kurt Warner and Tony Romo come to mind), but the system - as riddled as it is with disconnects between the performance it's assessing and the performance it's trying to predict - does a brutally good job of getting rid of junk.

You can add another complicating factor to the quarterback problem. When researchers compare bona fide therapies - in other words, therapies most professionals would expect to work - it is fairly uncommon to find notable differences in the outcomes clients achieve. That is to say, the large majority of people report that therapy helps reduce their distress, and research often finds that specific approaches to therapy yield similarly good results (typically better than medications). Researchers like Bruce Wampold argue that this is explained by factors common to successful therapy ("common factors"), like establishing a good working relationship and the degree to which clients are actively engaged in their own healing.

Unfortunately, for too many practitioners, the values of help-first-research-later and it's-too-complex combine with research showing a good degree of equivalence among therapy approaches to provide an excuse to simply do whatever they feel like doing. I think it's a pretty small number of people, though - not really the "unconscionable embarrassment" Walter Mischel labels it. After all, in contrast to Ms. Begley's silly assertion that clinical psychologists are not exposed to science training, current accreditation standards for clinical and counseling psychologists require significant scientific training. There are some reasons to wonder whether developing a new accrediting board, as Baker and colleagues propose, rather than pressuring existing boards to support scientific training more rigorously, would be the most efficient way to develop better therapists (see one opinion on this here). Although the criteria they lay out appeal to me personally, many programs already do an excellent job within the existing system.

For example, in our counseling psychology program at Colorado State University, students are required to complete multiple research methods and statistics courses and to conduct empirical thesis and dissertation research projects, among other grounding coursework and experiences in the science of psychology.

I teach a course, required of every one of our doctoral students, that focuses specifically on empirically supported treatments and evidence-based practice. All of our students learn what works, the basics of how to implement those approaches, and how to critically consume and integrate findings from emerging research.

However, all these great things come with some boulders of salt.

First, it is far too rare for psychotherapists to evaluate their own effectiveness. No matter what one's values are, or how persuaded one is by the common factors debate, there really is no excuse for not using the tools of science to evaluate whether one's clients are getting better!

Second, I think it is clear by now that we have amassed a convincing amount of empirical evidence that does, in fact, support using some specific therapies. Training to competency in these already-identified approaches should be mandatory, in my mind. I am not convinced, however, that the evidence is solid enough for today's list to contain the only therapies psychologists should be allowed to use - after all, that list is ever-growing, and it contains significant gaps in what we know about treating certain specific disorders, treating multiple co-existing disorders, accounting for potentially important cultural or level-of-functioning differences, and serving people across the lifespan. Psychologists striving to treat difficult cases often need to improvise and innovate based on their expertise and experience, and the results often benefit us as a field.

Advancing the effectiveness of psychotherapy is absolutely critical, and it is central to researchers, clinicians, and the people they serve. Doing the therapy that comes most easily, regardless of whether there's any evidence that it works, can do real harm. Regarding anyone who sees some ambiguity or gaps in what we know as an angry, irrational Luddite can do real harm, too. Both sides need to consider who receives this harm, though. It's not us psychologists; it's the people we serve.

© 2009 Michael F. Steger. All Rights Reserved.

About the Author
Michael F. Steger, Ph.D., is the Founder and Director of the Center for Meaning and Purpose at Colorado State University.
