
Why Interventions Work Is as Important as If They Work

Your evaluation shouldn’t settle for just a yes/no answer.

Source: Tech. Sgt. Ryan Crane/Keesler

Many of us are judged not by the richness of our ideas but by the quality of our results. How many students have you recruited? How much money have you saved the college? How many grants have you received? Whatever metrics define your professional success (mine, luckily, is not how many people read this blog), questions like these narrow our focus to whether new programs and innovations work, while neglecting why they do or do not work. Why, however, is more than an academic question; answering it can pay off in multiple ways when you conduct an evaluation. This is especially true in today’s zeitgeist of nudging—low-cost, scalable interventions that can be easily implemented and iterated upon—where the simplicity of these strategies can obscure the complex psychological responses they elicit from the people we want to support. Asking why a program works can help us expand the scope of our efforts, design complementary solutions, and diagnose problems when our efforts fall flat.

Case Study: Changing Energy Habits

A now-famous set of studies showed how social proof can curb household energy use. Most of us don’t know how much energy consumption is normal, so if we find out that we use more than average, we may be motivated to bring our consumption closer to the norm. The Opower conservation program therefore sent its customers bills that revealed how much energy they used compared to their neighbors. Lo and behold, customers who used more energy than average reduced their consumption over time, and those effects persisted robustly for years.

So why did energy consumption go down? Two different analyses of Opower data came up with differing explanations:

#1. Social proof changes habits. When people received their first bill, their daily energy use would drop steeply. Their consumption would slowly creep back up until the next bill arrived, at which point energy use would fall off again. By the fourth bill, however, people’s energy usage tended to stabilize at a lower level than where it started, so there was no drop and no rebound. According to these researchers, new habits (e.g., turning off lights when leaving a room; readjusting the thermostat) had taken hold. Or had they?

#2. Social proof motivates home improvement. A competing analysis examined what happened to household energy consumption after a house was sold. The new occupants were never exposed to social proof, so they should still have been naïve about what constitutes normal energy use. Yet energy consumption in resold homes didn’t rise. Why? The researchers argue that the original homeowners, the ones who did receive social proof, made improvements such as installing programmable thermostats and Energy Star appliances. On this account, the stabilization after the fourth bill reflected not how long it took for new habits to form, but how long it took people to make their homes more energy efficient, an impact that stayed with the house even after the original homeowners departed.

One result, two explanations… as long as social proof works, who cares? But imagine rolling out this conservation program in a low-income neighborhood. If social proof changes people’s energy habits, then it should work regardless of household income. But if social proof primarily nudges people to upgrade their homes, that may not be possible among a more disadvantaged population. Likewise, people already living in energy-efficient homes would only benefit from social proof if it engenders new habits because they have no other improvements to make. As you can see, the why is extremely important in planning to scale a program such as this one to new localities.

What Can ‘Why’ Do For You?

The Opower example reveals one key reason to examine why a program works: scalability. Often when we introduce a new program or technology into higher education, we test it with a specific group of students, but with plans to introduce it widely should the pilot go well. Understanding why, and not just if, something works will help us to predict how new populations will respond. For example, if a new intervention for first-generation students living on campus increases academic performance primarily by motivating students to seek tutoring, it might not translate to commuter students, working adults, and parents who may not have extra time in their lives to dedicate to help-seeking. If that intervention operates by creating more efficient study habits, however, it is more likely to help a wider array of students. By understanding the underlying mechanism, we have a better idea of who else could benefit from our intervention.

Asking why a program works can also help us develop complementary solutions to student challenges. Let’s say we pilot a successful well-being initiative and, through surveys, discover that its largest impact was on improving students’ sleep habits. We can now devise other ways to improve sleep in order to maintain and bolster those gains, as well as consider how best to scale our findings around sleep. But let’s say that even though our initial pilot was successful, we really only helped women sleep better, not men. Now we know to target men in a new way and to examine whether changing their sleep habits has a commensurate impact on their overall well-being.

Finally, examining why something works may also help us to diagnose why something doesn’t work. Often when a pilot fails, the baby goes out with the bathwater and we start over. But asking why may reveal to us the fatal flaw that we can fix in the next iteration. For example, imagine implementing a utility-value intervention for introductory STEM students that did increase time spent studying, yet had no significant impact on GPA. Perhaps the effect was too weak (i.e., students need to study even more before their grades will improve), or the mechanism of change was insufficient and you instead need to target students’ study habits or help-seeking behaviors. Exploring the why gives you the opportunity to see your pilot from new perspectives and make informed improvements to it without starting from scratch.

Conclusion

It can be easy in this world of nudging to implement a simple intervention and get fast results, but that doesn’t mean you shouldn’t look under the hood to figure out what makes that intervention go. There’s no doubt that asking why can be difficult: It often involves additional data collection such as surveys, interviews, and behavioral tracking, all of which add cost and burden to your evaluation. But the payoff in terms of scalability, development, and diagnosis may be well worth the extra investment it takes to ask the all-important question of “Why?”

References

Allcott, H., & Rogers, T. (2014). The short-run and long-run effects of behavioral interventions: Experimental evidence from energy conservation. American Economic Review, 104(10), 3003–3037.

Brandon, A., Ferraro, P. J., List, J. A., Metcalfe, R. D., Price, M. K., & Rundhammer, F. (2017). Do the effects of social nudges persist? Theory and evidence from 38 natural field experiments (No. w23277). National Bureau of Economic Research.
