
Can I Trust Psychological Research of the COVID-19 Pandemic?

Sixth in a series about criticism of mental health research during COVID-19.

Source: Paolodefalco75/Wikimedia Commons

Are you one of those people who believe that humans generally are resilient to psychological stress? If we are to believe the pandemic modeling experts at the Centre for Mental Health in the U.K., boy, are you mistaken. According to their Chief Economist, “Nationally, in England, the model predicts that up to 10 million people (almost 20% of the population) will need either new or additional mental health support as a direct consequence of the crisis” (O’Shea, 2020). That’s 10 million people just in England!

I’ve been reporting in my series of posts that predictions like this from the Centre for Mental Health are almost certainly wrong. So which is it? Is there a psychological pandemic or not?

It’s hard to tell if you just listen uncritically to the media and science journals. The notion of a psychological pandemic gets endorsed by both sides in the many arguments surrounding the pandemic depending on what argument they’re trying to make. The notion has been sort of the all-purpose wild card you can play to get attention for your team’s argument. If you’re in favor of lockdowns, the argument is, “We need more lockdowns now so we can end all lockdowns later and relieve the massive mental suffering.” If you’re against lockdowns, the argument is, “We need to end lockdowns now to relieve the massive mental suffering immediately.”

After reviewing dozens of studies on mental health issues during the pandemic, we should be able to come to some conclusions now. The studies have covered the general population, infected individuals, and healthcare workers. A smaller number have studied individuals with pre-existing psychiatric conditions. The studies came from more than a dozen countries, with China leading the way.

Most of the studies have been cross-sectional and of poor quality. If we believe these cross-sectional studies, which are what often get reported in the fire-alarm articles and used by modeling experts, the rates of mental disorders skyrocketed because of the pandemic: anxiety disorders doubled compared to pre-COVID rates, depressive disorders tripled, and PTSD increased anywhere from two- to five-fold.

My conclusion, after poring over much of the evidence, is that almost none of that is true.

What’s wrong with the studies?

I don’t believe the poor-quality, cross-sectional research. There are two key components to the definition of a psychiatric disorder: the presence of symptoms and the presence of functional impairment. You can be stressed and show some symptoms, but if you don’t show any functional impairment in your day-to-day life, it is, by definition, not a psychiatric disorder.

In my five preceding posts, I explained the flaws of these studies. The flaws include:

1. Self-administered questionnaires are prone to inflating symptom reports.

2. Respondents were self-selected, which introduces sampling bias.

3. Nearly all of the studies were cross-sectional designs with no causal explanatory power.

4. The few longitudinal studies show little to no long-term psychological impacts.

5. All of the studies I have seen lacked measures of functional impairment.

6. Many of these poor-quality studies were published rapidly with relaxed peer review by journals, which raises doubts about every aspect of how the research was conducted and reported.

The poor-quality, cross-sectional research conflated distress with disorder. The idea that substantially more individuals than typical are feeling relatively more stressed than usual is probably true. But we live with stress every day. Feeling stressed is not the same as a psychiatric disorder.

More rigorous research has clearly shown that self-report questionnaires tend to artificially inflate the rates of psychiatric disorders compared to interviews. For example, Silove and colleagues found 47% of adults met the cutoff for PTSD on a self-report questionnaire, but only 20.3% when using the more accurate interview method (Silove et al., 2007). Similar patterns have been found for anxiety (e.g., Andrews et al., 2006) and depression (e.g., Lincoln et al., 2003).
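To get a feel for how much of this gap imperfect screening alone can produce, here is a minimal sketch. The sensitivity and specificity values below are my own illustrative assumptions, not figures from Silove et al.; they are chosen only to show that a questionnaire with decent sensitivity but mediocre specificity can roughly double the apparent prevalence over the interview-based rate.

```python
# Toy illustration (assumed numbers, not from any cited study): how a
# questionnaire with imperfect specificity inflates apparent prevalence.

def apparent_prevalence(true_prev, sensitivity, specificity):
    """Fraction of respondents who screen positive on the questionnaire:
    true cases detected plus non-cases falsely flagged."""
    return sensitivity * true_prev + (1 - specificity) * (1 - true_prev)

# Suppose the interview-based ("true") PTSD rate is about 20.3%.
true_prev = 0.203

# Hypothetical questionnaire accuracy: catches 90% of true cases,
# but also flags 36% of non-cases (specificity 0.64).
screen_positive = apparent_prevalence(true_prev, sensitivity=0.90, specificity=0.64)

print(f"{screen_positive:.1%}")  # prints 47.0% — screen-positive rate
```

With those assumed accuracy figures, a true rate of about 20% produces a screen-positive rate of about 47%, the same order of inflation the comparison studies report; the false positives among the large non-disordered majority do most of the work.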

The real-time systematic review by the DEPRESSD Project gave up trying to track these poor-quality, cross-sectional studies because it was a waste of time (see my post from 11/6/20). Instead, they have kept their project focused on more rigorous prospective longitudinal studies, which do not indicate a psychological pandemic crisis. With the possible exception of students, for whom social relationships are highly valued, prospective longitudinal studies “generally show either small increases or negligible changes in anxiety, depression, and other mental health functions” (The DEPRESSD Project, 2021). The DEPRESSD Project did not address the validity issue of self-report questionnaires versus interviews, but at least they addressed the cross-sectional versus prospective design issue.

Why so little skepticism?

On the medical side of the pandemic, sloppy and weak studies have been heavily criticized (e.g., Alexander et al., 2020; O’Riordan, 2020). We need to be equally critical of the psychological side of things.

Except for rare instances of skepticism (e.g., van Overmeire, 2020), there has been hardly any criticism that I am aware of from within the profession about the quality of these studies, which is troubling. It struck me that psychology journals seemed happy to be part of the pandemic discussion as a status signal, and would publish as much of this as they could get their hands on. Even the meta-analyses, which are supposed to be more rigorous, independent reviews that assess the quality of the research, not just its overall pattern, paid scant attention to the flaws (e.g., Deng et al., 2020).

Poor-quality, cross-sectional research was conducted in massive amounts, yet it was never designed to critically test the alternative hypothesis that there is no psychological pandemic. Journalists and policymakers who depend on scientists deserve better. Consumers deserve better. We deserve better than this.

Actually, the cross-sectional research itself is not the main problem; I don’t expect researchers to conduct perfect studies. The problem has been the authors of the research, who either were unable to understand the flaws or were unwilling to acknowledge them and report their results more cautiously. The peer-review system was either absent or weak.

The DEPRESSD Project was great for noting poor quality after the fact, but projects like that are too late, and will always be too late. Editors are the only science police we have at an organized level. The mob of peers can read and publish criticism after something is published, but once something is published, it exists forever. And who has time to police everyone else’s work when you’re hustling to keep your own career on track? Old dogs like me who have flexibility with their time can do some critiquing (like this blog), but we are too few and uncoordinated; we are outmanned and outgunned.

Conclusion

Certainly, there are many people who have suffered beyond normal distress due to the medical pandemic, but there is no evidence of a massive psychological pandemic. There may be a pandemic of stress, but not a pandemic of psychiatric disorders.

The vast majority of research has been mostly misleading and often worthless. Is it political? Is it unconscious bias? Is it just that this type of research is too easy for poorly trained researchers to conduct in a publish-or-perish environment, under a review system and publishing model that was not designed for pandemics? In a future post, I’ll have some thoughts on these issues.

References

Alexander PE, Debono VB, Mammen MJ, Iorio A, Aryal K, Deng D, Brocard E, Alhazzani W (2020). COVID-19 coronavirus research has overall low methodological quality thus far: case in point for chloroquine/hydroxychloroquine. Journal of Clinical Epidemiology 2020;123:120–126.

Andrews B, Hejdenberg J, Wilding J (2006). Student anxiety and depression: Comparison of questionnaire and interview assessments. Journal of Affective Disorders 2006;95:29–34.

Deng J, Zhou F, Hou W, Silver Z, Wong CY, Chang O, Huang E, Zuo QK (2020). The prevalence of depression, anxiety, and sleep disturbances in COVID-19 patients: a meta-analysis. Ann. N.Y. Acad. Sci. xxxx (2020) 1–22, doi: 10.1111/nyas.14506

Lincoln NB, Nicholl CR, Flannaghan T, Leonard M, Van der Gucht E (2003). The validity of questionnaire measures for assessing depression after stroke. Clinical Rehabilitation 2003; 17: 840–846.

The DEPRESSD Project. https://www.depressd.ca/research-question-1-symptom-changes (accessed 1/31/2021).

O’Riordan M (September 4, 2020). COVID-19 Blamed for Weaker Research Published by Top-Tier Journals in 2020. TCTMD: Cardiovascular Research Foundation. https://www.tctmd.com/news/covid-19-blamed-weaker-research-published-to… (accessed 10/4/20).

O’Shea N (10/1/2020). Forecasting needs and risks in the UK. https://www.centreformentalhealth.org.uk/publications/covid-19-and-nati… (accessed 12/29/20).

Silove D, Manicavasagar V, Mollica R, Thai M, Khiek D, Lavelle J, Tor S (2007). Screening for Depression and PTSD in a Cambodian Population Unaffected by War: Comparing the Hopkins Symptom Checklist and Harvard Trauma Questionnaire With the Structured Clinical Interview. Journal of Nervous and Mental Disease 2007;195: 152–157.

van Overmeire R (2020). The Methodological Problem of Identifying Criterion A Traumatic Events During the COVID-19 Era: A Commentary on Karatzias et al. (2020). Journal of Traumatic Stress, October 2020, 33, 864–865. DOI: 10.1002/jts.22594
