Daniel Voyer, Ph.D.
Neuroscience

Does Neuroimaging Provide the Ultimate Answers?

Should we rely so much on neuroimaging to answer our research questions?

In the many years I have been reviewing papers and grant applications in neuroscience, I have seen our thinking evolve from a complete reliance on behavioral and clinical data to a nearly blind reliance on neuroimaging data. When I talk about neuroimaging here, I have in mind a narrow definition that includes only functional imaging methods with good spatial resolution. This means mostly functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and positron emission tomography (PET), but excludes electroencephalography (EEG), which is known to have poor spatial resolution (Srinivasan, 1999).

What I often find is that developments in neuroimaging have led many researchers to lose sight of the distinction between empirical evidence and theory. To illustrate this point, consider an empirical finding suggesting that the ventral anterior cingulate cortex “lights up” under neuroimaging when women, but not men, perform mental rotation (Butler et al., 2007). Someone might be tempted to take this a step further and build a theory in which the ventral anterior cingulate cortex is involved in mental rotation for women but not men. After a while, we might forget that this theory rests on a single (as far as I know) unreplicated fMRI finding, and the line between theory and empirical evidence blurs.

This issue is compounded because most researchers seem to overlook the fact that many neuroimaging findings have been difficult to replicate across labs and tasks. One only has to consider the meta-analysis of neuroimaging studies of mental rotation conducted by Zacks (2008) to see an illustration of this point. The appendix of his paper is particularly useful in showing the lack of agreement between studies in the coordinates where activation is obtained during mental rotation. Of course, we should expect several areas to be involved. What is more problematic is the sheer variability between studies. This is likely due in part to the variety of methodological details across tasks and to the fact that most researchers do not bother validating their tasks with a larger sample before proceeding to a neuroimaging study (Voyer et al., 2006). After all, a neuroimaging study is so much sexier than a validation study! This last point is particularly sad, as some journal editors now seem to believe that the only way to study the brain is to run a neuroimaging experiment. I guess some people have also lost sight of the fact that all behaviors arise from the brain; therefore, any task is a study of the brain at some level!

Perhaps one of the most neglected aspects of neuroimaging methods is that determining which activation is significant often relies on multiple tests of significance. The pretty colors shown in the typical figure of a neuroimaging paper usually reflect a t-test result or a significance level. If we are lucky, the authors might say that a correction for multiple comparisons was applied, although it is rarely specified beyond that. Essentially, if we compute a pixel-wise comparison of activation across experimental conditions (e.g., baseline versus experimental) over an area that is 100 x 100 pixels, we compute 10,000 t-tests (100 x 100). With a Bonferroni correction, we would then consider significant at p < .05 only those tests where p < .000005 (.05/10,000). Is that what researchers are actually doing? It is not clear. I was recently reviewing a paper in which the authors used p = .001 as their significance level for every comparison. With a 100 x 100 pixel area, that still leaves an enormous risk of false positives: at that threshold, roughly 10 of the 10,000 tests would be expected to come out "significant" by chance alone. This is compounded by the fact that some researchers include as many regions of interest as possible to improve their chances of getting significant findings. If you ever read a paper in which this kind of fishing expedition is implemented, you should be wary! There should always be at least an empirical basis from past research for selecting specific regions of interest.
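To make the arithmetic concrete, here is a minimal sketch (mine, not from any particular study) of the multiple-comparisons problem for a hypothetical 100 x 100 pixel comparison. The grid size, the alpha levels, and the assumption that the tests are independent are illustrative only.

```python
import numpy as np

# Hypothetical grid: one t-test per pixel in a 100 x 100 area (illustrative assumption).
n_tests = 100 * 100  # 10,000 comparisons

# Bonferroni correction: to keep the familywise alpha at .05, each test
# must pass a much stricter per-test threshold.
alpha = 0.05
bonferroni_threshold = alpha / n_tests
print(f"Bonferroni per-test threshold: {bonferroni_threshold:.6f}")  # 0.000005

# Expected false positives if every null hypothesis is true and an
# uncorrected threshold of p = .001 is used (independence assumed).
uncorrected_alpha = 0.001
print(f"Expected false positives at p = .001: {n_tests * uncorrected_alpha:.0f}")  # ~10

# Probability of at least one false positive (familywise error rate)
# under the same assumptions.
fwer = 1 - (1 - uncorrected_alpha) ** n_tests
print(f"Familywise error rate at p = .001: {fwer:.4f}")  # essentially 1.0

# Quick simulation: with purely null data, p-values are uniform, so some
# pixels will cross p = .001 by chance alone.
rng = np.random.default_rng(0)
null_p_values = rng.uniform(size=n_tests)
print(f"Simulated chance 'activations': {(null_p_values < uncorrected_alpha).sum()}")
```

Under these assumptions, an uncorrected p = .001 threshold all but guarantees at least one spurious "activation" somewhere in the map, which is exactly why a correction for multiple comparisons (Bonferroni or otherwise) matters.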

As you read this post, you might think that I hate neuroimaging. Actually, this is not the case! I do think that neuroimaging is the way forward for neuroscience, as it has the potential to lead to a deep understanding of how the brain works. However, the point of my post is to urge you to be critical when reading this kind of research. After all, like everything we do in psychology, neuroimaging research is based on probabilities and experimental manipulations. Issues of measurement reliability and validity apply to neuroimaging just as they do to other methods of inquiry, along with the proper application of statistical methods. Thankfully, experts in neuroimaging are quite aware of these statistical shortcomings, and plenty of good people are working on solutions. I am not worried about them. What scares me is when people take neuroimaging studies for granted and use them to promote their own agenda (see Halpern et al., 2011, for criticisms of one such misuse). If we all consider research results critically (neuroimaging or otherwise), such abuse will be minimized!

About the Author

Daniel Voyer, Ph.D., is a professor at the University of New Brunswick in Canada.
