How “Magic” Led to MIT Innovation in AI for Neuroscience

Deep learning data synthesizer method automates brain scan image segmentation.

Source: mohamed_hassan/Pixabay

At last week’s Conference on Computer Vision and Pattern Recognition (CVPR), a team of researchers from the Massachusetts Institute of Technology (MIT) presented an innovative artificial intelligence (AI) system that can learn to segment anatomical brain structures from a single labeled brain scan together with a set of unlabeled scans, automating neuroscientific image segmentation.

This novel AI system for neuroscience originated in a very different domain: smartphone gaming. Amy Zhao, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS) and Computer Science and Artificial Intelligence Laboratory (CSAIL), and the first author on the research, initially sought to create an app using convolutional neural network technology that could provide detailed real-time information about cards from the game “Magic: The Gathering” based on a picture taken with a smartphone.

The challenge was that this computer vision task would require a data set of photos covering not only each of the 20,000 cards, but also many more images of each card with variations in appearance and attributes such as lighting. Creating such a data set manually would be painstakingly tedious and extremely time-consuming, so Zhao set out to automate its creation by synthesizing realistic “warped” versions of any card.

A convolutional neural network (CNN), a class of deep learning algorithm whose artificial neural network architecture is loosely inspired by the visual cortex of the biological brain, was trained on a small subset of the data: 200 cards, with 10 photos of each. From these examples, the CNN learned to manipulate a card into various positions and appearances, varying qualities such as brightness, reflections, and photo angle, and thereby gained the ability to synthesize realistic warped versions of any card in the data set.
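
To make the idea concrete, here is a minimal Python sketch of this kind of “warped” synthesis. It hand-codes a smooth random warp plus brightness and contrast jitter; in Zhao’s system the transformations were learned by a CNN rather than hand-coded, so the function name, parameter ranges, and placeholder image below are illustrative assumptions only.

```python
# A minimal, hand-coded stand-in for the learned card "warping" described
# above. The real system learned these transformations with a CNN; the
# ranges and helper names here are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthesize_warped(image, rng, max_shift=4.0, smoothness=8.0):
    """Random smooth spatial warp plus brightness/contrast jitter."""
    h, w = image.shape
    # Smooth random displacement field: a crude stand-in for a learned flow field.
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), smoothness) * max_shift
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), smoothness) * max_shift
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warped = map_coordinates(image, [ys + dy, xs + dx], order=1, mode="nearest")
    # Random appearance jitter, loosely mimicking lighting variation.
    contrast = rng.uniform(0.8, 1.2)
    brightness = rng.uniform(-0.1, 0.1)
    return np.clip(warped * contrast + brightness, 0.0, 1.0)

rng = np.random.default_rng(0)
card_photo = rng.random((256, 192))     # placeholder grayscale card image
augmented = synthesize_warped(card_photo, rng)
```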

Zhao realized this warping could be applied to magnetic resonance images (MRIs). The pattern-recognition capabilities of deep learning, a subset of machine learning, are helping neuroscientists perform complicated analyses of brain images. However, training machine learning algorithms can be a costly, labor-intensive challenge.

For neuroscience studies, training a machine learning model often requires neuroscientists to manually label the anatomy in each brain scan. Image segmentation is the process of labeling image pixels based on shared characteristics; in MRIs, the three-dimensional pixels are called voxels. Researchers often perform image segmentation by hand, separating and labeling voxel regions according to anatomical structure.
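
As a concrete picture of what a segmentation is as data, the sketch below builds a label volume: an integer array the same shape as the MRI, where each voxel stores an anatomical label. The shapes and example label ID are placeholders (17 happens to be the left hippocampus in FreeSurfer’s labeling scheme).

```python
# What a voxel-wise segmentation looks like as data: an integer label volume
# with the same shape as the MRI. Shapes and the label ID are placeholders.
import numpy as np

mri = np.random.rand(160, 192, 224)        # placeholder MRI intensity volume
seg = np.zeros(mri.shape, dtype=np.int32)  # 0 = background / unlabeled
seg[60:80, 90:110, 100:120] = 17           # mark a block of voxels as one structure

for label, count in zip(*np.unique(seg, return_counts=True)):
    print(f"label {label}: {count} voxels")
```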

Zhao, along with MIT postdoctoral associate Guha Balakrishnan, professor Frédo Durand, professor John V. Guttag, and senior author Adrian V. Dalca, automated the neuroscience image segmentation process using a single labeled brain MRI scan along with a set of a hundred unlabeled patient scans.

The researchers used two convolutional neural networks. The first learned, from the hundred unlabeled scans, both appearance variations (brightness, contrast, and noise) and spatial transformations: flow fields that model how voxels move between scans.
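
To give a flavor of how that first network might be trained, here is a toy 2D sketch in the spirit of learning-based registration: a small CNN predicts a flow field that warps the labeled scan toward an unlabeled one, and is trained with a simple reconstruction loss. Every architectural choice below is an illustrative assumption, not the authors’ actual (3D) model.

```python
# Toy 2D sketch of the spatial model: a small CNN predicts a flow field that
# warps the labeled atlas toward an unlabeled scan, trained to reconstruct it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),   # two channels: (dy, dx) per pixel
        )

    def forward(self, atlas, target):
        return self.net(torch.cat([atlas, target], dim=1))

def warp(image, flow):
    """Resample `image` along `flow` (pixel units) via grid_sample."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel displacements to grid_sample's normalized [-1, 1] coords.
    disp = torch.stack([flow[:, 1] * 2 / w, flow[:, 0] * 2 / h], dim=-1)
    return F.grid_sample(image, base + disp, align_corners=True)

model = FlowNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
atlas = torch.rand(1, 1, 64, 64)    # the one labeled scan (intensities only here)
target = torch.rand(1, 1, 64, 64)   # an unlabeled patient scan
flow = model(atlas, target)
loss = F.mse_loss(warp(atlas, flow), target)  # how well the warped atlas matches
loss.backward()
optimizer.step()
```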

To synthesize a new labeled scan, the system generates a random flow field and applies it to the labeled MRI scan so that it matches an actual patient MRI from the unlabeled data set. The learned brightness, contrast, and noise variations are then applied in random combinations. As a last step, the labels are mapped onto the synthesized scan using the same flow field of voxel movements.
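
A compact sketch of that synthesis step, assuming a 2D image for brevity: the essential trick is that the same flow field warps both the atlas image and its label map, so the synthesized scan arrives already labeled. The flow field and appearance parameters here are hand-made stand-ins for ones sampled from the learned models.

```python
# Sketch of the synthesis step: one flow field warps both the atlas image and
# its label map, producing a new, already-labeled training scan.
import numpy as np
from scipy.ndimage import map_coordinates

def synthesize_labeled_scan(atlas_img, atlas_labels, flow,
                            contrast=1.1, brightness=0.05):
    ys, xs = np.meshgrid(np.arange(atlas_img.shape[0]),
                         np.arange(atlas_img.shape[1]), indexing="ij")
    coords = [ys + flow[0], xs + flow[1]]
    # Warp intensities with linear interpolation, then jitter appearance.
    img = np.clip(map_coordinates(atlas_img, coords, order=1, mode="nearest")
                  * contrast + brightness, 0.0, 1.0)
    # Warp labels with nearest-neighbor interpolation (order=0) so integer
    # label IDs are moved, never blended into meaningless in-between values.
    lab = map_coordinates(atlas_labels, coords, order=0, mode="nearest")
    return img, lab

atlas_img = np.random.rand(64, 64)
atlas_labels = np.random.randint(0, 5, (64, 64))
flow = np.random.randn(2, 64, 64) * 0.5   # stand-in for a sampled flow field
new_scan, new_labels = synthesize_labeled_scan(atlas_img, atlas_labels, flow)
```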

Because the spatial and appearance transformations were modeled independently, the researchers could combine them to synthesize a wide range of effects and produce more realistic scans. These synthesized scans were then fed into a second convolutional neural network, training it to segment new images.
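
A minimal sketch of that second stage, assuming synthesized pairs like the ones above: a toy segmentation CNN trained with a voxel-wise cross-entropy loss. The label count of 31 (30 structures plus background) and the random stand-in data are assumptions for illustration; the paper’s segmenter operates on full 3D volumes.

```python
# Toy second-stage segmenter trained on synthesized (scan, label) pairs with
# voxel-wise cross-entropy. NUM_LABELS = 31 assumes 30 structures plus
# background; the random tensors stand in for the synthesized training data.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LABELS = 31
segmenter = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_LABELS, 1),             # per-pixel class logits
)
optimizer = torch.optim.Adam(segmenter.parameters(), lr=1e-3)

scans = torch.rand(4, 1, 64, 64)                    # synthesized scans
labels = torch.randint(0, NUM_LABELS, (4, 64, 64))  # their warped label maps

logits = segmenter(scans)                 # (batch, NUM_LABELS, H, W)
loss = F.cross_entropy(logits, labels)    # voxel-wise classification loss
loss.backward()
optimizer.step()
```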

The research team tested their segmentation system on 30 types of brain structures across 100 test scans, comparing it against existing automated methods as well as manual segmentation. The results demonstrated significant improvements over state-of-the-art image segmentation methods, especially for smaller brain structures such as the hippocampus.
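
For context on how such results are scored: segmentation accuracy in this literature is typically quantified with the Dice overlap coefficient, which the underlying paper reports (the article itself does not name the metric). A minimal sketch:

```python
# Dice overlap: 2|P ∩ T| / (|P| + |T|) per structure; 1.0 is a perfect match.
import numpy as np

def dice(pred, truth, label):
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

pred = np.random.randint(0, 3, (64, 64, 64))    # placeholder predicted labels
truth = np.random.randint(0, 3, (64, 64, 64))   # placeholder manual labels
print(dice(pred, truth, label=1))
```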

“The segmenter out-performs existing one-shot segmentation methods on every example in our test set, approaching the performance of a fully supervised model,” wrote the researchers in their paper. “This framework enables segmentation in many applications, such as clinical settings where time constraints permit the manual annotation of only a few scans.”

The MIT researchers demonstrated that realistic and diverse labeled examples can be synthesized by learning independent models of spatial and appearance transformations from unlabeled brain scans. They further showed that these synthetic examples can train a segmentation model whose performance meets or exceeds that of current image segmentation methods. That is how the game of “Magic: The Gathering” led to a new way of training deep learning algorithms for analyzing brain scans, one that may benefit health care clinicians and neuroscience researchers in the future.

Copyright © 2019 Cami Rosso All rights reserved.

References

Zhao, Amy, Guha Balakrishnan, Frédo Durand, John V. Guttag, and Adrian V. Dalca. “Data augmentation using learned transformations for one-shot medical image segmentation.” arXiv. April 6, 2019.

Matheson, Rob. “From one brain scan, more information for medical artificial intelligence.” MIT News. June 19, 2019.
