
Brain-Computer Interface Recreates Bird Song from Brainwaves

UCSD’s brain-machine interface is a step toward an AI-enabled vocal prosthesis.

Key points

  • Scientists are studying how brain-machine interfaces (BMIs), also known as brain-computer interfaces (BCIs), and AI may produce a vocal prosthesis.
  • Birdsong shares a number of similarities with human speech.
  • Neuroscientists demonstrated a BCI for complex communication signals in birds, accelerating AI-enabled vocal prostheses for humans.

Neuroscience researchers create brain-computer interfaces (BCIs), or brain-machine interfaces (BMIs), with the aim of restoring impaired motor functions to the human body using signals from the brain. A new study applies a BCI to restoring communication instead. Neuroscience researchers at the University of California San Diego reproduced birds’ songs by translating the birds’ brain activity using artificial intelligence (AI) with a brain-computer interface, and published their study in this month’s Current Biology.

Why Use Birdsong to Understand Human Speech

“Our approach also provides a proving ground for vocal prosthetic strategies,” wrote study authors Timothy Gentner, Vikash Gilja, Daril Brown II, Shukai Chen, and Ezequiel Arneodo. “While birdsong differs in important ways from human speech, the two vocal systems have many similarities, including features of the sequential organization and strategies for their acquisition, analogies in neuronal organization and function, genetic bases, and physical mechanisms of sound production. The experimental accessibility, relatively advanced understanding of the neural and peripheral systems, and status as a well-developed model for vocal production and learning make songbirds an attractive animal model to advance speech BMI, much like the nonhuman primate model for motor BMI.”

Brain-Computer Interfaces for Those With Brain Injuries

Using a brain-machine interface as a vocal prosthesis may one day help those unable to communicate due to brain injury. More than two million Americans and 250,000 people in Great Britain are living with aphasia, according to the National Aphasia Association (NAA). Aphasia is a disorder that impairs the expression and understanding of language; it may result from brain injury due to stroke, head trauma, aneurysm, or brain tumor, from neurological disorders such as Alzheimer’s disease and dementia, and from other causes.

The brain-computer interface market is projected to reach USD 3.7 billion by 2027, growing at a compound annual growth rate (CAGR) of 15.5 percent during 2020-2027, according to U.S.-based Grand View Research. Brain-computer interface startups from trailblazing entrepreneurs include Bryan Johnson’s Kernel and Elon Musk’s Neuralink.

Recording Brain Activity Directly

The researchers recorded neural activity from the premotor nucleus HVC, a sensorimotor area of the brain that controls the muscles used for singing, in four adult male zebra finches (Taeniopygia guttata) as they sang. Using implanted electrode arrays (silicon probes with either 16 or 32 channels), they captured extracellular voltages simultaneously across channels. The recordings were manually curated to exclude noise, and Kilosort was used to detect and sort the spikes.
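
Decoding pipelines like this one typically work on sorted spike trains rather than raw voltages. As a hedged illustration of one standard preprocessing step, the Python sketch below bins a sorted unit’s spike times into fixed-width count vectors that a decoder can consume; the spike times, bin width, and function name are hypothetical stand-ins, and Kilosort itself is not shown.

```python
import numpy as np

def bin_spike_counts(spike_times_s, t_start, t_stop, bin_ms=5.0):
    """Bin sorted spike times (in seconds) into fixed-width count vectors."""
    edges = np.arange(t_start, t_stop + bin_ms / 1000.0, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    return counts

# Hypothetical example: one sorted unit's spike times over a 2-second song bout.
rng = np.random.default_rng(0)
unit_spikes = np.sort(rng.uniform(0.0, 2.0, size=120))
counts = bin_spike_counts(unit_spikes, 0.0, 2.0, bin_ms=5.0)
print(counts.shape)  # (400,) -- one count per 5 ms bin
```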

“The strength of our approach lies in the ability to find a low-dimensional parameterization of the behavior in a manner that it can be driven with the activities recorded from relatively small samples (by tens) of neurons,” wrote the researchers.

Biomechanical Models and AI Machine Learning

To map the brain activity to sound patterns efficiently, the researchers reduced the dimensionality of the problem: rather than mapping neural activity directly onto the songs, they trained AI machine learning algorithms to map the brain activity onto the parameters of mathematical equations that model the physical changes occurring in the finches’ syrinx (vocal organ) during singing.

“We employ a biomechanical model of the vocal organ that captures much of the spectro-temporal complexity of song in a low-dimensional parameter space,” the researchers reported. “This dimensionality reduction, compared to the full time-frequency representation of song, enables training of a shallow feedforward neural network (FFNN) that maps neural activity onto the model parameters.”
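
To make the quoted architecture concrete, here is a minimal PyTorch sketch of a shallow feedforward network mapping a vector of binned spike counts onto a handful of model parameters. The layer sizes, the 32-unit input, and the 2-parameter output are illustrative assumptions, not the study’s reported configuration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 32 inputs (binned spike counts, one per sorted
# unit) and 2 outputs standing in for low-dimensional syrinx-model
# parameters (pressure- and tension-like controls). Sizes are illustrative.
n_units, n_params = 32, 2

ffnn = nn.Sequential(
    nn.Linear(n_units, 64),  # one hidden layer -> a "shallow" FFNN
    nn.ReLU(),
    nn.Linear(64, n_params),
)

optimizer = torch.optim.Adam(ffnn.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training step on random stand-in data. Real inputs would be binned
# HVC spike counts; real targets, parameters fit to the recorded song.
x = torch.randn(256, n_units)   # batch of neural feature vectors
y = torch.randn(256, n_params)  # batch of target model parameters

optimizer.zero_grad()
loss = loss_fn(ffnn(x), y)
loss.backward()
optimizer.step()
```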

The team then fed the predicted parameters back into the biomechanical model to create synthetic vocalizations that sound like the actual finches.
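
The synthesis idea can be sketched with a toy two-parameter nonlinear oscillator: drive it with time-varying pressure- and tension-like control traces, of the kind a decoder might output, and an audio waveform falls out. The van der Pol-style equations, constants, and control traces below are illustrative assumptions, not the study’s actual syrinx model.

```python
import numpy as np

def synthesize_song(alpha, beta, fs=44100, gamma=18850.0):
    """Drive a toy two-parameter oscillator with per-sample control traces.

    alpha and beta stand in for pressure- and tension-like parameters a
    decoder might predict from neural activity. This van der Pol-style
    oscillator is illustrative, not the authors' exact syrinx equations:
    gamma * sqrt(beta) sets the pitch, and alpha gates the amplitude of
    the self-sustained oscillation.
    """
    dt = 1.0 / fs
    x, y = 1e-3, 0.0
    out = np.empty(len(alpha))
    for i in range(len(alpha)):
        # Semi-implicit Euler keeps the oscillator numerically stable.
        y += dt * (-(gamma ** 2) * beta[i] * x - gamma * (x * x - alpha[i]) * y)
        x += dt * y
        out[i] = x
    return out / (np.max(np.abs(out)) + 1e-12)

# Hypothetical half-second "syllable": rising tension sweeps the pitch up
# while 8 Hz pressure pulses switch the sound on and off.
n = 22050
t = np.linspace(0.0, 0.5, n)
beta = 0.5 + 2.0 * t                            # tension ramp: pitch rises
alpha = 0.04 * (np.sin(2 * np.pi * 8 * t) > 0)  # pressure on/off pulses
waveform = synthesize_song(alpha, beta)
```

In the study itself, the mapping from neural activity to such control traces is learned by the shallow network described above, using computation blocks simple enough to run in real time.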

The researchers reported that the study has “yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill,” and that the biomechanics of song production in birds “bear similarity to those of humans and some nonhuman primates.” With this new study, the researchers have demonstrated that complex, high-dimensional behavior can be synthesized directly from ongoing neural activity using brain-computer interfaces and artificial intelligence technologies.

“We have demonstrated a BMI for a complex communication signal, using computation blocks that are implementable in real time in an established animal model for production and learning of complex vocal behavior,” the researchers reported.

Copyright © 2021 Cami Rosso All rights reserved.
