
Wednesday, October 30, 2024

Brain-computer interface enables speaking-impaired patients to talk again

Innovation offers a new method of communication

By ARYAMAN BHATIA — science@theaggie.org

 

UC Davis researchers have published a study describing their groundbreaking brain-computer interface, which enables Casey Harrell — a man with amyotrophic lateral sclerosis (ALS), a progressive nervous system disease that causes loss of muscle control — to communicate effectively. This marks a significant advancement in the field of neuroprosthetics.

David Brandman, the co-director of the UC Davis Neuroprosthetics Lab, explained the scientific basis behind the invention.

“A brain-computer interface (BCI) is a device that records brain signals in people who are paralyzed and then translates those brain signals to allow people to communicate,” Brandman said. 

The research team used implanted microelectrode arrays to capture neural signals from the brain’s motor cortex. These signals were then translated into phonemes, the smallest units of sound. Sergey Stavisky, the co-director of the UC Davis Neuroprosthetics Lab and assistant professor in the Department of Neurological Surgery, explained how the BCI works in people with impaired speech.

“In essence, what we’re doing is we’re bypassing the injury,” Stavisky said. “We are recording from the source — from this part of the brain that’s trying to send these commands to the muscle — and we’re translating those patterns of brain activity into the phonemes.”
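To make the idea concrete, here is a minimal sketch in Python of the kind of decoder the researchers describe: a recurrent network reads a time series of neural activity from 256 recording sites and assigns a probability to each phoneme at every time step. The layer sizes, the 40-phoneme inventory and the random input below are illustrative assumptions, not the study’s actual model.

```python
# Illustrative sketch only: maps binned neural activity from 256
# recording sites to per-time-step phoneme probabilities with a GRU.
# Layer sizes and the phoneme inventory are assumptions for this example.
import torch
import torch.nn as nn

N_CHANNELS = 256  # electrode recording sites, as described in the article
N_PHONEMES = 40   # e.g., 39 English phonemes plus a silence token (assumption)

class PhonemeDecoder(nn.Module):
    def __init__(self, hidden_size: int = 512):
        super().__init__()
        # The GRU reads a sequence of neural feature vectors...
        self.rnn = nn.GRU(N_CHANNELS, hidden_size, batch_first=True)
        # ...and a linear readout scores every phoneme at each time step.
        self.readout = nn.Linear(hidden_size, N_PHONEMES)

    def forward(self, neural_features: torch.Tensor) -> torch.Tensor:
        # neural_features: (batch, time_bins, N_CHANNELS), e.g., 20 ms bins
        hidden, _ = self.rnn(neural_features)
        logits = self.readout(hidden)
        # Softmax turns the scores into a probability distribution
        # over phonemes for every time bin.
        return logits.softmax(dim=-1)

decoder = PhonemeDecoder()
fake_activity = torch.randn(1, 50, N_CHANNELS)  # one second in 20 ms bins
phoneme_probs = decoder(fake_activity)
print(phoneme_probs.shape)  # (1, 50, N_PHONEMES)
```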

As the participant in the clinical trial, Harrell had his attempted speech converted into audible words by a text-to-speech algorithm that drew on past recordings of his voice, so the synthesized speech resembled how he actually sounded.

Nicholas Scott Card, the lead author of the study, described the technology’s performance.

“We were able to predict the words that he was trying to say correctly about 97.5% of the time, and he could even use [the technology] in his day-to-day life to communicate with friends or families,” Card said.

The BCI works through a complex pipeline. An array of tiny electrodes records brain signals from 256 sites within the motor cortex, and these signals are processed by a recurrent neural network, which outputs a series of phoneme probabilities. A sophisticated language model is then applied, correcting potential errors and using the structure of the English language to predict the most likely words or sentences. This language model fills in potential gaps where the neural network might have misclassified a phoneme.
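The toy example below, with made-up numbers, shows how that last step can work: each candidate word is scored by how well its phoneme spelling matches the decoder’s output probabilities, plus a prior reflecting how common the word is. The three-word lexicon and frequencies are hypothetical, and the study’s language model is far more sophisticated, but the principle of combining neural evidence with linguistic knowledge is the same.

```python
# Illustrative sketch only: a tiny "language model" correction step.
# The lexicon, word counts and phoneme probabilities are made up.
import math

# Hypothetical decoder output for three time steps; the middle step is
# ambiguous between the phonemes "AE" (as in "cat") and "EH" (as in "get").
phoneme_probs = [
    {"K": 0.9, "G": 0.1},
    {"AE": 0.5, "EH": 0.5},
    {"T": 0.8, "P": 0.2},
]

# Toy pronunciation lexicon and unigram word counts (assumptions).
lexicon = {"cat": ["K", "AE", "T"], "get": ["G", "EH", "T"], "kept": ["K", "EH", "P", "T"]}
word_counts = {"cat": 50, "get": 100, "kept": 30}
total = sum(word_counts.values())

def score(word: str) -> float:
    """Log-probability of the word's phonemes under the decoder,
    plus a log prior from how frequent the word is."""
    phones = lexicon[word]
    if len(phones) != len(phoneme_probs):
        return float("-inf")  # length mismatch; a real system aligns sequences
    neural_evidence = sum(math.log(step.get(p, 1e-6))
                          for step, p in zip(phoneme_probs, phones))
    prior = math.log(word_counts[word] / total)
    return neural_evidence + prior

print(max(lexicon, key=score))  # "cat": neural evidence and the prior agree
```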

This innovation offers hope not only for ALS patients but also for a wide range of people with communication impairments. The team’s efforts were supported by Harrell, whose dedication contributed significantly to the success of the project.

“I want to give as much credit as I can to Casey [Harrell] and to his family for deciding to be part of this in the first place, and beyond that, for putting so much time, effort, blood, sweat and tears into this to help make everything possible,” Card said.

 

Written by: Aryaman Bhatia — science@theaggie.org

 
