Thinking Out Loud
A novel system can translate brain activity into speech using a brain-computer interface that is much less invasive than existing methods.
Assistive technologies are revolutionizing the way people with speech disorders communicate. These technologies, including speech-generating devices (SGDs) and brain-computer interfaces (BCIs), are enabling individuals with speech disorders to express themselves more effectively and participate more fully in their communities.
One popular assistive technology for people with speech disorders is the SGD. SGDs are electronic devices that allow users to communicate by typing or selecting pre-programmed phrases on a touch screen. These devices can also be customized with personal phrases and can even be programmed to speak in the user's own voice.
SGDs are not always the ideal solution, however. Because they require input from a keyboard or touchscreen, they can be slow to use. Moreover, some people have physical disabilities that prevent them from using these interfaces at all. To address these issues, researchers have increasingly explored BCI devices in recent years.
BCIs are a relatively new technology in the field of assistive communication. These interfaces use signals from the brain to control external devices or software, allowing people with speech disorders to communicate through thought alone. For example, a BCI system could be used to translate the user's intended speech into text or speech, bypassing the need for physical speech.
But to date, BCIs have been used very sparingly, and with good reason: present designs require highly invasive brain surgery to implant a large number of electrodes. Some steps forward have just been reported by researchers at HSE University and the Moscow State University of Medicine and Dentistry. While their approach is hardly noninvasive, they have developed a novel method to create a speech-decoding BCI that requires far fewer implanted electrodes than existing solutions. Such an innovation could make the device more agreeable to people with less severe disabilities, expanding the reach of this assistive technology.
The system was tested on two patients: one with a single stereotactic electroencephalographic (sEEG) shaft implant with six contacts, and another with a lone electrocorticographic (ECoG) strip with eight contacts. Implantation of sEEG electrodes can be accomplished via a single drill hole in the skull; the researchers suggest that in the future, this procedure could possibly be performed under only local anesthesia.
Using electrical information captured by the electrodes, the team trained a neural network (of the artificial variety) to translate brain activity into intended speech. Initially, the model was trained to recognize the brain activity associated with 26 words. An additional class was included to represent silence, to avoid situations in which the model predicts a word while the user is not attempting speech.
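The paper's actual network architecture isn't described here, but the core idea, classifying windowed electrode features into 26 word classes plus a dedicated "silence" class, can be illustrated with a toy sketch. Everything below is hypothetical: the feature dimensions, the synthetic data, and the simple nearest-centroid decoder (standing in for the real neural network) are illustrative assumptions, not the researchers' method.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WORDS = 26             # vocabulary size used in the study
N_CLASSES = N_WORDS + 1  # +1 "silence" class so the decoder stays quiet when no speech is attempted
N_FEATURES = 24          # hypothetical: e.g., 6 contacts x 4 spectral band powers
TRIALS_PER_CLASS = 40    # in the study, each sentence was repeated 30 to 60 times

# --- Synthetic stand-in for electrode features (real data would be sEEG/ECoG recordings) ---
class_means = rng.normal(0, 3, size=(N_CLASSES, N_FEATURES))
X = np.repeat(class_means, TRIALS_PER_CLASS, axis=0) + \
    rng.normal(0, 0.5, size=(N_CLASSES * TRIALS_PER_CLASS, N_FEATURES))
y = np.repeat(np.arange(N_CLASSES), TRIALS_PER_CLASS)

# --- Simple nearest-centroid decoder (a stand-in for the trained neural network) ---
def fit_centroids(X, y):
    """Average the feature vectors of each class to form one prototype per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(N_CLASSES)])

def predict(centroids, X):
    """Assign each trial to the class whose prototype is nearest in feature space."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Hold out every 4th trial for evaluation
test_mask = np.arange(len(y)) % 4 == 0
centroids = fit_centroids(X[~test_mask], y[~test_mask])
accuracy = (predict(centroids, X[test_mask]) == y[test_mask]).mean()
print(f"held-out accuracy: {accuracy:.0%}")
```

The point of the extra class is visible in `predict`: a trial recorded during rest lands near the silence prototype rather than being forced onto one of the 26 words, which is exactly the failure mode the silence class is meant to prevent.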
Training data for the neural network was collected while the participants read six different sentences, each repeated 30 to 60 times to ensure sufficient diversity in the sample data. After training, the model achieved 55% accuracy with the sEEG electrodes and 70% accuracy with the ECoG strip. While these accuracies leave something to be desired, they are comparable to those of existing devices that require electrodes to be implanted over the entire cortical surface.
It is important to keep in mind that this neural network was trained with a relatively small amount of data from only two individuals. Perhaps the less invasive nature of the system will allow data to be captured from a much larger population, which in turn should lead to significant increases in accuracy. In any case, any innovation that makes BCIs less invasive has the potential to make the technology more mainstream and help many more people.