New York, July 15 : US researchers have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating the signals his brain sends to his vocal tract directly into words that appear as text on a screen.
The technology, developed by researchers at the University of California, San Francisco (UCSF), decoded words from brain activity at rates of up to 18 words per minute and with up to 93 per cent accuracy.
The man, in his late 30s, suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.
UCSF researchers surgically implanted a high-density electrode array over the patient’s speech motor cortex and recorded 22 hours of neural activity in this brain region over 48 sessions spanning several months.
The electrodes recorded the brain signals he produced as he attempted to speak, which were then translated into the specific words he intended using artificial intelligence.
The team created a 50-word vocabulary, including words such as “water,” “family,” and “good,” that they could recognise from brain activity using advanced computer algorithms.
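The article does not spell out how such decoding works. Purely as an illustration of the general shape of the task, the Python sketch below classifies a window of recorded neural features into one of a fixed set of vocabulary words; every name, dimension, the synthetic data, and the choice of a logistic-regression classifier are assumptions made for the sketch, not details of the UCSF system.

```python
# Illustrative sketch only: classify windows of neural features into a fixed
# vocabulary. All dimensions, data, and the model choice are invented and do
# not reflect the UCSF team's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

VOCAB = ["water", "family", "good", "thirsty", "am"]  # stand-in for the 50-word set

rng = np.random.default_rng(0)
N_CHANNELS = 128            # hypothetical number of electrode channels
N_TRIALS_PER_WORD = 200     # hypothetical attempted utterances per word

# Synthetic training data: one feature vector (e.g. average activity per
# channel during one attempted word) for each trial of each vocabulary word.
X = rng.normal(size=(len(VOCAB) * N_TRIALS_PER_WORD, N_CHANNELS))
y = np.repeat(np.arange(len(VOCAB)), N_TRIALS_PER_WORD)
X += 0.1 * y[:, None]  # inject a weak class-dependent signal so training works

clf = LogisticRegression(max_iter=1000).fit(X, y)

def decode_word(features: np.ndarray) -> str:
    """Map one window of neural features to the most likely vocabulary word."""
    return VOCAB[int(clf.predict(features.reshape(1, -1))[0])]

print(decode_word(X[0]))  # prints one of the vocabulary words, e.g. "water"
```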
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralysed and cannot speak,” said Edward Chang, Professor and neurosurgeon at UCSF.
“It shows strong promise to restore communication by tapping into the brain’s natural speech machinery,” Chang added. The study is detailed in the New England Journal of Medicine.
To test their approach, the team first presented the patient with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made each attempt, the words were decoded from his brain activity, one by one, on a screen.
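Continuing in the same purely illustrative spirit, that word-by-word sentence test could be mimicked as below: each attempted word yields one window of neural features, each window is classified independently, and the predictions are joined into a sentence. The stand-in classifier and all values are invented for the sketch.

```python
# Toy sketch of word-by-word sentence decoding; nothing here is taken from
# the published system.
import numpy as np

VOCAB = ["I", "am", "very", "good", "no", "not", "thirsty", "water"]

def classify_window(window: np.ndarray) -> str:
    """Stand-in decoder: pick the word whose index matches the strongest
    feature channel (purely illustrative, not a trained model)."""
    return VOCAB[int(np.argmax(window[: len(VOCAB)]))]

def decode_attempted_sentence(windows: list) -> str:
    # Decode each attempted word independently, then join into a sentence,
    # mirroring how the words appeared on the screen one by one.
    return " ".join(classify_window(w) for w in windows)

rng = np.random.default_rng(1)
attempt = [rng.normal(size=32) for _ in range(4)]  # four attempted words
print(decode_attempted_sentence(attempt))
```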
Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, the patient’s attempted speech appeared on the screen: “I am very good” and “No, I am not thirsty.”
“We were thrilled to see the accurate decoding of a variety of meaningful sentences. We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings,” said lead author David Moses, a postdoctoral engineer in Chang’s lab. (IANS)