
Brain signals translated into intelligible speech for first time

Tue 29 Jan 2019

In a landmark scientific achievement, Columbia neuroengineers show that the mental activity of the brain – humanity’s most inner sanctum – can be sensed and distilled into ordinary speech

Decades of scientific research have shown that the act of speaking – or even thinking about speaking – produces traceable and distinct patterns of activity in the brain. The realisation spawned a whole subset of neurolinguistics determined to decode this activity into intelligible speech.

For the first time, scientists at Columbia’s Zuckerman Institute have offered a glimpse into a future where thoughts no longer remain hidden inside the brain – but can be translated into verbal speech at will using neural networks.

I think therefore AI am

Early attempts at achieving this feat targeted spectrograms – visual representations of sound frequencies – analysing them using simple computer models.
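To illustrate what a spectrogram is, the sketch below computes one for a synthetic test tone using a short-time Fourier transform. The sample rate, tone frequency, and window length are illustrative choices, not parameters from the study.

```python
# Minimal sketch of the spectrogram idea: a short-time Fourier
# transform turns a 1-D audio signal into a 2-D time-frequency image.
# All parameters here (sample rate, 440 Hz tone) are illustrative.
import numpy as np
from scipy import signal

fs = 16000                            # assumed sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)         # one second of audio
audio = np.sin(2 * np.pi * 440 * t)   # a pure tone standing in for speech

# Each column of Sxx is the power spectrum of one short window of audio
freqs, times, Sxx = signal.spectrogram(audio, fs=fs, nperseg=256)

# The 440 Hz tone should dominate the average spectrum
peak_freq = freqs[np.argmax(Sxx.mean(axis=1))]
```

A real speech signal would produce a far richer image, with energy spread across the harmonics and formants that the early models struggled to map back to intelligible sound.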

After this approach failed to produce anything resembling intelligible speech, the scientists leading the effort – including Dr. Nima Mesgarani – turned to a vocoder, a computer algorithm used by Amazon Echo devices and Apple’s Siri assistant that can synthesise speech after being trained on speech recordings.

Dr. Mesgarani teamed up with epilepsy specialist Ashesh Dinesh Mehta and began asking epilepsy patients undergoing brain surgery to listen to sentences spoken by a variety of people while their brain activity was measured. These neural patterns were used to train the vocoder.

Next, the patients listened to speakers reciting digits between 0 and 9. Their brain activity was again recorded and run through the same vocoder, and the resulting sound was analysed and cleaned up by neural networks.
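The decoding step described above can be caricatured as learning a mapping from recorded neural activity to audio features. The toy sketch below uses simulated data and a plain linear least-squares fit – the actual study used deep neural networks and a trained vocoder, so every name and number here is a stand-in.

```python
# Toy caricature of neural-to-audio decoding, NOT the study's model:
# learn a linear map from simulated "electrode" recordings to audio
# features, then reconstruct features for held-out trials.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_electrodes, n_features = 200, 64, 32
true_map = rng.normal(size=(n_electrodes, n_features))

# Simulated neural activity and the audio features it encodes (plus noise)
neural = rng.normal(size=(n_trials, n_electrodes))
audio_features = neural @ true_map + 0.01 * rng.normal(size=(n_trials, n_features))

# Fit the decoder on the first 150 trials, test on the remaining 50
train, test = slice(0, 150), slice(150, None)
decoder, *_ = np.linalg.lstsq(neural[train], audio_features[train], rcond=None)
reconstructed = neural[test] @ decoder

# Correlation between true and reconstructed features on held-out trials
corr = np.corrcoef(audio_features[test].ravel(), reconstructed.ravel())[0, 1]
```

In the real system the reconstructed features would be fed to the vocoder to synthesise audible speech; here the held-out correlation simply shows that a decodable mapping exists in the simulated data.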

When the vocoder’s audio was replayed to the patients, they heard their own thoughts relayed back to them in a robotic-sounding voice.

“We found that people could understand and repeat the sounds about 75 percent of the time, which is well above and beyond any previous attempts,” said Dr. Mesgarani.

“The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”

Dr. Mesgarani said he and his team are now setting their sights on more complex words and phrases. They hope the system could eventually be used to create an implant to help those who have lost their voice due to injury or disease.

“With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener,” Dr. Mesgarani said.


Tags: AI, neural networks, neuroscience, science



