
Neuroscientists have trained a neural network to translate brain signals into articulate speech

Using brain-activity recording technology, artificial intelligence and a speech synthesizer, scientists from Columbia University (USA) have created a device capable of translating a person's thoughts into articulate speech. The results of the study, published in the journal Scientific Reports, represent an important step toward better brain-computer interfaces. In the future, such devices could be used by people who have lost the ability to speak as a result of injury or illness.

To develop a device that combines a speech synthesizer with artificial intelligence, neurobiologist Nima Mesgarani, the author of the study, and his colleagues turned to recent advances in deep machine learning and speech synthesis. The result of their work is an artificial-intelligence-based vocoder capable of interpreting brain activity recorded directly from the auditory cortex fairly accurately and then translating it into intelligible speech. The authors note that the resulting speech sounds quite computerized, but listeners can recognize the words in most cases.
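Conceptually, the approach maps recorded neural activity to the parameters a vocoder needs to synthesize audio. The sketch below is only an illustration of that general idea, not the architecture from the study: the feature sizes are hypothetical placeholders, and a simple feed-forward regressor stands in for the actual deep-learning vocoder.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: neither the electrode feature count nor the
# vocoder parameterization comes from the paper; they are placeholders.
N_ELECTRODE_FEATURES = 128   # features extracted from auditory-cortex recordings
N_VOCODER_PARAMS = 32        # per-frame parameters fed to a speech vocoder

class BrainToVocoder(nn.Module):
    """Toy regressor from neural-activity features to vocoder parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ELECTRODE_FEATURES, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, N_VOCODER_PARAMS),
        )

    def forward(self, neural_frames):
        # neural_frames: (batch, time, N_ELECTRODE_FEATURES)
        return self.net(neural_frames)

model = BrainToVocoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on synthetic data: in the real study, the targets would be
# derived from the audio the patient was actually listening to.
neural = torch.randn(8, 100, N_ELECTRODE_FEATURES)
target = torch.randn(8, 100, N_VOCODER_PARAMS)
loss = loss_fn(model(neural), target)
loss.backward()
optimizer.step()
```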

According to its creators, the device plays back the resulting speech using the same synthesis technology found in digital assistants such as Alexa, Siri and Google Assistant.

First, the researchers trained the vocoder to interpret human brain activity correctly. To do so, they invited five volunteers who were undergoing treatment for epilepsy at the time to take part in the experiment. All five had electrodes implanted over the auditory cortex to record brain activity.

"We asked patients suffering from epilepsy,who are already undergoing surgical treatment on the brain, listen to the suggestions that are made by different people. At the same time, we analyzed patterns in the brain activity of patients. On the resulting neural models and learned vocoder, "- explains Mesgarani.

The patients were then asked to listen to recordings in which actors read out sequences of digits from 0 to 9. The scientists recorded the patients' brain signals and passed them through the vocoder, while a neural network trained on the vocoder's own output cleaned up the resulting signals for greater clarity. The result was a robotic voice repeating the sequence of spoken digits. To evaluate the output, the scientists invited 11 people with excellent hearing.
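As a rough illustration of how such an intelligibility test can be scored (the listener responses below are invented, not data from the study), recognition accuracy is simply the fraction of reconstructed digits that listeners identify correctly:

```python
# Hypothetical listener responses to reconstructed digit recordings.
# Each trial pairs the digit actually spoken with the digit the listener heard.
trials = [
    {"spoken": 7, "heard": 7},
    {"spoken": 3, "heard": 3},
    {"spoken": 9, "heard": 1},
    {"spoken": 0, "heard": 0},
]

correct = sum(t["spoken"] == t["heard"] for t in trials)
accuracy = correct / len(trials)
print(f"Recognition accuracy: {accuracy:.0%}")  # 75% on this toy data
```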

"It turned out that people can recognize the wordsin about 75% of cases, which greatly exceeds any previous attempts. The sensitive vocoder and powerful neural networks generated the sounds that the patients listened to with amazing precision, ”comments Messgarini

In the future, Mesgarani's team plans to teach the neural network to pronounce more complex words, phrases and even whole sentences. After that, they want to develop an implant that can translate a person's thoughts into full-fledged speech.

“For example, if the owner of the implant thinks, 'I need a glass of water,' our system will read the signals from the brain and translate them into speech. This will give anyone who has lost the ability to speak due to injury or illness a new way to communicate with the outside world,” adds Mesgarani.
