
A way has been found to turn thoughts into spoken language. Speaking out loud is not necessary

Paralysis is a frightening condition in which some of the body’s “physiological” functions become uncontrollable, even though everything may be perfectly in order at the level of the central nervous system. Scientists have been working on treating this condition for years, and a solution may now have been found for one of its associated problems: the loss of the ability to speak. A method has recently been developed to convert brain impulses into speech signals, and artificial intelligence helped make it possible.

According to Science, researchers from the Netherlands, Germany and the USA used computational models based on neural networks to reconstruct words and sentences from brain signals. To do this, they monitored areas of the brain at the moments when people read aloud, gave a speech, or simply listened to recordings.

“We are trying to work out the patterns of neurons that turn on and off at different points in time and reproduce the sound. The way these signals are translated into speech is individual for each person, so computer algorithms must be able to ‘understand’ how to do this,” said Nima Mesgarani of Columbia University.

In their work, the experts relied on data collected from five people with epilepsy. The network analyzed the “behavior” of the auditory cortex, which is active both during speaking and during listening. The computer then reconstructed speech from the impulses recorded from these people. As a result, the algorithm achieved an accuracy of 75%.

Another team, led by neuroscientists Miguel Angrick of the University of Bremen in Germany and Christian Herff of Maastricht University in the Netherlands, relied on data from six people who had undergone surgery to remove a brain tumor. A microphone recorded their voices as they read individual words aloud, while electrodes recorded activity from the speech centers of the brain. The network then matched the electrode readings to the audio recordings. As a result, about 40% of the data was recognized correctly.
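
To make the general idea more concrete, here is a minimal sketch of the kind of pipeline these studies describe: align frames of neural activity with the audio recorded at the same moments, fit a model that maps one to the other, and score how well the reconstructed sound matches the original. Everything in it (the feature shapes, the synthetic data, the simple ridge regression standing in for the teams’ neural networks) is an assumption for illustration, not the researchers’ actual method.

```python
# Illustrative sketch only: synthetic data and a simple linear decoder
# standing in for the real recordings and neural-network models.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend data: each row is one time frame of neural activity
# (e.g. 64 electrode channels) paired with the audio spectrogram frame
# (e.g. 32 frequency bands) recorded while the participant spoke.
neural_features = rng.normal(size=(5000, 64))
true_weights = rng.normal(size=(64, 32))
audio_spectrogram = neural_features @ true_weights + 0.5 * rng.normal(size=(5000, 32))

X_train, X_test, y_train, y_test = train_test_split(
    neural_features, audio_spectrogram, test_size=0.2, random_state=0
)

# Learn a per-person mapping from brain activity to sound; the article
# notes this mapping differs between individuals.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
reconstructed = decoder.predict(X_test)

# Score the reconstruction by correlating predicted and true spectrogram
# bands; the published studies report their own task-specific metrics.
corr = np.mean([
    np.corrcoef(reconstructed[:, b], y_test[:, b])[0, 1]
    for b in range(y_test.shape[1])
])
print(f"mean spectrogram correlation: {corr:.2f}")
```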

A third team, from the University of California, San Francisco, reconstructed entire sentences from the brain activity of three patients with epilepsy who read specific sentences out loud. Some sentences were correctly identified in more than 80% of cases.

Despite the very good results, the system is still at a very early stage and needs further refinement. But if everything works out, hundreds of thousands of people around the world could regain the ability to speak.
