New research suggests that artificial speech devices connected directly to the brain may become a reality at some point in the future, foreign media report. That would mean people who have lost the ability to speak could one day regain a voice.
The well-known science writer Kelly Servick described three papers posted on the preprint server bioRxiv. The three studies were carried out by three different research teams, but all reached the same conclusion: human speech can be decoded from recordings of neuronal firing.
In each study, electrodes placed directly on the brain recorded neural activity while a neurosurgery patient spoke or read words aloud. And in each case, the researchers were able to convert that recorded brain activity into sound files that were at least partly intelligible.
The first paper, posted on bioRxiv on October 10, 2018, described an experiment in which the researchers played voice recordings to epilepsy patients who were undergoing brain surgery. (The neural activity recorded in such experiments must be very detailed to be decoded well, and that level of detail can only be obtained when the brain is exposed and the electrodes are placed directly on it, as happens during brain surgery.)
While the patients listened to the recordings, the researchers recorded the firing of neurons in the parts of their brains that process sound. The scientists then tried a number of different methods to convert the neuronal firing data back into speech, and found that computers with deep-learning capabilities could more or less solve this problem on their own, with good results.
When they played the reconstructed speech to 11 listeners through a vocoder that synthesizes human voices, the listeners correctly identified the words about 75% of the time.
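To make the decoding idea concrete: at its simplest, a decoder of this kind is a supervised mapping from frames of neural activity to frames of an audio spectrogram, which a vocoder then turns into sound. The sketch below is purely illustrative and is not the papers' method or code; all data are synthetic, and an ordinary least-squares linear map stands in for the deep networks the researchers actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1000 time frames of neural features (64 electrodes)
# and the spectrogram frames (32 frequency bins) recorded at the same times.
n_frames, n_electrodes, n_bins = 1000, 64, 32
true_map = rng.normal(size=(n_electrodes, n_bins))
neural = rng.normal(size=(n_frames, n_electrodes))
spectrogram = neural @ true_map + 0.1 * rng.normal(size=(n_frames, n_bins))

# Fit a linear decoder by least squares: neural activity -> spectrogram.
decoder, *_ = np.linalg.lstsq(neural, spectrogram, rcond=None)

# Decode new neural activity into predicted spectrogram frames, which a
# vocoder would then turn back into audible speech.
test_neural = rng.normal(size=(10, n_electrodes))
predicted = test_neural @ decoder
print(predicted.shape)  # (10, 32)
```

The real systems replace the linear map with deep networks and work on far messier signals, but the pipeline shape (neural frames in, audio frames out, vocoder last) is the same.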
The second paper, posted on bioRxiv on November 27, 2018, describes an experiment in which the researchers recorded the neural activity of patients undergoing brain-tumor removal surgery. As the patients read single-syllable words aloud, the researchers recorded both the sounds the patients produced and activity in the speech-producing areas of their brains.
Instead of training a computer in depth on each individual patient's data, the researchers taught an artificial neural network to convert the neural recordings into audio; they report that the results were at least reasonably intelligible and similar to the microphone recordings.
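The neural-network step that the second team describes can be caricatured as follows. This is a toy, hand-rolled network on synthetic data, nothing here comes from the paper itself: a single hidden layer trained by gradient descent to map frames of neural activity to spectrogram-like targets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 500 frames of neural activity (64 channels) and the
# spectrogram-like audio frames (32 bins) we want the network to reproduce.
X = rng.normal(size=(500, 64))
Y = np.tanh(X @ rng.normal(size=(64, 32)) * 0.2)

# A one-hidden-layer network trained with full-batch gradient descent.
W1 = rng.normal(size=(64, 128)) * 0.1
W2 = rng.normal(size=(128, 32)) * 0.1

def mse(W1, W2):
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))

loss_before = mse(W1, W2)
lr = 0.05
for _ in range(300):
    H = np.tanh(X @ W1)                         # hidden activations
    err = (H @ W2 - Y) / len(X)                 # mean prediction error
    gW2 = H.T @ err                             # gradient for output weights
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2))   # backprop through tanh
    W2 -= lr * gW2
    W1 -= lr * gW1

loss_after = mse(W1, W2)
print(loss_after < loss_before)
```

Training drives the reconstruction error down; in the actual study the outputs would then be converted to audio rather than just scored numerically.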
A third paper, posted on bioRxiv on August 9, 2018, relied on neural activity from the part of the brain where the specific words a person speaks are converted into muscle movements.
Although recordings from this experiment could not be found online, the researchers said they were able to reconstruct entire sentences (again recorded during brain surgery), and listeners who heard the reconstructed sentences could correctly identify them 83 percent of the time in a multiple-choice test (choosing one of 10 options). Their approach relied on recognizing the patterns involved in producing individual syllables rather than whole words.
The ultimate goal of all these experiments is to give people who are unable to speak a way to talk. But decoding the neural patterns of speech that a person merely imagines is far more complex than decoding the patterns of speech a person hears or produces, Science reports. (However, the authors of the second paper said it may be possible to decode the brain activity of people who imagine speaking.)
It is also worth noting that these are small-scale studies: the first paper was based on data from only five patients, the second on six, and the third on three. And none of the neural recordings lasted more than an hour.
Still, the science is moving forward, and artificial speech devices that connect directly to the brain seem likely to become a reality at some point in the future.