Overcoming limitations with AI that converts brain signals into speech

           When was the last time you went to the dentist? Do you still remember the feeling of lying in the chair with your mouth full of dental equipment, unable to say anything because it was too difficult to communicate with the dentist or their assistant?

           This is a common experience for people undergoing medical procedures, many of which are more serious than a simple scrape-and-polish at the dentist's. Being able to push through this barrier would help increase the efficiency of treatment while giving patients more confidence that they can let the doctor know how they feel at any time.

           Some technologists are looking at how they can apply artificial intelligence (AI) to help solve this problem. One experiment used artificial neural networks and deep learning to analyze activity in a part of the brain called the auditory cortex while patients counted from zero to nine, then synthesized the data into speech. In a follow-up test, listeners could understand 75% of the numbers from the synthesized speech. Another piece of research involved six patients who were undergoing brain surgery. It recorded signals from the parts of their brains that process speech and control the vocal cords as the patients read single-syllable words aloud. The data were analyzed using machine learning and the signals were converted into synthesized speech, 40% of which was understandable when tested.
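The 75% and 40% figures above come from listening tests, where the score is simply the fraction of words that listeners identify correctly. A minimal sketch of such a scoring step, using hypothetical words and responses:

```python
# Toy sketch of an intelligibility test: listeners transcribe the
# synthesized words, and the score is the fraction identified correctly.
# The words and responses below are hypothetical illustrations.

def intelligibility(spoken_words, heard_words):
    """Fraction of words that listeners identified correctly."""
    if not spoken_words:
        return 0.0
    correct = sum(s == h for s, h in zip(spoken_words, heard_words))
    return correct / len(spoken_words)

spoken = ["one", "two", "three", "four"]
heard = ["one", "two", "free", "four"]  # one mishearing
print(intelligibility(spoken, heard))   # 0.75
```

Real studies use larger word sets and many listeners, but the underlying measure is this same proportion of correctly recognized items.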

           The technology behind these experiments is called a brain-computer interface (BCI). It enables brains and computers to communicate with each other directly. BCI technology works by converting brain signals into commands. Our brain functions are complicated, and no device can interpret the brain's signals unambiguously, so AI is brought in to infer patterns from the recorded signal data. For example, when a brain signal is converted into a certain sound, the system relates that sound to the number the patient is reading. This vocal data is analyzed repeatedly until the sounds can be differentiated: for instance, a certain pitch means two, a higher pitch means three, and a lower pitch means one.
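The pitch-to-digit mapping described above is, at heart, a classification problem: repeated examples establish a typical pitch for each digit, and a new sound is labelled with the nearest one. A minimal sketch, assuming each brain-derived sound has been reduced to a single pitch value (the pitch values and digits below are hypothetical, not from the studies):

```python
# Hypothetical nearest-centroid classifier: each digit is associated with
# the average pitch seen in repeated training examples, and a new sound is
# labelled with the digit whose average pitch is closest.

def train_centroids(examples):
    """examples: list of (pitch_hz, digit) pairs -> {digit: mean pitch}."""
    sums, counts = {}, {}
    for pitch, digit in examples:
        sums[digit] = sums.get(digit, 0.0) + pitch
        counts[digit] = counts.get(digit, 0) + 1
    return {d: sums[d] / counts[d] for d in sums}

def classify(pitch, centroids):
    """Return the digit whose mean pitch is nearest to `pitch`."""
    return min(centroids, key=lambda d: abs(centroids[d] - pitch))

# Repeated examples: lower pitch -> one, middle -> two, higher -> three.
training = [(90, 1), (95, 1), (120, 2), (125, 2), (150, 3), (155, 3)]
centroids = train_centroids(training)
print(classify(118, centroids))  # 2 (closest to the mean pitch for "two")
```

Real BCI systems work with far richer features than a single pitch and use deep networks rather than centroid distances, but the principle of repeatedly analyzing examples until classes can be differentiated is the same.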

           Once analyzed, brain signals can be converted into certain commands or tasks. Apart from helping patients communicate with medical staff and improving treatment efficiency, BCI has the potential to support numerous innovations, such as connecting neural systems with prosthetic organs.
           BCI technology can be used in fields other than medicine. Nissan has launched “B2V”, or brain-to-vehicle, a system in which the car reads electrical impulses from the driver’s brain through a brain-signal detector and interprets them as commands, so the car can learn the driver’s behavior. It should also benefit self-driving cars in the future, as their control systems could be based on analysis of brain signals gathered from human drivers while driving.
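One way to picture the B2V idea is a control loop that watches for an intent signal before the driver physically acts. The sketch below is purely illustrative; the signal name, threshold, and decision logic are assumptions, not Nissan's actual design:

```python
# Hypothetical sketch of brain-to-vehicle assistance: if a detected
# "braking intent" signal crosses a threshold before the pedal is pressed,
# the car can begin braking slightly earlier than the driver.

BRAKE_INTENT_THRESHOLD = 0.8  # arbitrary illustrative value in [0, 1]

def assist(brake_intent, pedal_pressed):
    """Decide the car's action for one time step."""
    if pedal_pressed:
        return "brake (driver)"
    if brake_intent >= BRAKE_INTENT_THRESHOLD:
        return "brake (anticipated from brain signal)"
    return "coast"

# One simulated time step: intent detected, pedal not yet pressed.
print(assist(0.9, pedal_pressed=False))
```

The appeal of this pattern is latency: acting on the intent signal can shave fractions of a second off the response, which is the kind of advantage Nissan describes for B2V.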

           While BCI technology is in its infancy, its achievements to date mark a good start for its development. The technology has broad applications, from helping people communicate through the power of thought to applying collected data to drive functions, as with the B2V system. With further study and proper development, BCI technology might lead humanity to a world where everything can be commanded directly from the brain, making current science fiction a reality.

March 1st, 2019 | Blog