
Neural network reconstructs human thoughts from brain waves in real time

Researchers from the Russian company Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person’s brain activity as images that mimic, in real time, what that person is observing. The work could enable new post-stroke rehabilitation devices controlled by brain signals.

The team published its research as a preprint on bioRxiv and posted a video online showing the “mind-reading” system at work.

To develop devices controlled by the brain and methods for cognitive disorder treatment and post-stroke rehabilitation, neurobiologists need to understand how the brain encodes information.


A key aspect of this is studying the brain activity of people perceiving visual information, for example, while watching a video.

Existing solutions for extracting observed images from brain signals either use functional MRI or analyze signals picked up via implants directly from neurons. Both methods have fairly limited applications in clinical practice and everyday life.

The brain-computer interface developed by MIPT and Neurobotics relies on artificial neural networks and electroencephalography, or EEG, a technique for recording brain waves via electrodes placed noninvasively on the scalp. By analyzing brain activity, the system reconstructs the images seen by a person undergoing EEG in real time.

“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an arm exoskeleton for neurorehabilitation purposes, or paralyzed patients to drive, for example, an electric wheelchair. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too,”

said Vladimir Konyshev, who heads the Neurorobotics Lab at MIPT.


In the first part of the experiment, the neurobiologists asked healthy subjects to watch 20 minutes’ worth of 10-second YouTube video fragments.

The team selected five arbitrary video categories: abstract shapes, waterfalls, human faces, moving mechanisms, and motor sports.

The last category featured first-person recordings of snowmobile, water scooter, motorcycle, and car races.

By analyzing the EEG data, the researchers showed that the brain wave patterns are distinct for each category of videos. This enabled the team to analyze the brain’s response to videos in real time.
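
The preprint does not spell out the decoding pipeline here, but the finding, distinct EEG patterns per video category, fits a standard classification setup. A minimal sketch in Python follows, assuming log band-power features and a linear classifier; the synthetic data, shapes, and names are all illustrative assumptions, not the authors’ code.

```python
# Minimal sketch of per-category EEG classification; NOT the authors' code.
# Uses synthetic stand-in data; real EEG would come from scalp electrodes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 500, 8, 256   # hypothetical recording shape
n_categories = 5                                 # abstract shapes, waterfalls, ...

# Stand-in EEG: each category gets a small amplitude offset so classes separate.
labels = rng.integers(0, n_categories, n_trials)
eeg = rng.normal(size=(n_trials, n_channels, n_samples)) + labels[:, None, None] * 0.05

# Simple spectral features: log of mean power per channel via FFT.
spectra = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
features = np.log(spectra.mean(axis=-1))         # shape (n_trials, n_channels)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"category accuracy: {clf.score(X_test, y_test):.2f}")
```

In a real-time system, the feature window would slide over the incoming EEG stream and the classifier would emit a category prediction every few hundred milliseconds.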

In the second phase of the experiment, three random categories were selected from the original five.

The researchers developed two neural networks: one for generating random category-specific images from “noise,” and another for generating similar “noise” from EEG.

The team then trained the networks to operate together in a way that turns the EEG signal into actual images similar to those the test subjects were observing (fig. 2).

Figure 2. Operation algorithm of the brain-computer interface (BCI) system. Credit: Anatoly Bobe/Neurobotics, and MIPT Press Office
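
The pipeline in figure 2 can be paraphrased in code. The sketch below, in Python with PyTorch, shows the chaining idea only: an image generator that decodes a latent “noise” vector, an EEG encoder that maps brain signals into that same latent space, and inference running the two back to back. All layer sizes, class names, and the stand-in EEG tensor are assumptions; the published model’s architecture may differ.

```python
# Sketch of the two-network idea from fig. 2; NOT the published model.
# One net maps a latent "noise" vector to an image, the other maps EEG
# into that latent space; chained, they turn EEG into a picture.
import torch
import torch.nn as nn

LATENT_DIM, EEG_DIM, IMG_PIXELS = 64, 128, 32 * 32   # illustrative sizes

class ImageGenerator(nn.Module):
    """Decodes a latent vector into a (flattened) category-specific image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

class EEGEncoder(nn.Module):
    """Maps an EEG feature vector into the generator's latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EEG_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, eeg):
        return self.net(eeg)

generator, encoder = ImageGenerator(), EEGEncoder()

# Real-time inference: a processed EEG window in, a reconstructed image out.
eeg_window = torch.randn(1, EEG_DIM)       # stand-in for real EEG features
image = generator(encoder(eeg_window))     # shape (1, 1024), i.e. a 32x32 image
print(image.shape)
```

During training, the paper says the two networks are fitted to operate together, so the encoder’s output lands in the region of latent space that the generator decodes into the right category of image; the training loop is omitted from this sketch.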

To test the system’s ability to visualize brain activity, the subjects were shown previously unseen videos from the same categories. As they watched, EEGs were recorded and fed to the neural networks.

The system passed the test, generating convincing images that could be easily categorized in 90% of the cases (fig. 1).

“The electroencephalogram is a collection of brain signals recorded from the scalp. Researchers used to think that studying brain processes via EEG is like figuring out the internal structure of a steam engine by analyzing the smoke left behind by a steam train,”

explained paper co-author Grigory Rashkov, a junior researcher at MIPT and a programmer at Neurobotics.

“We did not expect it to contain enough information to even partially reconstruct an image observed by a person. Yet it turned out to be quite possible.”

“What’s more, we can use this as the basis for a brain-computer interface operating in real time. It’s fairly reassuring. With present-day technology, the invasive neural interfaces envisioned by Elon Musk face the challenges of complex surgery and rapid deterioration due to natural processes: they oxidize and fail within several months. We hope we can eventually design more affordable neural interfaces that do not require implantation,”

the researcher added.

For reference: The Assistive Technologies project, supported by the National Technology Initiative Fund, was launched in 2017. It aims to develop a range of devices for rehabilitation following a stroke or neurotrauma of the head or spine.

The hardware suite developed under this project includes the Neuroplay headset, a robotic arm exoskeleton, a functional electrical stimulator of muscles, a transcranial electrical stimulator of the brain, the Cognigraph for real-time brain activity visualization in 3D, the Robocom assistive manipulator, and other devices.

The MIPT Neurorobotics Lab was established in 2017 under Project 5-100. Its main line of work is developing anthropomorphic robots and equipment for neuroscience, physiology, and behavior research.

Story source

Moscow Institute of Physics and Technology


