Researchers at the University of Texas at Austin have developed an artificial intelligence (AI) system capable of interpreting and reconstructing human thoughts.
The scientists recently published a paper in Nature Neuroscience exploring the use of AI to non-invasively translate human thoughts into words in real time.
According to the researchers, current methods for decoding thought into words are either invasive — meaning they require surgical implantation — or limited in that they “can only identify stimuli from among a small set of words or phrases.”
The team at Austin circumvented these limitations by training a neural network to decode fMRI signals from multiple areas of the human brain simultaneously.
In conducting this experiment, the researchers had several test subjects listen to hours of podcasts while an fMRI machine non-invasively recorded their brain activity. The resulting data was then used to train the system on a specific user's thought patterns.
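The per-subject training step described above can be illustrated with a toy sketch. This is not the authors' published method; it stands in real fMRI recordings and stimulus text with synthetic data, and uses a simple ridge regression to map brain-activity features onto word-embedding targets, just to show the shape of a "train on one subject's recordings, then decode new activity" pipeline.

```python
# Illustrative sketch only, NOT the UT Austin decoder. All data here is
# synthetic, and the voxel/embedding dimensions are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_voxels, embed_dim = 500, 200, 16

# Hypothetical ground-truth mapping used to generate fake "fMRI" responses
# to fake "stimulus" word embeddings for one subject.
true_W = rng.normal(size=(n_voxels, embed_dim))
X = rng.normal(size=(n_samples, n_voxels))                       # per-subject brain features
Y = X @ true_W + 0.1 * rng.normal(size=(n_samples, embed_dim))   # stimulus embeddings

# Ridge regression fit: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# "Decode" held-out brain activity into the embedding space and check fit.
X_test = rng.normal(size=(50, n_voxels))
Y_test = X_test @ true_W
Y_pred = X_test @ W
corr = np.corrcoef(Y_test.ravel(), Y_pred.ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

Because the mapping is fit to one subject's data, it says nothing about anyone else's brain activity, which mirrors the privacy point the researchers make below.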
After the training, test subjects had their brain activity monitored again while listening to podcasts, watching short films, and silently imagining telling a story. During this part of the experiment, the AI system was fed the subjects' fMRI data and decoded the signals into plain language in real time.
According to a press release from the University of Texas at Austin, the AI's output matched the subjects' intended meaning approximately 50% of the time. The results, however, aren't exact — the researchers designed the AI to convey the general ideas being thought about, not the exact words being thought.
Fortunately for anyone concerned about having their thoughts infiltrated by AI against their will, the scientists are very clear that this isn’t currently a possibility.
The system only functions if it's trained on a specific user's brainwaves, which makes it useless for scanning individuals who haven't spent hours providing fMRI data. And even if such data were generated without a user's permission, the team concludes that both decoding the data and monitoring thoughts in real time require active participation on the part of the person being scanned.
However, the researchers did note that this might not always be the case:
“[O]ur privacy analysis suggests that subject cooperation is currently required both to train and use the decoder. However, future developments might enable…
Click Here to Read the Full Original Article at Cointelegraph.com News…