A new AI system similar to ChatGPT translates brain activity into written text
US researchers have developed a new artificial intelligence system that can convert human brain activity, recorded while a person listens to a story or silently imagines telling one, into a continuous stream of text.
Developed by a team at the University of Texas at Austin, the system is based in part on a transformer model similar to those behind OpenAI’s ChatGPT and Google’s Bard.
It may help people who are mentally aware but physically unable to speak, such as those debilitated by a stroke, communicate intelligibly again, says the team behind the study, published in the journal Nature Neuroscience.
Unlike other language-decoding systems in development, this semantic decoder does not require subjects to have surgical implants, making the process non-invasive. Nor are participants limited to words from a prescribed list.
Brain activity is measured with a functional MRI (fMRI) scanner after extensive training of the decoder, during which the person spends hours in the scanner listening to podcasts.
Later, provided the participant is willing to have their thoughts decoded, listening to a new story or imagining telling one allows the machine to generate corresponding text from brain activity alone.
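The article does not spell out the pipeline, but the basic train-then-decode workflow it describes can be sketched in a few lines. The sketch below is purely illustrative: the linear brain-to-embedding map, the placeholder `embed` function, and all array sizes are assumptions, not the researchers’ actual models.

```python
# Minimal sketch of the train-then-decode workflow described above.
# Every name, array size, and the linear model itself are illustrative
# assumptions; the actual study pairs a far richer encoding model with
# a generative language model, neither of which is reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase: hours of (brain scan, heard text) pairs ----------
# Pretend each fMRI time point is a 1,000-dim voxel vector and each
# stretch of heard speech is a 300-dim text embedding (hypothetical sizes).
n_train, n_voxels, n_dims = 5000, 1000, 300
brain_train = rng.normal(size=(n_train, n_voxels))  # recorded while listening
text_train = rng.normal(size=(n_train, n_dims))     # embeddings of the story

# Fit a ridge-style linear map from brain activity to text embeddings.
lam = 1.0
W = np.linalg.solve(
    brain_train.T @ brain_train + lam * np.eye(n_voxels),
    brain_train.T @ text_train,
)

# --- Decoding phase: a new scan plus candidate sentences --------------
def embed(sentence: str) -> np.ndarray:
    """Placeholder embedder; a real system would use a language model."""
    vec = rng.normal(size=n_dims)  # random stand-in only
    return vec / np.linalg.norm(vec)

def decode(new_scan: np.ndarray, candidates: list[str]) -> str:
    """Return the candidate whose embedding best matches the prediction."""
    predicted = new_scan @ W
    predicted /= np.linalg.norm(predicted)
    scores = [float(predicted @ embed(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

new_scan = rng.normal(size=n_voxels)  # activity recorded while imagining
print(decode(new_scan, [
    "I don't have a driver's license yet.",
    "The weather was lovely that afternoon.",
]))
```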
“For a noninvasive method, this is a real leap forward from what’s been done before, which is typically single words or short sentences,” said Alex Huth, assistant professor of neuroscience and computer science at UT Austin.
“We get the model to decode continuous language over long periods of time with complex ideas,” he added.
The result is not a word-for-word transcription. Instead, the researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meaning of the original words.
For example, in experiments, a participant listening to a speaker say, “I don’t have a driver’s license yet,” had their thoughts translated as, “He hasn’t even started learning to drive yet.”
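That example is a match in meaning rather than in wording. As a rough illustration of why word-level scores understate such a match, here is a minimal, assumption-laden sketch that compares the two sentences by bag-of-words cosine similarity; evaluations of decoders like this typically lean instead on embedding-based metrics that credit paraphrases.

```python
# Minimal sketch: score a decoded sentence against the heard sentence
# with bag-of-words cosine similarity. An illustrative stand-in only;
# real evaluations favor embedding-based metrics that credit paraphrases.
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two sentences' word-count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

heard = "I don't have a driver's license yet"
decoded = "He hasn't even started learning to drive yet"
print(f"surface similarity: {cosine_similarity(heard, decoded):.2f}")
# Prints a low score (~0.13): almost no words overlap, even though the
# meaning matches, which is the behavior described above.
```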
The group also addressed questions about possible misuse of the technology. The paper describes how decoding worked only with cooperative participants who had willingly taken part in training the decoder.
Results for subjects on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance, for example by thinking other thoughts, the results were likewise unusable.
“We take very seriously the possibility that it could be used for bad purposes, and we’ve tried to avoid that,” said computer science PhD student Jerry Tang. “We want to make sure that people only use these types of technologies when they want to and that it helps them.”
In addition to having participants listen to or think about stories, the researchers asked the subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events in the videos.
The system is not currently practical for use outside the laboratory because it depends on time in an fMRI scanner. But the researchers believe the work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).