Its facial expressions are generated by the same animation technology as The Last of Us Part II.

Paralyzed Woman Utilizes Brain-Computer Interface to Communicate Through Digital Avatar: UC Study

In an unprecedented development, scientists from UC San Francisco and UC Berkeley, in collaboration with Edinburgh-based Speech Graphics, have created an innovative communication system. This groundbreaking technology enables a paralyzed stroke survivor to express herself using a digital avatar controlled by a brain-computer interface.

Brain-computer interfaces (BCIs) are devices that monitor the analog signals produced by your gray matter and convert them into digital signals that computers can understand, like an analog-to-digital converter on a mixing board, but one that fits inside your skull. For this study, researchers led by Dr. Edward Chang, chair of neurological surgery at UCSF, first implanted a 253-channel electrode array over the speech center of the patient's brain. These sensors monitored and recorded the electrical signals that would otherwise have controlled the muscles of her jaw, lips, and tongue, and relayed them instead through a cable port in her skull to a bank of processors. Running on that hardware was a machine-learning system that, over a few weeks of training, learned to recognize the patient's electrical signal patterns for more than 1,000 words.
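For readers curious about the nuts and bolts, here is a minimal sketch of that decoding step. It is not the study's actual pipeline: the channel count is the only number taken from the article, and the window size, vocabulary, features, and classifier are stand-ins chosen purely for illustration.

```python
# Hypothetical sketch of decoding multi-channel electrode recordings into words.
# Shapes, features, and the classifier are assumptions, not the study's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS = 253          # one stream per electrode channel (from the article)
WINDOW_SAMPLES = 200      # samples per decoding window (assumed)
VOCAB = ["hello", "water", "yes", "no"]  # stand-in for the ~1,000-word vocabulary

def extract_features(window: np.ndarray) -> np.ndarray:
    """Collapse a (channels x samples) window into a simple feature vector."""
    return np.concatenate([window.mean(axis=1), window.std(axis=1)])

# Training data: windows of activity paired with the word the patient attempted.
rng = np.random.default_rng(0)
X_train = np.stack([
    extract_features(rng.normal(size=(N_CHANNELS, WINDOW_SAMPLES)))
    for _ in range(400)
])
y_train = rng.integers(0, len(VOCAB), size=400)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inference: a new window of brain activity is mapped to the most likely word.
new_window = rng.normal(size=(N_CHANNELS, WINDOW_SAMPLES))
probs = decoder.predict_proba([extract_features(new_window)])[0]
print(VOCAB[int(np.argmax(probs))], round(float(probs.max()), 3))
```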

But that’s only the first half of the trick. Through this AI interface, the patient can now type out her answers, much as Synchron’s system works for people with locked-in syndrome. But she can also speak, in a sense, using a synthesized voice trained on recordings of her natural voice from before she became paralyzed, much like we do with our digitally undead celebrities.
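Conceptually, those are two output paths hanging off the same decoder: plain text, or text routed through a voice model personalized on old recordings. The toy sketch below shows just that branching; the class and method names are hypothetical, not any vendor's API.

```python
# Hypothetical sketch of the two output paths: decoded words emitted as text,
# or passed through a voice synthesizer personalized on prior recordings.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalVoice:
    """Stand-in for a TTS model fine-tuned on the patient's pre-injury voice."""
    speaker_name: str

    def synthesize(self, text: str) -> bytes:
        # A real system would return waveform audio; here we just tag the text.
        return f"<audio voiced as {self.speaker_name}> {text}".encode()

def emit(decoded_words: list, voice: Optional[PersonalVoice] = None):
    sentence = " ".join(decoded_words)
    if voice is None:
        return sentence                  # text-only path, as in typing-style BCIs
    return voice.synthesize(sentence)    # spoken path, using the personalized voice

print(emit(["hello", "how", "are", "you"]))
print(emit(["hello", "how", "are", "you"], PersonalVoice("patient")))
```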

In addition, the researchers collaborated with Speech Graphics, the same company that developed the photorealistic facial animation technology for Halo Infinite and The Last of Us Part II, to create the patient's avatar. SG's technology reconstructs the musculoskeletal movements the face would make, based on an analysis of the audio, and feeds that data in real time to a game engine, which animates the avatar with minimal lag. And because the patient's mental signals were mapped directly onto the avatar, she was able to express emotions and communicate non-verbally as well.
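The basic loop behind audio-driven facial animation is easier to see in code than in prose: analyze a short audio frame, estimate muscle or blendshape activations from it, and push those values to the engine every tick. The sketch below is a rough illustration under those assumptions; the blendshape names, the energy-based mapping, and the streaming function are all hypothetical, not Speech Graphics' actual SDK.

```python
# Hypothetical audio-driven facial animation loop: analyze an audio frame,
# estimate blendshape activations, and stream them to a game engine in real time.
import math

BLENDSHAPES = ["jaw_open", "lip_pucker", "smile_left", "smile_right"]

def audio_frame_energy(samples: list) -> float:
    """Root-mean-square energy of one short audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_activations(samples: list) -> dict:
    """Very rough stand-in: map frame energy to activations in [0, 1]."""
    energy = min(audio_frame_energy(samples), 1.0)
    weights = (1.0, 0.4, 0.2, 0.2)
    return {name: energy * w for name, w in zip(BLENDSHAPES, weights)}

def stream_to_engine(activations: dict) -> None:
    """Placeholder for a real-time socket/plugin call into the game engine."""
    print({k: round(v, 2) for k, v in activations.items()})

# One synthetic 10 ms frame at 16 kHz; a live system would loop over the
# synthesized speech audio as it is produced.
frame = [0.3 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(160)]
stream_to_engine(estimate_activations(frame))
```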

“Creating a digital avatar that can speak, feel and articulate in real-time, connected directly to the subject’s brain, demonstrates the potential of AI-driven faces beyond video games,” Michael Berger, CTO and co-founder of Speech Graphics, said in a statement on Wednesday. “The restoration of voice alone is impressive, but facial communication is so intrinsic to being human, and it restores a sense of embodiment and control to a patient who has lost it.”

BCI technology first emerged in the early 1970s and has developed slowly in the intervening decades. Explosive advances in processing and computing systems have recently helped revitalize the field, with a handful of well-funded startups currently competing to be the first through the FDA’s regulatory device approval process. Brooklyn-based Synchron made headlines last year when it became the first company to implant its BCI in a US patient as part of an FDA-approved trial. Elon Musk’s Neuralink entered limited FDA trials earlier this year, after the company was found to have killed porcine test subjects in earlier rounds of testing.
