AI-driven BCI technology enables real-time speech for stroke survivor after 18 years

Researchers from the University of California used an AI-driven brain-computer interface (BCI) to turn Ann Johnson’s brain signals into real-time speech; she had been unable to speak since a stroke in 2005. The system harnessed technology similar to that behind devices like Alexa and Siri and improved on a previous model that had an eight-second delay.

Researchers from the University of California, Berkeley, and the University of California, San Francisco, developed a customized brain-computer interface system capable of restoring naturalistic speech to a 47-year-old woman with quadriplegia. Today, Ann is helping the UC San Francisco and UC Berkeley teams develop BCI technology that could one day allow people like her to communicate more naturally through a digital avatar that matches facial expressions to the generated speech.

Gopala Anumanchipalli, an assistant professor of electrical engineering and computer sciences at UC Berkeley and a co-author of the study published Monday in the journal Nature Neuroscience, said that the implanted device tested on Ann converted ‘her intent to speak into fluent sentences’. Jonathan Brumberg of the Speech and Applied Neuroscience Lab at the University of Kansas, who reviewed the findings, welcomed the advances and told The Associated Press that this was ‘a pretty big advance in the neuroscience field’.

BCI technology enables a woman to regain her speech after nearly 20 years

Mind reading is coming

Groundbreaking progress in brain-computer interfaces: A new implant translates thoughts into real-time speech in just 3 seconds – a crucial step for natural communication in paralysis.

The study, published in Nature Neuroscience, shows how AI algorithms… pic.twitter.com/XdGhrBlU63

— Chubby♨️ (@kimmonismus) April 1, 2025

A woman paralyzed by a stroke regained her voice after nearly two decades of silence through an experimental brain-computer interface developed and customized specifically to her case by researchers at UC Berkeley and UC San Francisco. The research, published in Nature Neuroscience on March 31st, used artificial intelligence to translate the thoughts of the participant, known as “Ann,” into natural speech in real time.

Anumanchipalli explained that the interface reads neural signals using a grid of electrodes placed over the speech center of the brain. He added that conditions such as ALS, brainstem stroke (as in Ann’s case), or injury can leave the body inaccessible, with the person ‘locked in’: cognitively intact but unable to move or speak. Anumanchipalli noted that while significant progress has been made in creating artificial limbs, restoring speech remains more complicated.

“Unlike vision, motion, or hunger—shared with other species—speech sets us apart. That alone makes it a fascinating research topic.”

Gopala Anumanchipalli

However, Anumanchipalli acknowledged that how intelligent behavior emerges from neurons and cortical tissue remains one of the big unknowns. The study used a BCI to create a direct pathway between the electrical signals in Ann’s brain and a computer.

New BCI device improves on previous versions that had delays

The U.S. researchers’ method eliminated a frustrating delay that plagued previous versions of the technology by analyzing her brain activity in 80-millisecond increments and translating it into a synthesized version of her voice. A number of BCI speech-translation projects have produced positive results recently, each aiming to shorten the time it takes to generate speech from thoughts.

According to Science Alert, most existing BCI methods required ‘a complete chunk of text’ to be considered before software could decipher its meaning, which could significantly drag out the seconds between speech initiation and vocalization.

The report published by the UC Berkeley and UC San Francisco researchers noted that improving speech-synthesis latency and decoding speed is essential for dynamic conversation and fluent communication. The joint UC team explained that BCI speech delays are compounded by the additional time the synthesized audio takes to play and the time listeners need to comprehend it.
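
To make the latency point concrete, below is a minimal, purely illustrative Python sketch of the difference between chunk-based decoding (buffering a whole utterance before synthesizing) and a streaming approach that emits speech features for every 80-millisecond window of neural activity. The decoder, channel count, and feature size are hypothetical placeholders, not the published system.

```python
# Illustrative sketch only: decode_window(), the channel count, and the feature
# dimension are hypothetical stand-ins, not the architecture from the paper.
import numpy as np

FRAME_MS = 80  # decoding increment reported for the new system

def decode_window(neural_window: np.ndarray) -> np.ndarray:
    """Stand-in for a trained decoder mapping one 80 ms window of cortical
    activity to a short slice of speech features (e.g. spectrogram frames)."""
    return np.zeros(32)  # a real system would run a neural network here

def streaming_decode(windows):
    """Streaming: yield speech features as each 80 ms window arrives,
    so audio playback can begin almost immediately."""
    for w in windows:
        yield decode_window(w)

def chunk_decode(windows):
    """Chunk-based baseline: buffer the whole utterance first, then decode,
    which is where multi-second delays come from."""
    buffered = list(windows)  # nothing can play until this finishes
    return np.concatenate([decode_window(w) for w in buffered])

if __name__ == "__main__":
    fake_activity = [np.random.randn(253) for _ in range(50)]  # ~4 s of windows
    first_frame = next(streaming_decode(fake_activity))        # ready after ~80 ms
    whole_utterance = chunk_decode(fake_activity)               # ready only at the end
    print(first_frame.shape, whole_utterance.shape)
```

The only point of the sketch is the control flow: the streaming generator produces output per window, while the chunk version cannot return anything until the entire recording has been buffered.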

Most existing methods reportedly relied on the ‘speaker’ training the interface by overtly going through the motions of vocalizing, which makes it difficult to gather enough training data from individuals who are out of practice or have always had difficulty speaking. To overcome both of these hurdles, the UC researchers trained a flexible, deep-learning neural network on the 47-year-old participant’s “sensorimotor cortex activity” while she silently ‘spoke’ 100 unique sentences drawn from a vocabulary of just over 1,000 words.
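
As a rough idea of what training such a decoder could look like, the hypothetical PyTorch snippet below maps sequences of multi-electrode activity to speech-feature frames with a small recurrent network. The channel count, feature dimension, architecture, and loss are assumptions for illustration; the article states only that a flexible deep-learning network was trained on her sensorimotor cortex activity while she silently ‘spoke’ known sentences.

```python
# Hypothetical training sketch, not the published model. Shapes, layer sizes,
# and the use of a GRU plus mean-squared-error loss are assumptions.
import torch
import torch.nn as nn

N_ELECTRODES = 253   # assumed channel count; the article does not state it
SPEECH_FEATS = 80    # assumed speech-feature dimension (e.g. mel-spectrogram bins)

class NeuralToSpeech(nn.Module):
    """Maps a sequence of neural-activity frames to speech-feature frames."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_ELECTRODES, 256, num_layers=2, batch_first=True)
        self.out = nn.Linear(256, SPEECH_FEATS)

    def forward(self, x):            # x: (batch, time, N_ELECTRODES)
        h, _ = self.rnn(x)
        return self.out(h)           # (batch, time, SPEECH_FEATS)

model = NeuralToSpeech()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for aligned (neural activity, target speech features)
# pairs derived from the silently 'spoken' training sentences.
neural = torch.randn(8, 100, N_ELECTRODES)
speech_target = torch.randn(8, 100, SPEECH_FEATS)

optimizer.zero_grad()
loss = loss_fn(model(neural), speech_target)
loss.backward()
optimizer.step()
print(float(loss))
```

At inference time, a network like this would be run over successive 80-millisecond windows, feeding a streaming synthesis loop of the kind sketched above.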
