AI Restores Speech to Paralyzed Stroke Survivor After Decades of Silence
University of California researchers used a brain-computer interface to turn brain signals into real-time speech.

After 18 years without speech, a woman paralyzed by a stroke has regained her voice through an experimental brain-computer interface developed by researchers at the University of California, Berkeley, and UC San Francisco.

The research, published in Nature Neuroscience on Monday, utilized artificial intelligence to translate the thoughts of the participant, known as “Anne,” into natural speech in real time.

“Unlike vision, motion, or hunger—shared with other species—speech sets us apart. That alone makes it a fascinating research topic,” Gopala Anumanchipalli, assistant professor of electrical engineering and computer sciences at UC Berkeley, told Decrypt. “It’s still one of the big unknowns: how intelligent behavior emerges from neurons and cortical tissue.”

The study used a brain-computer interface to create a direct pathway between the electrical signals in Anne’s brain and a computer.

As Anumanchipalli explained, the interface reads neural signals using a grid of electrodes placed on the brain's speech center.

“But it became clear there are conditions—like ALS, brainstem stroke, or injury—where the body becomes inaccessible, and the person is ‘locked in.’ They’re cognitively intact but unable to move or speak,” Anumanchipalli said.

Anumanchipalli noted that while significant progress has been made in creating artificial limbs, restoring speech remains more complex.

“Both are motor systems, but limb movement is a simpler problem than mouth movement, which involves more joints and muscles,” he said. “Arm restoration is also something we pursue.”

Machine learning and artificial intelligence

Emphasizing the importance of rapid responses in conversation, Anumanchipalli said that with machine learning and custom AI algorithms, the brain-computer interface converted Anne’s brain signals into speech within a second using a synthetic voice generator.
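The low-latency approach described above can be illustrated with a toy sketch: instead of waiting for a full sentence, neural activity is decoded in short, overlapping windows so speech units stream out continuously. Everything here is hypothetical for illustration—the channel count, window size, unit inventory, and the linear decoder are stand-ins, not the team's actual model.

```python
import numpy as np

# Illustrative constants (assumptions, not the study's real parameters).
N_CHANNELS = 253   # electrodes over the speech cortex
WINDOW_MS = 80     # decode in short hops so output lags by well under a second
N_UNITS = 40       # discrete speech-unit classes fed to a voice synthesizer

rng = np.random.default_rng(0)

# Stand-in for a trained decoder: a single weight matrix mapping one
# window of neural features to logits over speech units.
W = rng.standard_normal((N_UNITS, N_CHANNELS))

def decode_window(window: np.ndarray) -> int:
    """Return the most likely speech unit for one 80 ms window of features."""
    logits = W @ window
    return int(np.argmax(logits))

# Simulated stream: roughly one second of signal arrives as ~12 windows,
# each decoded immediately on arrival rather than after the utterance ends.
stream = [rng.standard_normal(N_CHANNELS) for _ in range(12)]
units = [decode_window(w) for w in stream]
print(f"{len(units)} speech units decoded from ~1 s of signal")
```

In a real system the decoder would be a recurrent or attention-based network trained on Anne's attempted-speech data, and the decoded units would drive the personalized voice synthesizer; the point of the sketch is only the streaming, window-by-window structure that keeps latency low.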

“We recorded Anne’s attempts and used audio from before her injury—her wedding video. It was a 20-year-old clip, but we digitally recreated a synthetic voice,” Anumanchipalli said. “Then, we matched her brain’s attempt to speak with that voice to generate synthetic speech.”

While technology enabled Anne to speak, Anumanchipalli gave her credit for doing the most difficult part of the process.

“The real driver here is Anne. Her brain does the heavy lifting—we’re just reading what it’s trying to do,” Anumanchipalli said. “AI fills in some gaps, but Anne is the main character. The brain evolved over millions of years to do this—fluid communication is what it was built for.”

Anne’s breakthrough is part of a broader movement in brain-computer interface research, which has attracted major players in neuroscience and tech, including Elon Musk’s Neuralink.

On Wednesday, Neuralink opened the patient registry for its PRIME Study to applicants worldwide.

Rather than relying on publicly available artificial intelligence models, the team built a system from the ground up specifically for Anne.

“We haven’t used anything off the shelf. Everything we’re using is custom-made for Anne. We’re not licensing AI from any other company,” Anumanchipalli said.

“We’re AI engineers and scientists—we design our own work, customized for Anne. AI as a black box isn’t appropriate, especially in healthcare, where one size doesn’t fit all. We have to reimagine and custom-make solutions for each person.”

Privacy, a top priority

Anumanchipalli said building a proprietary AI was not only about specialization but preserving user privacy as well.

“The goal is to preserve privacy. We’re not sending her signals to a company in Silicon Valley. We’re designing software that stays with her,” he said. “Eventually, this will be a standalone device, powered by her own body, working locally—so no one else controls what she’s trying to say.”

Anumanchipalli highlighted the importance of public funding in developing brain-computer interface research.

“Projects like this drive innovation beyond what we can imagine, with downstream applications. Federal funding from the U.S. National Institutes of Health and National Institute on Deafness and Other Communication Disorders made this possible,” he said. “Philanthropic and private funding is also welcome to push it forward. This is the frontier of what we can achieve together.”

Looking to the future, Anumanchipalli hopes researchers will double down on efforts to restore speech through technology.

“Fortunately, the effort has received strong support. I hope the human element remains central,” he said. “People like Anne have volunteered their time and effort into something that doesn’t promise them anything but explores therapies for others like them—and that’s important.”

Edited by Sebastian Sinclair
