Token for your thoughts? Brain interface decodes imagined speech with 74% accuracy in paralyzed patients

Stanford researchers have developed a brain-computer interface that enables people with paralysis to generate spoken words simply by imagining speech, rather than attempting to physically speak. The breakthrough offers a lower-effort alternative to existing systems that require users to actively try to speak, potentially making communication restoration more comfortable for paralyzed patients.

How it works: The system uses implanted microelectrodes in the motor cortex to decode brain activity when users imagine speaking words or sentences.

  • Four participants with severe paralysis from ALS (a degenerative nerve disease) or brainstem stroke had electrodes previously implanted for research purposes.
  • Researchers found that brain activity patterns for imagined speech closely resembled those for attempted speech, though the signals were generally weaker.
  • An AI model trained on a vocabulary of up to 125,000 words could correctly decode imagined words up to 74% of the time (a toy sketch of this kind of vocabulary-constrained decoding follows this list).
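
The article doesn't describe the decoding architecture in detail, so the sketch below only illustrates the general shape of the pipeline it reports: per-trial neural features go in, and a classifier restricted to a known vocabulary picks the most likely word. The feature dimension, the six-word `VOCAB`, the synthetic data, and the logistic-regression decoder are all hypothetical stand-ins, not the Stanford team's implementation.

```python
# Toy sketch of vocabulary-constrained neural decoding (hypothetical, not the study's code).
# Synthetic "neural features" stand in for microelectrode recordings from motor cortex.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

VOCAB = ["tree", "bird", "water", "help", "yes", "no"]  # stand-in vocabulary, not the 125,000-word set
N_FEATURES = 64          # hypothetical per-trial feature dimension (e.g., binned spike counts)
TRIALS_PER_WORD = 50

# Simulate imagined-speech trials: each word has a characteristic activity pattern,
# scaled down (weaker signal, as the article notes) and buried in noise.
prototypes = rng.normal(size=(len(VOCAB), N_FEATURES))
X = np.vstack([
    0.6 * prototypes[i] + rng.normal(size=(TRIALS_PER_WORD, N_FEATURES))
    for i in range(len(VOCAB))
])
y = np.repeat(np.arange(len(VOCAB)), TRIALS_PER_WORD)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A linear classifier restricted to the known vocabulary plays the role of the decoder.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = decoder.predict(X_test)
print(f"decoded-word accuracy: {accuracy_score(y_test, preds):.0%}")
print("example decode:", VOCAB[preds[0]], "| true word:", VOCAB[y_test[0]])
```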

Privacy safeguards: The team built in security measures to prevent unauthorized access to users’ thoughts.

  • The AI system only activates when users think the password “Chitty Chitty Bang Bang,” which it detected with 98% accuracy.
  • This ensures the interface only captures speech that users intentionally want to communicate; a sketch of this gating logic follows the list.
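
The wake-phrase safeguard amounts to a gate in front of the decoder: nothing decoded is passed along until a detector recognizes the imagined password with high confidence. Below is a hypothetical sketch of that control flow; `DecodedSegment`, the confidence scores, and the threshold are illustrative assumptions, not details reported in the study.

```python
# Hypothetical sketch of password-gated decoding: decoded text stays private
# until the imagined wake phrase is recognized with high confidence.
from dataclasses import dataclass

WAKE_PHRASE = "chitty chitty bang bang"
CONFIDENCE_THRESHOLD = 0.95  # illustrative gate; the study reports 98% detection accuracy, not a threshold

@dataclass
class DecodedSegment:
    text: str          # decoder's best guess for an imagined utterance
    confidence: float  # detector's confidence that this segment is the wake phrase

def gate_output(segments):
    """Yield decoded text only after the wake phrase has been recognized."""
    unlocked = False
    for seg in segments:
        if not unlocked:
            if seg.text == WAKE_PHRASE and seg.confidence >= CONFIDENCE_THRESHOLD:
                unlocked = True  # the user has deliberately opened the channel
            continue  # everything before the password is dropped, never emitted
        yield seg.text

stream = [
    DecodedSegment("tree", 0.10),                     # stray imagery; stays private
    DecodedSegment(WAKE_PHRASE, 0.99),                # unlocks the interface
    DecodedSegment("i would like some water", 0.20),  # intended message, now passed through
]
print(list(gate_output(stream)))  # -> ['i would like some water']
```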

What participants think: Study participants expressed a clear preference for the imagined-speech system over interfaces that decode attempted speech.

  • Users found the new approach faster and less laborious than existing brain-computer interfaces.
  • The reduced effort required makes the technology more practical for daily communication needs.

The limitations: While promising, the system faces several technical and ethical challenges that need addressing.

  • The 74% accuracy rate falls short of interfaces that decode attempted speech, though ongoing improvements to sensors and AI could enhance performance.
  • Experts worry about distinguishing between intended communication and private thoughts that users want to keep to themselves.
  • “We really need to make sure that BCI-based utterances are the ones people intend to share with the world and not the ones they want to keep to themselves,” says Mariska Vansteensel at UMC Utrecht, a Dutch medical center.

What experts are saying: Researchers emphasize this isn’t true “mind-reading” and stress the ethical safeguards in place.

  • “It really only works with very simple examples of language,” notes Benjamin Alderson-Day at Durham University, explaining the system works with single words like “tree” or “bird” rather than complex thoughts.
  • Frank Willett at Stanford stresses that all brain-computer interfaces are regulated by federal agencies to ensure adherence to “the highest standards of medical ethics.”

Why this matters: The research represents a significant step toward more user-friendly communication restoration for people with severe paralysis, potentially reducing the physical and mental effort required to operate assistive technologies while maintaining crucial privacy protections.

