In a laboratory in San Francisco, California, a woman named Ann sits in front of a huge screen. On it is an avatar created to look like her. Thanks to a brain–computer interface (BCI), when Ann thinks of talking, the avatar speaks for her — and in her own voice, too.
In 2005, a brainstem stroke left Ann almost completely paralysed and unable to speak. Last year, neurosurgeon Edward Chang, at the University of California, San Francisco, placed a grid of more than 250 electrodes on the surface of Ann’s brain, on top of the regions that once controlled her body, face and larynx. As Ann imagined speaking certain words, researchers recorded her neural activity. Then, using machine learning, they established the activity patterns corresponding to each word and to the facial movements Ann would, if she could, use to vocalize them.
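The mapping the researchers learned, from patterns of electrode activity to intended words, can be illustrated with a deliberately simple sketch. The electrode values, word list and nearest-centroid decoder below are all invented for illustration; the actual study used far larger electrode grids and deep neural networks, not this toy approach.

```python
import random

random.seed(0)

# Hypothetical templates: each imagined word produces a characteristic
# pattern of activity across three simulated electrodes.
WORDS = {"hello": [0.9, 0.1, 0.2], "water": [0.2, 0.8, 0.3], "yes": [0.1, 0.3, 0.9]}

def record_trial(word, noise=0.05):
    """Simulate one noisy electrode recording for an imagined word."""
    return [v + random.gauss(0, noise) for v in WORDS[word]]

def train_centroids(n_trials=20):
    """Average repeated trials to estimate each word's neural 'template'."""
    centroids = {}
    for word in WORDS:
        trials = [record_trial(word) for _ in range(n_trials)]
        centroids[word] = [sum(col) / n_trials for col in zip(*trials)]
    return centroids

def decode(sample, centroids):
    """Return the word whose template is closest to the recorded sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda w: sq_dist(sample, centroids[w]))

centroids = train_centroids()
print(decode(record_trial("water"), centroids))
```

The key idea carries over to the real system: repeated trials of the same imagined word are used to learn a statistical signature, and new activity is matched against those learned signatures.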
The system can translate Ann's attempted speech into text at 78 words per minute: a huge improvement on previous BCI efforts and now approaching the 150 words per minute considered average for regular speech1. Compared with two years ago, Chang says, “it’s like night and day”.
In an added feat, the team programmed the avatar to speak aloud in Ann’s voice, basing the output on a recording of a speech she made at her wedding. “It was extremely emotional for Ann because it was the first time that she really felt that she was speaking for almost 20 years,” says Chang.
This work was one of several studies in 2023 that boosted excitement about implantable BCIs. Another study2 also translated neural activity into text at unprecedented speed. And in May, scientists reported that they had created a digital bridge between the brain and spinal cord of a man paralysed in a cycling accident3. A BCI decoded his intentions to move and directed a spinal implant to stimulate the nerves of his legs, allowing him to walk.
“There’s a lot of energy, and it’s super exciting,” Chang says. “I think that we’re going to cross a really important threshold in the next five years: coming out of proof of principles into new therapies.”
Companies in the field are also making strides: in September the neurotechnology company Neuralink, founded by entrepreneur Elon Musk, invited people living with paralysis to volunteer to be the first recipients of its implantable BCI.
The quest to commercialize BCIs, however, is still in its infancy. So far, systems are tailored to individuals, but commercialization will require robust, reliable and safe BCIs that can be scaled up. “You cannot have a PhD engineer in the home of every single patient with a BCI,” says Tom Oxley, chief executive of Synchron, a BCI company in Brooklyn, New York.
Alongside advances in implantable devices, a parallel commercial ecosystem of wearable brain-reading devices is growing. These measure users’ brain activity — at much lower resolution than implanted devices — to potentially enhance mental health, productivity or sleep, or to transform how people interact with computers.
Together, these advances are accelerating efforts to guide and regulate neurotechnology. This month, for instance, member states of UNESCO — the United Nations cultural and scientific organization — will vote on whether to develop international guidelines and policy recommendations for the use of this technology.
As progress generates headlines, there is no shortage of grand claims. Consumer-targeted neurotechnology company EMOTIV in San Francisco describes its team as “decoders of the human experience”. In 2020, Musk told podcaster Joe Rogan that Neuralink’s BCI “could, in principle, fix almost anything that’s wrong with the brain”.
“We need to have more conversation,” says Chang, “and try to reduce the hype and focus on the things that are actually really relevant.”
Decoding the brain
All brain-reading technologies, whether implants or headsets, operate on the same basic principles: they record neural activity — usually electrical activity — associated with a function such as speech or attention; interpret what that activity means; and use it to control an external device or simply provide it as information to the user.
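Those three stages can be sketched as a minimal pipeline. Everything below is a made-up illustration: the signal values, the thresholding "decoder" and the cursor outputs stand in for the signal processing and machine learning a real system would use.

```python
# Stage 1: acquire neural activity (here, pre-baked sample values).
def record(raw_samples):
    return list(raw_samples)

# Stage 2: interpret what the activity means (a toy thresholding decoder).
def interpret(signal, threshold=0.5):
    mean = sum(signal) / len(signal)
    return "move" if mean > threshold else "rest"

# Stage 3: control an external device, or report the state to the user.
def act(intent):
    return {"move": "cursor moves", "rest": "cursor idle"}[intent]

signal = record([0.7, 0.8, 0.6])
print(act(interpret(signal)))  # -> cursor moves
```

Implants and headsets differ enormously in signal quality, but both plug into this same record–interpret–act loop.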
Implanted BCIs record more information-rich brain signals than do external ones. But these experimental devices are intended only for use by people in whom potential clinical benefits outweigh the risks of, for example, brain injury or infection. Only around 50 people have received such implants long-term.

An attendee at a conference in Beijing demonstrates a brain–computer interface headset. Credit: Wang Yuguo/Xinhua News Agency/eyevine
Most devices worn on the scalp use a common method called electroencephalography (EEG) to detect tiny electrical fields that pass through the skull, reflecting the average firing of many millions of neurons spread over substantial volumes of brain.
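Why averaging over millions of neurons yields a low-resolution signal can be shown with a small simulation. The numbers here are arbitrary: each simulated neuron fires with large independent noise around a weak shared rhythm, and the "sensor" sees only the population average, in which the individual noise cancels and only the shared trend survives.

```python
import random

random.seed(1)

def population_signal(n_neurons, shared_rhythm):
    """Average many noisy neurons that share a weak common oscillation."""
    averaged = []
    for level in shared_rhythm:
        # Each neuron: the shared level plus its own large noise (std 1.0).
        total = sum(random.gauss(level, 1.0) for _ in range(n_neurons))
        averaged.append(total / n_neurons)  # the sensor sees only the mean
    return averaged

rhythm = [0.0, 1.0, 0.0, -1.0] * 3   # a slow shared wave
avg = population_signal(10_000, rhythm)

# Averaging 10,000 neurons shrinks the per-neuron noise (std 1.0) to
# roughly 0.01, so the output tracks the shared rhythm, not any one cell.
print([round(v, 2) for v in avg[:4]])
```

This is the trade-off the article describes: EEG recovers population-wide rhythms well, but the activity of any individual neuron is unrecoverable from the scalp.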
EEG is routinely used clinically to monitor epilepsy and sleep, and in the lab to study a range of brain functions. Commercial efforts centre on using EEG signals to monitor psychological states such as focus, calmness, agitation and drowsiness.
Consumer-targeted companies have yet to create a ‘killer app’ — an application so desirable that sales take off drastically. But for implantable