At the age of 30, Ann suffered a brainstem stroke that left her severely paralyzed. She lost control of all the muscles in her body and was unable even to breathe. It came on suddenly one afternoon, for reasons that are still mysterious.
For the next five years, Ann went to bed each night afraid she would die in her sleep. It took years of physical therapy before she could move her facial muscles enough to laugh or cry. Still, the muscles that would have allowed her to speak remained immobile.
“Overnight, everything was taken from me,” Ann wrote, using a device that enables her to type slowly on a computer screen with small movements of her head. “I had a 13-month-old daughter, an 8-year-old stepson and a 26-month-old marriage.”

Today, Ann is helping researchers at UC San Francisco and UC Berkeley develop new brain-computer technology that could one day allow people like her to communicate more naturally through a digital avatar that resembles a person.
It is the first time that either speech or facial expressions have been synthesized from brain signals. The system can also decode these signals into text at nearly 80 words per minute, a vast improvement over the 14 words per minute that her current communication device delivers.
Edward Chang, MD, chair of neurological surgery at UCSF, has worked on the technology, known as a brain-computer interface, or BCI, for more than a decade. He hopes this latest research breakthrough, published Aug. 23, 2023, in Nature, will lead in the near future to an FDA-approved system that enables speech from brain signals.
“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others,” said Chang, who is a member of the UCSF Weill Institute for Neurosciences and the Jeanne Robertson Distinguished Professor. “These advancements bring us much closer to making this a real solution for patients.”
Ann’s work with UCSF neurosurgeon Edward Chang, MD, and his team plays an important role in helping advance the development of devices that can give a voice to people unable to speak. Video by Pete Bell
Decoding the signals of speech
Ann was a high school math teacher in Canada before her stroke in 2005. In 2020, she described her life since the stroke in a paper for a psychology class, painstakingly typing it letter by letter.
“Locked-in syndrome, or LIS, is just like it sounds,” she wrote. “You’re fully cognizant, you have full sensation, all five senses work, but you are locked inside a body where no muscles work. I learned to breathe on my own again, I now have full neck movement, my laugh returned, I can cry and read and over the years my smile has returned, and I am able to wink and say a few words.”
As she recovered, she realized she could use her own experiences to help others, and she now aspires to become a counselor in a physical rehabilitation facility.
“I want patients there to see me and know their lives are not over now,” she wrote. “I want to show them that disabilities don’t need to stop us or slow us down.”
She learned about Chang’s study in 2021 after reading about a paralyzed man named Pancho, who helped the team translate his brain signals into text as he attempted to speak. He had also experienced a brainstem stroke many years earlier, and it wasn’t clear if his brain could still signal the movements for speech. It’s not enough just to think about something; a person has to actually attempt to speak for the system to pick it up. Pancho became the first person living with paralysis to demonstrate that it was possible to decode speech-brain signals into full words.
With Ann, Chang’s team attempted something even more ambitious: decoding her brain signals into the richness