In 2005, a rare brainstem stroke robbed Ann Johnson of her ability to speak at age 30. The mother of two from Saskatchewan hasn’t been able to communicate naturally with her loved ones ever since.
But last week she was seen making history in a California lab, chatting with her husband thanks to AI-powered technology that can translate her brain signals into speech. Researchers also synthesized a voice that sounds like her own and displayed a digital face that she could use to convey emotions and expressions.
“The goal of all of that is just to restore more full communication,” said Dr. Eddie Chang, a neurosurgeon at the University of California, San Francisco.
Johnson, who was a teacher at the time of her stroke, was diagnosed with what’s known as locked-in syndrome, a brain condition that causes paralysis of nearly all muscles except those that control the eyes. She has since regained some movement but cannot speak.
The researchers surgically attached a thin grid of electrodes to the surface of her brain. It covered an area vital to speech and was connected by a wire to computers trained to decode her brain activity and generate not only text, but also speech and facial expressions.
“We also directly decoded sound, so the sound of what they’re trying to say. Then we actually decoded facial movements to animate an avatar,” said researcher Sean Metzger.
In an exchange captured on video, Johnson was able to chat with her husband, Bill, about baseball. He asked her how she was feeling about the Blue Jays’ chances of winning that day.
“Anything is possible,†responded Johnson using her synthesized voice.
Bill then commented on her lack of confidence.
“You are right about that. I guess we’ll see, won’t we,” said Johnson, turning silently to her husband and smiling.
Researchers used audio from a speech she gave at her wedding prior to the stroke to train a computer to synthesize her voice. She has said it’s like hearing an old friend.
The researchers clarified that the brain-computer interface (BCI) is not reading minds, but interpreting signals from the part of the brain that controls the vocal tract. The user has to actually try to say the words for it to work.
"This isn't about someone trying to imagine saying stuff or thinking about words. But actually trying to say them," said Chang.
The interface can decode about 78 words a minute, which is around half the speed of a typical conversation. It is a huge improvement over the 14 words a minute Johnson can produce with her current assistive communication device, which uses eye movements.
Johnson has been regularly travelling with her husband from Regina to California to take part in the multi-year clinical trial. A fundraiser has been set up to help them cover travel expenses. On the website of the high school where she used to teach, Johnson has expressed a desire to give back and help others.
The researchers say this is just the starting point. They hope to make smaller devices that could connect wirelessly with computers so the technology can help others with speech impairments due to stroke, ALS, or other neurological disorders.
They would like to see more people regain the ability to communicate with loved ones.
"I think that this is all going to be possible within the next couple of years," says Chang.