AI app gives voice to those who’ve lost theirs

When Mari Martin returns home from work, one of the questions she asks her husband, Chris, is whether he wants to turn on his voice. It’s not a tongue-in-cheek inquiry.

Throat cancer required the surgical removal of Chris’ larynx in 2020, resulting in the loss of his ability to talk. A Custom Neural Voice app, powered by artificial intelligence, has made it possible for the Holland man to “talk” again, in his own voice.

It’s a marked improvement from writing on a whiteboard.

“What I love about it is he can share with me his feelings, his emotions, all of that comes through in his real voice,” says Mari. “Writing on the whiteboard expresses his needs, whereas this can express his feelings, his emotions.”

Chris’ background is in radio. His last stint was as general manager for Holland radio station WHTC 1450 AM and 95.7 FM, a job he held for nine years until 1991. He also enjoyed providing play-by-play commentary at sporting events. He is now retired.

Even so, speech has been the sinew of Chris’ life. And, thanks to the AI app, it is again.

“I can speak to you in my own voice,” he says.

How it works

This is how the app works: A recording of the person’s voice is used to build a digital model of that voice, which is stored on a remote server, also known as the cloud. The person types the sentence he or she wants to “say” (there can be a delay in hearing the message, depending on how fast the person can type).

Pressing the send button transmits the text over an internet connection to the cloud server, where the message is synthesized in the stored voice. Depending on the connection type and Wi-Fi speed, there can be some delay.

The resulting audio, or wave, file is sent back to a tablet or cellphone, which plays it over its speakers.
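For readers curious about the plumbing, the round trip can be sketched with Microsoft’s Azure Speech SDK, the platform that underlies Custom Neural Voice. The key, region, deployment ID and voice name below are placeholders, and the snippet illustrates the general pattern rather than Elwood’s actual code.

```python
# Minimal sketch of the type-to-speech round trip with Azure's Speech SDK
# (pip install azure-cognitiveservices-speech). Every credential and name
# here is a placeholder, not the app's real configuration.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY",  # placeholder Azure key
    region="YOUR_REGION",            # placeholder, e.g. "eastus"
)
# Point the service at a deployed Custom Neural Voice model.
speech_config.endpoint_id = "YOUR_CUSTOM_VOICE_DEPLOYMENT_ID"      # placeholder
speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"  # placeholder

# Play whatever comes back over the device's default speaker.
audio_config = speechsdk.audio.AudioOutputConfig(use_default_speaker=True)
synthesizer = speechsdk.SpeechSynthesizer(
    speech_config=speech_config, audio_config=audio_config
)

# The typed message goes to the cloud; the synthesized audio comes back
# and is played aloud.
text = input("Type what you want to say: ")
result = synthesizer.speak_text_async(text).get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Message spoken in the custom voice.")
elif result.reason == speechsdk.ResultReason.Canceled:
    print("Synthesis failed:", result.cancellation_details.reason)
```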

“Any words you type in generate the sound of their voice with any message you want to say,” says Holland resident Charles “Charlie” Elwood, an electrical engineer and a Microsoft Most Valuable Professional who developed the neural voice app.

The app is very different from the voice synthesizer used by the late theoretical physicist Stephen Hawking. 

“That was not his (Hawking’s) voice,” says Elwood. “That was a computer being Stephen’s voice.”

A game-changer

Elwood initially developed the app for a woman named Maria in Puerto Rico, who communicated via sign language but had limited motion in her arms and hands, which made signing difficult.

As Maria was unable to speak, her mother’s voice was recorded. Thanks to the app, Maria could sign in her dialect and speak back to anyone in the world with her mother’s voice.

The app was a true game-changer for Maria.

“Due to her motion constraints, she couldn’t use the full set of signs,” says Elwood. “So I used customizable Vision AI to create a new set of gestures or signs that were in her range of motion. Maria could then create the gestures (in her ‘dialect’ or within her range of motion), and the camera would understand what she was signing. Then the device, using what is called Edge technology, would speak the recognized words/gestures back in her mother’s synthetic voice.”
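The pipeline Elwood describes can be pictured as a camera frame classified against a small, custom-trained set of gestures, with the matching word then handed to the speech synthesizer. The sketch below uses Azure’s Custom Vision prediction client to show that pattern; the endpoint, keys, project ID and gesture-to-word table are hypothetical placeholders, not Elwood’s implementation.

```python
# Rough sketch: classify one camera frame as a gesture from a custom-trained
# model, then look up the word it stands for. All credentials, IDs and the
# gesture-to-word table are hypothetical placeholders.
from azure.cognitiveservices.vision.customvision.prediction import (
    CustomVisionPredictionClient,
)
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Prediction-key": "YOUR_PREDICTION_KEY"})
predictor = CustomVisionPredictionClient(
    "https://YOUR_RESOURCE.cognitiveservices.azure.com/", credentials
)

# Each trained gesture tag maps to the word or phrase the device should speak.
GESTURE_TO_WORD = {"open_hand": "hello", "thumb_up": "yes", "fist": "no"}  # hypothetical

with open("frame.jpg", "rb") as image:
    results = predictor.classify_image(
        "YOUR_PROJECT_ID",  # placeholder Custom Vision project ID
        "gestures-v1",      # placeholder published model name
        image.read(),
    )

# Take the most confident prediction and turn it into speech-ready text.
best = max(results.predictions, key=lambda p: p.probability)
if best.probability > 0.8 and best.tag_name in GESTURE_TO_WORD:
    word = GESTURE_TO_WORD[best.tag_name]
    print("Recognized gesture:", best.tag_name, "->", word)
    # This word would then be passed to a speech synthesizer like the one
    # sketched earlier, producing the mother's synthetic voice.
```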

Mari read about Elwood’s groundbreaking app on his LinkedIn account and contacted him on behalf of her husband.

“I can hear him speak the way he has for years,” she says, smiling, of her husband of nearly 40 years.

For future generations

Elwood envisions the app not only for people who’ve lost their voice but also for those who want to leave an audio journal or memoir for future generations to hear.

“If you want to hear from your ancestors, you can now hear what your great-great-grandparents sounded like,” says Elwood, who owns and operates the companies MyAudioBank and SolisMatica, and has worked for the automotive industry as a project manager and systems engineer.

“I think being an electrical engineer gave me the background for solving problems with math and to understand artificial intelligence, how the voice works, and how artificial intelligence works in general,” says Elwood. “There’s really a lot of calculus and understanding the mathematical model behind human language and voice, and that’s all just sound waves. My artificial intelligence synthetic voice applies math in this new role and uses math to solve problems.”

Wider distribution

Elwood says he currently has a handful of people using the app, but he’s working to scale it for wider distribution.

“I’m developing the app to make it more readily available,” he says. “Right now, I’m working with companies and investors so we can get this to thousands of people. There are thousands of people who’ve lost their voices and thousands of people who want to preserve their voices for future generations. I see this as growing very, very big. I’m trying to get ahead of that to make this available to those who need it.”

The app works best in a quiet room, the Martins say. Rooms with too much background noise make communication challenging at best.

But when the Martins are home together, the words start to flow.

“Chris used to do play-by-play with sports and so, if we’re watching a game, now Chris gets a chance to do play-by-play and do a little color,” says Mari. “That’s the part I think is so special about this.”

This article is part of Disability Inclusion, a multi-year series exploring the state of West Michigan’s growing disability community. The series is made possible through a partnership with Centers for Independent Living organizations across West Michigan.
 