
Amazon’s Alexa could soon mimic the voice of your dead loved ones

Amazon's Alexa system is getting an update, and the company says that soon your virtual assistant will be able to mimic real voices. Elaine Thompson/AP Photo

Your Amazon Alexa may soon be able to replicate real, human voices — even those of your deceased loved ones.


The company announced the new feature at its annual Re:Mars conference, which focuses on innovation in artificial intelligence (AI). The update to Alexa’s system would allow the virtual assistant to mimic the voice of any person based on less than a minute of recorded audio.

To demonstrate, Amazon played a video at Wednesday’s event in which a young boy asked, “Alexa, can Grandma finish reading me The Wizard of Oz?”

Alexa then acknowledged the request and switched to a voice mimicking the child’s grandmother. The voice assistant continued to read the book in that same voice.

Amazon began working on this feature as a way to put more “human attributes of empathy and affect” into the Alexa system and build greater trust among its users, according to Rohit Prasad, senior vice president and head scientist for Alexa.


“These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love,” Prasad said. “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”

Prasad says this feature differs from other generated voices the company has developed in the past because it had to be capable of creating a “high-quality voice” without hours of studio recording.

On the current Alexa system, users can switch to celebrity voices like Samuel L. Jackson and Melissa McCarthy as their voice assistant — created through a mixture of studio recordings and AI.

AI re-creations of people’s voices have steadily increased in recent years and are sometimes used in film and TV.


Three lines of Roadrunner, a documentary about Anthony Bourdain, were spoken using AI mimicking the late chef’s voice, sparking controversy. It wasn’t clear in the film that Bourdain had not actually said the lines. His estate had not approved the use of his voice in that way.

More recently, the film Top Gun: Maverick included AI-generated speech mimicking the voice of Val Kilmer, who lost his voice to throat cancer.

But the idea that an AI-generated voice can be created to sound accurately like a specific human from less than a minute of recording raises questions about privacy and consent that Amazon has left unanswered. Some people, understandably, feel uneasy about the proposed technology.


Michael Inouye of ABI Research told CNN that Amazon will have to win over its users with this technology, though wider uses of AI are here to stay.

“We’ll definitely see more of these types of experiments and trials — and at least until we get a higher comfort level or these things become more mainstream, there will still be a wide range of responses,” he said.

“For some, they will view this as creepy or outright terrible, but for others it could be viewed in a more profound way such as the example given by allowing a child to hear their grandparent’s voice, perhaps for the first time and in a way that isn’t a strict recording from the past,” Inouye added.


Amazon’s push comes as competitor Microsoft said earlier this week it was scaling back its synthetic voice offerings and setting stricter guidelines to “ensure the active participation of the speaker” whose voice is re-created. Microsoft said Tuesday it is limiting which customers get to use the service — while also continuing to highlight acceptable uses such as an interactive Bugs Bunny character at AT&T stores.

“This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners,” said a blog post from Natasha Crampton, who heads Microsoft’s AI ethics division.

— With files from The Associated Press
