Google’s AI assistant must identify itself as a robot during phone calls: report

WATCH: Google is about to introduce its new virtual assistant called Duplex, which uses artificial intelligence. It can make phone calls for you, and has a voice that sounds real, but it's not. Robin Gill looks at the security risks that come with it – May 9, 2018

Google’s AI assistant will identify itself as a robot when making phone calls, the company has confirmed, after the announcement of its new machine-learning features sparked harsh backlash.

Alphabet Inc’s Google showed off an updated virtual assistant Tuesday that can make calls to restaurants, hair salons and other businesses to check hours and make reservations, holding conversations on a user’s behalf.


Google CEO Sundar Pichai drew cheers from crowds on Tuesday as he demonstrated the new technology, called Google Duplex, during the company’s annual conference for software developers.


The assistant added pauses, “ums” and “mmm-hmms” to its speech in order to sound more human as it spoke with real employees at a hair salon and a restaurant.

The announcement was made at Google I/O, an annual event held since 2008 to share new tools and strategies with creators of products that work with Google software and hardware. It shows how Google is responding to rising competition from big tech companies over virtual assistants, shopping and devices.

However, demonstrations of the feature, which is not yet a finished product, left onlookers uneasy about how human the assistant’s synthetic voice sounded.


The company confirmed in a statement to tech publication The Verge that the AI bot would reveal it was a virtual assistant before carrying on a conversation with a human.


“We understand and value the discussion around Google Duplex – as we’ve said from the beginning, transparency in the technology is important,” a Google spokesperson said in the statement.

“We are designing this feature with disclosure built in and we’ll make sure the system is appropriately identified.

“What we showed at I/O was an early technology demo and we look forward to incorporating feedback as we develop this into a product.”

Google said Tuesday in a post on its official blog that it was designing Google Duplex to “sound natural” but that “transparency is a key part of that.” There was no initial mention of disclosure in the blog post or in Pichai’s presentation.

“The Google Duplex technology is built to sound natural, to make the conversation experience comfortable.

“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that.

“We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months,” read the blog post.


Several experts have expressed concern with the new feature, calling the technology “unethical” and “deceitful.”

Matthew Fenech, who researches the policy implications of AI for the London-based organization Future Advocacy, called the new features “very impressive,” but added that “it can clearly lead to more sinister uses of this type of technology.”


“The ability to pick up on nuance, the human uses of additional small phrases – these sorts of cues are very human – and clearly the person on the other end didn’t know,” Fenech added.


Fenech went on to explain how nefarious uses of similar chatbots could come about, such as spamming businesses, scamming seniors or making malicious calls using political voices.

“You can have potentially very destabilizing situations where people are reported as saying something they never said,” he said.

-With files from Reuters and the Associated Press. 
