
Scarlett Johansson ‘shocked, angered’ over ‘eerily similar’ ChatGPT voice

WATCH: Actor Scarlett Johansson says she is "shocked, angered and in disbelief" over the newest version of OpenAI's ChatGPT, which includes an assistant named Sky, whose voice sounds eerily similar to Johansson's. Mike Drolet explains – May 21, 2024

Though she may have once voiced a fictional operating system in the movie Her, Scarlett Johansson said she has no interest in speaking for real-life artificial intelligence (AI).

On Monday, Johansson said a newly released ChatGPT voice, named “Sky,” sounded “eerily similar” to her. The AI voice “shocked” and “angered” Johansson, 39, who revealed that nine months ago she declined an offer from OpenAI CEO Sam Altman to work on their new voice chatbot.

The voice, alongside four others, was created for the current ChatGPT 4.0 system and was released last week.

The company announced on Monday it would pause the use of Sky after it was widely compared to Johansson. OpenAI did not specify why it opted to silence Sky.

“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,” Johansson wrote in a statement, which was shared by NBC News.

“He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI,” she continued. “He said he felt that my voice would be comforting to people.”

After consideration, Johansson said she declined to work with OpenAI for “personal reasons.”

“Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me,” she wrote. “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.”

On Sunday, OpenAI denied any intentional likeness between ChatGPT’s Sky and Johansson. Rather, the company said Sky and the four other voices (Breeze, Cove, Ember and Juniper) were created using voice actors who received “top-of-market” pay rates.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI wrote in a release. “To protect their privacy, we cannot share the names of our voice talents.”

In a statement to NBC, Altman again denied any intentional similarities to Johansson.

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” he said. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

In Johansson’s statement, she pointed to a May 13 post to X — the same day OpenAI demoed ChatGPT 4.0 and its voice chat feature — from Altman that simply read, “her,” seemingly a comparison to Johansson’s role in the film of the same name.

“Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ – a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human,” Johansson said.

Johansson said Altman asked her agent to reconsider the offer to work with OpenAI only two days before the May 13 demo.

After the launch, the actor said she was “forced” to hire legal representatives who sent letters to Altman and OpenAI asking for “the exact process by which they created the ‘Sky’ voice.”

“Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice,” Johansson wrote.

Johansson said she and her lawyers are looking forward to “transparency” and will work to ensure her individual rights are protected.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” she concluded.

The launch of Sky, and its similarities to Johansson, drew widespread attention and mockery online. Even Elon Musk, who was once a board member of OpenAI but has since developed bad blood with Altman, poked fun at the Sky voice. The billionaire, who owns his own AI company, xAI, compared the incident to an episode of the popular sci-fi show Black Mirror.

Johansson is far from the only celebrity to have concerns about AI and deepfakes, which are seemingly realistic, albeit fake, images, video or audio created by AI algorithms.

At the start of 2024, sexually explicit AI-generated images of Taylor Swift began circulating on X. The fake photos were shared widely and racked up tens of millions of views before they were removed.

Beyond the world of entertainment, politicians have also been frequent targets of AI manipulation. In March, a deepfake of Prime Minister Justin Trudeau was posted to YouTube promoting a financial “robot trader.” The video was removed from the platform, with Google (the owner of YouTube) calling it a scam.

In December 2023, Canada’s cybersecurity watchdog warned that voters should be on the lookout for AI-generated images and video that would “very likely” be used to try to undermine Canadians’ faith in democracy in upcoming elections.

Outside of Canada, Italian Prime Minister Giorgia Meloni in March launched a lawsuit against two men who allegedly made pornographic deepfakes of her.

Even regular people have been targeted by deepfakes, which are often created as “revenge porn” or used in financial scams.

In February, many of the leading figures in AI development, including the notable Canadian computer scientist Yoshua Bengio, signed an open letter calling for more regulation around the creation of deepfakes.

“Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed,” the group said in the letter.

On Thursday, Jan Leike, a key safety researcher at OpenAI, left his job at the company, citing long-standing disagreements with leadership and concern about the company’s priorities.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote in a thread posted to X. “But over the past years, safety culture and processes have taken a backseat to shiny products.”
