
X-rated AI images of Taylor Swift spread on X, spurring calls for crackdown

WATCH: Fake pornographic images of pop superstar Taylor Swift spread across social media this week, sounding the alarm about the rise of advanced artificial intelligence (AI) technology. Mike Drolet reports – Jan 27, 2024

Sexually explicit AI-generated images of Taylor Swift circulated on X (formerly Twitter) this week, highlighting just how difficult it is to stop AI-generated deepfakes from being created and shared widely.

The fake images of the world’s most famous pop star circulated for nearly the entire day on Wednesday, racking up tens of millions of views before they were removed, according to CNN.

Like the majority of other social media platforms, X has policies that ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

Without explicitly naming Swift, X said in a statement: “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”


A report from 404 Media claimed that the images may have originated in a group on Telegram, where users share explicit AI-generated images of women often made with Microsoft Designer. The group’s users reportedly joked about how the images of Swift went viral on X.

The term “Taylor Swift AI” also trended on the platform at the time, pushing the images in front of even more users. Fans of Swift did their best to bury the images by flooding the platform with positive posts about her under related keywords, and the phrase “Protect Taylor Swift” trended as well.

And while Swifties worldwide expressed their fury and frustration at X for being slow to respond, the incident has sparked a widespread conversation about the proliferation of non-consensual, computer-generated images of real people.


“It’s always been a dark undercurrent of the internet, nonconsensual pornography of various sorts,” Oren Etzioni, a computer science professor at the University of Washington who works on deepfake detection, told the New York Times. “Now it’s a new strain of it that’s particularly noxious.”

“We are going to see a tsunami of these AI-generated explicit images. The people who generated this see this as a success,” Etzioni said.

Carrie Goldberg, a lawyer who has represented victims of deepfakes and other forms of nonconsensual sexually explicit material, told NBC News that rules about deepfakes on social media platforms are not enough and companies need to do better to stop them from being posted in the first place.

WATCH: How AI is fuelling the rise of deepfake disinformation

“Most human beings don’t have millions of fans who will go to bat for them if they’ve been victimized,” Goldberg told the outlet, referencing the support from Swift’s fans. “Even those platforms that do have deepfake policies, they’re not great at enforcing them, or especially if content has spread very quickly, it becomes the typical whack-a-mole scenario.”

FILE – Taylor Swift performs during “The Eras Tour” in Nashville, Tenn., May 5, 2023. George Walker IV / The Associated Press

“Just as technology is creating the problem, it’s also the obvious solution,” she continued.

“AI on these platforms can identify these images and remove them. If there’s a single image that’s proliferating, that image can be watermarked and identified as well. So there’s no excuse.”
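Goldberg does not point to a specific tool, but the kind of identification she describes is commonly done with perceptual hashing, where copies of a flagged image can be recognized even after small edits. The snippet below is a minimal sketch of that idea, assuming the open-source Python libraries Pillow and imagehash; the hash store, threshold and function name are hypothetical, not anything X or Microsoft has described.

```python
# Illustrative sketch only: the article does not say how any platform's
# detection actually works. Perceptual hashing gives near-identical
# fingerprints for near-duplicate images even after resizing or re-encoding.
from PIL import Image
import imagehash

# Fingerprints of images already confirmed as abusive (hypothetical store).
KNOWN_ABUSE_HASHES = [
    imagehash.hex_to_hash("d1d1d1b1b1939353"),
]

# Maximum Hamming distance at which two hashes count as the same image;
# a small tolerance absorbs re-encoding, minor crops or resizing.
MAX_DISTANCE = 6

def is_known_abusive_image(path: str) -> bool:
    """Return True if the uploaded file matches a known abusive image."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_ABUSE_HASHES)

if __name__ == "__main__":
    print(is_known_abusive_image("upload.jpg"))
```

The catch, and part of why enforcement lags, is that hash matching only stops re-uploads of an image that has already been flagged; a brand-new AI-generated image has no prior fingerprint to match against.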

But X may face additional complications when it comes to detecting fake and damaging imagery and misinformation. When Elon Musk bought the service in 2022, he made a series of decisions that have been widely criticized as allowing problematic content to flourish: he loosened the site’s content rules, gutted Twitter’s moderation team and reinstated accounts that had previously been banned for violating the rules.

Ben Decker, who runs Memetica, a digital investigations agency, told CNN that while it’s unfortunate and wrong that Swift was targeted, it could be the push needed to bring the conversation about AI deepfakes to the forefront.

“When you have figures like Taylor Swift who are this big [targeted], maybe this is what prompts action from legislators and tech companies because they can’t afford to have America’s Sweetheart be on a public campaign against them,” he said.

“I would argue they need to make her feel better because she does carry probably more clout than almost anyone else on the internet.”

And it’s not just the ultra-famous being targeted by this insidious practice; plenty of everyday people have been the subjects of deepfakes, sometimes as targets of “revenge porn,” in which someone creates and shares explicit images of them without their consent.

In December, Canada’s cybersecurity watchdog warned that voters should be on the lookout for AI-generated images and video that would “very likely” be used to try to undermine Canadians’ faith in democracy in upcoming elections.


In its new report, the Communications Security Establishment (CSE) said political deepfakes “will almost certainly become more difficult to detect, making it harder for Canadians to trust online information about politicians or elections.”

“Despite the potential creative benefits of generative AI, its ability to pollute the information ecosystem with disinformation threatens democratic processes worldwide,” the agency wrote.

“So to be clear, we assess the cyber threat activity is more likely to happen during Canada’s next federal election than it was in the past,” CSE chief Caroline Xavier said.

With files from Global News’ Nathaniel Dove
