‘We are not prepared’: Russia uses artificial intelligence, deep fakes in propaganda warfare

WATCH: As Russia's invasion of Ukraine continues, another war is being waged online that risks deluding an unprepared audience. – Mar 29, 2022

Warning: some of the content linked within this article may be disturbing to viewers. 

Russia’s war on Ukraine started over a month ago, and the prospect of a ceasefire remains uncertain.

But those closely watching the Kremlin propaganda machine say there is another battle being waged online — a “war of information” that will last far beyond any potential ceasefire.

“This is not new,” said Oleksandr Pankieiev, research coordinator at the Canadian Institute on Ukrainian Studies at the University of Alberta.

“Russia has been working to condition its audience for war with Ukraine and NATO for eight years.”

From using man’s best friend to court sympathy, to reportedly staging actors to frame Ukraine as the assailant, to re-circulating old media as ‘Ukrainian propaganda,’ the Kremlin narrative spreading online since the invasion of Ukraine has been “aggressive,” says Pankieiev, and intent on making us “doubt what we see.”


But there are other disinformation tactics at play that threaten to blur the line between fact and fiction.

A senior research fellow at Harvard University told Global News that Russia has taken a deep dive into artificial intelligence.

Aleksandra Przegalinska says the Kremlin is using deep fakes — fabricated media made by AI. A form of machine learning called “deep learning” can put together very realistic-looking pictures, audio and, in this case, videos that are often intended to deceive.

Deep fakes are usually highly deceptive impersonations of real people. But the technology can also be used to create a completely synthetic individual using multiple faces.

Przegalinska says they’re a Russian specialty. The Kremlin has already circulated several deep fakes on Facebook and Reddit – one of a supposed Ukrainian teacher, another of a synthetic Ukrainian influencer, hailing Putin as a savior.

Some platforms have managed to take them down – but Przegalinska and Pankieiev say such disinformation continues to run amok on other channels like TikTok and the state-controlled social media app VKontakte.

“Russia has experience with deep fakes, and they really know how to use them,” said Przegalinska.


In early March, Ukrainian intelligence warned that a deep fake of Ukrainian President Volodymyr Zelenskyy was being prepared. Days later, the website of TV network Ukrayina 24, as well as its live broadcast, was hacked. A deep fake of Zelenskyy appeared – calling for Ukrainians to surrender.

While some have called the video quality laughable and easily identifiable, others warn the next deep fake may not be.

Is this technology new?

Deep fakes have been around since 2017, when a Reddit user reportedly created the first. The technology baffled the online community and raised alarm bells about its disastrous potential.

Two years later, a cybersecurity firm found that 96 per cent of deep fakes being circulated online were pornography, all of it depicting only women.

Those familiar with artificial intelligence warned it was just a matter of time before the technology would be used to threaten international security.

And it appears that time has already come.

“It is so easy (to fall for this). It’s about the easiest thing in the world,” Mike Gualtieri, VP and principal analyst at AI research firm Forrester, told Global over Zoom.


Gualtieri says the rise of the internet had already opened the door for misinformation and disinformation to spread rapidly. Add AI to the mix, and the advantage held by those engaging in disinformation becomes astounding.

“When you add AI to it, it lets you test the effectiveness of these messages in real time.”

Gualtieri warns of generative adversarial networks (GANs), a class of machine-learning models that can be trained to produce realistic-looking data. Basically, a computer can generate disinformation on its own (think pictures, videos, even research papers).

Automated systems can then disseminate that GAN-generated disinformation like rapid fire, while at the same time tracking its performance online by counting clicks and engagement.
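To make the GAN idea concrete, here is a toy sketch in plain NumPy — nothing like the image-generating systems the article describes, and every name and number is illustrative. A tiny “generator” (an affine map of random noise) and a tiny “discriminator” (logistic regression) are trained against each other until the generator’s fake samples mimic the real data’s average:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples from N(4, 1.25)
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

a, b = 1.0, 0.0    # generator G(z) = a*z + b, fed standard-normal noise z
w, c = 0.1, 0.0    # discriminator D(x) = sigmoid(w*x + c)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.01, 64
for step in range(5000):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # gradients of binary cross-entropy w.r.t. w and c
    grad_w = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator step: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_grad = (d_fake - 1) * w          # d(-log D(fake)) / d(x_fake)
    a -= lr * np.mean(g_grad * z)
    b -= lr * np.mean(g_grad)

samples = a * rng.normal(size=10000) + b
print("fake-sample mean:", round(float(samples.mean()), 1))  # drifts toward 4
```

Real GANs replace both one-parameter maps with deep neural networks, but the adversarial loop — generator versus detector, each improving against the other — is the same, which is also why deep fakes keep getting harder to spot.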

“It’s incredibly dangerous,” said Gualtieri. “When you have technology that can automate persuasion in the way that AI can, you can get public opinion to form in a very scary way.

“We are not prepared, and people in power and social media companies have every incentive not to prepare us. Because if we’re prepared, it doesn’t work.”


Where is Russia going with this? 

The kind of agenda Russia is trying to push depends on the target audience.

Right now, Pankieiev says the Kremlin is focused on reframing the narrative in the West, and within its own borders.

In the West, Russia is trying to justify the war on Ukraine as an unavoidable “special military operation.”

Putin is also trying to find hidden allies engaging with his movement, while warning anyone inside or outside Russia who aligns with Ukraine that “they will be the next casualty.”

“They’re starting the witch hunt on ‘traitors’,” said Pankieiev.

The good news? Przegalinska and Pankieiev say Ukrainians have been advancing in the war of information by flooding the internet with real-life accounts of what’s happening on the ground — something Russia did not expect.

The public is also getting suspicious, according to Przegalinska, as some are quickly spotting fabricated videos or TikTokkers reading from a pre-written script.

Alongside Gualtieri, she stresses the need for the public to practice spotting fabricated media by using online tools.


MIT has some tips on detecting deep fakes, while sites like Botometer can help discern whether an online post came from a bot account. Users can also run a reverse image search on Google to look for an old photo or video that may be re-circulating under a fake headline.
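The matching behind a reverse image search can be sketched with a toy “average hash” in pure Python — a deliberately simplified stand-in for the far more robust fingerprints services like Google Images actually use, with made-up pixel values for illustration. Shrink an image to a few grayscale pixels, mark each pixel as above or below the average, and compare fingerprints by how many bits differ:

```python
def average_hash(pixels):
    """pixels: 2-D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits; small distance suggests the same image."""
    return sum(x != y for x, y in zip(h1, h2))

# An "original" frame, a re-circulated copy that is slightly brightened,
# and an unrelated image (all tiny 3x3 grids, values invented for the demo)
original = [[10, 200, 30], [220, 40, 210], [15, 190, 25]]
recirculated = [[v + 5 for v in row] for row in original]
unrelated = [[120, 125, 118], [122, 119, 121], [124, 117, 123]]

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(recirculated)))  # 0: brightness shift survives
print(hamming(h_orig, average_hash(unrelated)))     # 4: clearly a different image
```

Because every pixel shifts with the overall average, uniform brightness changes leave the fingerprint untouched — which is how an old photo re-posted under a fake headline can still be matched to its original.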

Such tools may give the public an upper hand on propaganda, says Przegalinska. However, not using them leaves the door open for Russia to delude the public.

“Even if we have a ceasefire — the propaganda war, the misinformation war — this will still continue … Once the first wave of interest in the conflict wanes, Russia may strike again,” she said.

The long-term effect? A “huge radicalization” in Russia in the coming years, says Pankieiev. Not to mention lasting trans-border tensions that could harm Ukrainians seeking asylum.

How the Ukraine-Russia crisis is translating to Putin’s propaganda machine
