What are deepfakes? Misinformation videos becoming more ‘powerful, precise’

Information has never before been so readily available — the answer to an unlimited number of questions lies between our fingers and a keyboard.

But experts warn we can no longer believe our eyes.

“Disinformation takes many forms — sometimes it’s a manipulated photograph, sometimes it’s a rumour that people might share with you face to face,” Claire Wardle, a misinformation researcher at First Draft, told Global News.

“There are many different elements to it, but fundamentally, disinformation is false information that people are sharing to cause harm.”

While the concept isn’t new, the rapid pace at which it spreads has dangerous implications.

“We’ve never had a means by which information can travel at such speed and we’ve never had a means by which anybody can create fabricated content really, really easily,” Wardle explained.

It’s a major concern for Canadians as a federal election looms.

WATCH: Misinformation spreads through India’s election campaign

The Communications Security Establishment (CSE), Canada’s cybersecurity agency, warned in the 2019 update to its Cyber Threats to Canada’s Democracy report that deepfakes are a threat to political parties and candidates.

“Improvements in artificial intelligence (AI) are likely to enable interference activity to become increasingly powerful, precise and cost-effective,” the report read. “Evolving technology underpinned by AI, such as deep fakes [sic], will almost certainly allow threat actors to become more agile and effective.”

In the U.S., the Pentagon’s Defense Advanced Research Projects Agency (DARPA) has spent millions of dollars on a media forensics program, Wired reports.

What are deepfakes?

The name combines “deep learning” and “fake.” Deep learning is an AI technique in which artificial neural networks learn patterns from large amounts of data; given enough footage of a person, the networks can automatically generate video of something that never happened.

Deepfakes specifically refer to videos that place people in fabricated situations or make them appear to say things they never did.
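
To give a rough sense of the mechanics, many early face-swap tools trained a single shared encoder alongside one decoder per person. Below is a minimal sketch of that structure in Python using PyTorch; the layer sizes, names and training step are illustrative assumptions, not the design of any specific tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# early deepfake face swaps. All dimensions and names are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code capturing pose
    and expression (shared across identities)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder is trained
    per identity, so it learns that person's appearance."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training: each person's faces are reconstructed through their own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
recon_a = decoder_a(encoder(faces_a))
loss = nn.functional.mse_loss(recon_a, faces_a)  # one illustrative step

# The swap: person A's frames pushed through B's decoder render B's face
# with A's pose and expression.
swapped_frames = decoder_b(encoder(faces_a))
```

In practice, such systems train on thousands of frames and layer on further refinements, but the structural trick is the same.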

While the technology appears to have first been used to transpose the faces of celebrities onto pornography, its reach has extended — and experts say it can be used for all sorts of things.

That can take the form of politicians saying things they never said.

One example was a video of former U.S. president Barack Obama that appeared authentic but was, in fact, computer-generated, with comedian Jordan Peele providing the voice. It was created to inform people about deepfake technology.

“People can duplicate me speaking and saying anything. And it sounds like me and it looks like I’m saying it — and it’s a complete fabrication,” Obama said last week in Ottawa.

Other examples include a recent video in which a convincing rendition of Facebook CEO Mark Zuckerberg tells the viewer his success comes from a secret organization. It was posted with the hashtag “#Deepfake” as part of an art project.

WATCH: A “deepfake” created by artists Bill Posters and Daniel Howe showing Facebook CEO Mark Zuckerberg

This gets ethically complicated when the intended purpose of the videos isn’t as clear cut.

A Belgian political party, the Socialistische Partij Anders (sp.a), created a deepfake of U.S. President Donald Trump talking about the Paris Climate Accord in 2018, sparking backlash against Trump online.

It was created to draw attention to climate change, and at the end of the video, the fake Trump admits the footage isn’t real. While the video doesn’t hold up to close scrutiny, many people online commented as if it were real.

A new study showed how machine learning can animate a single still image, such as the Mona Lisa, underscoring how easy these deepfakes are becoming to create.

But perhaps the sophistication and accuracy aren’t as necessary as we think — a fake video doesn’t have to be complicated to go viral.

While deepfakes are created with sophisticated algorithms, simple video editing can alter or mask the truth of an event — and experts say these “shallow fakes” can be just as dangerous.

What are shallow fakes?

Videos can be edited from a smartphone, and even without complex faking technology, the results can be damaging.

For example, a recent video of U.S. House of Representatives Speaker Nancy Pelosi criticizing Trump was altered to make her appear to slur her words, then shared on Facebook.

The video, posted by a group called Politics WatchDog, has been viewed more than two million times.

WATCH: Millions view doctored video shared by Trump supporters

Experts said it’s likely that no complex technology was used to make the video; instead, it was simply slowed down and the pitch of the audio adjusted.
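
To illustrate how trivial that kind of edit is, the sketch below uses Python to drive the widely available ffmpeg command-line tool, slowing a clip to 75 per cent speed while keeping the voice at its natural pitch. The file names are placeholders, and this is a generic illustration of the technique experts described, not a claim about how the Pelosi video was actually produced.

```python
# Illustrative sketch: slow a video to 75% speed while preserving vocal
# pitch, using the ffmpeg command-line tool (assumed to be installed).
# "input.mp4" and "output.mp4" are placeholder file names.
import subprocess

SPEED = 0.75  # play back at 75 per cent of the original rate

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex",
    # setpts stretches the video timestamps; atempo slows the audio by the
    # same factor but preserves pitch, so speech drags without deepening.
    f"[0:v]setpts=PTS/{SPEED}[v];[0:a]atempo={SPEED}[a]",
    "-map", "[v]", "-map", "[a]",
    "output.mp4",
], check=True)
```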

Even though the video was edited, comments like “Omg is she drunk or having a stroke???” and “She’s drunk!!!!!!” appeared in response. Politics WatchDog said on its Facebook page that it “never claimed that Speaker Pelosi was drunk.”

Facebook said in a statement to Global News that it was not removing the video because it did not violate the platform’s community standards.

“We don’t have a policy that stipulates that the information you post on Facebook must be true,” the statement said, but Facebook also noted the video has been reviewed by third-party fact-checkers, who deemed it false.

It was later removed from the social media platform, but not by Facebook, a spokesperson confirmed to Global News.

Another shallow fake with real-world consequences was an altered video of an interaction between CNN reporter Jim Acosta and a White House staffer.

In the doctored video, the incident — in which the staffer attempts to take a microphone away from Acosta but he resists — is sped up to appear more aggressive, and Acosta’s spoken comment of “pardon me, ma’am” is inaudible.

White House press secretary Sarah Huckabee Sanders shared the altered video when explaining that Acosta’s “inappropriate” behaviour led to the suspension of his credentials.

“It’s very important to understand those moments where you see a spike — where something begins to go viral,” Storyful’s Padraic Ryan said. “Analyzing the origins of that moment is really important to understand the wider phenomenon.”

Fact-checking

As the spread of misinformation and disinformation becomes more and more sophisticated, experts say the verification process needs to grow as well.

“Gone are the days of, for instance, Twitter egg accounts, which were very obvious and very easy to spot,” Ryan said.
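
One concrete verification tactic is to compare a suspect clip, frame by frame, against known original footage using perceptual hashes, which tolerate ordinary re-compression but flag frames whose content has changed. The sketch below shows the idea in Python; the file names and sampling rate are illustrative assumptions, and it relies on the opencv-python, Pillow and ImageHash packages rather than any tool the experts quoted here use.

```python
# Illustrative sketch: flag visually altered frames by comparing perceptual
# hashes of a suspect clip against known original footage. Assumes both
# clips are trimmed to the same segment; file names are placeholders.
import cv2                 # pip install opencv-python
import imagehash           # pip install ImageHash
from PIL import Image      # pip install Pillow

def frame_hashes(path, step=30):
    """Return a perceptual hash for every `step`-th frame of a video."""
    hashes = []
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    cap.release()
    return hashes

original = frame_hashes("original_footage.mp4")
suspect = frame_hashes("viral_clip.mp4")

# Small Hamming distances are normal re-compression noise; large ones
# suggest the frame itself was altered. ImageHash overloads subtraction
# to return the Hamming distance between two hashes.
for i, (a, b) in enumerate(zip(original, suspect)):
    if a - b > 10:
        print(f"sampled frame {i}: hash distance {a - b}, inspect manually")
```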

Social media and other tech companies are implementing measures to crack down on false stories, but experts say that’s not enough.

“I think what we’re recognizing now is societies need gatekeepers,” Wardle said.

“They need people who can be trusted, who can help us navigate the information ecosystem, and right now, we don’t have that.”

A new poll says nearly half of Canadian respondents would support having governments censor fake news, and that figure rises to more than 60 per cent worldwide.

But there is little agreement on who should decide what constitutes fake news: among all respondents, 17 per cent believe it should be the government, but responses vary widely by country, from 37 per cent in Indonesia to a low of seven per cent in Poland.

Canadian and American respondents fell on the lower end of the scale, with 10 and 11 per cent, respectively, saying that deciding what constitutes fake news should be the purview of the government.

Sixteen per cent overall say that job should fall to individual internet users, while 12 per cent think it should go to social media companies.

—With files from Global News’ Amanda Connolly
