The digital David and Goliath battle over online hate

WATCH: Online conspiracies and misinformation are driving mistrust, division and even violence – much of it is seeded on social media platforms. And big tech companies are cashing in on the views and likes, driven by algorithms designed not to inform but to retain users. For The New Reality, Jeff Semple profiles Imran Ahmed and the Center for Countering Digital Hate, a small group at the heart of the fight for reform and accountability – Apr 9, 2022

“Britain first!”

The rallying cry of far-right Brexit supporters was the last thing British parliamentarian Jo Cox heard before she was fatally shot and stabbed on June 16, 2016.

The rhetoric on the streets and online during the contentious Brexit debate was often heated – at times, outright vicious.

In the days before Cox was murdered, her killer had been swimming in toxic torrents of online misinformation and hate aimed at immigrants and their supporters on the “remain” side. In the dark corners of social media, the pro-immigration MP had been branded an enemy.

GLASGOW, SCOTLAND – JUNE 17, 2016: Candles surround a photo of Labour MP Jo Cox. (Photo by Jeff J Mitchell/Getty Images).

The assassination of Jo Cox was a tragic event for the democratic world.

And for Imran Ahmed, it was a moment of reckoning – one that would change his life.

Ahmed was working for the British opposition Labour Party at the time, and Cox, a Labour MP, was a close colleague. He was one of the first to learn about the killing. And he immediately recognized her murder as clear evidence of social media’s power to inspire real-world violence.

“She’s a mum of two kids, and her life was taken by someone that believed the most outrageous misinformation and conspiracy theories. And in the twisted logic of the conspiracist, taking her life will save more lives.”

It wasn’t the first time social media had been used to whip up hate that spilled into real-world violence.

SHAH PORI ISLAND, BANGLADESH – SEPTEMBER 27, 2017: Rohingya refugees flee violence in Myanmar’s Rakhine State. (Photo by Zakir Hossain Chowdhury/Anadolu Agency/Getty Images).

When the internet became widely available in Myanmar in 2013, the country’s military began using fake accounts to spread false accusations of rape, murder and terrorist plots against the country’s Muslim minority, the Rohingya. Facebook later admitted it had failed to adequately police its platform.

That failure played a key role in spreading hate speech that fuelled a campaign of rape and murder by state forces against the Rohingya through 2017 – violence the United States has labelled a genocide.

Misinformation and hateful rhetoric on social media have become standard weapons in the propagandist’s tool kit, wielded by state and non-state actors alike. Ahmed has been watching their use ramp up for years.

“We’ve got this perfect storm of people who are being radicalized at pace,” he says.

After Cox’s murder, Ahmed saw how big the problem would become. He left his job in Labour Party politics and founded the Center for Countering Digital Hate (CCDH).

At first, he says, people scoffed when he tried to explain how dangerous platforms like Facebook could be.

“Six years ago when I was telling people, ‘Look, this is being inculcated on Facebook,’ people laughed. People said, ‘Facebook, you mean where I see what my grandkids are doing?’”

They aren’t laughing anymore.

In the last two years alone, misinformation has helped inspire the Jan. 6, 2021, attack on the U.S. Capitol, widespread intimidation of doctors over masks and other public health measures, and the February 2022 occupation of downtown Ottawa by the so-called “Freedom Convoy.”

Ahmed says these are but a few examples that prove how powerful social media has become.

“If you’re ignoring what the impact of social media is on our politics, on the way that we live our lives, you are missing the main place where people now form relationships, where they share information, where they set the social mores.”

Ahmed recently moved to Washington, D.C. Split between the U.S. capital and London, he and his small staff work full-time researching a wide range of disinformation on social media. The latest flashpoint is Russia’s invasion of Ukraine.

“Putin has invaded a sovereign country, and the first weapons he used in that war against Ukraine were not missiles or tanks. It was disinformation and lies spread through propaganda on social media,” Ahmed says.

Russian President Vladimir Putin in Moscow on March 18, 2022. (Photo by Mikhail Klimentyev/Sputnik/AFP via Getty Images).

It started with Russian President Vladimir Putin’s claim that Ukraine – a nation led by a Jewish president – was a hotbed of Nazism, and that Russia needed to undertake a “peacekeeping mission” to “de-Nazify” the country.

But when that didn’t stick, Kremlin disinformation peddlers went to the archives. One of the most successful Russian misinformation campaigns is a long-running conspiracy theory claiming the U.S. secretly runs labs abroad to create bioweapons. In truth, the U.S., along with international partners, openly funds research labs in countries around the world to detect and contain outbreaks of disease. A version of the story first surfaced in 2015 and resurfaced last year, this time accusing the U.S. of using the labs to release the virus that caused the COVID-19 pandemic.

After Russia’s invasion of Ukraine, the story resurfaced and went viral again. This time, the claim was that Putin had invaded Ukraine to bomb the labs before they released another deadly virus. From social media, the conspiracy gained traction on popular right-wing media in the U.S., and was amplified by Russian and Chinese government officials.

Two days into the war, Ahmed’s organization, CCDH, released a study of one year’s worth of English-language articles from Russian state-owned media on Facebook. Despite Facebook’s promise in 2019 to label posts from Russian state media, researchers found that of the 1,300 most popular posts, 91 per cent carried no warning.

When Global News asked Facebook’s parent company Meta for a response, a spokesperson replied by email, calling the accusations “wildly inaccurate.”

They went on to say CCDH’s study was “designed to mislead people about the scale of state-controlled media on Facebook. In fact, 70% of the posts had 10 or fewer interactions, and the 500,000 interactions overall represent just 0.07% of the over 700 million interactions on English public content about Ukraine or Russia from Pages and public groups over the same time period.”

Despite Meta’s denials, it was only a few days after Ahmed’s team released its study that Facebook appeared to begin labelling all posts from Russian state media.

“It just shows you how defensive these people are. They’re always denying, deflecting, delaying taking action,” says Ahmed. “You know, I’m really pleased that they’ve taken the action, but it should have happened six years ago when Russian propaganda was used to try to steal an American election.”

Facebook has long been accused of failing to stop bad actors. But according to whistleblower Frances Haugen, it’s much worse than that.

WASHINGTON, D.C. – OCT. 5, 2021: Facebook whistleblower Frances Haugen testifies before the U.S. Senate. (Photo by Matt McClain/Getty Images).

In damaging testimony to the U.S. Senate last October, the former Facebook product manager claimed the company knew the platform was being used to spread and amplify extremism, and kept it a secret.

“The documents I have provided to Congress prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems, and its role in spreading divisive and extreme messages,” Haugen testified.

Ahmed says his researchers reached the same conclusion, finding the social media giant’s algorithms drive users from one conspiracy theory to another.

“If you like anti-vax, it will feed you QAnon and antisemitism and vice versa, if you like antisemitism, it’ll drive you COVID disinformation, vaccine disinformation. So you see this deepening and broadening of extremisms,” he told Global News.

And, he says, the convergence of conspiracies often starts in a benign way.

That was Lauren Manning’s experience. She fell into an extremist group on Facebook after she lost her father to cancer.

“I was close to him, so I was looking for, like, someone or something to attach myself to.”

Lauren remembers sitting in her mother’s suburban Toronto home one day when she was 17, browsing the Facebook page of her favourite heavy metal band, when a man started messaging her.

“He was fairly careful around me. He wasn’t dropping racial slurs right away,” Lauren says.

But before long, the man began grooming and recruiting her into a white supremacist group. She got white power tattoos, went to rallies and got into fights. It got so intense that her mother, Jeanette, gave her an ultimatum: leave the hate group or move out on her own.

“I was heartbroken when she chose them,” Jeanette says. “And she left and I just sat and I cried.”

Lauren Manning burns hate literature after leaving a white power group. (Photo courtesy of Lauren Manning).

Lauren, now 31, stayed with the hate group for five years before she started to question its claims. She finally left after a friend of hers was stabbed in a random attack and the group lied, claiming he had been targeted because he was white. When she announced her plans to leave, she was attacked and spent a week in the hospital. Clearing her head after being fed so much hate and propaganda has been a struggle.

“I had wrapped my entire identity around this movement. So the biggest question was, OK, how am I going to find myself again?”

Today, Lauren uses meditation and plays guitar to find peace. She and her mother wrote a book together, and counsel others who find themselves or loved ones trapped in extremist groups.

They say that in the last few years, as people spend more and more time online, social media has only grown more addictive, making it much easier for hate groups and conspiracists to draw people in.

“I don’t think the governments have realized how epidemic it is,” says Jeanette.

As bandwidth expands and virtual reality offers an even more immersive online experience – the so-called metaverse – Ahmed says online communities are only getting more dangerous.

“It’s a dystopia in which the worst people can act out their most sick sort of impulses with complete impunity.”

CCDH researcher Callum Hood has spent a lot of time in third-party virtual reality games, connecting to the virtual world through an Oculus headset sold by Facebook’s parent company, Meta. What he experienced was disturbing.

“There was immediately someone who was in the form of an avatar that was explicitly sexual, with children around in the same chat room,” Hood says. “That was five minutes after logging on.”

In 12 hours of footage recorded during his forays into the virtual world, Hood encountered dozens of examples of abuse, from users spreading conspiracy theories about the Holocaust, slavery and other historical events to users hurling racial slurs and sexually harassing others. Worse, these incidents often involved both adults and children.

Yet more than two months after CCDH reported these incidents to Facebook, they had not been addressed.

“You press the emergency button and nothing happens,” says Ahmed. “And you know, if you’re going to build with safety at the heart of your experience, they’re going to have to put in a lot of work to get this as somewhere where parents would feel safe leaving their kids. Because right now, you don’t want Mark Zuckerberg babysitting your kids.”

Facebook, now Meta, built its early success on the motto “Move fast and break things,” rolling products out quickly and refining them on the fly in response to feedback from the public and governments. In recent years, the company has distanced itself from that motto – and that image.

Facebook announced its name change to Meta on Oct. 28, 2021. (Photo illustration by Chesnot/Getty Images)

Meta’s vice-president of global affairs, former U.K. deputy prime minister Nick Clegg, said in September 2021 that “the metaverse is going to be very different. This is going to be a much, much more gradual, deliberate and therefore a much more thoughtful process of building technology.”

In its response to questions from Global News, Meta emphasized its third-party fact-checking program, with independent fact-checkers covering 80 countries and 60 languages, and said it has “taken significant steps to fight the spread of misinformation using a three-part strategy – remove content that violates our Community Standards, reduce distribution of stories marked as false, and inform people so they can decide what to read, trust and share.”

The company says it spent $5 billion last year and employed 40,000 people worldwide to keep users safe while still allowing them to “express themselves openly.”

Meta also says it is rolling out virtual reality protections, including headset parental controls this month and automatic blocking of age-inappropriate apps for minors in May, among other new safety features.

But Ahmed says the big tech companies have proven they can’t be taken at their word.

“If someone breaks your rules again and again and again by spreading misinformation, which is against your rules, then take action against them,” he says. “If you don’t take action, what you’re saying is the rules don’t matter: ‘There are no rules in our platform as long as we’re making money.’”

Ahmed is no longer waiting for the big tech companies to self-regulate. He’s been taking CCDH’s research into digital hate and misinformation to meetings with legislators in the U.K. and United States to encourage them to adopt new laws.

And the tide is starting to turn.

The European Union is close to enacting the most sweeping reforms in the world, with two massive pieces of legislation called the Digital Services Act and the Digital Markets Act. According to the European Parliament, “the Digital Services Act significantly improves the mechanisms for the removal of illegal content and for the effective protection of users’ fundamental rights online” and “creates a stronger public oversight of online platforms.” The EU’s Digital Markets Act will ensure large platforms, known as gatekeepers, do not use their power to gain an unfair advantage over consumers or new competitors trying to enter the market.

EU member states have provisionally approved the Digital Markets Act, which will come into force after the language has been finalized. The Digital Services Act was given preliminary approval by the European Parliament in January, and will now be subject to months of negotiation and debate.

Europe’s new laws are expected to influence legislation around the world. The U.S. Congress has been holding hearings on online safety, focusing in particular on changes to Section 230 of the Communications Decency Act, which shields tech platforms from lawsuits over harmful content created by users. So far, however, no comprehensive legislation has passed.

And Canada’s Liberal government promised to introduce new legislation against online hate within 100 days of its re-election in September 2021, but that deadline passed without a bill being tabled in Parliament.

Ahmed worries that changes aren’t coming quickly enough.

“The last few years have been an experiment in ‘What if we privatize political discourse and the way that we communicate with each other and we hand it over to a company who seeks to maximize their profits by picking out the stuff that makes us most angry and engaged?’ Our ability to peacefully negotiate our differences is the beating heart of democracy. And I think that we’re getting worse, not better. The enormous greed – the rapaciousness that underpins the way that social media companies have sought to change our societies for their economic benefit – that could destroy democracy.”
