
Fighting extremist content online: Feds dedicate $1.9M to terrorist analytic tool

WATCH: 'You'll be fed more of that,' former Twitter tech boss explains how social media algorithms can spread extremist content – Sep 14, 2022

The federal government is giving new funding to continue the development of an automated tool for finding and flagging terrorist content online.

In a press release issued Tuesday evening, the public safety department detailed $1.9 million in funding over three years “to combat online terrorist and violent extremist content.”

“We need to confront the rise of hate and violent extremism,” Prime Minister Justin Trudeau said in a tweet on Tuesday.

“At the Christchurch Call Summit, I announced that Canada will fund a new tool that helps small and medium-size online platforms better identify and counter content related to terrorism and violent extremism.”

The tool Trudeau referred to is the Terrorist Content Analytics Platform.


Created by the United Nations’ Tech Against Terrorism initiative in 2020, the tool combs various corners of the internet for terrorist content and flags it for tech companies around the world to review — and, if they choose to do so, remove.

The creation of this tool is funded by Public Safety Canada through the Community Resilience Fund. However, despite providing that funding, the government remains at arm’s length from the work TCAP does, according to the tool’s website.

How does the Terrorist Content Analytics Platform (TCAP) work?

Typically, terrorists share their content on “smaller platforms” first, according to Adam Hadley, executive director of Tech Against Terrorism.

“Unfortunately, smaller platforms also tend to have limited capacity to handle terrorist use of their services,” he explained to Global News in an emailed statement.

“With the TCAP we are able to alert this content to smaller platforms quickly and thereby prevent the content spreading across the internet before it becomes viral.”


The TCAP starts with a team of open source intelligence (OSINT) analysts, who find out which platforms are preferred among terrorist entities. This team then identifies links to smaller terrorist-operated websites and social media platforms where their content is hosted, and uploads those links to the TCAP.

Automated scrapers also extract data from the platforms the OSINT analysts identify, uploading relevant links to the TCAP.

WATCH: Terrorism survivor aiming for a future of helping others

Once these links are uploaded to the TCAP, they are verified and attributed to the corresponding terrorist organization. If the verified links are on a platform that’s registered with the TCAP, the tech company then receives an automated alert, leaving it up to them to decide whether or not to moderate the content. The TCAP also monitors the content to track what the tech platforms decide to do.
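Based on the workflow described above, here is a minimal sketch of how such an alert pipeline might be structured. All class names, fields and logic are illustrative assumptions, not the TCAP's actual schema or code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch only; none of these names come from the real TCAP.

@dataclass
class Submission:
    url: str                # link found by OSINT analysts or automated scrapers
    suspected_entity: str   # organization the content appears to belong to
    verified: bool = False

@dataclass
class Platform:
    name: str
    registered: bool        # only platforms registered with the TCAP get alerts

def verify(sub: Submission, designated_entities: set) -> bool:
    """Content qualifies only if tied to a legally designated organization."""
    sub.verified = sub.suspected_entity in designated_entities
    return sub.verified

def alert(sub: Submission, platform: Platform):
    """Send an automated alert; the moderation decision stays with the platform."""
    if not (sub.verified and platform.registered):
        return None
    return {
        "url": sub.url,
        "entity": sub.suspected_entity,
        "platform": platform.name,
        "sent_at": datetime.now(timezone.utc).isoformat(),
        "status": "pending",  # updated later when monitoring observes the outcome
    }
```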


As a final step, the TCAP archives the content it gathers for what its website describes as “academic and human rights purposes.” While the archive is not yet available, it will eventually be opened up to researchers and academics.
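To illustrate that final step, an archive entry would need to capture a copy of the content at referral time, since the live version may later be taken down. The structure below is an assumption for illustration, not the TCAP's real format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: the real archive format is not public. The point is that
# a snapshot is stored when content is flagged, so researchers can still
# examine it after the host platform removes the live copy.

@dataclass(frozen=True)
class ArchiveRecord:
    url: str            # where the content was found
    entity: str         # designated organization it was attributed to
    snapshot: bytes     # copy of the content as it existed when flagged
    archived_at: str    # UTC timestamp of archiving

def archive(url: str, entity: str, content: bytes) -> ArchiveRecord:
    return ArchiveRecord(url, entity, content,
                         datetime.now(timezone.utc).isoformat())
```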


What has it done so far?

To date, the TCAP tool has sent out just shy of 20,000 alerts to 72 different platforms, according to its website.

The alerts have dealt with a total of 34 different terrorist entities.

In its latest transparency report, which covered the period of Dec. 2020 to Nov. 2021, Tech Against Terrorism said 94 per cent of the content its TCAP tool alerted tech platforms about was ultimately taken down.

However, takedown rates were not uniform across terrorist groups. On average, tech companies that received alerts about Islamist terrorist content took down 94 per cent of the content flagged to them.

The removal rate of far-right terrorist content following an alert, however, was just 50 per cent.

On top of that, far-right media was submitted to TCAP at a much lower rate. While 18,787 submissions were made about Islamist terrorist content — resulting in 10,959 alerts being sent — just 170 submissions were made about far-right terrorist content, resulting in 115 alerts being sent out.


Part of the reason for the much lower submission rate may be the TCAP’s stringent verification procedures. To be considered for an alert, content must be tied to a designated terrorist organization, an official classification made in Canada under the Anti-Terrorism Act.

Canada only started adding right-wing extremist groups to its list of outlawed terrorist organizations in 2019, when it added the names of Blood & Honour and Combat 18.

“We closely follow the designation of further violent far-right organizations, and will include any new designated organizations in the TCAP as soon as they are legally designated by the above democratic institutions and nation states,” Hadley said.

“We would argue that the major democracies need to do much more to ensure that more is done to designate far-right violent extremist organizations, groups, and individuals.”

Debate over the efficacy of automated flagging

Part of the goal of the latest round of funding is to help the TCAP enhance its efforts to archive the content it flags, according to Hadley.


He said the funding from Canada will, in part, “ensure that content referrals are auditable and accountable by providing access to the original content after a referral for takedown.”

Auditing content that is flagged for takedown is one of the key steps in this process, according to J.M. Berger, a writer and researcher focused on extremism who has authored four critically acclaimed books.

“There’s a desperate need for some kind of organized effort to archive extremist content that is vulnerable to takedowns, which is one of TCAP’s functions,” he told Global News.

“This material is important not only for prosecutions and research, but it’s a necessary component in any effort to audit how tech companies approach takedowns.”

As things stand now, the current takedown regime is “pretty opaque,” Berger said.

“The archive can enable some first steps toward accountability, but there’s a lot more that needs to be done.”

WATCH: Canada adds 13 entities, including Proud Boys, to terror list

However, not everyone is convinced that automation is the best route for managing online terrorist content, including Stephanie Carvin, a former CSIS analyst who now teaches at Carleton University.


“I’m not necessarily against it,” Carvin said of the TCAP tool.

However, she said tech companies should take greater initiative in dealing with far-right content on their platforms, rather than relying on automated tools.

“The fact is you have problems with the far right that are going to have to be addressed with the (tech) companies themselves.”

Some major tech companies have taken recent steps to crack down on material from white supremacists and far-right militias.

According to Reuters, in 2021 a number of U.S. tech companies, including Twitter, Alphabet, Microsoft and Meta (then still known as Facebook), began contributing to the Global Internet Forum to Counter Terrorism’s (GIFCT) database.

This allows them to share their data to better identify and remove extremist content across different platforms.
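The GIFCT database works by exchanging “hashes,” digital fingerprints of known extremist images and videos, rather than the content itself. In practice it relies on perceptual hashes that tolerate re-encoding and small edits; the sketch below uses a plain cryptographic hash only to show the sharing principle, and every name in it is hypothetical:

```python
import hashlib

# Sketch of cross-platform hash sharing. SHA-256 is used here only to
# illustrate the idea: platforms exchange fingerprints, never the content.

shared_hashes: set = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A platform that identifies extremist content shares its fingerprint."""
    shared_hashes.add(fingerprint(content))

def screen_upload(content: bytes) -> bool:
    """Other platforms check new uploads against the shared fingerprints."""
    return fingerprint(content) in shared_hashes
```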

Still, despite efforts from tech companies and the TCAP, some far-right content risks falling through the cracks, given how quickly far-right symbols and memes change relative to those of Islamist terrorist groups like Daesh.

“When you had groups like Daesh that were using their flags and stuff like that…they were using certain kind of images. It was much easier,” Carvin explained.


“But the thing with the far right, for example, which is I think the primary concern of the Canadian government, is that the memes and the content changes very quickly.”

The Canadian government, meanwhile, says providing online protections is a “central part” of efforts to keep Canadians “safe,” according to a statement from Audrey Champoux, a spokesperson for the public safety minister, which was sent to Global News.

“We must confront the rise of hate, misinformation and disinformation, and violent extremism which are too often amplified and spread online – which can result in real world consequences.”

— with files from Reuters
