Ottawa lays out proposal for digital safety watchdogs to crack down on hate online

Video: Canada announces multi-faceted approach to combat online hate speech, crime with Bill C-36
Canada's attorney general and minister of justice, David Lametti, announced a multi-faceted approach to dealing with online hate speech in Canada. Bill C-36, which was proposed in the House of Commons Wednesday, allows anyone to file a formal complaint to the Canadian Human Rights Commission and "amends the Criminal Code to improve prosecution and prevention of hate crimes," Lametti said. – Jun 23, 2021

As speculation mounts about a possible fall election, the federal government is laying out a proposal for a new digital safety commission that would have the power to regulate hateful content online.

The proposal specifically targets major platforms like Facebook, Twitter, Instagram, YouTube and Pornhub, creating a new legal category that deems them “online communications service providers” and placing them under the authority of a new Digital Safety Commission.

That would place a new obligation on those providers to review complaints and remove five categories of harmful content within 24 hours. The new regulator would also get a last-ditch power to apply for court orders to have telecommunications companies block access to platforms that persistently refuse to remove child sexual exploitation or terrorist content.

Speaking in a technical briefing with reporters on Thursday morning, government officials pointed to the violent attack on a mosque in Quebec City in 2017 and the Christchurch mosque attacks in 2019 as examples where individuals were radicalized by content online or where social media companies failed to remove content related to the attacks.

The proposal outlined Thursday is being put forward for consultation. Legislation is expected this fall.

Video: London advocacy group leader calls for “non-partisanship” to fight hate in vigil for Muslim family killed

The five categories of harmful content covered under the proposed new powers would draw on offences already defined under the Criminal Code: hate speech, child sexual exploitation content, non-consensual sharing of intimate images, incitement to violence, and terrorist content.

Officials said the definition of hate speech would match the one proposed in Bill C-36, legislation to amend the Criminal Code that has not yet passed.

That legislation defines “hatred” as “the emotion that involves detestation or vilification and that is stronger than dislike or disdain” and that is “motivated by bias, prejudice or hate based on race, national or ethnic origin, language, colour, religion, sex, age, mental or physical disability, sexual orientation, gender identity or expression, or any other similar factor.”

That definition specifically excludes content that “discredits, humiliates, hurts or offends.”

Polling done by Ipsos exclusively for Global News last year suggested 88 per cent of Canadians believe more needs to be done to prevent and remove hateful and racist content from social media platforms.

As well, 82 per cent said they believe social media firms should be mandated to inform law enforcement of any posts that spread hate or racism, with 80 per cent saying they want to see more regulation of those platforms in Canada.

However, it remains to be seen how many teeth the proposed measures might actually have.

Video: Controlling the spread of online hate and violence

The proposed Digital Safety Commission would include three bodies: a digital safety commissioner, a digital recourse council, and an advisory board providing guidance to both.

The digital recourse council would serve as a mechanism for individuals to challenge platforms’ decisions on complaints about harmful content.

For example, if someone submitted a complaint to Twitter alleging hate speech but Twitter dismissed that complaint, the individual could take the matter to the digital recourse council to seek a binding decision on whether the content in question should be removed.

But officials who spoke on background with journalists on Thursday provided little detail about what would happen if a social media company refused to comply with a binding order from the council.

A separate power proposed in the measures would allow the Digital Safety Commission to ask the Federal Court to order internet service providers to block a social media platform if that platform has “persistently” refused to remove two kinds of content: child sexual exploitation and terrorist content.

Security agencies around the world have increasingly highlighted the threat posed in recent years by what is now known as ideologically motivated violent extremism. It’s a catch-all term designed to cover four categories of violent extremism that are not primarily motivated by factors like religion.

The four categories of ideologically motivated violent extremism as defined by CSIS are xenophobic violence, gender-driven violence, anti-authority violence and “other grievance-driven and ideologically motivated violence.”

CSIS has pointed to the “echo chambers of online hate that normalize and advocate violence” as factors in driving all four categories of ideologically motivated violent extremism.

Video: How online conspiracies can spark offline violence

More to come
