
Google warns Canada’s plan to fight online hate is ‘vulnerable to abuse’

WATCH: Canadian government needs to take action on online hate, says expert – Jun 13, 2021

Google is one of the first major tech companies to comment on Canada’s proposed approach to handle harmful online content.

In a Google Canada blog post, the internet giant said that there are aspects of the government’s proposal that “could be vulnerable to abuse and lead to over removal of legitimate content.”

The government first proposed a new Digital Safety Commission (DSC) in July 2021, with the power to regulate harmful online content on major platforms such as Facebook, Twitter, Instagram, YouTube and Pornhub.

The government identified five categories of harmful content that platforms would have to remove within 24 hours of a complaint: hate speech, child sexual exploitation content, non-consensual sharing of intimate images, incitement to violence and terrorist content.


However, Google notes the proposed requirement for platforms to take down user-flagged content within 24 hours could be exploited to harass users or suppress legitimate speech.

“It’s essential to strike the right balance between speed and accuracy,” the company wrote. “User flags are best utilized as ‘signals’ of potentially violative content, rather than definitive statements of violations.”

The Facebook Papers: Internal documents reveal company failed to stop spread of abusive content

Google said that in the second quarter of 2021, of the 17.2 million videos flagged by users on its YouTube platform, nearly 300,000 were removed. In total, however, Google removed 6.2 million videos for violating its community guidelines. In other words, user flags accounted for fewer than five per cent of removals, and most flagged videos were not found to violate the rules, underscoring that flagging alone is far from all-encompassing in tackling harmful content.


Google also strongly warned against proactive monitoring obligations, under which platforms would have to scan content for material falling into one of the five categories before it is posted.


“Imposing proactive monitoring obligations could result in the suppression of lawful expression … and would be out of step with international democratic norms.”

Under the proposal, platforms would be obligated to report potentially harmful content to the RCMP or other law enforcement, and the new regulator, the DSC, could also apply for court orders requiring telecommunications companies to block access to platforms that refuse to remove child sexual exploitation or terrorist content.

When first announced, the government cited as justification for the proposal the 2017 attack on a mosque in Quebec City and the 2019 Christchurch mosque attacks, cases in which the attackers were radicalized by online content and social media companies failed to remove content related to the attacks.

Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, said the proposal is “deeply flawed” and has been roundly criticized by anti-hate and civil liberties groups that share many of the same concerns as Google.

He envisions Google would likely use artificial intelligence (AI) to proactively monitor content, which could then be reported to the police.

“[That] raises enormous concerns, especially for vulnerable communities, given the potential for bias within these AI systems,” he said.

Geist also predicted that, without due process, hate groups could weaponize complaints to get anti-hate groups' content removed. He said the 24-hour response requirement limits that due process, particularly if companies face penalties for failing to respond in time.

Canada announces multi-faceted approach to combat online hate speech, crime with Bill C-36

“Google suggests that it’s actually going to lead to over-blocking, over-removal of content,” he said. “Companies are warning that there is a threat to freedom of expression. And that threat extends to the groups that we’re trying to protect.”

Geist was one of hundreds who submitted feedback over the summer during a consultation period on the proposal, which overlapped with the 2021 federal election campaign and closed just four days after it.

However, he said the consultation has not been transparent, since the government has not made public the feedback it received on the grounds that it “may contain confidential business information,” according to Heritage Canada.


Nevertheless, the Liberal government has promised to introduce online harms legislation in the first 100 days after the election.

“Trying to rush a deeply flawed, highly criticized proposal based on non-transparent consultation is not a step in the right direction,” Geist said. “It is likely ultimately to lead to constitutional challenges.”

-With files from Global News’ Amanda Connolly
