April 24, 2019 7:00 am
Updated: April 24, 2019 9:28 am

Social media bans, fines and penalties — how countries attempt to combat hate after tragedy



Sri Lanka has banned several social media sites, including Facebook, in the wake of bombings of churches and hotels that left more than 300 people dead on Easter Sunday.

The bans are meant to stop misinformation and hatred from spreading online following the attacks, and thus to prevent further violence.


This isn’t the first time the country has banned social media sites amid rising tensions.

In March 2018, Buddhist mobs ransacked businesses and set houses on fire in Muslim neighbourhoods around the city of Kandy. After the mob attacks, Sri Lanka’s government blocked some social media sites, hoping to slow the spread of false information or threats that could incite more violence.


After the most recent attacks, the government said the sites would remain blocked until an investigation is completed.

But are social media bans the most effective solution?

While the Sri Lankan government has argued the bans help, others say they also create confusion among people searching for information or even for lost loved ones.


Ivan Sigal, executive director of digital advocacy organization Global Voices, told The New York Times that such bans are becoming less shocking as evidence piles up that social media platforms aren’t tackling the issue of hate and violence adequately.

“A few years ago, this would have been outrageous,” Sigal said, noting the ban is “a signal of the lack of trust that’s built up around their practices.”

The trust has been tested several times recently, especially when it comes to Facebook.

Last year, Facebook posts were linked to fuelling genocide and violence against the Rohingya in Myanmar.

Facebook admitted in November 2018 that this was the case following an “independent” assessment.


“The report concludes that, prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more,” Facebook said in a statement at the time.

Bernie Farber, chair of the Canadian Anti-Hate Network, told Global News that if social media sites don’t step up, governments need to.

“If social media themselves can’t bring their platforms into some kind of an orbit where hate and potential violence is dealt with properly then it’s time for the government to step in,” Farber said.

But Farber also cautioned that outright bans may not be the best solution and that a “middle way” is possible.


“I believe it’s time for governments, including Canada, the United States and others, to take the lead — but short of banning, impose harsh penalties from significant fines to prison time,” he said.

Australia drafted and passed such an amendment to its Criminal Code in the wake of the Christchurch, New Zealand mosque attacks. The legislation spells out that social media sites that fail to remove violent material within a reasonable time frame can be fined up to 10 per cent of their annual turnover.

In Germany, a similar law calls on platforms to remove “obviously illegal” content within 24 hours.

Farber also noted that Facebook is trying to change things.

“Facebook has now taken the position that it’s going to start doing its own monitoring,” he said, pointing to the recent ban on white supremacist content in Canada.


The move came just weeks after a livestream of the Christchurch mosque attack remained online and was shared on Facebook for several minutes.

Sri Lanka’s ban on social media is also markedly different from what New Zealand’s government did following the Christchurch mosque attacks just over a month ago.

While New Zealand moved to block and remove videos of the attacks shared online, it did not block any particular social media platform.


Farber said New Zealand Prime Minister Jacinda Ardern “responded quite well” while addressing the role social media played in the attacks.

“I think they took the only action they could take, and that was basically telling social media leadership that they’d have to step up and take responsibility or it’s time there will be serious consequences, legal consequences,” he said.

Ardern referred to Facebook’s crackdown on white supremacist content as more of a “clarification,” saying it should have always been included in the company’s hate speech policy.


“But, nevertheless, it’s positive the clarification has now been made in the wake of the attack in Christchurch,” she told reporters in March.

Beyond Facebook, Ardern also called on countries around the globe to create a united strategy for tackling the problem, saying social media and the internet transcend borders.

“The solutions will need to be (global), too,” Ardern said.

On Tuesday, the New Zealand Herald reported that Ardern is gearing up to lead a more co-ordinated international effort to ensure social media sites are held accountable for the content they publish.

The newspaper said a formal announcement on the initiative would be made in the coming weeks.

© 2019 Global News, a division of Corus Entertainment Inc.
