Here is the EU’s ‘code of conduct’ Facebook and Twitter will use to combat hate speech online

Could the EU's new code of conduct really prevent hate speech online? Nico De Pasquale Photography/Flickr

The European Union has joined forces with some of the world’s biggest tech companies to create a code of conduct aimed at combating the spread of hate speech on social media.

Under the terms, the firms – which include Twitter, YouTube, Facebook and Microsoft – have committed to “quickly and efficiently” tackle illegal hate speech directed against anyone over issues of race, color, religion, descent or national or ethnic origin. The sites have often been used by terrorist organizations to relay messages and incite hatred against certain individuals or groups.


Among the measures agreed with the EU’s executive arm, the firms have said they will establish internal procedures and staff training to guarantee that a majority of illegal content is assessed and, where necessary, removed within 24 hours. They have also agreed to strengthen their partnerships with civil society organizations who often flag content that promotes incitement to violence and hateful conduct. The European Commission and the firms have also agreed to support civil society organizations to deliver “anti-hate campaigns.”


“The internet is a place for free speech, not hate speech,” said Vera Jourova, the EU commissioner responsible for justice, consumers and gender equality. She added that the code of conduct, which will be regularly reviewed in terms of its scope and its impact, will ensure that public incitement to violence and hatred has “no place online.”


“This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected,” she said.


The firms themselves say there’s no conflict between their mission statements to promote the freedom of expression and clamping down on hate speech.

Twitter, which has been at the center of much of the hate speech that’s spread online over the past few years, says it will continue to tackle the issue “head-on” along with partners in industry and civil society.

“We remain committed to letting the Tweets flow,” said Twitter’s European head of public policy Karen White. “However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”

And Facebook’s head of global policy management Monika Bickert urged the company’s 1.6 billion users to use the site’s built-in reporting tools in the event they find content they consider unacceptable.


“Our teams around the world review these reports around the clock and take swift action,” she said.


Twitter, Facebook, YouTube and other social media companies have all come under fire in the aftermath of recent terrorist attacks; however, all note they have policies in place to block or remove posts that glorify violence and terrorism.

In early December, Facebook was forced to defend its policy on terrorism in response to a petition accusing the social network of not doing enough to shut down terrorism-related accounts.

Twitter updated its terms of service in December to clarify what it considers to be abusive behaviour and hateful content following criticism it was not doing enough to prevent the Islamic State’s use of the site for recruiting.

How will tech companies try to cut down on hate speech online?

Under the newly formed EU code of conduct, tech firms like Twitter and Facebook will be held to the following commitments:

  • Tech companies must have “clear and effective processes to review notifications regarding illegal hate speech on their services so they can remove or disable access to such content” and clearly explain what qualifies as “hate speech” in their community guidelines.
  • Any posts flagged for hate speech must be investigated and/or removed within 24 hours.
  • The companies will receive support from “trusted reporters” in all member states to help flag any content that might be illegal.
  • Companies will also be required to “intensify cooperation between themselves and other platforms and social media companies to enhance best practice sharing.”

– With files from Global News tech reporter Nicole Bogart
