Facebook defends its policy on terrorism after user outrage

Facebook’s Community Standards explicitly state that it doesn’t allow organizations that are engaged in terrorist activity. AP Photo/ Rajanish Kakade

In the aftermath of the Paris attacks, Facebook wants to make it clear that terrorist propaganda is not welcome on its site.

“There is no place on Facebook for terrorists, terrorist propaganda or the praising of terror,” Facebook’s head of global product policy Monika Bickert bluntly stated Tuesday, in response to a petition accusing the social network of not doing enough to shut down terrorism-related accounts.

The petition – titled “Dear Facebook, thanks for the ‘Safety-Check’, but on fighting ISIS, you can do much better!” – has amassed over 140,000 signatures since it was started two weeks ago.

The campaign slammed Facebook for not responding quickly enough to reports of pro-ISIS accounts.

“It took less than two hours for Facebook to set up Safety Check, a feature that allowed Parisians to notify their friends and family that they were safe and alive. Less than twelve hours later, Facebook asked its users if they wanted to colour their profile picture in blue, white and red, like the French flag, but could come up with no ambitious tool or serious plan to fight propaganda whatsoever,” read the letter to Facebook.


“An ISIS account sent me this message on Twitter: ‘it takes two minutes to create an account.’ That’s right: it takes two minutes to create a toxic jihadi account, and more than three days to get it deleted.”


Facebook’s Community Standards explicitly state that the company doesn’t allow organizations that are engaged in terrorist activity. They also state that Facebook will remove content that expresses support for groups involved in terrorist activities.

“Supporting or praising leaders of those same organisations, or condoning their violent activities, is not allowed,” it reads.

However, Facebook relies on user reports to remove content that violates its policies. Content that is flagged is reviewed by “a highly trained global team” that works around the clock to determine if content should be removed.

“This is not an easy job and we know we can make mistakes and are always working to improve our responsiveness and accuracy,” said Bickert.

“We remain in close contact with NGOs, industry partners, academics, and government officials about the best ways to keep Facebook free of terrorists and terror-promoting content. As governments and academics have pointed out, it is often hard to identify new terror groups and individuals because the landscape is constantly changing.”


Facebook, Twitter, YouTube and other social media companies have all come under fire in the aftermath of terrorist attacks; however, all note they have policies in place to block or remove posts that glorify violence.

But experts say it’s an uphill battle. The Islamic State and similar groups have become deft at using social media to spread their message, both to recruit followers and to threaten their perceived enemies.


“They can rapidly and easily identify others who share their beliefs,” said Marcus Thomas, a former assistant director of the FBI’s operational technology division.

Facebook, alongside other social media giants, pushed back against Senate legislation that would require them to alert federal authorities to any terrorist activity. Tech companies expressed concerns that the legislation would put them on the hook legally if they missed a tweet, post, or blog that hinted at an attack.

However, U.S. lawmakers resurrected the legislation on Tuesday following the San Bernardino shooting.

“That information can be the key to identifying and stopping terrorist recruitment or a terrorist attack, but we need help from technology companies. This bill doesn’t require companies to take any additional actions to discover terrorist activity, it merely requires them to report such activity to law enforcement when they come across it.”


With files from The Associated Press
