A look at ‘secret’ Facebook groups — and how hateful content is monitored on them
U.S. Border Patrol agents came under fire this week after it was revealed in media reports that they had made crude posts in secret Facebook groups.
ProPublica reported on Monday that the posts in the group, named “I’m 10-15,” included doctored photos of U.S. Rep. Alexandria Ocasio-Cortez and dismissive references to the deaths of migrants in U.S. custody.
On Friday, CNN reported that a second group, named “The Real CBP Nation,” had about 1,000 members and contained similar content.
The reports have brought so-called secret groups into the spotlight, raising questions about how exactly Facebook monitors them.
What is a secret group?
A secret Facebook group is one that only members, or former members, can see. They also don’t show up in search results.
Joining one requires being invited by a current member.
The groups are often a place for people — friends, families or strangers — with similar interests and hobbies to congregate.
Plenty of secret groups aren’t remotely nefarious. For example, people discussing health matters or posting photos of their children to family members and friends often make such groups secret.
Facebook says about 400 million of its users are in what it considers “meaningful” groups, but the company doesn’t disclose how many of these groups are public, closed or secret.
Do they abide by the same rules?
Facebook noted in an email statement to Global News that secret groups are held to the same community standards that apply to all posts on the platform.
Those rules forbid bullying and harassment, hate speech, glorification of violence and “cruel and insensitive” posts that target “victims of serious physical or emotional harm.”
However, the statement also acknowledged that because the groups are closed, their content is not subject to scrutiny by the general public.
“While the general public can’t see content within these groups, our detection systems can,” it read.
Facebook explained it uses artificial intelligence to monitor things like nudity, graphic violence, terrorist propaganda and a host of other things.
“Using a combination of technology and human review, we routinely remove many types of violating content before anyone reports it. There is still more we can do, and we continue to improve our technology to detect violating content,” the statement added.
Controversy over secret groups
Evan Balgord, the executive director of the Canadian Anti-Hate Network, said that monitoring secret groups and the hateful content that could be posted requires added effort and resources, such as having sources “infiltrate” groups and report back findings.
He explained that poses challenges for watchdogs, advocacy groups and law enforcement.
“We need to know what’s happening inside these groups because there is a potential for radicalization leading to violence,” Balgord told Global News.
He noted that while Facebook may claim to take action on hateful posts in secret groups, it’s difficult to verify how much the company actually does at scale.
“It’s difficult to hold Facebook accountable for anything,” he said, adding that it’s also hard to know how many hateful secret groups exist.
Despite the controversy over secret groups, Facebook has defended them, saying they provide a “safe environment” for many users.
In a March blog post, Facebook CEO Mark Zuckerberg touted the groups as essential to the network’s “pivot to privacy.”
“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” he wrote.
Facebook noted in the statement that “there is still more we can do” when it comes to detecting and removing harmful content. But it added that it removed roughly four million hate speech posts in the first three months of 2019, 65 per cent of which were caught via “proactive detection technology.”
Balgord said that holding Facebook, and other social media giants, accountable for tackling the problem of hateful content comes down to legislation and hefty fines.
“They get penalized financially and that encourages them to actually do something.”
Facebook’s response to border agents’ group
Facebook has said that it will co-operate with the federal investigation into the comments found in the group.
However, the social media giant has not said whether those comments violated its community standards.
The posts included doctored images of Ocasio-Cortez, including one that showed a smiling U.S. President Donald Trump forcing her head toward his crotch. Other posts made jokes about the deaths of migrants.
— With files from The Associated Press
© 2019 Global News, a division of Corus Entertainment Inc.