Facebook has banned Canadian accounts that were spreading white nationalist sentiments, but it seems the ban hasn't kept the people behind those accounts off the social media platform.
A joint report from BuzzFeed News and the Toronto Star found Tuesday that some individuals and groups behind the banned accounts have reappeared on Facebook in different forms, using new accounts with names and posts similar to those of the banned ones.
Former Toronto mayoral candidate Faith Goldy, who regularly posts white nationalist content online, was among those who resurfaced, in her case through a Facebook ad.
The ad was flagged on Twitter by user Lee Hunter, who said they reported it to Facebook. Hunter soon received a reply from Facebook saying the ad did not violate the platform's policies.
Facebook later took down the ad, explaining that the page associated with it and the page's administrators had violated the platform's "authenticity policy."
The social network added that it has banned Goldy and will continue to take down any content that is "affiliate representation" of her or of others who have been banned.
Facebook said it will also remove content that expresses support for banned individuals and groups, and that forthcoming artificial intelligence technology will make the process smoother and more effective.
Bernie Farber, who chairs the Canadian Anti-Hate Network, told Global News that the problem of content reappearing on Facebook was expected.
“I would have been shocked if it didn’t happen,” Farber said.
He explained that the ban is in its early stages and that Facebook "wants to do the right thing" but will have to streamline and strengthen the process.
For now, Farber compared the bans to a game of whack-a-mole.
“They have this game called whack-a-mole, where something pops up and you have to whack it down and then another pops up. This is exactly what these people do,” he said.
Facebook tried to address part of the problem Wednesday by introducing a “Remove, Reduce, Inform” strategy aimed at managing what it calls “problematic content.” A press release noted the strategy is aimed especially at curbing fake news, misinformation and clickbait on Facebook.
The three steps are removing content that violates Facebook's policies, reducing the spread of problematic content, and informing users so they can make better decisions about what they click and share.
But the problem extends beyond Facebook.
Veronica Kitchen, an associate professor of political science at the University of Waterloo, said "de-platforming" a person or group can have positive effects, but the content often finds a home elsewhere.
Kitchen noted that banned individuals and groups have followings on Twitter, YouTube and on their own websites.
“There are other mechanisms for those people who are already inclined to seek her out to do so without very much difficulty,” she explained. “There’s also lots of other internet forums where it’s easier for these people to share their hateful views.”
Kitchen also pointed to places like Reddit and 4chan where racist, sexist and hateful views are commonly expressed.
Farber added that the Canadian Anti-Hate Network has been pushing Twitter to ban white nationalist content as well.
“We’ve certainly been in touch with Twitter, and they don’t seem terribly responsive,” he said.
While Twitter's policy states that hateful conduct and violence are not tolerated on the platform, the company has not indicated that it will follow Facebook's lead with a broader ban.
—With files from Global News reporter Laura Hensley