Buffalo mass shooting: How should platforms respond to violent livestreams?

Video: Buffalo shooting was ‘act of domestic terrorism,’ attorney of victim’s family says
WATCH: Civil rights attorney Benjamin Crump said at a news conference on Monday, held with the family of victim Ruth Whitfield, that the mass shooting at a Buffalo, N.Y., supermarket on Saturday was “an act of domestic terrorism.” Referencing the “White Replacement Theory,” a racist ideology being investigated as a motivating factor in the shooting, Crump warned that this spread of hatred is indoctrinating young people to commit violence – May 16, 2022

A livestream of the weekend’s mass shooting in Buffalo, N.Y., was taken down in under two minutes, according to Twitch, the Amazon-owned gaming platform where it was hosted.

The stream was removed much faster than broadcasts of some previous shootings. For instance, a Facebook stream of the 2019 attack on two mosques in Christchurch, New Zealand, which killed 51 people, was live for 17 minutes before being removed.

Though Twitch removed the primary stream quickly, other users had time to spread clips and images of the attack to other social media sites, which removed the footage at varying speeds.

Experts say that in incidents like this where every second counts, much of what determines how quickly these sites address content is in the hands of the platforms themselves.

But whether to pull footage down immediately or subject it to review is at the heart of a debate on content moderation that’s concerned tech leaders and policymakers alike.

Video: Learning about the lives lost in Buffalo supermarket shooting

How fast should content be removed?

Clips of Saturday’s shooting in Buffalo, where police say a white gunman killed 10 people and wounded three others, most of them Black, have been slow to disappear online.

On Twitter, for instance, footage purporting to show a first-person view of the gunman moving through a supermarket and firing at people was posted to the platform at 8:12 a.m. PT on Sunday and was still viewable more than four hours later.

Twitter said Sunday it was working to remove material related to the shooting that violates its rules.

The company has been at the centre of a debate over the extent to which content should be moderated on social media platforms. Tesla CEO Elon Musk has promised to make Twitter a haven for “free speech” as part of his US$44-billion deal to acquire the platform.

At a news conference following the Buffalo attack, New York Gov. Kathy Hochul said social media companies must be more vigilant in monitoring what happens on their platforms and found it inexcusable the livestream wasn’t taken down “within a second.”

Video: ‘This landed on our doorstep, and it’s evil’: Mass shooting forever changes Buffalo, N.Y.

“The CEOs of those companies need to be held accountable and assure all of us that they’re taking every step humanly possible to be able to monitor this information,” Hochul said Sunday on an American news station. “How these depraved ideas are fermenting on social media — it’s spreading like a virus now.”

But crafting content moderation standards that crack down immediately on violent content, while still accounting for the context in which an image is shared, is a difficult needle to thread.

“What it really comes down to is the dynamic between the ability for speech and the ability for safety,” says Sarah Pollack, head of communications at the Global Internet Forum to Counter Terrorism (GIFCT).

Video: Questions, uncertainty remain over Elon Musk’s Twitter buyout

How content gets flagged for removal

GIFCT was originally formed in 2017 as a consortium of YouTube, Microsoft, Twitter and Facebook (now Meta).

Pollack worked at Facebook at the time the consortium formed but left to work with GIFCT in 2021, two years after the organization spun off into a non-profit in response to the Christchurch shooting.

GIFCT works to streamline the sharing of potentially dangerous footage or information between its 18 member companies — Amazon’s Twitch included — in the wake of attacks such as those in Buffalo or Christchurch. It has activated its incident response framework for terrorist attacks and other mass violence events 250 times since 2019.

One of its main tools is a hash-sharing database: a member company creates a unique digital fingerprint, or hash, of a specific video or image and adds it to the database, which then flags other members when the same content appears on their platforms.

Video: Toronto reacts to Buffalo mass shooting

For instance, once Twitch creates a hash for the Buffalo shooting footage, any copies of that same footage shared to Twitter would be flagged.
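To make the idea concrete, here is a minimal sketch of a shared fingerprint database in Python. The names and the in-memory store are hypothetical, and it uses an exact SHA-256 hash for simplicity; production systems rely on perceptual hashes that can also match slightly altered copies.

```python
import hashlib

# Hypothetical in-memory stand-in for a shared hash database like GIFCT's.
# SHA-256 is used here for simplicity; it only matches byte-identical copies.
shared_hash_db: dict[str, str] = {}

def hash_media(data: bytes) -> str:
    """Fingerprint a piece of media."""
    return hashlib.sha256(data).hexdigest()

def register_hash(data: bytes, incident_label: str) -> None:
    """One member platform adds a fingerprint of flagged footage to the database."""
    shared_hash_db[hash_media(data)] = incident_label

def check_upload(data: bytes) -> str | None:
    """Another member checks an incoming upload against the shared database."""
    return shared_hash_db.get(hash_media(data))

# One platform registers the footage; an identical upload elsewhere is flagged.
register_hash(b"<video bytes>", "incident-2022-05-14")
print(check_upload(b"<video bytes>"))  # incident-2022-05-14
print(check_upload(b"<other bytes>"))  # None
```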

Importantly, a hash match on another site doesn’t automatically get that footage removed. It’s up to that platform’s own policies to decide what happens next, which could be anything from an immediate takedown to a review by a content moderation team.

Pollack says this flexibility is important because of the context in which an image might be shared. While most might agree that a video of an attack shared in praise of the action would be inappropriate, other examples are less cut and dried.

“Are they raising awareness of the hateful ideology tied to this attack and they’re speaking out about it? Are they an academic trying to encourage a neutral discussion, among other experts, about this content and what it means?”

In some circles, penalizing these individuals for sharing content created by a perpetrator might swing too far into “over-censorship,” she explains.

A Twitch spokesperson said the company has a “zero-tolerance policy” against violence. So far, the company hasn’t revealed details about the user’s page or the livestream, including how many people were watching it. The spokesperson said the company has taken the account offline and is monitoring any others who might rebroadcast the video.

Twitter, which said in a statement to The Associated Press on Sunday that it will remove footage of the attack and “may remove” tweets containing parts of the shooter’s manifesto, noted that moderation might not mean outright removal in all cases.

When people share media to condemn it or provide context, videos and other material from the shooter may not violate the rules, the platform said. In those cases, Twitter covers the images or videos with a “sensitive material” warning that users have to click through in order to view them.
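Taken together, the responses described above (immediate removal, human review, or a click-through warning) suggest a simple routing pattern. The sketch below, with hypothetical platform names and policies, shows how a hash match might be mapped to each platform’s own chosen action; it is an illustration of the process described here, not any company’s actual system.

```python
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()        # immediate takedown
    REVIEW = auto()        # queue for a human moderation team
    INTERSTITIAL = auto()  # leave up behind a "sensitive material" warning

# Hypothetical per-platform policies: a hash match is only a signal,
# and each member platform decides for itself what a match triggers.
POLICIES: dict[str, Action] = {
    "platform_a": Action.REMOVE,
    "platform_b": Action.INTERSTITIAL,
}

def handle_hash_match(platform: str, content_id: str) -> Action:
    """Route a flagged item according to the platform's own policy."""
    action = POLICIES.get(platform, Action.REVIEW)  # unknown platforms default to review
    print(f"{platform}: {content_id} -> {action.name}")
    return action

handle_hash_match("platform_a", "clip-0001")  # REMOVE
handle_hash_match("platform_b", "clip-0001")  # INTERSTITIAL
handle_hash_match("platform_c", "clip-0001")  # REVIEW (default)
```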

Governments need to ‘step up’ to address online hate

Marvin Rotrand, national director of the League for Human Rights at B’nai Brith Canada, says the online hate elements associated with the Buffalo shooting show the need for the federal government to act more quickly on a bill to address extremism born online.

“It shows the necessity for governments to wake up and step up to the plate and in particular look at the advancements of technologies that have made our laws on hate in some ways redundant and inefficient,” he tells Global News.

The Liberal government introduced Bill C-36, legislation to combat online harms, in June 2021, but the federal election just a few months later sank the legislation, which has yet to be reintroduced.

Justice Minister David Lametti told Global News on Monday that the Liberals are working “diligently” on addressing the online element of hate and extremism, but stressed that finding the right approach will take time.

“Every time there’s a tragedy like this, you sort of think… Could we have done it faster? But it’s also important to do it right. And so we have to balance that,” he said.

Rotrand said that a lack of rigorous online content protections in Canada has the greatest impact on “impressionable minds,” who become susceptible to radical and racist ideas such as the white replacement theory highlighted in the Buffalo shooter’s manifesto.

“Often it’s young people who don’t have any other source of information than what they get online, who fall for this and really become radicalized. And that radicalization leads to violence,” he said.

Video: The David and Goliath fight against online conspiracies

Can platforms get to all the content?

Even if social media platforms did agree to a zero-tolerance policy on violent content, it might not be possible to reliably catch all iterations of the footage when it’s been manipulated.

Pollack says GIFCT is still adding new hashes from attacks like Christchurch even today, as new iterations appear with text overlays, banners or other subtle adjustments that could skirt the hash system.

“This is always going to be a very adversarial dynamic. You have bad actors who are going to continue to try to find new ways to get around all of the new parameters,” she says.

“The more you manipulate the content, the less effective a particular hash you already have is going to work.”
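This brittleness is why matching systems use perceptual rather than exact hashes. The toy example below, with made-up pixel values and a simple average hash, illustrates the principle: small tweaks leave the fingerprint nearly unchanged, while heavier manipulation pushes it past the match threshold. Real perceptual hashes such as PDQ are far more sophisticated, but face the same trade-off.

```python
# Toy perceptual hash: threshold each pixel of a tiny grayscale "image"
# against the mean to get a bit string. Similar media yield similar hashes.

def average_hash(pixels: list[int]) -> int:
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

original   = [10, 200, 30, 180, 25, 190, 15, 170]
brightened = [p + 5 for p in original]               # subtle tweak
overlaid   = [255, 255, 255, 255, 25, 190, 15, 170]  # heavy edit, e.g. a banner

THRESHOLD = 2  # bits of difference still treated as the same content
print(hamming_distance(average_hash(original), average_hash(brightened)) <= THRESHOLD)  # True
print(hamming_distance(average_hash(original), average_hash(overlaid)) <= THRESHOLD)    # False
```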

Jared Holt, a resident fellow at Atlantic Council’s Digital Forensic Research Lab, said live-content moderation continues to be a big challenge for companies.

He noted Twitch’s response time was good and that the company was smart to watch its platform for potential re-uploads.

Margrethe Vestager, an executive vice-president of the European Commission, also said it would be a stiff challenge to stamp out such broadcasts completely.

“It’s really difficult to make sure that it’s completely waterproof, to make sure that this will never happen and that people will be closed down the second they would start a thing like that. Because there’s a lot of livestreaming which, of course, is 100-per cent legitimate,” she said in an interview with The Associated Press.

“The platforms have done a lot to get to the root of this. They are not there yet,” she added. “But they keep working and we will keep working.”

— with files from Global News’ Abigail Bimman and the Associated Press
