In tragedy’s aftermath, fake news purveyors strike fast, exploiting Google and YouTube

This YouTube video, claiming that Texas gunman Devin Patrick Kelley was affiliated with Antifa, got over 78,000 views. (YouTube)

After hasty media reports in 2011 claimed that congresswoman Gabrielle Giffords had been killed (she was shot, but survived), journalism professor Dan Gillmor made a case for ‘slow news’: in a confusing breaking-news situation, reporters and local authorities need time to gather facts properly, and readers should be patient.

“As news accelerates faster and faster, you should be slower to believe what you hear, and you should look harder for the coverage that pulls together the most facts with the most clarity about what’s known and what’s speculation,” Gillmor wrote.

Six years later, things are arguably worse.

The misreporting in the Giffords case came from well-meaning reporters making mistakes under intense pressure; the problem now is aggressively competitive fabricators of ideologically driven fictions exploiting the weaknesses of social platforms.

The Times wrote earlier this week about Elmer T. Williams, a ‘popular right-wing YouTube personality’ (now kicked off YouTube) who sprang into action after the Texas church shootings last week, quickly publishing videos that first claimed that the gunman was “either a Muslim or black,” and then, when he was identified as Devin Patrick Kelley, that he was “most likely a Bernie Sanders supporter associated with antifa — a left-wing anti-fascist group — who may have converted to Islam.”

A large part of why operators like this succeed is timing: if they publish videos in the window between the early reports of an incident (or, better yet, the release of the perpetrator’s name) and the appearance of real information, they can harvest tens or hundreds of thousands of clicks.

(A variation involves fabricating a perpetrator’s name to get ahead of the game; here, Snopes debunks a fake news report in which the Texas shooter is somebody named ‘Raymond Peter Littleberry.’)

Google has a similar issue: when there’s intense public interest in someone who has up to that point been unknown, Google’s algorithm tries to meet the demand with information sourced from wherever it can be found.

Some of those sources are very murky: in the immediate aftermath of the Las Vegas shootings, Google pulled into its top stories results two 4chan threads falsely naming a man named Geary Danley as the gunman, which seemed to lend them credibility.

This time around, a search for Devin Patrick Kelley in the aftermath of the Texas shootings “surfaced an editor of the conspiracy site InfoWars, a parody Julian Assange account claiming the shooter had converted to Islam, and a ‘news’ Twitter feed that’s tweeted a few dozen times since it was created last month,” the Atlantic reported.

In both the Kelley and Danley cases, the grown-ups eventually woke up and fixed the problem. Last Sunday, Google was autosuggesting ‘devin patrick kelley antifa’; now that suggestion no longer appears even if you try to prompt it, and if you force it as a search term you get a series of factual sources pointing out the lack of a connection.

Facts take time to gather; shameless invention is much faster; truth is still pulling on its boots. A lot can happen in that gap, and the platforms aren’t helping.

WATCH: Representatives from Facebook, Twitter and Google faced Senators Tuesday at the first of three Congressional hearings looking into how Russia attempted to meddle in the 2016 presidential election through social media.

In fake news news:

‘We must face this prodigious menace, open-eyed and now’

“We’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads,” argues techno-sociologist Zeynep Tufekci in a recent TED talk.

Tufekci is a shrewd observer of digital culture and a harsh pessimist: “… Now, if authoritarianism is using overt fear to terrorize us, we’ll all be scared, but we’ll know it, we’ll hate it and we’ll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it.”

The video is here and the transcript is here. Worth your time, in a ‘the less you know, the better you sleep’ sort of way.

  • This week, attention was drawn to the shadier side of YouTube for children, where low-rent video producers publish randomly rearranged cartoon videos on a vast scale, often with quite disturbing results: more or less what you would expect if 4chan were allowed to produce children’s programming. (Which, in effect, it is.) James Bridle has the best take on this I’ve seen so far, linking the problem to the total automation of production by producers and publication by the platforms. “What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatize and disturb them, via networks which are extremely vulnerable to exactly this form of abuse.”
  • The Times has a shorter take, also worth a look. “Algorithms are not a substitute for human intervention, and when it comes to creating a safe environment for children, you need humans,” says one of their interviewees. What we see is a world where producers aren’t all that clear on what they’re making, the platforms have no idea what they’re publishing, and the first human eyes on some of this material can belong to a two-year-old in a high chair.
  • Nieman Lab talks to Claire Wardle about online misinformation, and the platforms’ responsibility for what they publish. “I just would love to see a way of saying, this technology has already been built, it’s incredibly powerful, and with that power comes really difficult conversations.”
  • I’d never heard of Google’s ‘Popular on Twitter’ feature before this week; it doesn’t seem to be a very good idea, for obvious reasons.
  • In the New Yorker, Masha Gessen reminds us that “Russian online interference (in the U.S. election) was a god-awful mess, a cacophony.” Many of the Russian ads were terrible, and it’s not clear that they influenced anyone’s vote. “Russians generally believe that politics are a cacophonous mess with foreign interference but a fixed outcome, so they invested in affirming that vision. In the aftermath, and following a perfectly symmetrical impulse, a great many Americans want to prove that the Russians elected Trump, and Americans did not.”
  • Russian-funded propaganda ads on Facebook turned out to be comically bad — amateurish, culturally tone-deaf. But we should be open to the possibility that they swayed the outcome, Vanity Fair argues: “… the 2016 election was also decided by only several tens of thousands of voters in a handful of counties where swing voters were already deeply suspicious of Clinton … they latched onto these ideas that people were already hearing, and probably to a certain extent already believed, and hammered them home.”
  • In Vox, a long read asking whether the United States, and the West by extension, faces an ‘epistemic crisis,’ or a crisis in our ability to tell truth from falsity, driven in part by the fragmentation of media. What if Robert Mueller makes an airtight case, grounded in the reality-based community, for Trump’s impeachment, and our information ecosystem is too damaged to act on it? “The only way to settle any argument is for both sides to be committed, at least to some degree, to shared standards of evidence and accuracy, and to place a measure of shared trust in institutions meant to vouchsafe evidence and accuracy. Without that basic agreement, without common arbiters, there can be no end to dispute,” David Roberts writes. Worth your time.