
When a train hit that garbage truck in Virginia, the fake news sites pounced

Emergency first responders work at the scene of the crash where an Amtrak passenger train carrying Republican members of the U.S. Congress from Washington to a retreat in West Virginia collided with a garbage truck in Crozet, Virginia, U.S. January 31, 2018. Justin Ide/Crozet Volunteer Fire Department/Handout via REUTERS

That didn’t take long at all.

At 11:20 Wednesday morning, a train carrying Republican lawmakers through Virginia rammed a garbage truck that was stopped on the tracks. A passenger in the truck died.

Just over an hour later, the AP could offer some facts.

And only 65 minutes after that, a full-fledged conspiracy theory had been launched and promoted on Twitter.

That’s quite a conclusion to come to after a 65-minute investigation, but then YourNewsWire has been one of the more cheerfully shameless of the fake news sites. Others resort to hard-to-see fig-leaf disclaimers or claim they are “just raising questions”; YourNewsWire goes straight for the bald fabrication. It’s more straightforward, on some level.

Gateway Pundit was more coy, claiming only that “rumours swirl” after the accident while embedding tweets that called the crash “deep state sabotage” and insisted in all caps that “NOTHING IS COINCIDENCE.”

At InfoWars, Alex Jones asked whether the stalled dump truck crash was “meant to send a warning to lawmakers to block Trump’s agenda.” Jones quickly produced two YouTube videos floating conspiracy theories about the accident. When we looked, they were the #5 and #8 results for a YouTube search for “train gop” limited to the past week. At the time, they had about 140,000 views.

Another strain of argument, which blazed very brightly on Twitter Wednesday, blamed Antifa, it being an immovable article of faith in some circles that whenever a mishap involves a train, Antifa must in some way be behind it.

Now, the fever swamp is what it is, and there’s not much to be done about that — the Alex Joneses of the world will always be with us.

The practical question is how the grownups should respond. The Internet democratized the means of publication, which we’d still like to believe did more good than harm, but as we know, it also has its dark side.

The platforms (Google, YouTube, Facebook) have to a large extent left editorial decision-making to their algorithms. Sometimes this works out; other times it doesn’t, as when Google’s top stories featured for hours a 4chan thread accusing the wrong person of being the Las Vegas gunman.

On Wednesday, the Daily Beast pointed out that conspiracy theories featured prominently in the ‘People Are Saying’ section of Facebook’s topic page on the incident. Facebook conceded that “the type of stuff we’re seeing today is a bad experience,” and promised a fix.

But the cycle has become a familiar one:

  • In a breaking news situation, a platform’s algorithms publish fake news and conspiracy theories without anyone at the company noticing
  • Humans point out the problem to the platform
  • Humans at the platform fix the immediate problem, but not its underlying cause: editorial decision-making by machine
  • The platform, embarrassed, promises to do better

WATCH: In a phone interview Wednesday, GOP Rep. Bradley Byrne described the collision with a garbage truck in Virginia, saying the “train jerked very hard.” At least one person, an occupant of the truck, is reported dead.

In fake news news:

  • A British man sentenced Friday for driving a van into a group of Muslim worshippers last June, killing one of them, was “brainwashed” by online propaganda that included Canada’s Rebel Media, according to a Vice report on what the prosecutor said. The judge agreed, saying that Darren Osborne had been “rapidly radicalised over the internet.” Osborne will serve at least 43 years.
  • This week the New York Times published a long investigation into the trade in fake Twitter accounts; buying fake followers by the thousand is easier than earning real ones one by one. “At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors,” the paper reported. In recent days, many of those accounts have disappeared. (Caught in the fallout was Chicago Sun-Times film critic Richard Roeper; the paper alleged that some of his 225,000-odd Twitter followers were not actually real. If true, this would be an entirely new form of journalistic misconduct for j-school ethics classes to dissect.)
  • At CJR, Matthew Ingram says that there’s nothing new about the revelations in the Times story: people have been buying and selling fake Twitter accounts for years. “One reason why Twitter has done very little about the fake account and bot problem until recently, critics say, is those accounts have boosted the size of its user base and the volume of network activity on the platform, making the company more valuable in the eyes of investors.”
  • Reuters Institute researchers based at Oxford found that fake news sites in France and Italy reached far smaller audiences than their real news equivalents, but that the two generated much more comparable levels of interaction on Facebook.
  • Motherboard explores the strange world of what it calls AI-assisted fake porn (the link is work-safe), videos that graft one person’s face onto another person’s body. The production values aren’t what they might be, at least for now (“It’s not going to fool anyone who looks closely”), but the videos aren’t that hard to make. It isn’t all that clear that making them is illegal in the way that revenge porn might be.
  • From CNN: How Russian trolls orchestrated a demonstration and counter-demonstration in Houston in 2016 through Facebook pages, for about US$200.
  • Google has announced changes to its featured snippets, which turned out (like many online tools not monitored by live humans) to be easy to manipulate. “Last year, we took deserved criticism for featured snippets that said things like ‘women are evil’ or that former U.S. President Barack Obama was planning a coup,” the company writes. Going forward, Google says it will keep track of the reliability of sources. We’ll see.
  • YouTube offers us videos based on algorithms designed to figure out what we want and give it to us. On this explanation, if the platform consistently offered pro-Trump, anti-Clinton videos during the 2016 U.S. election, viewers as a group should just look in the mirror. But what, the Guardian asks, if the algorithms are being manipulated? A long read, but worth your time.
  • At the American Association of University Professors site, a first-hand account of what it was like to be the target of months of digital harassment. (It’s very calm and objective, given what a chilling and exhausting experience it must have been to undergo. You’ll just have to read it.)
  • Switching Facebook feeds back to a simple chronological list might help curb the spread of misinformation on the platform, former Facebook executive Dipayan Ghosh argues (a toy sketch of the difference follows this list). Ghosh also reminds us that most content that ends up categorized as “fake news” (a flawed term we seem to be stuck with) is created to make money, not for political reasons.
  • And another reminder not to trust screenshots of anything in a browser unless you trust the source. (We’ve dealt with this at length in the past, but it bears repeating.)
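Ghosh’s argument is easy to illustrate with a toy sketch in Python. Everything below is invented for illustration (the posts, the numbers, and the single engagement signal; real ranking systems weigh many more): the point is simply that an engagement-ranked feed rewards whatever provokes the most reactions, while a chronological one does not.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement: int  # e.g. likes + shares + comments; numbers invented here

# A tiny, invented feed around the day of the crash.
feed = [
    Post("local_news", "Crash update: one occupant of the truck confirmed dead.",
         datetime(2018, 1, 31, 13, 5), engagement=120),
    Post("hoax_site", "EXPOSED: train crash was deep state sabotage!!",
         datetime(2018, 1, 31, 12, 30), engagement=9800),
    Post("transit_authority", "Crossing signals at the site are under investigation.",
         datetime(2018, 1, 31, 13, 40), engagement=45),
]

# Engagement ranking: the outrage-bait hoax floats to the top because it
# provokes the most reactions, regardless of accuracy or recency.
ranked = sorted(feed, key=lambda p: p.engagement, reverse=True)

# Chronological ordering: newest first; virality earns no extra placement.
chronological = sorted(feed, key=lambda p: p.posted_at, reverse=True)

print([p.author for p in ranked])        # ['hoax_site', 'local_news', 'transit_authority']
print([p.author for p in chronological]) # ['transit_authority', 'local_news', 'hoax_site']
```

In the toy example the hoax leads the engagement-ranked feed despite being the oldest and least accurate item; the chronological feed simply shows the newest posts first, which is the behaviour Ghosh argues would blunt misinformation’s edge.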
