That didn’t take long at all.
At 11:20 Wednesday morning, a train carrying Republican lawmakers through Virginia rammed a garbage truck that was stopped on the tracks. A passenger in the truck died.
Just over an hour later, the AP could offer some facts:
And only 65 minutes after that, a full-fledged conspiracy theory was launched and promoted on Twitter:
That’s quite a conclusion to come to after a 65-minute investigation, but then YourNewsWire has been one of the more cheerfully shameless of the fake news sites. Others resort to hard-to-see figleaf disclaimers or to claims they are “just raising questions,” but YourNewsWire goes straight for the bald fabrication. It’s more straightforward, on some level.
Gateway Pundit was more coy, claiming only that “rumours swirl” after the accident, and embedding tweets that claimed the crash was “deep state sabotage,” and insisting in all caps that “NOTHING IS COINCIDENCE.”
At InfoWars, Alex Jones asked whether the stalled dump truck crash was “meant to send a warning to lawmakers to block Trump’s agenda.” Jones quickly produced two YouTube videos floating conspiracy theories about the accident. When we checked, they were the #5 and #8 results for a YouTube search for “train gop” limited to the last week. At the time, they had about 140,000 views.
Another strain of argument, which blazed very brightly on Twitter Wednesday, blamed Antifa, it being an immovable article of faith in some circles that whenever a mishap involves a train, Antifa must in some way be behind it.
Now, the fever swamp is what it is, and there’s not much to be done about that — the Alex Joneses of the world will always be with us.
The practical question is how the grownups should respond. The Internet democratized the means of publication, which we’d still like to believe did more good than harm, but as we know, it also has its dark side.
The platforms — Google, YouTube, Facebook — have largely left editorial decision-making to their algorithms. Sometimes this works out; other times it doesn’t, as when Google’s Top Stories spent hours featuring a 4chan thread that accused the wrong person of being the Las Vegas gunman.
On Wednesday, the Daily Beast pointed out that conspiracy theories figured prominently in the ‘People Are Saying’ section of Facebook’s topic page on the incident. Facebook conceded that “the type of stuff we’re seeing today is a bad experience,” and promised a fix.
But the cycle has become a familiar one:
- In a breaking news situation, the platforms’ algorithms surface fake news and conspiracy theories before any human at the platform notices
- Humans point out the problem to the platform
- Humans at the platform fix the immediate problem, but not the larger reason the problem occurred, which is editorial decision-making by machine
- The platform, embarrassed, promises to do better
In fake news news:
- At Forbes, an explanation of how Microsoft’s Bing News was conned into displaying YouTube content from a fever-swamp channel called Top Stories Today (one offering, chosen from the top of the pile: Former FBI Asst. Director Thinks That Hillary Should be Shot by Firing Squad!) because, as far as anyone can tell, the algorithm decided that something named Top Stories must be perfect for its top stories slot. Bing fixed the problem after Forbes asked for comment (see the cycle described above).
- In Fast Company, Sarah Kendzior reflects on the false missile attack alarm that terrified Hawaiians on Jan. 13, and warns that a false alert could turn into a real nuclear war very easily. “If a false alert goes out, and Trump hears about it through Fox News, a Fox News imposter account, or another dishonest social media account, will he launch a retaliatory nuclear strike without further verifying the information with NORAD or consulting advisers? False nuclear strike alerts are terrifying for the population in their own right … but the greatest danger may be the combined effect of the president’s gullibility, impetuousness, and enthusiasm for war.”
- Back in November, fitness tracking app Strava released a global map of users’ workouts, depicted as glowing lines on a black background. (Undeniably, it looks cool.) This week, an Australian university student pointed out that the map revealed the locations of U.S. and Russian bases in obscure parts of places like Djibouti, Syria and Niger. U.S. lawmakers are demanding that Strava account for itself, but it’s hard to argue that special forces soldiers and CIA operatives shouldn’t have known better. “These digital footprints that echo the real-life steps of individuals underscore a greater challenge to governments and ordinary citizens alike,” Wired explains. “Each person’s connection to online services and personal devices makes it increasingly difficult to keep secrets.”
- For Zeynep Tufekci, the lesson of the Strava story is that “the privacy of data cannot be managed person-by-person through a system of individualized informed consent”: neither the user nor the company involved has any idea what the implications of gathering and releasing personal data might be.
- Arms control expert Jeffrey Lewis points out that Strava seems to have outed several locations connected to Taiwan’s secretive missile program.
- A British man sentenced Friday for driving a van into a group of Muslim worshippers last June, killing one of them, was “brainwashed” by online propaganda that included Canada’s Rebel Media, according to a Vice report of the prosecutor’s remarks. The judge agreed, saying that Darren Osborne had been “rapidly radicalised over the internet.” Osborne will serve at least 43 years.
- This week the New York Times published a long investigation into the trade in fake Twitter accounts; buying fake followers by the thousand, it turns out, is easier than earning real ones one by one. “At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors,” the paper reported. In recent days, many have disappeared. (Caught in the fallout was Chicago Sun-Times film critic Richard Roeper; the paper alleged that some of his 225,000-odd Twitter followers were not actually real. If true, this would be an entirely new form of journalism misconduct for j-school ethics classes to dissect.)
- At CJR, Matthew Ingram says that there’s nothing new about the revelations in the Times story: people have been buying and selling fake Twitter accounts for years. “One reason why Twitter has done very little about the fake account and bot problem until recently, critics say, is those accounts have boosted the size of its user base and the volume of network activity on the platform, making the company more valuable in the eyes of investors.”
- Reuters Institute researchers based at Oxford found that fake news sites in France and Italy reached far smaller audiences than their real-news equivalents overall, but that real and fake generated much more comparable levels of Facebook interaction.
- Motherboard explores the strange world of what it calls AI-assisted fake porn (link is work-safe), which involves one person’s face and another person’s body. The production values aren’t what they might be, at least for now — “It’s not going to fool anyone who looks closely” — but the videos aren’t that hard to make. It isn’t all that clear that making them is illegal in the way that revenge porn might be.
- From CNN: How Russian trolls orchestrated a demonstration and counter-demonstration in 2016 in Houston through Facebook pages for about US$200.
- Google has announced changes to its featured snippets feature, which turned out (like many online tools not monitored by live humans) to be easily manipulated. “Last year, we took deserved criticism for featured snippets that said things like ‘women are evil’ or that former U.S. President Barack Obama was planning a coup,” the company writes. In future, they say they will keep track of the reliability of sources. We’ll see.
- YouTube offers us videos based on algorithms designed to find out what we want and give it to us. On that logic, if the platform consistently offered pro-Trump, anti-Clinton videos during the 2016 U.S. election, viewers as a group should just look in the mirror. But what if the algorithms are being manipulated, the Guardian asks? A long read, worth your time.
- At the American Association of University Professors site, a first-hand account of what it was like to be the target of months of digital harassment. (It’s very calm and objective, given what a chilling and exhausting experience it must have been to undergo. You’ll just have to read it.)
- Switching Facebook feeds back to a simple chronological list might help to curb the spread of misinformation on the platform, former Facebook executive Dipayan Ghosh argues. Ghosh also reminds us that most content that ends up categorized as “fake news” (a flawed term that we seem to be stuck with) is created to make money, not for political reasons.
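The distinction Ghosh draws is easy to picture in code. A minimal sketch (with hypothetical post data and field names; the real News Feed ranking is of course far more complex) contrasting a simple chronological feed with an engagement-ranked one:

```python
# Hypothetical posts: each carries a timestamp (Unix seconds) and an
# engagement score of the kind a ranking algorithm would optimize for.
posts = [
    {"id": "a", "timestamp": 100, "engagement": 5},
    {"id": "b", "timestamp": 300, "engagement": 1},  # newest, little engagement
    {"id": "c", "timestamp": 200, "engagement": 9},  # older, but "viral"
]

def chronological_feed(posts):
    """Newest first: no judgment about what deserves attention."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def engagement_ranked_feed(posts):
    """Highest engagement first: rewards whatever spreads fastest."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["id"] for p in chronological_feed(posts)])      # ['b', 'c', 'a']
print([p["id"] for p in engagement_ranked_feed(posts)])  # ['c', 'a', 'b']
```

The point of Ghosh’s argument, on this sketch, is that the second ordering is what lets profit-driven “fake news” outrun everything else; the first removes the amplification without touching the content.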
- And another reminder not to trust screenshots of anything in a browser unless you trust the source. (We’ve dealt with this at length in the past, but it bears repeating.)