Take a Knee seems, on some level, like an all-American controversy — it involves football, the Star-Spangled Banner and the painful, unresolved history of race relations in the United States.
However, a Republican senator from Oklahoma warned this week that Russian trolls were doing their best to deepen those divisions by amplifying the controversy on social media.
“They were taking both sides of the argument this past weekend, and pushing them out from their troll farms as much as they could to try to just raise the noise level in America and to make a big issue seem like an even bigger issue,” James Lankford told a U.S. Senate committee hearing.
A member of the U.S. Senate intelligence committee, Lankford can see classified intelligence reports on Russian troll farms and their activities.
However, some of the activity was visible to the rest of us.
Hamilton68, a site that tracks hashtags promoted by Russian-linked influence networks on Twitter, found that #boycottnfl and #nfl were among the top hashtags promoted by accounts it monitors, and #nflnogozone and #totalnflblackout were among the trending tags. (U.S. President Donald Trump this week urged fans to boycott the NFL for taking a tolerant attitude to player protests.)
Alert Twitter users this week noticed that a now-disabled fake account for Boston Antifa was tweeting on the #TakeAKnee hashtag from Vladivostok, an unlikely outpost for NFL fans. (A troll apparently left geotagging enabled on his account; odd, since Twitter turns geotagging off by default.)
“The goal is heightened tensions,” Hamilton68 founder Clint Watts told the Associated Press.
“They’ll use organic American content to amplify to American audiences. They would much rather use organic American content. It hits the audience better and it’s cheaper and more effective.”
“The Russians can just sit back and say: ‘Amplify on both sides. Make people angry.’ And it works, man, God, it works.”
In fake news news:
- On Wednesday, we covered the death of Paul Horner, a prominent fake news publisher who fell victim to an apparent overdose in Arizona at the age of 38. Horner actually died on September 18, and his brother announced it on Facebook on September 22, Friday of last week. The Washington Post explains why it took several days for reports to surface in mainstream media — faking his own death is just the kind of hoax that a living Paul Horner would have tried to pull off, and no reporter wanted to be the sucker who fell for it. The Post’s due diligence included written confirmation from the local sheriff’s department, a search of public records to make sure that some other Paul Horner hadn’t died instead, and a call to Snopes — which, as late as Friday, had still not committed itself one way or the other.
- A study from Oxford University found that U.S. voters on Twitter were shown more “junk news,” a term the researchers coined to cover “misinformation, polarizing and conspiratorial content,” than real news reports from professional sources during the 2016 election. (h/t Axios and Mother Jones)
- On Thursday, Twitter told the U.S. Senate Intelligence Committee that it had shut down 201 accounts connected to the St. Petersburg, Russia-based Internet Research Agency, a troll farm that bought thousands of ads on Facebook during the U.S. election. Sen. Mark Warner called the company’s response “inadequate on almost every level,” the Washington Post reported. Democrats in the House and Senate said Twitter needed a much deeper and more aggressive investigation into Russian disinformation on its platforms.
- CNN reported that Russian-linked Facebook ads promoted a wide range of causes, from Black Lives Matter to gun-rights activism, and “ … if it appears that the targeting was particularly sophisticated, questions may be raised about how the Russians knew where to direct their ads … the apparent goal of the Russian buyers was to amplify political discord and fuel an atmosphere of incivility and chaos, though not necessarily to promote one candidate or cause over another.”
- Bots get a lot of attention, but Charlie Warzel of Buzzfeed reminds us that many live humans are willing tools of misinformation: “ … Focusing on Twitter’s bot scourge is an enticing but partial explanation for a far more difficult problem. It’s also ignorant of the very real, very human media machine bent on pushing a pro-Trump narrative and trolling its opponents at all costs, for whom bots are just one of many tools.”
- The Washington Post’s Chinese-language edition (or a site purporting to be) “has built up a loyal audience among Chinese readers eager for international coverage,” the Financial Times reported this week. The problem? Not only is it not run by the Post, the paper didn’t even know about it until the FT called to ask. The site actually paid to syndicate real Post material but mixed it with online copy from the Chinese state news agency, all attributed to the Post. The Post put its foot down, and sunnewswp.com is no longer quite such an overt forgery (here’s what it used to look like).
- Also in the Post: a long read on how it dawned on Facebook last year that Russian operators were using it as a tool of political manipulation. (Once again, the social media platform looks profoundly out of its depth in dealing with a problem it seems not to have conceptualized until recently.) Facebook’s security experts were aware of the Russian hacker team Fancy Bear as long ago as June of 2016, the Post says, but misunderstood what it was doing: “ … (They) assumed that they were planning some sort of espionage operation — not a far-reaching disinformation campaign designed to shape the outcome of the U.S. presidential race.”
- Politico reports that Russian-funded Facebook ads promoted anyone who might cut into Hillary Clinton’s support, including Democratic rival Bernie Sanders and Green presidential hopeful Jill Stein, as well as Donald Trump: “ … The ads show a complicated effort that didn’t necessarily hew to promoting Trump and bashing Clinton. Instead, they show a desire to create divisions while sometimes praising Trump, Sanders and Stein.” Politico cites its source as “a person with knowledge of the ads.”
- Not widely reported at the time, perhaps significant now: the Daily Beast looks back at Ukrainian complaints a few years ago that anti-Russian posts on Facebook were shut down when organized troll campaigns flagged them as hate speech or porn. Facebook CEO Mark Zuckerberg blew off the concerns at the time. (The story recalls one in Wired back in June that suggested that Russia was using Ukraine to try out cyberwar concepts, and that we would be wise to pay very close attention.)
- Defense One looks back at a large-scale cyberattack on Estonia in 2007. “Russia denied any involvement, but Estonia didn’t believe it … The attacks made Estonia more determined than ever to develop its digital economy and make it safe from future attacks. ‘I think every country should have a cyber war,’ says Taavi Kotka, the government’s former chief information officer.”
- The Atlantic Council’s Digital Forensic Research Lab has some post-game analysis of disinformation in Sunday’s elections in Germany. It seemed to be designed to boost the far-right AfD, which at 12.5 per cent of the vote did very well, for a fringe party. It featured an AfD campaign on the social network VK, a Russia-based equivalent of Facebook that has many users in Germany, and a social media campaign around alleged voter fraud. Its poster child was an imaginary poll worker who tweeted that she was looking forward to invalidating ballots cast for the AfD. (DFRLab traces the account’s profile picture to an actress in Pakistan.)
- The New York Times has a long read about how distorted reports of a sex crime in Twin Falls, Idaho, driven by the culture war and fuelled by anti-Muslim bloggers, sent the town into chaos.
- Facebook said this week that it would make political ads on the platform more transparent. But in ProPublica, Julia Angwin argues that it doesn’t go as far as it could. A page can buy an ad, for example, but the owner of the page and its funding could still be opaque. Also, Facebook gets to define what counts as “political advertising” on its platform and what doesn’t. WNYC’s interview with Angwin about what Facebook has and hasn’t done, and could do, is here: “The truth is we haven’t had an election where you could automate your lying … and never have anyone see or know about it.”
- War on the Rocks reminds us that there’s nothing new in the idea of state-sponsored disinformation targeting other countries’ elections; what’s new is the form it now takes, leveraging social media.
- In politics, “Dramatic lies do not always persuade, but they do tend to change the subject — and that is often enough,” writes Tim Harford in the Financial Times. “ … The very process of rebutting the falsehood ensures that it is repeated over and over again. Even someone who accepts that the lie is a lie would find it much easier to remember than the truth.”