One creepy subfield of fake news and disinformation involves faked audio and video.
In December, Vice introduced us to the world of fake porn videos, where the bodies belong to one person and the faces belong to an unsuspecting somebody else. (Link is work-safe.)
Fake audio and video pose more challenges to the fabricator than text or still images, and so far there are quality issues – it would be hard to call the results totally convincing.
However, it’s important to keep an eye on, since it would be surprising if the technology didn’t get smoother over time. And we’re not as prepared for the idea that audio or video could be faked as we are for the idea that, say, an assertion on social media could be.
This week, I tried out Lyrebird, a Montreal-based site, which takes voice samples you read, chops up the audio files and repackages them, then uses them to create audio of your voice speaking whatever text you enter.
The site says it works best for North American English speakers.
I ended up reading it the 60 sentences the site dictated, trying (and failing) not to feel self-conscious.
The sentences themselves might be described as Google Translate on a bad day. They included:
The aerodynamic shape of whales is nifty.
The one you marry has got to swim rivers for you!
There’s a bunch of autograph seekers out front.
I always have to sneeze after eating grapes.
We got orders to hold you safely in the branches of this pine tree.
And the ever-important question:
What would it take to convince you to return my doormat?
The flat intonation isn’t surprising, but it’s odd that the synthesized voice doesn’t respond to punctuation marks like commas and periods. It also speaks quite a lot faster than the speech I recorded. (The site won’t let me link to my samples, so you’re spared having to listen to me say, “Then they got a hold of some dough and went goofy. The next thing the dope wants is a room,” or “Could you give me a piece of bread please? This is the only building that was left unaffected by fire.”)
Still, the point is that it works at all as a concept.
The Florida high school shooting
It’s depressing that there are a significant number of people whose immediate response to mass tragedy is to find ways to create and exploit online disinformation. The Florida high school shooting this week provided a number of examples.
Some are drearily familiar, like the unfunny 4chan meme that a comedian named Sam Hyde was the gunman. This hoax has been recycled a number of times now, and each time it takes in at least some of the unwary. BuzzFeed has an explainer video:
Fake tweet blaming Antifa for the shooting — check. (It’s bad, it’s newsworthy, Antifa must be to blame.)
Anguished tweets pleading for help in finding people who aren’t actually missing (“EVERY RT HELPS”) — and who weren’t missing in any of the other tragedies where the same pictures were used, either — check.
The killings also unleashed a new round of the bitter and unresolvable U.S. debate about gun control. Adding further gas to an online debate that has already had lots of gas poured on it — Russian bots, doing what Russian bots do with divisive memes, any divisive memes. Russian bots tracked by Hamilton68 were promoting the hashtags #gunreformnow and #falseflag.
The shooting’s aftermath did reveal something new to keep an eye out for, as if we needed that.
Miami Herald reporter Alex Harris will have found it hard enough to cover the shooting (she also had to cover the 2016 Pulse nightclub massacre) without having to deal with screenshotted forgeries of her tweets, in which she appeared to be making ghoulish requests of witnesses. Poynter explains.
In other fake news news:
- The Washington Post looks at faked videos, saying of one example that it’s “… not exactly realistic, you’ll notice. But it’s easy to see how, using a different model and a different voice, it could be more convincing.” Technology has made this kind of fake much easier and continues to do so, they point out.
- In BuzzFeed, Charlie Warzel speaks to Aviv Ovadya, one of the first to identify the 2016 fake news crisis. Fabrications will get better and better, he predicts: “The future … will arrive with a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality.” The flip side: once high-quality digital fakes become common, it will be easier for public figures to cast doubt on real video. “We were utterly screwed a year and a half ago and we’re even more screwed now,” Ovadya warns. “And depending how far you look into the future it just gets worse.” Long read, worth your time.
- On Friday, a U.S. federal grand jury in Washington indicted the Internet Research Agency, a St. Petersburg, Russia-based troll farm, and 13 Russian nationals on charges of interfering with the 2016 U.S. election.
- This study is a deep look at paid trolls in the Philippines, based on a year of research. “The problem of disinformation production goes deeper than any one caricatured hero or celebrity villain,” the authors write. “It is systemic, deeply rooted, and entwined in the cultural fabric of Philippine society. Behind the madness is an invisible machine: industrial in its scope and organization, strategic in its outlook and expertise, and exploitative in its morality and ethics.”
- Partisanship is so deeply rooted that online advertising/disinformation/propaganda does very little to change anybody’s vote, a study suggests. The New York Times explains.
- Hollywood plans a movie about the Welsh journalist Gareth Jones, who was the first to reveal the truth about the Ukrainian famine of the 1930s to Western newspaper readers. Jones’ work stands in contrast to that of Walter Duranty, the New York Times Moscow bureau chief at the time, who systematically denied that the famine was happening, even as it unfolded around him. Duranty remains a controversial figure, to put it mildly, and counts as one of the great fake news purveyors of history.