On Wednesday, U.S. congressional investigators will grill a Facebook executive about the murky role that dark ads on the platform played in the country’s 2016 election.
The few things that are known are unsettling: a Russian troll farm bought US$100,000 in ads on the platform during the campaign, the ads often seemed aimed more at being divisive than at backing a particular cause, and the U.S. isn’t the only target. German voters were hammered with deceptive social media messages ahead of elections there in September.
Canada’s next general election is less than two years away. Will Canadian voters be targeted on social media, and with what messages? What will those behind the ads be trying to accomplish? We have very few ways of finding out.
There is no legal requirement for parties, or anyone else, to disclose the contents of election ads, explains York University political science professor Bob MacDermid.
“I’ve been interested in finding that out, and you end up with this crude statement of expenses that doesn’t tell you a thing.”
Before the social media era, though, the concept of a secret ad didn’t make much sense — a party ran an ad in a newspaper, or on TV, and everyone could see it.
On Facebook, however, political ads can be narrowly targeted at very specific groups, tailored to the recipient’s race, age, location or even mood. And because they’re visible only to the recipient, they’re hard to study. Not all are necessarily associated with a political party.
“If you’re buying TV ads with a very negative message, you’d rather the people you thought were going to be annoyed by them didn’t see them, and that’s what Facebook allows you to do,” London, England-based developer Sam Jeffers says.
We’re applying 20th-century rules to the problems of 21st-century elections, he says.
“We’re legislating, as we always do, for past threats and problems. We need to do more work to imagine what might come next. We need to protect elections from developing technology that changes once or twice a cycle.”
His solution: a browser extension called Who Targets Me, launched for the British election campaign earlier this year. Users entered some basic demographic information and let it report the political ads they were seeing on Facebook. As a result, the wider society could see who was being targeted with what messages, and draw their own conclusions as to why. Around 12,000 people in the U.K. ended up participating.
“Our regulations are kind of optimized for door-to-door campaigning, printing large numbers of leaflets,” he says. “They’re not designed around large volumes of very cheap, highly targeted digital advertising.”
“There is a huge volume of this advertising taking place, and it’s hard to spot these things in real time.”
Earlier this month, Guardian Australia and the news non-profit ProPublica launched a similar plug-in for Australia, where voters are taking part in a non-binding referendum on same-sex marriage.
By letting the plug-in crowdsource what individual Australians see on Facebook, the project gives readers a much more complete picture of social media advertising in the campaign than any one user could assemble alone.
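Neither project has published its full technical details here, but the basic mechanics are simple to sketch. The TypeScript below is a rough illustration of how such an extension could watch a Facebook feed for sponsored posts and report them to a collection server; the DOM selectors, the endpoint URL and the data fields are assumptions made for illustration, not the actual code of Who Targets Me or the Guardian/ProPublica plug-in.

```typescript
// Illustrative sketch only: a content script in the spirit of Who Targets Me.
// The selectors, the collection endpoint and the payload fields are assumptions,
// not the extension's real implementation.

interface ObservedAd {
  advertiser: string; // name of the page that paid for the post
  text: string;       // visible ad copy
  seenAt: string;     // ISO timestamp when the user scrolled past it
}

// Hypothetical endpoint run by the project; a real deployment would also send
// the demographic profile the user entered at sign-up, with their consent.
const COLLECTION_URL = "https://example.org/api/observed-ads";

function extractAds(): ObservedAd[] {
  // Facebook marks paid posts as "Sponsored"; finding them reliably is the
  // hard part in practice, since the markup changes often.
  const ads: ObservedAd[] = [];
  document.querySelectorAll<HTMLElement>("div[role='article']").forEach(post => {
    if (!post.innerText.includes("Sponsored")) return;
    const advertiser = post.querySelector("a")?.textContent ?? "unknown";
    ads.push({
      advertiser,
      text: post.innerText.slice(0, 500),
      seenAt: new Date().toISOString(),
    });
  });
  return ads;
}

// Report whatever sponsored posts are currently visible in the feed.
async function report(): Promise<void> {
  const ads = extractAds();
  if (ads.length === 0) return;
  await fetch(COLLECTION_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(ads),
  });
}

// Re-scan periodically as the user scrolls.
setInterval(() => { report().catch(console.error); }, 15_000);
```

The point of the design is the aggregation, not the scraping: each participant contributes only the ads shown to them, and the collection server combines thousands of those partial views into a picture no single user ever sees.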
Last week, Facebook announced that users in Canada would soon be able to see what ads a page is buying, if they went to that page. A user would still need to know what page to go to, however.
As well, Facebook said it will require political advertisers to disclose some information about who paid for an ad.
The changes would give an individual some information about why they are being targeted and by whom, but they do less to shed light on the role dark ads play in a campaign as a whole.
Facebook says it plans to use machine learning tools to find political advertisers who don’t identify themselves; some observers have questioned whether that’s feasible.
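Facebook has not said how that detection would work. As a rough illustration of why the skeptics are skeptical, the toy TypeScript below scores ad text against a list of electoral keywords; the term list and threshold are invented for the example. Divisive “issue” ads that avoid obvious electoral vocabulary are exactly the kind a simple filter like this misses.

```typescript
// Toy illustration of automated political-ad detection; not Facebook's method.
// A real system would rely on trained models, but the underlying difficulty is
// the same: "political" has to be inferred from the ad content itself.

const POLITICAL_TERMS = [
  "vote", "election", "ballot", "candidate", "party",
  "immigration", "taxes", "rights", "referendum",
];

// Fraction of words in the ad that match the keyword list.
function politicalScore(adText: string): number {
  const words = adText.toLowerCase().split(/\W+/).filter(w => w.length > 0);
  const hits = words.filter(w => POLITICAL_TERMS.includes(w)).length;
  return hits / Math.max(words.length, 1);
}

function looksPolitical(adText: string): boolean {
  return politicalScore(adText) > 0.05;
}

console.log(looksPolitical("Vote for lower taxes this election"));    // true
console.log(looksPolitical("Texans, isn't it time to stand alone?")); // false: slips past the filter
```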
Facebook also promised to create a searchable archive of political ads for the United States.
Russian operatives have also set up political Facebook pages that drew hundreds of thousands of followers, such as the now-defunct Heart of Texas Facebook page, which argued that Texas should separate from the United States.
In June, the federal Communications Security Establishment warned that “… adversaries could use social media to spread lies and propaganda to a mass audience at a low cost (and) masquerade as legitimate information providers, blurring the line between what is real and what is disinformation.”
Earlier in October, the Russian Embassy called a Canadian law that targets foreign citizens deemed guilty of human rights abuses a “deplorably confrontational act” which “will be met with resolve and reciprocal countermeasures.”
Kirill Kalinin, the press secretary at the Russian Embassy, would not agree to an interview or expand on the language of the press release. In particular, he would not explain what kind of “reciprocal countermeasures” were being referred to.
“Elections are becoming these manipulatory free-for-alls, where voters are subject to all sorts of lies and narrowcasting and so on, and potentially take that information into making a decision about who to vote for, which is really a serious threat to democratic discourse and discussion,” MacDermid says.
“It’s become much more hidden, much more subterranean, much more an attempt to influence people without perhaps their being aware of it, and influence their voting decisions.”