That was the finding from Privacy Commissioner Daniel Therrien’s new report out on Thursday, in which he cautioned Canadians against the growing threat of “surveillance capitalism.”
“While we have seen state surveillance modulated to some extent, the threat of surveillance capitalism has taken centre stage,” Therrien wrote in his new report.
“Personal data has emerged as a dominant and valuable asset and no one has leveraged it better than the tech giants behind our web searches and social media accounts.”
So what is surveillance capitalism, and what can you do about it? Global News spoke to some experts to break it down.
What is surveillance capitalism?
Surveillance capitalism is a term that describes when companies gather information about what you do in your daily life — and then package that into a product that can be sold, according to Dr. Taylor Owen, director of the Centre for Media, Technology and Democracy at McGill University.
“It’s a new model of economic production, essentially, that takes the data as the extraction and creates a product out of that, which is our attention,” Owen said.
Companies can then sell that product — the data that details the best way to capture your attention — to the people who will pay for it, which is usually advertisers.
Surveillance capitalism, then, is “the ability to use the data about a user,” Owen said.
“So if I’m a user of a platform, (the platforms will) use the data that I create to then sell the product — ads, generally — that are designed to change my behaviour,” he said.
The term “surveillance capitalism” was first coined by Shoshana Zuboff, Owen said. Zuboff is an academic and researcher who wrote the book The Age of Surveillance Capitalism.
In an interview with The New York Times, Zuboff issued a stark warning to those who think their time spent scrolling on social media is entirely harmless.
“This is a massive surveillance empire worth hundreds and hundreds of billions of dollars,” she said.
“But we call it an app.”
Why is surveillance capitalism a threat?
In his new report, Canada’s privacy watchdog said that digital technologies like artificial intelligence, which rely on gathering and analyzing user data, are at the “heart of the fourth industrial revolution” and are “key to our socio-economic development.”
“However, they pose major risks to rights and values,” Therrien wrote.
“To draw value from data, the law should accommodate new, unforeseen, but responsible uses of information for the public good. But, due to the frequently demonstrated violations of human rights, this additional flexibility should come within a rights-based framework.”
“I think everybody’s familiar with having an emotional response to the content they’re seeing, being angry about it or hating it, or truly loving that piece of content,” Owen said.
“And I think whenever we have that emotional response to using these feeds of the content, part of that is because we are being presented with content that that company knows will evoke that emotional response.”
So when Russia used these tools ahead of the 2016 election, spending time building up pages that mimicked the Black Lives Matter movement and that it believed would evoke a response from the African-American community, it worked: the operation "built up an audience," Owen said.
“And then two days before the election, they started posting content to that feed designed to suppress the African-American vote,” Owen explained.
“And so what effect did that have? We don’t know. Right? Like, did that make some people who might have voted for Hillary Clinton just not show up? We don’t know. But the power to do that was there and in and of itself is something we should be concerned about.”
On the more individual level, documents from Facebook itself have made it clear that the algorithms can have an impact on mental health for many users.
As an example, the documents showed that Facebook's "machine-learning algorithms" make a "significant portion of young women" feel "demonstrably worse from being on social media," said Christopher Parsons, a senior research associate with the Citizen Lab at the University of Toronto.
Facebook has previously pushed back on allegations that its platform is harmful.
“We continue to make significant improvements to tackle the spread of misinformation and harmful content,” said Facebook spokesperson Lena Pietsch in a statement published shortly after a Facebook whistleblower spoke out publicly.
“To suggest we encourage bad content and do nothing is just not true.”
Personal data is not only being collected and monetized, but it’s being shared with “God knows who,” added Parsons. There’s no telling how those individuals will interpret — or misinterpret — the data they gather.
“The understandings coming out of that data are often biased or incorrect, or they’re just fit for sort of a normal population, which obviously means it isn’t an inherently equitable analysis,” he said.
On the flip side, Owen said, it’s important not to oversell the power that these platforms hold when it comes to influencing user behaviour. That, after all, is their entire business model — and is very good PR for them, he said.
“Perhaps critiques of that (business) model are actually playing into a pretense that the companies are selling, which is that they are all-powerful and that they can make us, anybody, do anything, any time,” he explained.
“That, in many ways, plays into … their very business model, because that’s what they’re selling. That’s the product they’re selling.”
What should be done about surveillance capitalism?
The issue of surveillance capitalism has become a problem that affects everyone, Owen said.
“We’ve allowed companies to behave in a way that I think has — in addition to all their benefits — some social costs and economic costs,” he said.
“And that’s precisely when we expect governments to do something to limit these negative externalities.”
Owen said there are a number of things governments can do. They can limit the “type of data that can be collected,” and they could also “limit the types of uses of that data.”
“And of course, you can limit the companies themselves,” he said.
Parsons said governments will need to tread carefully in terms of how they do this, though, because companies are “chomping at the bit” to get regulated.
“They want to build systems that are so difficult to adhere to that they choke out any new competitors,” Parsons explained.
The key, Parsons said, is to break these companies up.
“The actual solution is to reinvigorate monopoly legislation, and to force these companies to break down into their smaller bits so that all of a sudden you might still have Facebook as a multinational, but you have Facebook Brazil, Facebook Canada,” he explained.
These would need to be more than just branch offices, he added; each country should host the actual infrastructure of the platform.
“So you suddenly then have to start hiring content moderators that can speak the local languages, understand the local dialects, and aren’t able to just massively purchase all their competitors and create these supersites,” Parsons said.
But until that happens, there are some things individuals can do to protect themselves, too. Owen said a big part of the puzzle is “being more aware about how your data could be used and abused.”
“Thinking a little bit about the tradeoff of convenience versus violation of privacy: What are we really getting out of putting that smart speaker in our house? And is that worth exchanging it for data about your life in real-time inside your home, for example?” Owen said.
“Now it might be that convenience might be important, but we have to start thinking about those as a tradeoff.”
Many people will still want to make that tradeoff, Owen said. But that’s where government comes in.
“We should be demanding governments to make that tradeoff better for citizens by limiting the data that can be collected, even if we do opt in to using these tools.”
— with files from Reuters