When Jason Nickerson’s daughter became old enough for social media, he did something radical: he bought a landline.
That way, when his 12-year-old daughter wanted to make plans with friends, she could pick up the phone and call — instead of messaging them through a social media profile, where she’d be exposed to all the realities of online life, from fun conversations with friends to the risk of developing self-esteem issues.
“Our house just does not have access to social media, and that’s a very deliberate parenting choice that my wife and I have made,” Nickerson said.
But Nickerson says the fact that parents like him must lay down their own rules for online engagement lays bare a problem with the ever-growing online world: a lack of action from the Canadian government in regulating it.
“We haven’t really done anything yet,” said Natasha Tusikov, an assistant professor at York University and author of Chokepoints: Global Private Regulation on the Internet.
Canada has yet to pass substantive legislation that reins in the powerful tech giants behind the world of social media. The few proposals the government has brought forward fail to tackle what Tusikov says is the heart of the problem: the business model.
Concern about the damaging effect social media can have — particularly on young people — is nothing new. But in the nearly two decades since the introduction of MySpace and the rise of Twitter, Facebook and Instagram, there are indications it’s getting worse. A recent deep dive into Facebook’s operations by the Wall Street Journal revealed the company is aware of its platforms’ negative influences on the mental health of users — a sizable percentage of them young people.
Despite the negative effects coming into clearer focus, the entrenchment of social media in the day-to-day lives of Canadians is nearly inescapable. Global News is unravelling the many facets of influence these platforms have — both offline and on — and what the government is going to do about them.
Social media companies make their money by keeping users’ eyes on screens.
“The business model of these social media companies … is to maximize user engagement, whether the content is excellent, wholesome content or whether it’s terrible, disgusting, hateful content,” Tusikov said.
“They make their money by maximizing user engagement, which generates advertising revenue.”
Tusikov’s comments echo testimony given by Facebook whistleblower Frances Haugen to the U.S. Senate in October 2021.
During the Senate hearings, the former Facebook data scientist accused the company of being aware of apparent harm to some teens from Instagram, and of being dishonest in its public fight against hate and misinformation.
“Facebook’s products harm children, stoke division and weaken our democracy,” Haugen said.
“The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.”
In a statement sent to Global News on Monday, Meta — which owns Facebook — pushed back on Haugen’s claims.
“We want our platforms to be a supportive and safe place for young people especially,” said Lisa Laventure, the head of communications for Meta in Canada.
“For years, Meta has done extensive work in bullying, suicide and self-injury, and eating disorder prevention and we will continue to look for opportunities to consult with experts and build new features and resources that help people who are struggling with negative social comparison or body image issues.”
Still, Haugen’s testimony worried some parents, including Sulemaan Ahmed, who has three children aged 12 to 18.
“I’m thankful that I didn’t have social media as a teenager, because I think the pressure … on kids today is (very) different than it was back then,” Ahmed said.
He pointed to Haugen’s testimony as a key example.
“She revealed internal Facebook documents that showed Instagram has a negative impact on young women,” Ahmed said.
“I think parents have a responsibility, and government does, to ensure that when (children) are young and impressionable, that they don’t see certain things that could be traumatizing to them, or radicalize them, or hurt them from a mental health perspective.”
Meta has pushed back on these characterizations.
The company told Global News it has “absolutely no commercial incentive, no moral incentive, no company-wide incentive” to do “anything other than” try to give people a positive experience on its platforms.
“Instagram’s research shows … that on 11 of 12 well-being issues — including serious areas like loneliness, anxiety, sadness and eating issues — more teenage girls who said they struggled with those difficult issues, also said that Instagram made them either better or had no impact, rather than making them worse,” Meta’s spokesperson wrote.
What has the government done?
Canada has dipped its toes in the water of online regulation, but when it comes to the world stage, we’re “laggards,” according to Tusikov.
“We’re behind Australia, we’re behind Germany, we’re behind the United Kingdom,” she said.
Australia, for example, has established what it calls an eSafety Commissioner, the world’s first government agency “solely committed to keeping citizens safer online,” according to its website. Germany, meanwhile, enacted what the New York Times called “one of the world’s toughest laws against online hate speech” in 2017.
Canada, meanwhile, has been studying the issue of online harm for years. Parliamentary committees have been examining social media’s impact on young people from various angles since at least 2014.
Despite all these studies, it wasn’t until 2020 that Canada finally put social media regulation into legislation — though none of the bills have made it through Parliament.
In the fall of 2020, the government made its first real foray into regulating the internet.
Bill C-10, legislation aimed at modernizing the Broadcasting Act, was supposed to help Canadian content regulations reflect today’s media consumption trends. But shortly after introducing C-10, the government brought forward another proposed law — one that took aim at online hate.
That legislation, known as Bill C-36, would give new recourse to people worried that another person would commit an offence motivated by “bias, prejudice or hate.” That hate can be based on a number of factors — including race, sex or gender identity — and the aggrieved party would be able to take the issue to a provincial court, provided the attorney general consents.
The bill would also amend the Canadian Human Rights Act to make it a “discriminatory practice” to communicate hate speech through the internet where it is “likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.”
A third significant proposal for regulation of the digital space came in Bill C-11, which was introduced in December of 2020. The bill, if passed, would have implemented a new legislative regime governing the collection, use and disclosure of personal information for commercial activity in Canada.
Basically, it would set rules around what data digital platforms can collect and how they can use it.
When Prime Minister Justin Trudeau asked the Governor General to dissolve Parliament in August 2021, however, all of these bills died before they could become law, though the government has promised to revive them soon.
Meta says it “welcome(s) and support(s) regulation” — provided it preserves “the benefits of the digital economy while addressing potential harms.”
“It’s been 25 years since the rules for the Internet have been updated and it’s time for industry standards to be introduced so private companies aren’t making these decisions on their own,” a Meta spokesperson wrote to Global News in a statement on Monday.
Experts are unimpressed — so far
In late July, the government dropped a hint about what future legislation and regulations aimed at tackling online harms would look like.
The government published a “discussion guide” and a “technical paper” on its proposals for a future online anti-harm regime. The documents included a wide-ranging plan detailing which entities would be subject to the new rules, what types of harmful content would be regulated, and the rules for those regulated entities and new regulatory bodies.
“I found that proposal very problematic,” said Cara Zwibel, director of the fundamental freedoms project at the Canadian Civil Liberties Association.
“The government has taken a lot of the bad ideas from other countries and transported them here.”
Zwibel said if legislation is introduced based on this technical paper, it would be “really disappointing.”
“A lot of groups spent time to let the government know where they saw problems and if none of that is considered kind of relevant, it really, really raises a question of why you would ever have a consultation process at all,” she said.
Part of the problem with the proposal, according to Zwibel, is that it focuses too much on content moderation, as opposed to the business models of the platforms themselves.
“The content moderation piece is, to me, an issue that you get to further down the stream,” she said.
“It’s very hard to target hate speech without incidentally grabbing a bunch of other things that you don’t want to scoop up.”
Tusikov was equally critical of the government’s previous bids to regulate and legislate the online world.
“I think the last attempt at a bill was a mishmash. It was a poorly constructed, rushed bill that confused or collapsed too many different types of illegal content together,” she said.
That bill, Tusikov said, dealt with content that sexually exploits children, non-consensual sharing of sexual images, and terrorism — all in the same legislation.
“These are really broad, different issues,” Tusikov said.
“What I think the government needs to do now is to take a look at different types of illegal content or different types of harmful content and produce a clearly and cogently constructed bill that makes the argument of how and why this will be regulated.”
For both Tusikov and Zwibel, there’s one clear area the federal government needs to tackle going forward: the business model.
“There’s an incentive for companies to create content that goes viral, whether that’s medical misinformation or hate speech or cat photos,” Tusikov said.
“And until we address this business model, which is fuelled by advertising and the collection of users’ data, we’re not going to get anywhere.”
Going forward, the government says its aim is to “create an enabling environment in which all Canadians can participate in online public life,” according to a statement from David Larose, a spokesperson for the Department of Canadian Heritage.
“We have and will continue to consult Canadians, experts and key stakeholders on how best to tackle these complex issues, while upholding fundamental rights,” he said.
But opposition politicians remain skeptical that the end result will strike that balance.
“What we’d like to see going forward is that the Canadian Heritage Committee undertake a full review of the online world, (taking) the full opportunity to put forward ideas, rather than rushing into legislation that will in the end have unintended consequences on Canadians,” said Conservative heritage critic John Nater.
He wasn’t alone in his concerns.
“I’m hoping we can do this in a coherent manner so we don’t disrupt the positive elements of online communication,” said NDP MP Charlie Angus.
“But there are serious problems. I just don’t think that the Liberal government up until now has understood it.”
Still, Angus said the government does need to act.
“If these companies aren’t going to live up to the high standards to protect citizens’ rights, we have to do that as legislators,” he said.
In the meantime, parents are having to step up and establish their own rules to fill the void left by a lack of government regulation. For Nickerson, the best option for his kids is sometimes no technology at all.
“It sounds really simplistic to say kids need to go outside and play and hang out with other people,” he said.
“But I really genuinely believe that there is quite a bit of truth to that.”