
Clearview AI: When can companies use facial recognition data?

Video: Toronto police admits to using controversial facial recognition tool (Feb. 13, 2020)

Privacy experts say that Canada’s guidelines around the use of biometric data, including data collected by a controversial facial recognition system that police agencies have admitted to using, are only going to become more complex as time goes by.

On Sunday, the Ontario Provincial Police admitted to having previously used software from Clearview AI, a New York City-based facial recognition company that scrapes billions of images from public websites and social media platforms.


The software, which Clearview markets as a “new research tool used by law enforcement agencies to identify perpetrators and victims of crimes,” has been under heavy scrutiny since its use was first reported in 2019, with criticism focused in particular on facial recognition’s disproportionate misidentification of minorities compared with white people.

In Canada, police agencies in Nova Scotia, Alberta and Ontario, as well as the RCMP, have admitted that some of their members used the software.

Another case, while not specifically related to Clearview’s technology, involved a Cadillac Fairview mall in Calgary that was found to have been using facial recognition software in its cameras to identify customers and collect records on them.

The use of facial recognition software has been banned in several cities and states in the U.S., but as news comes to light about the use of Clearview AI’s software by Canadian organizations, people are beginning to question where Canada stands on the use and regulation of biometric data.


Richard Austin, a Toronto-based lawyer who specializes in privacy law, said that for now, the answers to those questions remain uncertain.

Video: Toronto police use of Clearview AI raises privacy concerns

Austin said that while there is a consensus that Clearview’s particular application of biometric data infringed on privacy because it collected information without people’s consent, there is still an ongoing debate over where and how such information could be used if gathered for different purposes.


“Imagine you’re a hospital, and you want to collect, you want to have facial recognition technology because there’s a way of detecting if people are sick before they come into the hospital,” said Austin.

“So you have a camera set up, to take a snapshot of everyone, and analyze the information to see if it discloses evidence of illness — and that’s all you do with it … but if you were to collect it, and then go on and then put it into a database and then analyze it and use it for a lot of other purposes, it becomes much more problematic.”
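The distinction Austin draws is essentially about data handling: whether a face image is analyzed for one narrow purpose and then discarded, or retained and reused. The sketch below is a minimal, hypothetical illustration of those two patterns in Python; the function names and the illness-screening model are assumptions made for the example, not a description of any real hospital system or of Clearview’s software.

```python
# Hypothetical illustration of the two data-handling patterns Austin contrasts.
# Nothing here reflects a real system; the screening model is a placeholder.

from dataclasses import dataclass


@dataclass
class Snapshot:
    """A single camera frame of a visitor (placeholder for real image data)."""
    pixels: bytes


def looks_ill(snapshot: Snapshot) -> bool:
    """Stand-in for an assumed illness-screening model."""
    return False  # a real model would score the frame; assumed for illustration


# Pattern 1: analyze and discard.
# The frame is used for one narrow purpose and never retained.
def screen_visitor(snapshot: Snapshot) -> bool:
    result = looks_ill(snapshot)
    # The snapshot goes out of scope here; nothing is written to disk or a database.
    return result


# Pattern 2: collect, store and reuse.
# The same frame is kept in a database, where it could later be matched,
# profiled or repurposed -- the step Austin calls "much more problematic."
visitor_database: list[Snapshot] = []


def screen_and_store_visitor(snapshot: Snapshot) -> bool:
    visitor_database.append(snapshot)  # retained indefinitely for other uses
    return looks_ill(snapshot)
```

In Austin’s framing, the privacy risk turns less on the camera itself than on what happens after the frame is captured: retention and secondary use are what move the hospital scenario from defensible to problematic.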

Austin said the scenarios on where and how the technology can be used are “so complicated,” and that its uses “don’t fit into a neat little box where you can make a one-off decision.”

Most of Canada is governed primarily by two federal privacy laws: the Privacy Act, which applies to the information the government can collect or store about an individual, and the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how private organizations across most provinces can collect and use your personal information. Some provinces, such as Alberta, B.C. and Quebec, have their own privacy laws in place which are considered “substantially similar” to PIPEDA.


A statement from the federal privacy commissioner confirmed that Cadillac Fairview and Clearview AI were being investigated under PIPEDA, while the RCMP’s use of Clearview was also being investigated under the Privacy Act.

Video: 3 Edmonton police officers accessed Clearview AI

Both the federal and Ontario privacy commissioners said that privacy regulators across “all provinces and territories” are now working together to develop a framework on the use of biometric technology by organizations, including law enforcement.

Privacy expert and former Ontario privacy commissioner Ann Cavoukian told Global News that one’s biometric data, if compromised, can be “hugely problematic,” and that legislation to regulate facial recognition needs to come as quickly as possible.

“Your face is the most sensitive biometric that exists. If it’s compromised or used in an offside manner, it can be totally problematic, it can cause identity theft, other things of that nature — it’s huge,” she said.


Cavoukian’s remarks come after Clearview AI confirmed last week that it had suffered a data breach in which a hacker gained unauthorized access to its client list, which includes financial institutions and law enforcement agencies worldwide.

“Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century,” read a statement from Tor Ekeland, the company’s attorney. “Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”

Cavoukian posed the question: if a number of U.S. cities and states, such as San Francisco and Texas, have already introduced facial recognition legislation, what’s to stop Canada from creating its own?

“We need movement on this. I can’t tell you how quickly we need it, but you know how governments move, so slowly, and they always lag behind technology,” she said.

“It’s nonsense to suggest it can’t be regulated … we can do this, this is the time to do it, because the harm that’s going to come from this will be staggering, and it’s just beginning.”

— With files from Global News’ Kerri Breen, Kaylen Small and Andrew Russell

