How one Ontario city is blazing the trail for public sector AI use

A closeup of a pillar bearing the City of London crest, part of a footbridge over the Thames River.

Artificial intelligence is spreading beyond private industry and into the public sector, with one Ontario city making a name for itself with a unique application of AI: fighting homelessness.

Several experts say the City of London’s approach has been responsible so far, but argue that the unregulated use of AI in the public sector creates an urgent need for standards and guardrails.

“Any use of AI that can make vulnerable people’s lives better or more comfortable is great,” said Marcel O’Gorman, professor and university research chair at the University of Waterloo and director of the Critical Media Lab.

“As long as it’s being done in a way that’s not putting them at some current or future risk, or exposing them to some kind of heavy-handed policing.”

The Office of the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) say it’s difficult to determine how widespread the use of AI is within municipalities as they are not aware of any statutory reporting requirements.

“Without (appropriate guardrails), AI technologies risk crossing the lines beyond what Ontarians consider legally, socially and ethically acceptable,” the two bodies said in a joint email to Global News.

Using AI to predict chronic homelessness

The City of London’s Chronic Homelessness Artificial Intelligence, or CHAI, is meant to predict the likelihood of someone becoming chronically homeless in the next six months. For this purpose, chronic homelessness is defined as spending 180 days or more per year in a public shelter.

“From what I’ve read, there may be some initiatives like this happening in the U.S., but I haven’t read any others like this happening in Canada for sure,” said O’Gorman.

He added that municipalities using AI are typically doing so to track energy consumption and delivery. Other public sector examples include traffic management in New Westminster, B.C., and a variety of uses in Edmonton, Alta., from building safety codes and inspections to fire rescue, according to GovLaunch, which bills itself as an international wiki for “local government innovation.”

However, London’s CHAI tool could easily expand to other municipalities.

“What’s interesting is that the developers are making this an open-source system so that anyone can use the same system. It’s not proprietary. They’re not trying to sell it.”

CHAI went live in August 2020 at a time when the municipality was prioritizing its COVID-19 response. It’s been working in the background since that time, according to Kevin Dickens, deputy city manager of Social and Health Development with the City of London.

“(From) August 2020 to May of 2023, we’ve seen the number of individuals experiencing homelessness double in our community, and we’re starting to see a large population that is experiencing chronic homelessness and unsheltered homelessness that simply did not exist when this tool was conceived.”

Dickens says those defined as chronically homeless make up about four per cent of shelter users, but they use roughly a quarter of shelter resources.

Mat Daley, director of Information Technology and Services with the City of London, said CHAI can be used to predict heavy shelter use, which gives staff the opportunity to enhance “resource allocation and operations.” 

The City of London has invested roughly $57,000 in CHAI so far, with ongoing monthly costs of roughly $1,100, said Daley.

The AI tool uses data from the Homelessness Individuals and Families Information System, a “comprehensive data collection and case management system” that is a federal standard, Daley explained.

“HIFIS would receive the data from the 20 organizations across the City of London who are supporting homelessness,” he said.

Currently, CHAI is using “anonymized information from HIFIS and it is running the machine learning models to provide that intelligence to caseworkers in homelessness,” said Daley.
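
CHAI’s code is published openly, so anyone can inspect exactly how it works. For readers curious about the general shape of such a pipeline, below is a minimal, hypothetical sketch in Python: a classifier trained on anonymized, shelter-stay-style features to estimate the probability that a client crosses the 180-day chronic threshold. The field names, data and model choice are invented for illustration and are not drawn from HIFIS or the actual CHAI codebase.

```python
# Hypothetical sketch of a chronic-homelessness risk model, in the spirit
# of the pipeline described in the article. All feature names and data are
# synthetic; they are NOT the real HIFIS schema or the CHAI model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # synthetic, anonymized client records

# Illustrative per-client features: shelter nights in the last 90 days,
# distinct shelter episodes, and age.
X = np.column_stack([
    rng.integers(0, 90, n),
    rng.integers(1, 15, n),
    rng.integers(18, 80, n),
])

# Synthetic label: did the client go on to spend 180 or more nights per
# year in a public shelter (the article's definition of chronic
# homelessness)?
y = (X[:, 0] * 4 + rng.normal(0, 30, n) >= 180).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The tool surfaces a probability for caseworkers; it makes no decisions.
risk = model.predict_proba(X_test)[:, 1]
print(f"mean predicted chronic-homelessness risk: {risk.mean():.2f}")
```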

O’Gorman has described the development of CHAI as “respectable,” noting the developers chose to follow the guidelines of the General Data Protection Regulation (GDPR), established by the European Union.

“The guidelines help ensure that the data is used fairly and equitably, and it’s based on a model of consent so that a person has to give consent to enter into this project. And they are able to leave at any time.”

O’Gorman said the developers also followed the principles of “explainable AI,” which means that they are able to track how the AI came to its conclusions.

“It will tell you like, ‘This is how I figured out why this person is susceptible to chronic homelessness.’ And that’s good. I mean, it allows you to look at those decisions and say, ‘Is there any bias happening here? Can we trust this?’”
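
The article doesn’t specify which explainability technique CHAI uses, but one simple way a model can show its work is through additive per-feature contributions. Continuing the hypothetical sketch above, a logistic regression makes this transparent: each feature’s coefficient multiplied by its value adds to the prediction’s log-odds, so an auditor can see what drove a given score.

```python
# Continuing the hypothetical sketch above: in a logistic regression, each
# feature's coefficient times its value contributes additively to the
# log-odds of a single prediction, making the score easy to audit for bias.
feature_names = ["shelter_nights_90d", "shelter_episodes", "age"]
client = X_test[0]
contributions = model.coef_[0] * client

# Print the largest contributions first, the way an explanation panel might.
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda pair: -abs(pair[1])):
    print(f"{name:>18}: {contrib:+7.2f} to log-odds")
print(f"{'intercept':>18}: {model.intercept_[0]:+7.2f}")
```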

Furthermore, CHAI doesn’t make decisions; it provides information that caseworkers (who “have a better grasp of the full human context of homelessness”) can use to make decisions.

A flow chart showing how data from HIFIS is used by the AI. Via CHAI pre-print, GitHub, Sept. 2020

As mentioned, it’s also available through an open-source licence, meaning the city is not trying to sell the tool.

“Anybody can use it, build on it and improve it. This is truly a ‘we’re all in it together’ type of undertaking,” Daley told Global News on May 23.

“Next week, I’m meeting with an individual from London, England, who’s interested in what we’re doing with the CHAI model and the potential for it to be implemented in England.”

The project was governed by a homelessness sector-led committee and underwent bias and privacy checks, with all of the results reviewed by privacy experts in the city clerk’s office as well as a third-party data science expert, Daley added.

“And as part of a review for an article by Reuters, two unaffiliated computer science experts and a privacy lawyer found that ‘the program seems to take the necessary steps to protect users’ personal information.’”

AI is only as good as the data it’s based on

However, even though several experts believe London’s approach to developing CHAI was responsible, O’Gorman notes that ultimately, the model is still one of surveillance.

“It doesn’t necessarily mean it’s all bad or evil, but we have to call a spade a spade.”

He added that often the public perception is that tech innovations impact privileged populations first, but technologies associated with surveillance tend to impact “those who are already in vulnerable situations and subject to policing and surveillance.”

Jacqueline Thompson, executive director of Life Spin, a charity supporting low-income families in London, believes CHAI reflects an existing bias in London’s homelessness response.

“That bias excludes our aging population. It excludes women with children. It excludes new immigrant families. It excludes Indigenous families living off reserve in the city. It also discriminates against folks who have mental health challenges.”

Thompson said that because the data for CHAI comes from HIFIS, it doesn’t include information from those using the so-called “private shelter system,” for example, people couch-surfing or living with several families in one small home.

“There’s forgotten thousands that do not use and will not use public shelter spaces or take their children to live on the street,” she explained.

“I had a woman show me a picture on her phone of the space where her children sleep, and it was a living room space with sleeping bags lining the walls. It breaks your heart. She’s like, ‘This is where I live. I need a home.’ And they don’t count in the London system as it stands because they’re only drawing data from places that serve public shelters.”

A letter from Life Spin received by the Community and Protective Services Committee in May stated that the charity has seen “a marked increase in families sharing the rentals of homes and apartments to keep their children under a roof.”

“We have seen a similar increase in numerous family members sharing small apartments, such as a family of six, living in a one-bedroom apartment,” the letter reads.

“Just because they are not sleeping in the doorway of your workplace when you arrive for work does not mean that they should be discriminated against.”

O’Gorman said that bias resulting from missing data is a known issue for artificial intelligence.

“Even if you look at the reports currently about what the primary demographic is for homelessness, I believe it was a white man, approximately 52 years of age, single, no children, jobless. And that then becomes your profile for risk,” he explained.

“You have to ask people, what was that profile based on and what is the quality of the data that went into arriving at that profile? Who’s being left out? Who’s missing?”

A graph showing different feature explanations and their associated contribution to the probability of chronic homelessness. Via CHAI pre-print, GitHub, Sept. 2020

Daley agreed that CHAI doesn’t include everyone experiencing homelessness, but said the tool isn’t meant to address all homelessness.

“The purpose of that model was to identify individuals at a higher risk of homelessness as measured in the shelter system.”

Dickens stressed that CHAI is simply one tool and not “the” tool in addressing homelessness in London and was critical of suggestions that the city is ignoring segments of the homeless population.

“What is our urgent crisis right now? Our urgent crisis is that there’s 38 active encampments in London and people are at significant risk of unnecessary death… What’s also a crisis is we have a severe shortage of housing and we have next to zero high-supportive housing,” he said.

“Are we also focusing on people that currently have a roof over their heads, are precariously housed, and might be in dangerous or risky situations? Absolutely. On a daily basis. And we’re not taking money away from those programs, those benefits, those supports to address the other. We’re trying to do both.”

“Responsible” AI

While expert accounts point to a responsible approach from the City of London, O’Gorman suggested that it’s irresponsible to assume that will always be the case.

“What could be done with this system if it was put into the wrong hands or into a different government context, different bureaucratic context, that’s more driven by policing or by cutting funding to social programs? It might make use of the data and the AI that is being used to process that data and analyze the data.”

Last June, the federal government tabled Bill C-27, or the Digital Charter Implementation Act, 2022. Innovation, Science and Economic Development Canada (ISED) says the legislation aims to “create new rules for the responsible development and deployment of artificial intelligence (AI) systems.” However, the bill is aimed at the private sector.

“There is no direct requirement under the Bill’s proposed Artificial Intelligence and Data Act (AIDA) for municipalities to report their use of AI systems to the federal government,” an ISED spokesperson explained.

Additionally, the earliest the legislation could come into effect is 2025.

Provincially, a spokesperson for the Ministry of Public and Business Service Delivery says, “We continue to have conversations with our partners, including many municipalities, on how to best take advantage of emerging technologies like AI.”

When asked if there were any current reporting requirements for municipalities on their use of AI, the ministry said that the province is in the process of developing its Trustworthy AI Framework.

The Office of the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) recently released a joint statement urging the government to take proactive measures to address AI in the public sector.

In an email to Global News, the human rights commission and privacy commissioner said AI technologies can be used for good – fast-tracking the delivery of government services or solving major public health issues, for example – but guardrails are essential.

AI technologies “often rely on personal information or de-identified data and it is imperative that this information be lawfully collected and properly protected.” It is also important that Ontarians are protected from “unjustifiable or unnecessary surveillance,” they add.

The two also pointed to the potential for biases in the technology and lack of accountability or transparency as additional risks.

Even beyond the public sector, the IPC and OHRC say Ontarians impacted by AI technologies “should be able to challenge both inputs that are collected without justification as well as outputs that they believe to be unfair or discriminatory.” As well, any use of AI should be disclosed to the public.

Authorities worldwide are racing to rein in artificial intelligence, including in the European Union, where groundbreaking legislation passed a key hurdle this week with lawmakers agreeing to changes in draft rules proposed by the European Commission. However, it could still be years before any rules take effect.

In the roughly six months since the launch of ChatGPT, the popularity of and interest in AI have surged, bringing with them a growing chorus of concern. The World Health Organization warned of the risks of misuse of AI in health care while the head of the United Nations backed calls for the creation of an international AI watchdog body.

Outside of artificial intelligence, there is also growing interest in increased regulation of the tech sector at large.

In Canada, a senate committee is studying a bill that would make Meta and Google pay for Canadian journalism that helps the companies generate revenue. Prime Minister Justin Trudeau says the bill is meant to help prevent those companies from weakening Canada’s democracy by threatening its domestic media industry.

In 2021, a Facebook whistleblower raised concerns about the platform’s impact on children and politics.

O’Gorman added that external regulation is necessary because self-regulation simply doesn’t work.

“(Big technology companies) can’t be trusted to do that, we’ve seen over and over again, in part because they don’t really fully understand the implications of the technologies they’re developing.”

However, external bodies tend to move more slowly than technology can develop, making regulation difficult.

The focus on technological fixes over increasing investment in human supports also concerns O’Gorman, who views the development of something like CHAI as an example of “really backward priorities in our society and economy” with AI developers making “much better salaries than the people on the ground who are engaged in the actual care.”

“There’s something sexy about finding a techno-fix to homelessness,” said O’Gorman.

“But we can’t let that allow us to lose sight that this real problem is about human beings on the ground trying to survive and the people who are trying to care for them and help them survive.”

– with a file from The Associated Press’ Kelvin Chan, The Canadian Press’ Mickey Djuric, and Reuters’ Michelle Nichols 
