
Are bank algorithms sexist? How tech might be working against women


To be approved for a credit card or limit increase, banks often check your payment history, income and credit score. 
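As a rough illustration of how such a rules-based check strings those three factors together, consider the sketch below. The thresholds are invented for this example, not any bank's actual policy.

```python
# A minimal sketch of a traditional rules-based credit check.
# All thresholds here are hypothetical, not any bank's actual policy.

def approve_limit_increase(payment_history_ok: bool,
                           annual_income: float,
                           credit_score: int) -> bool:
    """Toy decision using the three factors named above."""
    if not payment_history_ok:      # missed payments end the review early
        return False
    if credit_score < 660:          # hypothetical minimum score
        return False
    return annual_income >= 40_000  # hypothetical income floor

print(approve_limit_increase(True, 52_000, 700))  # True
```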


But a financial scandal that rocked Apple in November shows gender may also be a factor. Reports indicated that AI technology was making these discriminatory decisions. 

Software developer David Heinemeier Hansson exposed these issues on Twitter when he looked at the stark differences in credit limits between him and his wife.

When applying for a credit limit increase for her Apple Card, Hansson’s wife Jamie was denied, despite having a higher credit score than him. 

Hansson tweeted Apple Card had a “sexist program.”

“My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time,” said Hansson. “Yet Apple’s black box algorithm thinks I deserve 20 [times] the credit limit she does.”

Even when Jamie paid off her balance in full, the Apple Card wouldn’t approve any spending until the next billing period. “Women apparently aren’t good credit risks,” Hansson tweeted.

He also expressed frustration that multiple Apple customer service representatives had no understanding of why his wife wasn’t approved. They blamed “the algorithm.”

Increasingly, industries worldwide are using artificial intelligence algorithms to help with processes from hiring to sentencing to translation software. Apple is using one to determine who gets approved for its credit programs.

But murmurs about discrimination arising from these algorithms have hit multiple industries, and have now swept into banking.

Hansson’s Twitter thread led to hundreds of responses from people who said they’ve experienced something similar, either with Apple or at other banks. Even Apple co-founder Steve Wozniak said there were inconsistencies between his and his wife’s accounts.

“Hard to get a human for a correction though. It’s big tech in 2019,” said Wozniak.

Now, the New York State Department of Financial Services has gotten involved, opening an investigation into whether Goldman Sachs, the institution behind the Apple Card, is engaged in discriminatory practices.


The investment bank addressed the controversy, stating that no credit decisions are based on factors like gender and that it will review its credit process with a third party.

After the tweets, Jamie Hansson’s credit limit was raised. But that doesn’t address the core of the issue, Hansson said.

“What’s even worse is how complete and unquestioned the faith of these Apple reps was in the wisdom of the algorithm,” he said. 

“Which won’t be explained, can’t be appealed, and is simply assumed to be correct because all faith is in the mighty machine.”

Algorithms ingest flawed, sexist data

The shock online about Apple’s seemingly biased algorithm didn’t surprise Meredith Broussard, a New York University professor and author of Artificial Unintelligence: How Computers Misunderstand the World.


“For a long time, people have had this idea that algorithms are somehow more objective or more unbiased than people. And this idea is wrong,” she said. 

The belief that algorithms are entirely neutral in their decision-making is what Broussard calls “technochauvinism.” Algorithms work by taking data about the world as it is and then creating a model that essentially mirrors that world, she said.

“That means all of the existing inequality in the world gets reproduced inside these automated systems,” she explained.
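A minimal sketch of that point: if a scoring tool simply learns from historical records in which men were approved more often, it reproduces that gap exactly. The records and rates below are invented for illustration.

```python
# A toy "model" fit to invented historical lending records: it learns
# each group's past approval rate, so any inequality in the training
# data is mirrored back in its predictions.

historical = [
    ("man", True), ("man", True), ("man", True), ("man", False),
    ("woman", True), ("woman", False), ("woman", False), ("woman", False),
]

def fit(records):
    """Learn P(approved | group) from past decisions."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit(historical)
print(model["man"], model["woman"])  # 0.75 0.25 -- the old gap, reproduced
```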

In the U.S., there’s a long history of credit inequality between men and women. Many married women were not able to acquire credit or even have a bank account without their husband’s permission. Until 1975, women in the U.S. who were single, divorced or widowed needed a man to co-sign their applications.

In 1964, women in Canada were allowed to open a bank account without their husband’s signature. 

The data that algorithms ingest today still carry echoes of the past, including the current income gap, said Broussard.


Although the wage gap has narrowed in the last 20 years, women aged 25 to 54 in Canada earned $0.87 for every dollar earned by a man in 2018, according to Statistics Canada. These numbers are often even lower for women of colour and Indigenous women.  

“Men still earn more money than women overall,” she said. “If you’re just mathematically saying, who is a better credit risk, who makes more money? It’s always going to look like the man.”
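Her point reduces to a few lines of arithmetic. In the sketch below, with invented salaries and a hypothetical income cutoff, two otherwise identical applicants diverge purely because of the 87-cent wage ratio; gender is never an input to the rule.

```python
# Invented figures illustrating the quote above: an income-only cutoff
# plus the $0.87-per-dollar wage gap produces a gendered outcome even
# though gender never appears in the rule.

INCOME_CUTOFF = 60_000            # hypothetical threshold
his_income = 65_000               # invented salary
her_income = his_income * 0.87    # same job, paid at the 2018 StatCan ratio

print(his_income >= INCOME_CUTOFF)   # True  -> approved
print(her_income >= INCOME_CUTOFF)   # False -> denied (56,550 < 60,000)
```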

The Canadian Human Rights Act protects against discrimination, which would cover credit applications. 

But for anyone hoping to file a complaint about an algorithm’s discriminatory choice, the nature of the technology means it’s unclear where or how the AI made its decisions. 

“When there is discrimination in the code, it’s hard to see and it’s almost impossible to erase,” said Broussard. “These models are opaque because you can’t see inside of them, and you can’t audit them.”


It’s hard to appeal a decision made by a computer because you won’t know the reasoning behind it, she said, adding that financial firms in the U.S. are highly regulated and care about compliance.

“I don’t think they’re purposely trying to discriminate,” she said.

“They’re pretending that their math is better than the social systems that were previously set up, and it’s not.”

Given the current lack of regulation, the onus is on consumers to detect whether an algorithm has been biased, says Suzie Dunn, a PhD candidate at the University of Ottawa whose research focuses on law, gender and technology.

“The discriminatory effects of an algorithm need to be discovered by someone,” she said, adding that banks and other bodies using AI should know if there’s an issue with the program before it harms a customer.


Marginalized groups will be the most impacted by these kinds of algorithmic errors, she said. Federal and provincial human rights policies provide protection, but often you have to prove that discrimination has occurred.

“With AI, it really complicates how you’re able to demonstrate that you’re being discriminated against,” she said. 
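Part of the difficulty is that a single denial proves little on its own; detection typically requires approval data across many applicants. The sketch below, using invented counts, applies the “four-fifths rule” borrowed from U.S. employment guidance, one common statistical test for disparate impact.

```python
# A sketch of a disparate-impact check on invented approval counts.
# U.S. employment guidance treats a ratio below 0.8 (the "four-fifths
# rule") as a red flag; note that no single applicant holds this data.

approved = {"men": 90, "women": 62}
applied = {"men": 100, "women": 100}

rates = {g: approved[g] / applied[g] for g in applied}
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))  # ratio 0.69 -> below the 0.8 threshold
```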

However, if an algorithm is faulty, it’s on the business to figure out exactly what the problem is, she said.

“If that business is relying on AI, they’re still responsible if the AI is discriminatory, in the same way as if one of their service representatives were to discriminate,” she said. 

“Just because we’re using [algorithms] doesn’t mean that human rights no longer exist or that consumer rights no longer exist.”

Canadian regulations aren’t up to date enough to protect consumers

It would come as a surprise if Canadian banks didn’t use algorithms for certain aspects of banking, like percentage of income allowable to service debt, says Maura Grossman, director of Women in Computer Science at the University of Waterloo.
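That kind of debt-service check is straightforward arithmetic. The sketch below is a generic illustration, using a commonly cited 40 per cent ceiling rather than any particular bank's rule.

```python
# Sketch of a simple debt-service check: the share of gross income
# already committed to debt payments. The 0.40 ceiling is a commonly
# cited guideline, not any specific bank's policy.

def debt_service_ratio(monthly_debt_payments: float,
                       gross_monthly_income: float) -> float:
    return monthly_debt_payments / gross_monthly_income

ratio = debt_service_ratio(1_800, 5_000)
print(ratio, ratio <= 0.40)  # 0.36 True -> within the guideline
```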

It’s also likely they are using machine learning to assess whether or not someone is worthy of credit, she said. But it’s difficult to know, as it’s proprietary information banks don’t disclose. 


There’s a need for more discussion between regulators and those who work in tech when it comes to new technology, she explained.

“We do not have a consensus on what it means for an algorithm to be ‘fair,’ and we can’t entirely rid algorithms of bias that exists in historical data,” she said.
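To see why consensus is elusive, consider two standard definitions: demographic parity (equal approval rates between groups) and equal opportunity (equal approval rates among applicants who would repay). The invented data in the sketch below satisfies the first while failing the second.

```python
# Invented outcomes where two standard fairness definitions disagree.
# Each record: (group, would_repay, approved).

data = [
    ("men",   True,  True), ("men",   True,  True),
    ("men",   False, False), ("men",   False, False),
    ("women", True,  True), ("women", True,  False),
    ("women", False, True), ("women", False, False),
]

def approval_rate(rows):
    return sum(approved for _, _, approved in rows) / len(rows)

for group in ("men", "women"):
    rows = [r for r in data if r[0] == group]
    repayers = [r for r in rows if r[1]]
    print(group, "overall:", approval_rate(rows),
          "| among would-repay:", approval_rate(repayers))
# Demographic parity holds (0.5 vs 0.5 overall), yet equal opportunity
# fails (1.0 vs 0.5 among those who would repay) -- "fair" by one
# definition, unfair by the other.
```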

For there to be more oversight without stifling technological advances, input is needed from both legislators and those who work in tech.

“[The tech industry] is not necessarily trained to consider the social policy impact of the output of algorithms,” she said. 

For Canadians, the concerns around bias in credit approval practices go beyond issues with this kind of new technology, says Stephanie Ben-Ishai, professor at Osgoode Hall Law School at York University in Toronto. 

“There’s very little regulation on the practices around credit ratings and credit rating agencies as it is,” she said. “In a pre-AI world, there were already challenges.”

The Financial Consumer Agency of Canada is responsible for enforcing consumer protection legislation and provides information about your rights when it comes to credit and loans.

The agency also provides a contact if you feel your rights aren’t being respected.

But there isn’t a policy within Canada’s financial regulatory structure that says you can’t discriminate against people based on gender when it comes to lending, she said. 

“[Apple Card] is just the current iteration of the lack of transparency about practices around lending … and the fact that we have weak regulators in the Canadian context,” she said.

Regulators are always playing catch-up to technological innovations, creating challenges, she added.

“As we rely increasingly on things like machine learning, we need to make sure that the data that is fed in is good data,” she said.

“The only way we can do that is being more transparent to people about the data we’re creating and collecting about them, and putting limits and checks and balances on that.”

Olivia.Bowden@globalnews.ca
