Robo recruiters: Algorithms are trying to tell if you’re right for the job

Is artificial intelligence better than humans at picking job candidates? Getty Images

You’ve probably heard about robo-advisers. Now meet robo-recruiters.

A growing number of employers in both Canada and the U.S. are turning to tech start-ups that use artificial intelligence (AI) to screen job applicants and existing employees to help find the perfect recruit or the right candidate for a promotion.

The promise isn’t just better hires and happier employees. It’s also that of a fairer workplace — one that doesn’t discriminate against women and minorities.

READ MORE: Reality check: Does name-blind hiring help improve diversity?

For example, Toronto-based Fintros, which provides talent-acquisition software for financial firms like Bank of America and BlackRock, says its algorithm surfaces women’s resumes far more often than human recruiters do on average in a notoriously male-dominated industry.

That is, in part, because Fintros’ hiring process is completely anonymous.

Designed to help high-level finance professionals explore job opportunities without jeopardizing their current position and to help companies tap top talent that might not be actively job hunting, Fintros strips resumes of any identifying information. That means things like names, emails, location, job history and educational background.

“We completely scramble that information,” co-founder Sloan Galler said.

A sample resume that Fintros showed to Global News reads “Fintros Candidate #100360,” where the candidate’s name would normally be. And instead of the applicant’s current employer, you see: “Big 5 Canadian Bank,” and a couple of lines below, “one of RBC, TD, BMO, Scotiabank or CIBC.” It’s the same with universities: The fictitious candidate earned a bachelor of commerce at a “Tier 1 Canadian Business School,” one of “Ivey Business School, Schulich, Rotman, UBC or Queen’s Commerce.”

This not only shields the candidate’s identity but also makes it impossible to tell whether the applicant is a man or a woman, or whether their last name is “Smith” or, say, “Patel.”
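
To make the idea concrete, here is a minimal sketch of that kind of bucketing, assuming a simple lookup table; the mappings and the `anonymize` function below are illustrative guesses, not Fintros’ actual implementation:

```python
# Hypothetical sketch of resume anonymization. A real system would parse
# resumes with NLP; the point here is the bucketing idea itself.

EMPLOYER_BUCKETS = {  # specific employer -> anonymous tier (illustrative)
    "RBC": "Big 5 Canadian Bank",
    "TD": "Big 5 Canadian Bank",
    "BMO": "Big 5 Canadian Bank",
    "Scotiabank": "Big 5 Canadian Bank",
    "CIBC": "Big 5 Canadian Bank",
}
SCHOOL_BUCKETS = {  # specific school -> anonymous tier (illustrative)
    "Ivey Business School": "Tier 1 Canadian Business School",
    "Schulich": "Tier 1 Canadian Business School",
    "Rotman": "Tier 1 Canadian Business School",
    "UBC": "Tier 1 Canadian Business School",
    "Queen's Commerce": "Tier 1 Canadian Business School",
}

def anonymize(resume: dict, candidate_id: int) -> dict:
    """Replace identifying fields with anonymous buckets."""
    return {
        "name": f"Fintros Candidate #{candidate_id}",  # real name stripped
        "employer": EMPLOYER_BUCKETS.get(resume["employer"], "Other Employer"),
        "school": SCHOOL_BUCKETS.get(resume["school"], "Other School"),
        "skills": resume["skills"],  # the substance a recruiter should judge
    }

print(anonymize({"employer": "RBC", "school": "Rotman",
                 "skills": ["equity research", "risk modelling"]}, 100360))
```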

It’s easy to see how this kind of anonymization might help prevent the sometimes unconscious bias that often seeps into the hiring process, causing more female and minority applicants to be screened out.

READ MORE: 14 factors lead to workplace gender equality — here’s how Canada measures up

A 2012 study by the University of Toronto, for example, found that employers were less likely to call back job applicants with Indian or Chinese names compared to those with English-sounding names.

Another Canadian startup, Plum.io, says it eliminates bias by matching people and jobs based mostly on skills rather than their work history and credentials.

AI is only as good as the data it uses to make predictions, Plum co-founder and CEO Caitlin MacGregor told Global News. The risk with that, she added, is that if you feed it only what’s on people’s resumes, it will repeat the same mistakes that humans make.

WATCH: Should you use a robo-adviser to invest?

Plum is different, MacGregor said, because “we don’t rely on history to make recommendations.”

Instead, the company says it creates its own data by teasing out from employers what skills they’re looking for and assessing candidates based on their talents and inclinations. Once Plum thinks it knows what employers are after, it scans its database of applicants’ “Plum Profiles” to find the best matches.

Completing a Plum Profile takes about 25 minutes and involves a combination of logic games and multiple-choice questions. At the end of the quiz, you get a list of your strong points, which might include things like “adaptation,” “decision-making” or even “cultural awareness.”
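
Once a job’s target profile and a candidate’s assessed strengths both exist as skill scores, matching them can be as simple as comparing skill vectors, for instance with cosine similarity. The sketch below is purely illustrative and is not Plum’s actual method; the profiles and scores are made up:

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse skill-score vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical profiles: skill -> assessed score between 0 and 1.
job = {"decision-making": 0.9, "adaptation": 0.7, "cultural awareness": 0.5}
candidates = {
    "Candidate A": {"decision-making": 0.8, "adaptation": 0.9, "cultural awareness": 0.4},
    "Candidate B": {"decision-making": 0.3, "adaptation": 0.2, "cultural awareness": 0.9},
}

# Rank candidates by fit to the job's skill profile, best match first.
ranked = sorted(candidates, key=lambda c: cosine_similarity(job, candidates[c]),
                reverse=True)
print(ranked)  # ['Candidate A', 'Candidate B']
```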

Companies are also using Plum Profiles to spot internal candidates who might be ripe for a promotion, MacGregor said.

Plum is working with clients like Intact Insurance and Deloitte and recently partnered with the University of Waterloo to help match its students with the right co-op opportunities.

Another thing AI can do to bring us closer to a bias-free workplace? Help employees speak up so management can spot and address problems faster, or replicate what works, according to Rob Catalano, chief engagement officer at WorkTango.

The company is using data science and natural language analysis to shake up what might be the holy grail of corporate HR: employee engagement surveys.

WorkTango quickly gathers feedback from the trenches and then uses its algorithm to spot patterns across both multiple-choice selections and employees’ comments. This can help a large multinational notice that, say, a certain country’s office seems to have an issue with gender diversity or, conversely, could be a model for how to create an inclusive environment, Catalano said.
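
In code, the multiple-choice side of that pattern-spotting can be as simple as aggregating scores by office and flagging outliers; the comment side requires real natural-language analysis, which the naive keyword filter below merely stands in for. This sketch uses made-up survey data and is not WorkTango’s implementation:

```python
from collections import defaultdict

# Hypothetical survey rows: (office, diversity score 1-5, free-text comment).
responses = [
    ("Office A", 4, "Mentorship here is great"),
    ("Office B", 2, "Few women in leadership roles"),
    ("Office B", 1, "Promotion decisions feel opaque"),
    ("Office A", 5, "Genuinely inclusive team culture"),
]

# Average the multiple-choice scores by office to surface outliers.
scores = defaultdict(list)
for office, score, _ in responses:
    scores[office].append(score)
averages = {office: sum(s) / len(s) for office, s in scores.items()}

# Naive keyword flagging as a stand-in for natural language analysis.
KEYWORDS = ("women", "diversity", "promotion")
flagged = [(office, text) for office, _, text in responses
           if any(word in text.lower() for word in KEYWORDS)]

print(averages)  # an office averaging far below the rest stands out
print(flagged)   # comments worth a closer human look
```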

WATCH: No cashiers, no lines in Amazon’s retail store

But can imperfect humans really build algorithms that don’t make mistakes?

Stripping certain information from resumes and screening candidates for skills and talents rather than just job and education background “doesn’t seem that bad of an idea,” said Philip Oreopoulos, co-author of the University of Toronto study.

Many employers in both the private and public sectors are already doing that, and algorithms could help.

“Applying AI to hiring has a lot of potential,” Oreopoulos told Global News.

READ MORE: Ottawa to launch pilot project on name-blind hiring for public service

But employers should proceed with caution, he added, perhaps using AI’s suggestions as a check against their own hiring decisions.

One of the risks is that algorithms will swap one kind of bias for another, said Dipayan Ghosh, a former technology policy adviser at the White House during the Obama administration.

“It is very, very difficult to build a bias-free algorithm,” he told Global News.

Ghosh has a concrete example of how an algorithm built with the best of intentions can still pave the road to hell.

The example involves a Boston app that helps the city fix potholes by crowdsourcing alerts from neighbourhood volunteers. What could go wrong with that, right?

Well, after a while, Ghosh said, the city noticed that it was mostly dispatching crews to wealthier areas inhabited by the young and hip, who are more likely to own a smartphone and to know about such an app.

Programmers have since fixed the problem, but the cautionary tale stands.

It’s easy to see how using AI for recruiting might unwittingly discriminate against the less tech-savvy, who are more likely to be older and lower-income, Ghosh said.

If that were the case, he added, it would be very difficult to tell.

That’s because all these algorithms are proprietary, which makes it difficult for outsiders to “look under the hood.”

WATCH: Is artificial intelligence more dangerous than a nuke?

The case for algorithm auditors

One solution might be to bring in third parties to test algorithms the way auditors double-check companies’ financial statements, Ghosh said.
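
What might such an audit actually test? One widely used benchmark in U.S. employment law is the “four-fifths rule”: a selection procedure is considered suspect if any group’s selection rate falls below 80 per cent of the highest group’s rate. A minimal sketch of that check, with made-up audit data:

```python
def four_fifths_check(selection_rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the best group's
    rate, the EEOC's 'four-fifths' rule of thumb for adverse impact."""
    best = max(selection_rates.values())
    return {group: rate / best < 0.8 for group, rate in selection_rates.items()}

# Hypothetical audit data: share of each group's applicants the algorithm advanced.
rates = {"group_a": 0.30, "group_b": 0.21, "group_c": 0.29}
print(four_fifths_check(rates))
# {'group_a': False, 'group_b': True, 'group_c': False} -> group_b is flagged
```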

But auditors have a well-known tendency to cozy up to the very companies they’re supposed to vet — after all, that’s where they get their business.

To keep auditors honest, you’d need a credible threat of government intervention if things were to go awry, Ghosh said. At the same time, the auditing process should shield private companies from naming and shaming when problems are found in order to encourage co-operation.

And while it’s politically out of the question to ask a company like, say, Google to have its search algorithm scrutinized by a third party, Ghosh is hopeful that an auditing system will emerge for robo-recruiters.

“The [concept] that we should have regulators or [private] auditors [vetting] the quality of recruiting algorithms, I think that’s an idea that people can really get behind.”
