The risks of using AI in recruitment

'Class and disability discrimination is just as serious as sex or racial discrimination', law lecturer says

The use of technology such as artificial intelligence (AI) in recruitment can create risks of discrimination.

Why? AI is a “black box,” according to Angelo Capuano, law lecturer at Central Queensland University.

“Its reasoning is hidden and people are yet to figure out why AI makes certain decisions, including recruitment decisions,” he said. “So this lack of transparency is a big problem because ultimately there are limits on how we humans can police and monitor AI.”

Using AI in recruitment

Some of the key reasons AI is used in recruitment are to save time and costs and to remove human bias from recruitment decisions, Capuano told HRD Australia. But that can be problematic.

He used an example of a machine learning model developed by Amazon which automated recruitment decisions by observing patterns in resumes that were submitted over 10 years. Most of the resumes came from men and the system learned to prefer male candidates, he said.

“So if AI learns to prefer men over women, as Amazon's AI did, a red flag might be raised when we see mostly men being hired from a recruitment process,” Capuano said. 

“But what if AI learned to prefer the able bodied over people with disability or the rich over the poor, or private school graduates over public school graduates? This preference might go unnoticed because disability equality and socio-economic equality might not be on an employer's radar as much as gender equality or racial equality.”

For example, it might go unnoticed if all the successful candidates are able-bodied or from private schools, said Capuano. His book, Class and Social Background Discrimination in the Modern Workplace: Mapping Inequality in the Digital Age, examines the law in Australia, South Africa, Canada and New Zealand to consider whether, and to what extent, it addresses discrimination in employment based on class and social background.

The book also looks at a number of tools and technologies used in recruitment – such as contextual recruitment systems, asynchronous video interviews and gamification – which could lead to discrimination against potential candidates.

“Class and disability discrimination is just as serious as sex or racial discrimination but if AI is used in recruitment, people will probably never know if it is, in fact, directly discriminating against applicants from working class backgrounds or those with disabilities,” he said. “I think that’s a key risk from using AI. It's a black box. We don't quite know if it is discriminating and there won't be any evidence if it is discriminating on the basis of disability or social origin.” 

Anti-discrimination legislation and AI

There are various federal and state anti-discrimination laws which address issues across race, sex, disability, age and employment. However, Capuano highlighted a gap in the laws when it comes to technology used in recruitment.

“From the perspective of regulating recruitment technologies, the main issue with some of this legislation, such as the Disability Discrimination Act and adverse action under the Fair Work Act, is that it applies to discrimination in hiring, such as refusing to employ someone,” he said.

“These laws might not apply to the use of technology, which is used to help make decisions about who to interview or who to progress to a second round interview before any hiring decision is made. I think this leaves a big gap in the law’s ability to address discrimination as it increasingly arises in the digital age.”

Capuano added that there is legislation that allows people to make complaints of social origin discrimination in employment to the Human Rights Commission. But they would need to show that “there was a distinction, exclusion or preference made on the basis of social origin that had the effect of nullifying or impairing equality of opportunity or treatment in employment or opportunity,” he said.

But because of the “black box” issue with AI, people have no clear idea how it makes decisions.

“We cannot really know for sure if it directly discriminates on the basis of or because of a protected attribute because its reasons for a decision are not yet able to be traced,” he said. “And explainable AI is still an emerging field. But we can figure out if its use might have discriminatory effects.”

Tools that could be discriminatory

A contextual recruitment system refers to an algorithm that mines demographic data of candidates in a bid to create a more diverse workforce. But because it can only look at publicly available or voluntarily given data, it may not get an accurate picture of potential candidates who may be disadvantaged.

“Instead, it may simply favour the stand out performers in settings it determines are less fortunate, to the detriment of those whose disadvantage cannot be measured by an algorithm,” Capuano said in a report.

Asynchronous video interviews use AI in place of human interviewers.

“The issue with this type of interviewing is that it might favour people who have had the opportunity to cultivate certain language skills from professional backgrounds, and therefore that might work to the disadvantage of people from working class backgrounds who haven't had that opportunity to cultivate those similar language skills,” he said.

Gamification requires job candidates to play an online game and AI compares the scores with a database of employees who have been successful in the role to determine who gets to progress to the next stage of recruitment.

“The problem with gamification is its discriminatory effects,” Capuano said. “For instance, people with disabilities which affect visual perception, or which slow reaction speeds will be disadvantaged by gamified design. And it's not clear how playing these games would be relevant to jobs which don't require good vision or fast reaction speeds. You can understand it being used to hire pilots or bus drivers but what about bankers and lawyers?

“The disadvantage gets worse when candidates not only have disability but also are part of groups which suffer from a so-called digital divide, who do not have good access to technology, such as certain people from lower socioeconomic backgrounds.”

What should HR do?

Capuano suggests HR teams be more proactive in preventing discrimination arising from the use of AI. The first step is being alert for red flags, “such as seeing a disproportionate number of successful people of a certain sex or race or age group or disability or class, for example,” he said.

“This might call for revisiting the algorithm or perhaps even disposing of it entirely.”

The second step is to audit the use of AI and algorithms to consider whether they have discriminatory effects, Capuano said.

“Do they disadvantage people with protected attributes such as disability or those of a certain social origin or class?” he said. “Does the technology create a practice where preferences have been conferred based on social origin? Does the technology test things like eyesight or mobility which are not directly related to the inherent requirements of the job? If the answer to these questions is yes, this should sound alarm bells as there may be risks of discrimination in using the technology.”  
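The audit Capuano describes can start with something very simple: comparing selection rates across groups in a recruitment tool's outcomes. Below is a minimal sketch of such a check in Python. The data, group labels, and the "four-fifths" threshold are illustrative assumptions (the four-fifths rule is a common heuristic from US employment-selection guidance, not something Capuano mentions), not a definitive audit method.

```python
# Minimal sketch of a selection-rate audit on recruitment outcomes.
# The groups, numbers, and four-fifths threshold are illustrative
# assumptions, not taken from the article.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total)."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 (the 'four-fifths rule' heuristic) are often
    treated as a red flag worth investigating.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical data: how many applicants from each background
# progressed past an AI-screened first round.
outcomes = {
    "private school": (30, 100),  # 30% progressed
    "public school": (12, 100),   # 12% progressed
}

ratio = adverse_impact_ratio(outcomes)
print(f"impact ratio: {ratio:.2f}")  # 0.12 / 0.30 = 0.40, below 0.8
```

A ratio well below 0.8, as in this hypothetical data, would not prove discrimination on its own, but it is exactly the kind of disproportionate outcome Capuano says should prompt revisiting, or discarding, the algorithm.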
