Study calls for reforms to protect workers from algorithm-facilitated discrimination
The use of AI-based hiring systems (AHSs) by Australian employers may reinforce existing forms of discrimination against job applicants and create new ones, a new study has found.
Research by Natalie Sheard, a lawyer and McKenzie Postdoctoral Fellow at the University of Melbourne, examined the risk of algorithm-facilitated discrimination amid employers' growing use of AI in recruitment.
"It has found that the use of AHSs by Australian employers may solidify traditional forms of hiring discrimination, play an active role in creating new forms of structural discrimination, and pave the way for intentional discrimination," the research stated.
The findings echo emerging concerns among HR professionals across Australia that the use of AI tools in recruitment may lead to discrimination.
In fact, data from the Australian HR Institute released late last year showed that 39.4% of HR leaders using AI in recruitment and selection activities believe that it discriminates against under-represented groups.
Sheard's study found that discrimination can arise from how recruiters customise and set up these AI systems.
"The training data are at risk of embedding present-day and historical discrimination and may not be representative of the diversity of the population in the country in which the AHS is deployed," the study said.
"Many of the features built into the algorithmic models contain proxies for protected attributes, which may prevent members of protected groups from being shortlisted for jobs."
The study called on the government to review and reform discrimination laws to protect jobseekers from algorithm-facilitated discrimination.
"If we do not want disadvantaged groups to be subject to algorithm-facilitated discrimination, we need to take urgent action," it said.
Last year, the Australian government proposed mandatory guardrails for AI in high-risk settings, including employment matters such as recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, and termination.
"We are inching closer to AI regulation, with the government's recent proposal for mandatory guardrails for 'high-risk' AI applications, but legislation is not yet in sight," the study stated.
Meanwhile, Sheard's study also called for greater transparency from providers and deployers of AI systems regarding how these tools operate. It stressed that training data must be carefully curated and fully documented.
"We need increased levels of understanding by employers of the AHSs implemented within their organisations and their potential to cause harm at scale," it said.
"It is essential that employers provide comprehensive training to those responsible for customising, operating, and overseeing these systems."