Academic research suggests AI can amplify discrimination
With research suggesting artificial intelligence (AI) used in hiring systems could be discriminatory against candidates, more needs to be done to analyse the risks associated with implementing new technologies, according to one of Australia’s leading background and reference check technology experts.
“Technology is moving so quickly, it’s almost like predicting the weather. It might be accurate two hours from now, but three days from now – it’s more guesswork than accuracy. We have no idea what’s going to happen in 18 months’ time,” Luke Maddison, Fractional CMO at Referoo, told HRD.
“The speed of change is both exciting and confronting, which is why it’s so important to have good processes in place around your hiring – tech companies are all investigating how they can leverage AI to improve client experience, but without a solid foundation, you’re going to get lost,” he added.
The comments follow a recent report by AI researcher, Dr Natalie Sheard, which highlights gaps in our understanding of “real” risks of discrimination when these systems are deployed.
“This may arise from the data, the use of proxies, the system's implementation, new structural barriers, a failure to provide reasonable adjustments, or the facilitation of intentional discrimination,” the report cites.
The study found around 30% of Australian organisations use predictive AI systems in recruitment – but these systems were found to reinforce and amplify discrimination on several grounds, for example penalising candidates with “Black-sounding names” or those requesting disability accommodations.
The Guardian has reported on videos of job candidates being interviewed by at-times faulty AI video interviewers, shared widely on TikTok – something Maddison emphasised is proof we still aren’t using the technology in the way it was intended.
“We’re looking at AI as part of what we do, but in a lot of cases, it’s still very nascent – it’s early days. When you’re looking at data, you need to be able to tell if it’s accurate and true. You do that by heavily checking and referencing that.”
“We look at background checks and reference checks every single day, which is great, but we’re still looking at constantly evolving and adopting new ways of working. You’re only as good as the data you have and if there’s discrimination in that – whether conscious or unconscious – it’s going to amplify that,” he added.
It was also noted that ensuring data is as accurate as possible is especially important as AI is increasingly used to create CVs – which could also skew the information received by recruiters.
AI, at its current stage, needs to be used alongside human-based analysis to ensure it works in the way you want – something Referoo utilises in its workflow.
“It’s about the data, not about the candidate,” he told HRD. “We’re using AI to make jobs easier, but by doing that, it reinforces the need for accurate information on the candidate you’re hiring and solid steps to validate that data.”
“How do we know if data on the candidate is accurate and true, especially in a world of AI-generated CVs? References can be subjective, so having technology that recognises that is crucial – it could be taken the wrong way, which can cause you even greater problems down the line. Constant maintenance is key,” Maddison added.
Dr Sheard’s research concludes by asking whether AI hiring systems should be used at all, in the face of potentially amplifying and deepening discrimination. Maddison told HRD tech systems should be used, but much like Sheard’s analysis, a deeper understanding of how they are used must be developed.
“From Referoo’s perspective, the future of AI is all about streamlining the process of validating someone’s background check and reference check. AI is used, on our end, to make sure that the candidate you’re hiring possesses the skill sets and the qualifications, and is not in any way compromised.”
“That’s where the future is, it’s about making life easier. You have the ability to implement AI into your HR technology stack right now, if you wanted to, but there needs to be a focus on how the data is used and validating the data we’re seeing. That has to be done in a way that’s not discriminatory,” Maddison added.