Organisations urged to implement framework to evaluate AI tools in recruitment
Organisations should implement a framework that can sanity-check artificial intelligence tools used by HR teams, to ensure the tools don't discriminate when applied in recruitment.
This is according to Keir Garrett, regional vice president at Cloudera, who recently spoke to HRD about fair recruitment practices in the wake of AI use by HR teams.
Garrett said AI is the output of datasets that are written by humans, which have their own natural biases.
"We can't remove that from the process because even in today's world, if you're doing it manually, there are potentially personal biases that come into play," Garrett told HRD.
"So, you've got to remember that the people who are writing these tools and these algorithms to do that data mining may come at it with an approach which has a certain degree of biases, whether that be gender, cultural, age, colour, or whatever bias it is."
Garrett made the remarks amid widespread concerns that AI tools can be discriminatory, which has led to strong hesitation from Australian HR leaders when it comes to utilisation.
Data from the Australian HR Institute (AHRI) in December revealed that two-thirds of HR leaders in the country support the use of AI tools. However, the majority of respondents said they are not using AI in key recruitment stages, nor do they plan to in the future.
"The use of AI in recruitment has also drawn negative attention, with many suggesting it heightens the potential for discrimination, particularly in light of case studies where AI selection decisions have been shown to be gender-biased," the AHRI report read.
In fact, 39.4% of HR teams already using AI tools in recruitment believe the technology discriminated against under-represented groups.
"AI has been found to disadvantage some groups, but few organisations have investigated," the AHRI report read.
"Few organisations have closely examined the impact of AI on different cohorts in their organisation, but half of those that have, found some groups were disadvantaged."
Having a framework in place to sanity-check AI tools will be crucial, according to Garrett.
"You've really got to get a framework in place that sanity-checks and ensures that biases are limited or removed. The goal is removal, but realistically you want to mitigate as much as you can," she said.
"We, as organisations, need to ensure that we've got an AI framework, or an AI assurance framework, that we can measure ourselves against to eliminate biases. Those biases aren't limited to women; they extend far beyond that into culture, ethnicity, and so on."
This sanity check will need human intervention, which should bring in various groups of people, according to Garrett.
"Minority groups, age groups, and other smaller cohorts need to have a voice, and like it or not, we need to have those frameworks in place that tend to lift the bar on our assumptions and the way we view data and AI," she said.
"If you pull other people into that mix to sanity-check the profiling, the modelling, and the execution, we're going to get to a better outcome, because as a team the sum of the parts is better than the individual."
Garrett also believes that AI can potentially sanity-check other AI tools in the future.
"The use case is for AI to sanity-check the new AI tools that you're using against the old way and ask: 'What is the difference between the talent pool that we used to hire and the talent pool that we are hiring? What are the trends that have changed from the traditional way of hiring to the new way of hiring, leveraging these new AI tools?'" she said.
"You're going to see some trends which are measurable and actionable, so we can do something differently. And you can continue to tweak and fine-tune it to a better place."
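As a minimal sketch of the kind of measurable comparison Garrett describes, the snippet below checks selection rates across cohorts for an old and a new hiring process and applies the widely used "four-fifths" adverse-impact rule (a cohort whose selection rate falls below 80% of the highest-rate cohort is flagged for review). All cohort names and numbers are illustrative assumptions, not data from the article, and a real assurance framework would go far beyond this single test.

```python
# Hypothetical sketch: a simple, auditable check an AI assurance
# framework could run. All figures below are made-up examples.

def selection_rate(selected, applicants):
    """Fraction of applicants in a cohort who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a cohort's selection rate to the highest-rate cohort.
    Values below 0.8 are conventionally flagged for review
    (the 'four-fifths' rule)."""
    return rate_group / rate_reference

# (selected, applicants) per cohort, for the old and new processes.
processes = {
    "manual screening": {"cohort_a": (45, 100), "cohort_b": (30, 100)},
    "ai screening":     {"cohort_a": (50, 100), "cohort_b": (20, 100)},
}

for name, cohorts in processes.items():
    rates = {g: selection_rate(s, n) for g, (s, n) in cohorts.items()}
    reference = max(rates.values())
    for group, rate in rates.items():
        ratio = adverse_impact_ratio(rate, reference)
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{name}: {group} rate={rate:.2f} ratio={ratio:.2f} {flag}")
```

Run over both processes, a widening gap between cohorts under the new tool is exactly the sort of measurable, actionable trend Garrett suggests organisations should look for and then tweak against.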
Ultimately, Garrett said organisations should start considering AI "for good."
"Ethical AI and economical AI – the two are inextricably linked. And I think talent acquisition has to be an iterative process of review, learn, review, and execute and challenge the norms both internally and externally," she said.