Exploring the rise of AI in recruitment

'The biggest challenge we face is fear'

In a time of great disruption, the HR industry has evolved at remarkable speed over the last 12 months.

Automation and technology have helped HR leaders streamline manual, time-consuming processes, enabling them to focus on people strategy. That shift has been vital in many areas of HR, but nowhere more so than in recruitment.

Uncertainty in the labour market sparked a frenetic bout of job-seeker activity in the early months of the pandemic, and for businesses still hiring, the volume of applications was overwhelming.

Many recruitment teams turned to tech – and artificial intelligence in particular – because of its ability to sift through applicant data with unparalleled speed and accuracy. But there has also been plenty of discussion around AI’s ability to create a more diverse hiring process.

It’s well established that recruitment carried out by humans is affected by biases – both conscious and unconscious – and these present a real barrier for companies looking to improve diversity. So, is AI the answer to reducing bias, or is there a danger of embedding past biases into algorithms?

HRD spoke to Barb Hyman, CEO of Melbourne-based tech company PredictiveHire, whose chat-based recruitment tool is used by the likes of Bunnings, Qantas, and Tennis Australia.

She believes you only have to look at the statistics on female CEOs and board members in Australia to see that the dial on diversity is not moving quickly enough.

“It's not only the right thing to do but any organisation that's wanting to create innovation in how they work in their core business needs to have diversity,” she said. “There's a fair amount of research that says innovation is the intersection of difference. If you're lucky enough to get a really diverse talent pool like Qantas or Bunnings, then why wouldn't you want to ensure that your hiring mirrors that?”

Hyman argues that unconscious bias cannot be trained out of human beings – after all, how can we change something we’re unaware of?

Last year, the UK government announced it would scrap unconscious bias training for civil servants because there was no evidence it had improved workplace equality. The government said it would take a different approach to tackling the issue and urged other public sector organisations to follow suit.

Hyman said this change in mindset is what makes tech such a powerful tool, but in Australia, the biggest barrier is fear.

“There’s a myth that automation somehow takes away from the human experience and a myth that all AI equals bias,” she said. “We would say that AI at a simple level is really just data and how do you know if someone is biased unless you can see it? AI allows you to measure bias in a way that you wouldn't otherwise.”

Earlier this year, PredictiveHire launched an ethical framework to alleviate some of the existing fears around AI.

Those concerns are compounded by the lack of regulation around AI, both in Australia and around the globe. No country currently has specific laws in place addressing the ethical use of AI, though the European Union has been the most vocal jurisdiction in proposing rules and regulations.

The framework aims to set a global standard for responsible use of AI in recruitment, creating a resource for hiring managers and HR leaders.
