From streamlining recruitment to enhancing employee engagement, AI has the power to transform HR - but only if it's implemented with care
This article was created in partnership with Dayforce.
As AI continues to transform how, where and when employees work around the world, HR leaders face an increasingly difficult set of challenges. While automation and AI are nothing new, their accelerating capabilities are raising complex questions around ethics and personal data.
Speaking to HRD, James Saxton, VP Global Product Ambassador at Dayforce and a speaker at Dayforce’s upcoming webinar, AI in HR – What’s Working, What’s Not, and How to Get It Right, explained that when it comes to data privacy, AI is raising more questions than it answers.
“AI and automation have been around for a while,” said Saxton. “But the rapid advances in technology are raising questions about the risks AI could pose for the organisation’s compliance, ethics, security, and privacy, especially when applying it to sensitive employee data. As gatekeepers of employee data, HR holds the responsibility of protecting this information and ensuring its ethical use.”
HR has always had to navigate changing legislation, but as Saxton told HRD, the pace of AI innovation often outstrips legal developments in Australia.
“Regulatory changes are not new to HR, but the world of AI changes rapidly, even more than legislation in many cases,” he explained. “Regional frameworks are constantly evolving, and AI solutions must be agile enough to adapt to these changes, helping organisations manage compliance today and tomorrow.”
And beyond compliance, HR teams are grappling with the broader implications of bias and the ethical use of AI. Concerns around fairness are especially critical in HR, where algorithms may unintentionally reinforce discriminatory patterns.
“Business leaders surveyed in an IBM Institute for Business Value report cited concerns about data accuracy or bias as the top AI adoption challenge,” Saxton explained. “Employers, legislators, and employees often wonder how fairly and accurately AI can make judgments, especially as the goalposts for addressing biases are constantly shifting.
“Organisations have an ethical obligation to ensure that their use of AI doesn’t harm individuals or society. Solution partners should adhere to a strict set of ethical standards that prioritise trust and employee focus.”
One of the most pressing challenges lies in understanding how AI makes decisions. Many third-party systems function as ‘black boxes’, offering little visibility into the logic behind their outcomes. A Gartner study referenced by Saxton found that fragmented technologies and a lack of transparency in third-party AI solutions are major barriers to implementing AI across an organisation.
“Employers planning to use AI in ways that will influence consequential decisions should ask their solution partner to explain how their model makes inferences and influences decisions,” he added.
“AI performance can also degrade over time as it encounters real-world data that differs from the data it was trained on, so being clear about how models are monitored and updated will help reduce the risks associated with poor model quality.”
External influences aside, employee data is among the most sensitive information an organisation holds, making privacy non-negotiable when applying AI to it.
“The stakes of privacy for AI in HR couldn’t be higher,” explained Saxton. “Eighty-nine percent of surveyed C-suite leaders worry about security risks associated with AI, according to an NTT report. It’s essential for AI solutions to follow Privacy by Design, a methodology for embedding privacy directly into the solution.”
And even the best AI systems are only effective if they’re understood and embraced by the people who use them. Here lies another hurdle. According to a survey from Skillsoft, 43% of employees cite AI/ML as their biggest skills gap, and just 40% of executives and IT leaders surveyed by Pluralsight say they offer formal IT training. Saxton believes this makes it critical for organisations to design effective training and support mechanisms as they adopt AI technologies.
However, while AI excels at efficiency, it still lacks the creative and emotional capabilities that define great HR leadership.
“AI technology is only as good as the people who use it,” Saxton added. “Start by putting together tailored training programs for those who will be interacting with AI, including training on ethics and correct usage. It’s also important to communicate to employees that AI is a complement to their current efforts, not a replacement.”
Want to learn more about how AI can supercharge your organisational strategy? Register for Dayforce’s upcoming webinar, AI in HR – What’s Working, What’s Not, and How to Get It Right.