AI storm gives HR managers a headache

AI is sending more workers to the Fair Work Commission. Is it creating needless work for HR?

Artificial intelligence was supposed to ease the workload for HR. Instead, it is increasingly showing up in Fair Work Commission (FWC) proceedings – as the unseen hand behind dismissal decisions, the ghost-writer of employee claims and the target of a new wave of regulatory scrutiny.

For HR managers already stretched by bargaining, restructures and talent shortages, the question is starting to bite: is the rush to AI actually generating a new layer of avoidable disputes and paperwork?

In August, the FWC issued a pointed warning after a Sydney worker tried to run an unfair dismissal case almost two-and-a-half years after he resigned from his job at Electra Lift Co.

The former employee, Branden Deysel, told the Commission he had relied on ChatGPT, which suggested his employer had breached workplace laws and encouraged him to file a claim. Deputy president Tony Slevin found there had been no dismissal at all – Deysel had voluntarily resigned – and dismissed the application as “hopeless”, noting it had wasted the time of both the Commission and the company.

Employment lawyers say the case is no outlier. Law firms are already reporting unfair dismissal and general protections applications that carry the familiar fingerprints of generative AI: lengthy, legal-sounding submissions that recycle standard phrases but misstate basic Fair Work Act concepts or miss crucial jurisdictional points.

For HR, that can mean more time briefing counsel and responding to sprawling claims that never should have been lodged in the first place – but that still require a proper response.

AI is also creeping into the other side of the FWC ledger: the decisions employers must defend.

Recruitment platforms and HRIS tools are increasingly using algorithms to screen candidates, flag “high potential” employees or help pick who is “at risk” in a restructure. A federal inquiry into workplace digital transformation has recommended that AI systems used in employment decisions be classified as “high-risk” and subject to stronger guardrails, including consultation and transparency obligations.

Employment lawyer commentary has been blunt. One leading practitioner recently described it as a “very dangerous game” to rely on AI to make termination decisions, warning that bias or errors in opaque systems could fuel unfair dismissal and discrimination claims.

In practice, HR managers may find themselves in the witness box not only explaining why a role was made redundant or a candidate rejected, but also fielding questions about how an algorithm scored the person, what data it was trained on and whether anyone checked its outputs.

Recruitment bias: a future test case waiting to happen

Outside the tribunal, Australian researchers are already documenting the discrimination risks of AI in hiring.

A recent study led by University of Melbourne researcher Dr Natalie Sheard found that AI video interview tools struggled to accurately process diverse accents, with error rates of up to 22 per cent for some non-native English speakers.

The study warned that job candidates with accents or speech-affecting disabilities could be disadvantaged, and criticised the lack of transparency in how some systems ranked applicants. Around 30 per cent of Australian employers are estimated to be using AI recruitment tools, a figure expected to grow.

No AI-related discrimination case has yet run to judgment in Australia, but past problems with automated decision-making in the public sector – including overturned promotion decisions at Services Australia – show how quickly a technology experiment can turn into a legal headache.

If and when such a case lands in the FWC or the Federal Court, HR will be the one assembling data trails, audit logs and policy documents to show that the algorithm did not unlawfully discriminate.

FWC and regulators push back

The Fair Work Commission itself is moving to define the boundaries. It has issued an artificial intelligence transparency statement, confirming that only human members make decisions, and cautioning parties against relying on generative AI for legal advice.

Meanwhile, the federal inquiry’s recommendation that AI systems affecting workplace rights be treated as “high-risk” signals more regulatory detail to come.

In the short term, though, the practical consequences are landing squarely on HR desks:

  • more access-to-information requests about how AI-enabled tools operate
  • tougher questions from unions during consultation on restructures and new systems
  • heightened expectations that employers will audit tools for bias and explain how decisions are made.

Necessary evolution – or self-inflicted admin?

There is no question AI can help HR: highlighting pay-equity gaps, predicting flight risk, or automating parts of recruitment. Many HR teams are already using these tools carefully and getting real value.

But as Deysel’s “ChatGPT claim” shows, AI is also lowering the barrier to lodging weak applications. At the same time, algorithm-driven HR tools are becoming a new line of attack for employees and their lawyers, even when human decision-makers thought they were acting fairly.

The result, for many HR managers, is a pincer movement:

  • on one side, more, longer and sometimes misconceived FWC applications drafted or bulked-up by generative AI
  • on the other, growing scrutiny of any decision that touched an algorithm on the way through.

That raises the uncomfortable question: are we actually improving fairness and efficiency – or just creating another wave of avoidable work for HR?

Where to from here for HR leaders

If AI is here to stay, there are some practical steps HR can take to keep the benefits while minimising the headaches:

  • Treat AI in HR as “high-risk” by default – even before legislation catches up. Demand explainability from vendors and keep humans firmly in charge of decisions.
  • Build an internal register of AI-enabled tools touching employment decisions, and schedule regular bias and accuracy audits.
  • Be transparent with staff about where AI is used in recruitment, performance management and restructures – and where it is not.
  • Train managers on the limits of generative AI, both for internal use and in dealing with employee claims that appear to be AI-written.
  • Insist that every termination or major decision can be justified on human reasoning alone, with AI outputs treated as one data point, not the verdict.

For now, the FWC’s message is that there is no shortcut around genuine legal advice or sound process – and AI is no magic shield for either side. Whether this moment becomes a turning point or just another compliance burden will depend largely on how HR chooses to use, question and, where necessary, push back on the technology.

The temptation is to see AI as a way to take work off HR’s plate. The emerging reality is more complicated: unless it is tightly governed, AI may be one of the biggest new creators of HR work – and Fair Work claims – in years.