Hiring or firing with AI? Employers face potential legal exposure, says lawyer

From CV screening to staff cuts, AI is reshaping HR, but Christopher Tan of K&L Gates says legal responsibility ultimately still rests with employers

AI tools are becoming more embedded in recruitment workflows, including in Singapore. LinkedIn’s new Hiring Assistant, currently in beta testing, promises to free recruiters from repetitive tasks, such as screening resumes and scheduling interviews. It’s part of a broader push to make hiring more efficient through automation.

With nearly half of recruiters in Asia-Pacific spending up to three hours a day on manual screening, the appeal of AI is clear. But while productivity may rise, questions about fairness and transparency remain.

As more companies adopt AI-powered hiring tools, there is growing concern about how these decisions are made and whether employers can justify them if challenged.

In Singapore, this concern is particularly pressing as the government prepares to roll out the Workplace Fairness Act 2025 in the next few years. The law will place stricter obligations on employers to eliminate bias in employment decisions.

To understand how HR can approach this responsibly, HRD Asia spoke with Christopher Tan, Partner and Director at K&L Gates Straits Law LLC.

The risk of algorithmic discrimination

"The main risk for employers is algorithmic discrimination," says Tan. Unlike decisions made by a human recruiter, AI tools often operate as black boxes, with limited visibility into how they arrive at a conclusion.

"Unless there is full disclosure from the AI tool developer, on the data used, the training infrastructure, and the safeguards against bias, it becomes very difficult to defend an employment decision made using that system."

Tan stresses that current AI tools rarely provide such transparency. "You can't cross-examine an algorithm the way you would an HR manager. If you're challenged on the fairness of a decision, you need to be able to explain how you arrived at it. If the tool is a black box, you're stuck."

"Ultimately, the responsibility to ensure fair and unbiased employment decisions lies with the employer, regardless of the tools or methods which may have been adopted in that decision-making process," Tan adds.

When is AI use defensible?

While AI is risky when used to make subjective decisions, Tan notes that it can be helpful for filtering hard data, such as educational qualifications or licensing requirements.

"Say you’re hiring for a legal role, and the relevant legal regulatory authority only recognises certain universities. If you configure the AI tool to exclude unqualified applicants based on that criterion, that’s a perfectly legitimate use."

Similarly, roles with legally valid occupational requirements can justify the use of narrow filters. "A wellness spa catering exclusively to female clients might require female therapists. A filter for a specific gender using an AI tool can be validly defended under the Workplace Fairness Act."

What HR should avoid, says Tan, is letting AI systems make decisions based on softer indicators, such as suitability for promotion or cultural fit. "We simply don’t have the technology to reliably justify those types of employment decisions yet. For such decisions, there should still be a significant amount of human judgment involved."

AI’s low uptake, but rising scrutiny

Tan observes that adoption of AI in HR remains limited in Singapore. "So far, we haven’t had clients come to us for legal trouble linked to AI hiring. Even the Ministry of Manpower data shows few complaints in this area."

However, he believes this is likely to change as AI tools become more prevalent. With the Workplace Fairness Act expected to take effect around 2026 or 2027, HR leaders will be held directly responsible for discriminatory practices, regardless of whether they rely on third-party platforms.

"That piece of legislation places the obligation on employers not to have any discriminatory practices in their employment decisions. There isn't any carve-out in the legislation that says, ‘If you use AI, you can actually shift the responsibility on the AI developer.’ There’s none."

Reputational consequences of AI misuse

Even when there are no lawsuits, the reputational risks of using biased AI tools can be severe. Public trust can erode quickly if a company is perceived as relying on technology that yields unfair outcomes, particularly in hiring or termination decisions.

Employees and job seekers alike may begin to question the integrity of the organisation's decision-making process, which can damage an employer’s brand and make talent attraction more difficult.

HR departments, Tan says, must therefore vet tools thoroughly and seek enough information to understand how a system works.

"To the extent that it comes out in the open that a company has been using an AI tool that's biased, it may be perceived as a level of oversight on the part of the company in selecting the right tool for its operations."

Scrutinising AI use in HR

Tan recommends a close inspection of how the AI tool was built. "Ask about the data it was trained on, whether it used a sufficiently large data pool, and what protective measures were put in place to prevent bias."

HR leaders should also request any internal testing results that demonstrate how the tool has performed across various demographics.

"If the developer is confident of their product, they should be able to provide that. Their willingness to share such data would go a long way to demonstrate how robust the system is."

He also notes a key compliance issue: data privacy. "CVs contain personal data, so employers should ask whether the data is being used to train the system further. If you're using a free version of an AI platform, your data may be contributing to its learning model."

Documentation and human intervention remain vital

To protect themselves, companies must document not only the final hiring decisions but also the process by which those decisions were made.

"It's not enough to keep a shortlist. You need to record how the AI tool was used, what criteria were applied, and where human judgment stepped in."

Tan emphasises that AI should remain a support tool, not the final decision-maker. "At this stage, it should only be used to filter hard data that you know you can definitely defend."

“Sometimes the AI tool may be capable of non-biased outputs, but proper prompting and usage of the tool are equally important to achieve the desired outcome. In that regard, there should be adequate training in prompt engineering for relevant personnel to complement the usage of the AI tools.”

AI in redundancy and performance tracking

Some employers are beginning to use AI to identify underperforming employees or shortlist staff for retrenchment.

"For example, in professional services firms, one measurable metric is utilisation. If an employee falls below a certain utilisation threshold, that could be an initial indicator for HR to explore redundancy," he says.

"AI can be effectively used to filter employees who fall into that particular category, but that still needs to be weighed against context. Maybe the person did a lot of pitching or non-billable work (including pro bono work) which was useful to the firm but not reflected in the utilisation statistics.

"You still need to get input from their immediate supervisors to see whether there are any mitigating circumstances that would move the line between ‘yes’ or ‘no’ in terms of whether the person should be made redundant."
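A minimal sketch of this pattern, assuming a hypothetical utilisation metric and threshold: the automated step only flags employees for supervisor review, and never outputs a redundancy decision itself.

```python
# Illustrative sketch: an automated filter on a measurable metric
# (utilisation) produces a review list, not a redundancy decision.
# The 0.6 threshold and the record fields are assumptions.

UTILISATION_THRESHOLD = 0.6

employees = [
    {"name": "A", "utilisation": 0.45, "context": "heavy pro bono and pitching work"},
    {"name": "B", "utilisation": 0.82, "context": ""},
]

def flag_for_review(staff: list[dict]) -> list[dict]:
    """Employees below the threshold are flagged for supervisor input
    on mitigating context, never marked redundant automatically."""
    return [e for e in staff if e["utilisation"] < UTILISATION_THRESHOLD]

for e in flag_for_review(employees):
    # Second layer of human judgment: supervisors weigh context first.
    print(f"{e['name']}: below threshold -> refer to supervisor ({e['context']})")
```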

He explains that this use is not limited to professional services. In sales, for instance, performance can be tied to output metrics, while in operations, productivity benchmarks might be automated.

Even so, such data rarely tells the whole story, and Tan sees significant danger in any HR practice that relies heavily on AI.

"You can't just let AI make the final or ultimate decision… There needs to be a second layer of human judgment involved."

How can HR use AI responsibly?

"You need to fully understand the AI tool that you're using. The HR head also has to ensure that there is proper training for its personnel on how to use AI tools effectively. So, one thing is really about [knowing] the tool, and secondly, to train personnel well enough to use the tool properly."

Ultimately, AI should serve HR, not replace it.

"It can filter. It can flag. But at the current stage, judgment still has to come from people."