How to successfully implement new AI – without overwhelming employees

Any powerful tool like ChatGPT requires constant and rigorous oversight, says chief data scientist

In part four of HRD’s ChatGPT series, we look at the trials and tribulations of rolling out HR tech – and how to calm employee anxieties.

When it comes to rolling out new technology, employees can feel anything from trepidation to anxiety to excitement. Any change of pace or strategy will inevitably incite some nervousness – but for HR leaders, it becomes a case of managing that worry and channeling it in the right direction. After all, we’re very much living in the AI Age – and that reliance on technology will only continue to grow over the coming years.

As such, employers need to perfect their processes for implementing new technologies, especially as ChatGPT looms large on the horizon. Speaking to HRD, Dr. Lindsey Zuloaga, chief data scientist at recruitment giant HireVue, says that before rolling out any new AI, there are some key considerations leaders need to keep in mind.

“Any powerful tool requires constant and rigorous oversight, and generative AI is no exception,” she says. “It’s important that vendors don’t jump to integrate the ChatGPT model into existing tools until they’ve conducted rigorous testing. New AI regulations are being proposed and passed constantly, from the new EU AI Act to NYC Local Law 144, and generative AI should be held to the same, if not greater, standards as other AI tech.”

The legalities of AI bias came to the fore in a recent interview with Mike MacLellan, partner at CCP, who told HRD that, ultimately, employers bear the blame for any legal fallout with new tech.

“First and foremost, vendors should be able to explain how their AI systems were trained and how they should be used to people with any level of technical expertise,” adds Dr. Zuloaga. “It’s important for vendors to demonstrate that models work, as well as audit them for bias. I believe creators of these tools should prioritize creating an AI Explainability Statement, which is a valuable third-party process that documents to the public, customers, and users how a given technology is developed and tested.”

Channeling the benefits of ChatGPT

One of the benefits of AI is that when it’s used in conjunction with humans to automate mundane tasks, people can focus on the things humans are good at. Recruitment, in particular, is bracing itself for a revolution in talent attraction strategies – something sorely needed in the current labour shortage.

“I work in the hiring and talent acquisition space, and our Science Team is eagerly researching ChatGPT to see if there are ways it could benefit our customers and their candidates,” says Dr. Zuloaga. “For recruiters, first drafts of offer letters and job descriptions come to mind as areas where ChatGPT could save time.”
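
To make the recruiter use case concrete, here is a minimal sketch of what a drafting request could look like. It assumes the OpenAI Python client (openai >= 1.0) and an API key set in the environment; the model name, role and prompt wording are illustrative placeholders, not a description of HireVue’s tooling.

```python
# Minimal sketch: asking a generative model for a first-draft job description.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set in
# the environment. Model name, role and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

role = "Bilingual HR Coordinator"  # hypothetical vacancy
must_haves = ["2+ years HR experience", "English/French fluency", "HRIS familiarity"]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You draft clear, inclusive job descriptions for HR review."},
        {"role": "user",
         "content": f"Draft a job description for a {role}. "
                    f"Requirements: {', '.join(must_haves)}. "
                    "Keep it under 250 words; a human will edit before posting."},
    ],
)

print(response.choices[0].message.content)  # a first draft only -- a recruiter reviews it
```

The design point here is the workflow rather than the model: the output is treated strictly as a first draft for a human recruiter to review and edit, which preserves the oversight Dr. Zuloaga calls for.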

What’s more, candidates could use generative AI to help them improve cover letters and resumes, says Dr. Zuloaga. She’s particularly interested in how ChatGPT could be used to help job seekers transition between industries by better understanding how their skills apply across different roles.

Bracing for future challenges

While the benefits are numerous, there are downsides to new AI too. The main concern for many leaders is the spread of false information – essentially letting the bot run away with itself. As with anything that affects people, a human needs to be present in the process – otherwise you risk running afoul of ethics.

“Most of us have interacted with AI in some way, whether it’s returning clothes to an online retailer, making a dinner reservation, or asking about the status of a job application,” says Dr. Zuloaga. “Interactions of this kind are the typical, benign chatbot use cases, but ChatGPT and other generative AI tools are raising well-deserved concerns. The primary concern I’m seeing is about the proliferation of disinformation. Frankly, innovation has outpaced safeguards, and it’s important that researchers and technologists are asking critical questions and rapidly trying to build in safeguards.”

Calming employee anxieties

It’s been a challenging year for employees – and it’s only March. Mass layoffs in the Canadian tech sector, coupled with rising costs of living and a potential recession, have left workers rightly anxious. Now, with AI gaining ground, workers are understandably worried about being ousted by the robots.

A recent report from the All Party Parliamentary Group on the Future of Work (APPG) found that AI-powered monitoring is contributing to worsening mental health among employees – specifically around targets and performance.

The report reads: “AI offers invaluable opportunities to create new work and improve the quality of work if it is designed and deployed with this as an objective. However, we find that this potential is not currently being materialized. Instead, a growing body of evidence points to significant negative impacts on the conditions and quality of work across the country.”

But it’s not necessarily the tech’s fault – more the lack of dialogue and transparency around how it’s being used. For people leaders, it’s a case of getting ahead of the fallout before it hampers morale and productivity. That means forewarning employees of any rollouts and, most importantly, involving them in the decision-making process. If AI is there to make your strategies more seamless, then your people are the ones to tell you how, where and why the improvements are most needed.

Three-point plan for implementing AI

So you’ve picked your new AI, you’ve spoken with your supplier and you’ve forewarned your people. Now comes the tricky part – the rollout. Inevitably, something will go wrong; there are simply too many factors at play to perfect everything the first time around. But that’s not a major stumbling block – as long as you prepare for the challenges.

“Rigorous testing is the bedrock of any product strategy, and every powerful tool should undergo testing before it’s deployed, as well as after,” says Dr. Zuloaga. “The details of that testing will vary by product.”

Dr. Zuloaga suggests a three-point plan for testing, perfecting and rolling out new HR tech successfully:

  1. Pre-deployment testing to ensure maximum predictive accuracy and minimal group differences, confirming that models meet standards on both (a simplified sketch of one such check follows this list).
  2. The deployed algorithm is then ‘locked,’ meaning it will not change in the wild as candidates interact with it.
  3. The team conducts model refreshes for every algorithm, no less than annually, to ensure continued validity and fairness.
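
For readers wondering what testing for ‘minimal group differences’ can look like in practice, below is a simplified sketch of one widely used check – the four-fifths (80%) rule from adverse-impact analysis, the same style of impact-ratio calculation referenced by bias audits under NYC Local Law 144. The numbers are made up and the code illustrates the general technique only; it is not HireVue’s actual test suite.

```python
# Simplified sketch of a pre-deployment group-difference check using the
# four-fifths (80%) rule from adverse-impact analysis. Illustrative only --
# the pass rates below are made-up numbers, not real audit data.

# Hypothetical holdout-evaluation outcomes: group -> (passed, total)
outcomes = {
    "group_a": (78, 120),
    "group_b": (55, 100),
    "group_c": (40, 80),
}

# Selection rate for each group
rates = {group: passed / total for group, (passed, total) in outcomes.items()}
highest = max(rates.values())

# Impact ratio: each group's rate relative to the highest-rate group.
# A ratio below 0.8 flags potential adverse impact under the four-fifths rule.
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact_ratio={ratio:.2f} [{flag}]")
```

A flagged ratio doesn’t establish bias on its own, but in a plan like the one above it would trigger deeper review before an algorithm is ‘locked’ for deployment.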
