New research points to rising retention risk amid AI adoption
Mandated use of artificial intelligence tools at work is emerging as a potential driver of resignations, as employees push back against AI systems.
HR technology provider Click Boarding has warned that, rather than producing productivity gains, poorly implemented AI initiatives are quietly fuelling disengagement and prompting some workers to quit.
Its research, conducted in January and February 2026, found that some AI adoption strategies reduce autonomy, add bureaucracy, and make employees' jobs feel less meaningful.
The findings come as US employee engagement has fallen to its lowest level in a decade, while job‑seeking activity has hit a 10‑year high.
Search data shows a 10% year‑on‑year increase in US searches for "quitting my job," alongside emerging queries such as "made to use AI at work."
"The rapid adoption of AI has many employees, and organisations for that matter, feeling like everything is spinning," said Stephanie Davis Neill, chief operating officer at Click Boarding.
"Even before AI, change management has always been one of the most challenging aspects of running a business, especially for HR professionals who are often looked at to lead these efforts. Just like AI must learn, so do the employees working with it. It is a process, not just a switch to turn on."
Common AI complaints
One key problem is a sharp disconnect between employer and employee perceptions.
Only 4% of employers reported employee resistance as a barrier to AI adoption, yet nearly a quarter of workers (22%) said they would consider leaving a job if forced to use AI tools in ways they did not support.
The study, which drew on Glassdoor reviews and social media posts, highlighted recurring complaints from workers involved in AI rollouts.
These included being excluded from discussions with leadership about AI strategy, discomfort at developing or reporting on AI systems that might eventually replace their own roles, and a preference in some cases to complete tasks without AI because of concerns over creativity and quality.
Some respondents also reported feeling blamed when AI underperformed, with shortcomings attributed to "bad prompts" rather than limitations of the tools.
Others described management expectations that AI could take over responsibilities it was not yet capable of handling.
One Glassdoor reviewer likened company monitoring systems to "AI Big Brother," citing screen‑time tracking down to the minute. Another suggested that employees who did not embrace AI risked worse career prospects.
What can employers do?
Davis Neill said employers should update their compliance‑driven policies to include AI guidelines and ensure that employees acknowledge them.
Key AI process information should be shared as early as the onboarding period, and regular feedback channels should be provided so organisations can address concerns proactively.
These measures can help reduce retention risk and keep employees engaged amid AI adoption, according to Davis Neill.
"Internal feedback mechanisms, especially anonymous ones, often provide a place for disengaged employees to communicate some of the frustration that can build up, especially when regular conversations are not happening with a direct leader," she said.