The AI proficiency gap is now an HR problem

Think your staff are using AI well? Think again...

In many corporate boardrooms, artificial intelligence is treated like a deployment: choose a tool, publish a policy, run a training, count the logins. By those measures, the AI era is already here. Leaders report broad adoption, clear strategies, and a workforce eager to experiment.

But a new survey of 5,000 knowledge workers across the United States, the United Kingdom and Canada suggests a less flattering truth: employees are using AI, yes — just not in ways that meaningfully change how work gets done. The result is a familiar corporate mirage: high activity, low impact.

For senior HR leaders, this isn’t a technical shortcoming. It’s a capability and operating-model failure — and it sits squarely in the terrain HR owns: skills, manager expectations, role design, and performance measurement.

From “Can You Prompt?” to “Can You Produce Value?”

The report argues that “AI proficiency” has shifted underneath organizations’ feet.

In 2025, proficiency largely meant baseline fluency: understanding what AI is, how to avoid obvious data risks, and how to write a serviceable prompt. Companies invested heavily in those table-stakes skills, with predictable outcomes. Employees learned to ask AI to summarize emails, rewrite messages, or provide quick answers.

In 2026, the bar has moved again. Proficiency now means something more demanding and more operational: incorporating AI into real, value-adding tasks every week. Not AI as an occasional helper, but AI as a regular component of workflows that matter — the point where productivity gains can compound into enterprise return on investment.

The report’s central finding is blunt: most organizations have not crossed that threshold.

A Workforce of “Experimenters,” Not Practitioners

Three years after ChatGPT’s launch, the dominant mode of AI use remains superficial.

Most workers fall into what the report calls “AI experimenters”: people who use AI for basic tasks like meeting-note summaries, email rewrites, and quick informational searches. The second-largest group is “AI novices,” who either do not use AI at all or tried it a few times and stopped. Only a small sliver qualify as “AI practitioners” — those who integrate AI into workflows and report significant productivity gains — and an even smaller fraction as true experts.

This matters because the path to ROI is not paved with occasional experimentation. It is paved with repeatable use cases: automation that replaces steps, analysis that accelerates decisions, and workflow redesign that changes throughput.

Instead, the report finds that many employees are saving little to no time. A meaningful share reports no time savings at all, and many save less than four hours per week — far below what most organizations would need to justify large-scale AI investments.

The “Use Case Desert” — and Why Training Isn’t Fixing It

If you ask executives what’s holding AI back, you’ll often hear variations of “people need more training.” The report points to a different bottleneck: workers don’t know what to use AI for.

This is the “use case desert”: employees may understand the mechanics of prompting, but when faced with the reality of their job — the weekly rhythms, the handoffs, the approvals, the constraints — they struggle to identify where AI can do more than polish prose.

That distinction is crucial for HR. Traditional training approaches tend to teach tools and rules. But use case discovery is not just knowledge; it’s judgment. It requires mapping work, spotting bottlenecks, and designing new routines — the kind of applied competence that is best developed in role-specific contexts, reinforced by managers, and measured over time.

The report suggests that even when organizations provide training, outcomes remain weak because the content is aimed at the wrong target. Employees may leave training with better safety awareness and improved prompting, yet still lack intermediate skills: breaking down processes, selecting high-leverage tasks, and building repeatable workflows.

Most AI Use Cases Are Unlikely to Pay Off

The use cases workers report as “most valuable” are telling. High on the list: using AI as a replacement for search; generating drafts; editing tone and grammar; summarizing documents; basic brainstorming. Lower on the list: automations, robust data analysis, code generation that changes delivery speed.

In other words, AI is being used as a convenience layer — not a productivity engine.

A workforce that relies on AI primarily for one-off writing assistance may feel more efficient in the moment, but it rarely produces the kind of measurable time savings leaders are betting on. That requires something harder: taking recurring work and redesigning it so the machine does more of the repeatable lifting, every week, with less variability.

For HR leaders tasked with proving the value of enablement investments, this is the uncomfortable middle: high reported usage, low demonstrated impact.

The Leadership Perception Gap

Perhaps the report’s most consequential finding is not about employees at all, but about executives’ understanding of them.

C-suite respondents are far more likely than individual contributors to say their company has a clear AI strategy, a well-functioning policy, accessible tools, and a culture that encourages experimentation. Leaders also report high enthusiasm and trust in AI, and frequent personal use.

Yet the rest of the organization tells a different story — one where access is uneven, policies feel unclear or unhelpful, and support is inconsistent. The report frames this as an “awareness gap”: leadership believes deployments are succeeding, while employees report minimal impact.

For HR, this is a governance issue as much as a cultural one. When executives measure success through adoption and access metrics, they can remain insulated from the more operational truth: whether AI is improving cycle time, reducing rework, and changing outcomes in the roles that actually run the business.

The Hidden Fault Line: Individual Contributors

The report identifies a striking inequity: individual contributors — the employees most likely to perform repetitive, automatable work — often receive the least support.

Compared with managers and executives, they are less likely to have clear access to tools, less likely to receive training, and far less likely to be reimbursed for AI products. They also report more anxiety and less trust, and they experience less manager encouragement to use AI.

This is the inversion at the heart of many AI rollouts: the people with the most influence and discretionary budget get the most enablement, while the people whose workflows could generate the most aggregate time savings are left with the least.

Senior HR leaders are uniquely positioned to correct this because the levers are HR levers: standardizing access, defining expectations for managers, embedding AI into role competencies, and ensuring that learning resources match the actual work of frontline knowledge roles.

Industry and Function Differences: A Map of Where HR Should Start

Not all parts of the economy are experiencing AI the same way. The report finds higher proficiency in sectors like technology and finance and lower proficiency in sectors like healthcare, education, and retail — patterns that track with differences in strategy, policy clarity, and tool access.

Across functions, engineering and strategy rank higher than areas like operations and customer service/support, despite customer service being a domain with obvious automation potential. The report also highlights gaps between “obvious” value cases and actual behavior: many engineers are not using AI for coding assistance, and many product managers are not using AI for prototyping.

For HR, this suggests a pragmatic approach: stop treating AI as a single enterprise skill and start treating it as a portfolio of function-specific capabilities, each with its own use cases, risks, and measures of impact.

What HR Leaders Should Do Now

The report’s mandates for 2026 read like a management agenda, but they are, in practice, an HR agenda.

Measure outcomes, not access. If your dashboards stop at adoption rates, they will flatter your program and mislead your executives. HR can push measurement toward time saved, use case quality, and business outcomes at the role level — the metrics that reveal whether AI is changing work.

Make use case development a managed competency. The report argues that use case discovery cannot be left to individual curiosity. HR can formalize it: role-based playbooks, curated libraries, internal communities of practice, and a clear expectation that teams will develop and share repeatable workflows.

Bridge the individual-contributor gap. If ICs have the least access and support, the organization is starving the very segment that could deliver the largest productivity dividend. HR can standardize access, create equitable reimbursement policies, and set manager expectations that are audited and coached.

Rebuild training around workflows. Safety and prompting are necessary foundations. They are not sufficient. HR learning programs should evolve toward workflow mapping, task decomposition, evaluation of AI outputs, and the habit of continuous experimentation tied to measurable outcomes.

Close the executive awareness gap. HR can create mechanisms for leaders to see what employees experience: structured skip-level listening on AI barriers, shadowing programs, and regular reporting that surfaces where adoption is shallow and why.

The Real Story: AI Is Changing Work — Just Not Yet at Scale

The report’s most useful contribution may be psychological. It punctures the comforting narrative that AI transformation is chiefly a matter of tool rollout. Tools are necessary, but they are not the transformation. The transformation is behavioral and managerial, and it arrives only when AI becomes routine inside the actual work — not the demos, not the policy PDFs, not the training completion rates.

For senior HR leaders, this is a familiar pattern: enterprise technology succeeds only when capability, incentives, and management practices change with it. In 2026, AI proficiency is no longer a question of whether employees can use AI.

It is whether the organization has taught them what to do with it — and whether leaders are willing to measure the answer honestly.
