Workers Are Using A.I. They Don't Trust. That's a Problem for the C-Suite

A new national poll finds that adoption of artificial intelligence in American workplaces is accelerating sharply, even as anxiety about the technology deepens and confidence in the people deploying it erodes.

The numbers tell a story that corporate America's human resources chiefs would rather not hear.

More than half of American adults (55 percent, up from 44 percent just twelve months ago) now believe that artificial intelligence will do more harm than good in their daily lives. Seventy percent think the technology will reduce job opportunities across the economy. Eight in ten say they are concerned about A.I. And fewer than one in four say they trust what the technology produces most or almost all of the time.

Yet the same survey, conducted by Quinnipiac University and released this week, also found that 51 percent of Americans are already using A.I. to research topics, that nearly a third of employed adults are using it to do their jobs, and that the share of Americans who say they have never used an A.I. tool has dropped to 27 percent, down from 33 percent a year ago.

The picture that emerges from these findings is not one of a workforce in revolt against a technology its employers are forcing upon it. It is something subtler, and in many ways more difficult to manage: a workforce that is adapting to artificial intelligence at speed, while trusting neither the technology itself nor the institutions deploying it.

For the men and women who hold the title of chief human resources officer at American companies, that gap, between accelerating adoption and deepening unease, has become one of the defining workforce challenges of the moment.

A generation of optimists turns pessimist

Among the findings in the Quinnipiac data, perhaps none is more arresting for talent executives than the generational breakdown on job displacement.

Gen Z (the cohort born between 1997 and 2008, now beginning careers that will be shaped more profoundly by artificial intelligence than those of any generation before them) is the most pessimistic of any age group about what the technology means for employment. Eighty-one percent of Gen Z respondents said they believe A.I. will lead to a decrease in job opportunities. That compares with 66 percent among baby boomers and 57 percent among the silent generation.

The finding inverts a narrative that has become common in boardrooms and human resources departments: that younger workers, raised on technology and fluent in digital tools, are the natural advocates for A.I. transformation inside their organizations. The Quinnipiac data suggests the opposite may be true. Gen Z's familiarity with artificial intelligence appears to have produced not enthusiasm but clear-eyed anxiety: a recognition, perhaps, that the jobs most exposed to the technology's disruptive potential are often the entry-level and early-career roles that younger workers currently occupy.

The implications for recruitment, retention and organizational culture are considerable. Companies that have built their A.I. change narratives around the assumption of generational enthusiasm may find those narratives do not hold. 

The trust problem no one is solving

The most strategically significant number in the survey for human resources executives may not be the headline figures on harm and job loss, but a quieter statistic buried deeper in the data.

Seventy-six percent of Americans said they can trust A.I.-generated information only some of the time, or hardly ever. At the same time, more than a quarter of employed adults are using A.I. for workplace projects, and a similar share are using it to analyze data.

The arithmetic is uncomfortable. A substantial portion of the American workforce is regularly deploying a tool it does not trust to produce outputs that inform business decisions. That is not an adoption problem. It is a judgment and governance problem, one that sits at the intersection of workforce capability, organizational culture and the credibility of the executives overseeing A.I. deployment.

Quinnipiac researchers noted the tension directly. "The contradiction between use and trust of A.I. is striking," said Chetan Jaiswal, an associate professor of computer science at Quinnipiac's School of Computing and Engineering. "Americans are clearly adopting A.I., but they are doing so with deep hesitation, not deep trust."

For chief human resources officers, the challenge is not simply to increase adoption; adoption is already rising without intervention. It is to build the critical literacy and organizational conditions under which workers can use A.I. well: consistently, skeptically and with enough confidence to know when to trust it and when to override it.

The supervisor question

Nowhere does the survey draw a cleaner line between what workers will accept and what they will not than in the question about machine authority.

Eighty percent of Americans said they would be unwilling to hold a job in which their direct supervisor was an A.I. program that assigned their tasks and set their schedule. The figure was consistent across generations, income levels and job types. Even among Gen Z, the group most fluent with A.I. tools, 82 percent said they would be unwilling.

The finding has direct operational implications for human resources departments overseeing the rapid expansion of A.I.-assisted tools in performance management, workforce scheduling, capacity planning and productivity monitoring. Workers appear willing to use artificial intelligence as an instrument. They are not willing to be governed by it.

The survey's findings on medical artificial intelligence underscore the point. When respondents were asked whether they would prefer A.I. alone, a human alone, or a combination of both to read their medical scans, even if the A.I. were proven more accurate, 81 percent chose the combination. Only three percent said they would rely on A.I. alone.

The desire for human oversight, it turns out, persists even when the case for A.I. superiority is stipulated. For executives deploying tools that use machine learning to influence decisions about people's working lives, that finding represents a constraint that is unlikely to soften soon. 

White collar, blue collar: the anxiety is the same

A second assumption that the Quinnipiac data complicates is the widespread belief inside many large organizations that artificial intelligence anxiety is concentrated among lower-skilled and manual workers, and that professional employees are broadly comfortable with the technology.

The poll found that 73 percent of blue-collar workers believe A.I. will lead to a decrease in job opportunities. Among white-collar workers, the figure was 71 percent, a difference so small as to be within the survey's margin of error.

What does differ significantly is current usage. Nearly half of white-collar workers (49 percent) report using A.I. on the job. Among blue-collar workers, the figure is 18 percent. Professional employees are not more sanguine about the technology's implications for their careers. They are simply further along in confronting those implications in practice.

For human resources executives, this has direct consequences for the design of internal communications strategies, change management programs and workforce development investments. A message calibrated to reassure frontline workers while assuming the professional workforce is on board is likely to miss a significant portion of the talent it most needs to reach. 

The transparency deficit

The poll's findings on institutional trust add a further dimension to the challenge facing human resources leaders.

Seventy-six percent of Americans said businesses are not doing enough to be transparent about how they use artificial intelligence. Seventy-four percent said the government is not doing enough to regulate it. Nearly half (47 percent) said they do not believe A.I. development is being led by people or organizations that represent their interests.

Those numbers describe a trust environment that is deteriorating, not stabilizing, and they are the backdrop against which every corporate A.I. communication is being received. Vague assurances about responsible deployment carry little weight in an environment where three-quarters of the public has already rendered a negative verdict on business transparency.

For chief human resources officers, the question is not whether to be transparent about the use of A.I. in people processes (in hiring, in performance evaluation, in workforce planning) but how specific and credible that transparency can be made. The organizations that can answer concretely, identifying which tools are in use, what decisions they inform, what human oversight exists and what recourse employees have, are likely to occupy a meaningfully different position in the minds of their workers than those that cannot.

What the data demands

The Quinnipiac findings arrive at a moment when the financial stakes around artificial intelligence could scarcely be higher. Amazon, Meta, Google and Microsoft plan to spend a combined $650 billion on A.I. infrastructure this year. Boards across corporate America are pressing their executive teams for evidence that productivity returns are materializing.

The data suggests those returns will not be delivered by technology alone. They will be delivered, or not, by workers, and by the organizations that have or have not done the work of building the trust, capability and cultural conditions that turn anxious, low-confidence adoption into genuine and durable productivity.

That work belongs, in large measure, to the chief human resources officer. The Quinnipiac poll is a detailed accounting of how much of it remains to be done.

America and AI: what workers really think

[Infographic: Quinnipiac University national poll, March 2026. 55% say A.I. does more harm than good; 70% say A.I. reduces jobs; 80% are concerned about A.I.; 81% of Gen Z expect fewer job opportunities. Key insight: adoption is increasing faster than trust. Source: Quinnipiac University Poll, 2026.]
