Despite unease, Canadians letting AI make decisions: report

‘As AI becomes more capable of making decisions on our behalf, trust is not a ‘bolt‑on’ advantage, it’s a ‘built‑in’ necessity’

Canadians continue to express concern about artificial intelligence (AI), yet a growing share are allowing AI systems to act on their behalf — which poses a challenge to HR professionals.

The 2026 EY AI Sentiment Study, released 11 May, highlights a gap between what people say about AI and how they use it in practice.

Overall, just 13% of Canadians have used autonomous AI in the past six months — systems that go beyond offering recommendations to taking action without human intervention, reports EY.

The professional services company describes this as a “meaningful minority” whose behaviour could shape wider expectations as AI‑driven services become more reliable and convenient.

“What we’re seeing is less about blind trust and more about conditional permission,” says Biren Agnihotri, EY Canada Chief Technology Officer. “Canadians are comfortable with AI in familiar, low‑risk moments, and that everyday experience is reshaping how trust actually develops.”

AI woven into everyday routines

Even so, up to 78% of Canadians have used AI in energy and mobility, including route optimisation, travel planning and managing home energy consumption. Much of this use is embedded in apps and services rather than explicitly labelled as AI.

In customer experiences, 67% of respondents say they have interacted with AI through chatbots, recommendation engines or personalised offers. Technology and entertainment are also major touchpoints, with 61% using AI for content recommendations or smart‑device management, according to EY.

Health and wellness is identified as another growth area, with 55% of respondents saying they use AI for health information, symptom checking or insights from wearables.

“Autonomous AI is no longer theoretical,” the report states, noting that usage is moving from “low‑risk assistance to higher‑stakes decisions, including health, finance and transport.”

More than 70% of employees are using AI tools every week, and up to one-third are doing so without IT oversight, pointing to the rise of “shadow AI” across workplaces, according to a previous report.

Trust, security and accountability lag behind capability

Despite rising use, confidence in AI governance is not keeping pace. EY concludes that “AI use is racing ahead of people’s confidence in how it is governed, controlled and accounted for.” In Canada, 71% of respondents say they worry AI systems could be hacked or breached, making security the top concern identified in the study.

Only 39% of Canadians trust companies to protect their data when it is used by AI. Six in 10 respondents worry that organisations will fail to hold themselves accountable for AI use that leads to negative consequences, and 61% fear organisations will not comply with their own AI policies or relevant regulations.

EY reports that 59% of Canadian respondents fear AI decisions may conflict with their personal values, while 72% say human oversight remains essential, even when AI systems perform accurately.

“As AI becomes more capable of making decisions on our behalf, trust is not a ‘bolt‑on’ advantage, it’s a ‘built‑in’ necessity,” says EY Global CEO Janet Truncale. “When AI is designed with clear guardrails and continuous oversight, organisations can expand autonomy with confidence.”

Nearly one in three organisations in the U.S., Canada and Europe using AI in Microsoft 365 have experienced an AI‑driven data exposure incident, exposing HR records and other sensitive information, according to a previous report.

Strong demand for AI safeguards

The EY study finds consistent public demand for stronger safeguards. Respondents across markets want clearer rules for organisational AI use, parental controls, age limits and tougher regulatory frameworks. Authenticity is another concern, with more than three‑quarters of Canadians worried they will no longer be able to tell what is real or fake as generative AI spreads.

“What surprises me isn’t that people want stronger AI safeguards — it’s how consistent that expectation is across markets,” says Sarah Liang, EY Global Responsible AI Leader. “Regardless of where people live or how advanced AI adoption is, they’re asking for clearer rules, stronger accountability and visible protections.” EY notes that some organisations are responding with explicit public AI policies aimed at addressing bias, human oversight and data protection.

EY concludes that the future of AI will be shaped by “everyday choices — by when, where and how people decide to let AI act on their behalf.”

“The findings from this year’s survey make one thing clear: AI adoption is moving faster than sentiment,” the firm states.

A growing minority is already delegating decisions to AI, while a broader group is building confidence through everyday, low‑risk uses — a pattern that is likely to influence employee expectations of AI in HR and workplace systems.

Human resources professionals are facing mounting people and governance risks as organisations lean more heavily on AI while controls and workforce protections struggle to keep pace, according to two previous studies.

Here’s a list of Canadian companies with documented AI-use directives:

Shopify (Ottawa): "Reflexive" AI usage declared a "fundamental expectation" for all employees; teams must demonstrate why AI cannot do the work before requesting more headcount; AI usage factored into performance and peer reviews; AI required in the "GSD" prototype phase for product designers. (Source: Canadian HR Reporter — Should AI use be mandatory in the workplace? https://www.hrreporter.com/opinion/editors-desk/should-ai-use-be-mandatory-in-the-workplace/394106)

Royal Bank of Canada (RBC): Roughly 27,000 employees use RBC Assist for daily workflows; 8,000 Capital Markets staff use Aiden; AidenResearch lets a single analyst cover up to 50 companies; AI fluency prioritised for executives, with bank-wide training rolled out by HR. (Source: Finextra)

TD Bank Group: Contact-centre colleagues use a Layer 6 virtual assistant to retrieve policy answers in seconds; TDS AI Virtual Assistant used by institutional Sales, Trading and Research staff to synthesise information for client inquiries; engineers required to use GitHub Copilot; Microsoft 365 Copilot rolled out to targeted populations with 80% engagement. (Source: TD Stories)

Scotiabank: Scotia Navigator gives staff assistive AI for routine work, research, analytics and coding; employees can build and deploy custom AI assistants; mandatory training and annual attestations on responsible AI use; AI handles more than 40% of contact-centre queries and routes 90% of commercial banking emails. (Source: Newswire / Scotiabank)

CIBC: Bank-wide mandate of AI literacy with enforced governance; a capital markets workflow cut from 10–13 hours to about 10 minutes; mortgage operations use AI to read documents, extract data and auto-populate adjudication packages; "safe-to-try" environment with Azure AI rolled out across teams. (Source: Microsoft customer story)

BMO: Opened the BMO Institute for Applied AI & Quantum to expand internal AI use and explore quantum computing; AI cited in fraud detection and credit monitoring at the bank's annual meeting. (Source: eMarketer)

Telus: Call-centre agents in B.C. required to use AI "co-pilot" tools on 100% of retention calls; AI listens to calls and produces performance reports for managers; Telus Digital deploys Tomato.ai speech-to-speech models to alter offshore agents' accents in real time. (Source: CBC News; The Globe and Mail)

Sun Life: Employees upskilled through CIFAR's "Destination AI" course on the internal training platform; prompt-engineering course added to the learning platform; AI tools deployed for software engineers as they code; generative AI tools rolled out as part of the hybrid employee experience. (Source: Tech Talent Canada)

Element Fleet Management: Centralised AI strategy with an HR-led training mandate and ongoing learning programs to develop "AI culture champions," as described by VP Talent and Performance Siobhan Calderbank. (Source: HRD Canada)

Symcor: HR-owned enterprise AI training mandate to ensure employees are equipped to use AI tools, as described by Director of HR Technology and People Analytics Rachel Wong. (Source: HRD Canada)