AI-related layoffs ‘do not reflect structural changes driven by technological transformation,’ claims researcher
Canada’s labour protections are not equipped to deal with the speed and scope of job losses linked to artificial intelligence (AI), leaving workers to absorb most of the costs of technological change, a new analysis warns.
Policy choices over recent decades have weakened protections just as AI and other digital technologies begin to reshape workplaces, claims Dilara Baysal, a researcher at Concordia University, in a report published in Policy Options.
“Work no longer provides stability for millions of Canadians,” Baysal writes, citing rising unemployment, declining job quality and the growing risk of AI‑driven displacement. As a result, she concludes, “the labour market is becoming more fragile rather than more resilient.”
Baysal contends this fragility “is not accidental.” Instead, she says it reflects decisions about “how work is organized, how jobs are eliminated or restructured and who bears the risks of economic and technological change.” With “the absence of robust labour protections,” she writes, “adjustment costs” from AI are being shifted onto individual workers.
Previously, one employment lawyer warned that AI in the workplace can be a legal minefield.
From postwar protections to deregulation
Baysal places Canada within a broader trend toward labour‑market deregulation since the early 1990s, undertaken “in the name of flexibility and competitiveness.”
However, according to Baysal, “promised productivity and employment gains have not materialized.” Evidence from Canada and peer economies, she argues, shows that reducing wages, job security and employment protections “does not make for a more dynamic labour market,” but rather “reallocates risk from firms and governments that benefit from new technologies to workers, who are left to absorb the consequences of restructuring and permanent loss of jobs.”
Baysal contrasts the current approach with the postwar period, when technological job loss was treated as a shared social risk. The Unemployment Insurance Act of 1940 established job loss as a “collective responsibility,” while the Industrial Relations and Disputes Investigation Act of 1948 secured the right to unionise and bargain collectively. This framework, she notes, ensured that “workplace change would be negotiated rather than imposed.”
Federal labour standards enacted in 1965 then set minimum requirements for hours of work, termination notice and related protections, according to the report published in Policy Options. The last significant federal update directly addressing technological change came in 1973, when amendments to the Canada Labour Code introduced advance notice and consultation requirements for job losses driven by new technology. But Baysal points out that these protections apply only to federally regulated sectors, “about 6% of the workforce,” leaving most workers uncovered.
Previously, a new AI tool from Anthropic aimed at in‑house legal teams triggered a sharp sell‑off in major legal and data‑services stocks, underscoring how quickly generative AI could alter white‑collar work in the corporate sector.
Income security and severance gaps
Baysal also highlights the gradual scaling back of Employment Insurance (EI). Beginning in the late 1980s and entrenched in the 1995 federal budget, EI was significantly reduced. Between 1976 and 2019, coverage fell from 87% of unemployed Canadians to 38%, with “women and part‑time workers especially hard hit.”
Across Canada, Baysal says, severance rules are fragmented and provide only basic protection. Employers are generally required to give notice or pay in lieu, but in many provinces — including British Columbia, Alberta, Manitoba and Quebec — they are “not required to provide severance beyond these minimum standards.” While some workers receive more through contracts or collective agreements, “many receive only the minimum the law requires.”
Once severance ends, laid‑off workers may receive EI, which replaces 55% of prior earnings up to a maximum of $729 a week, based on annual earnings of up to $68,900. Baysal argues that “these rules do not adequately support workers in a fast‑changing economy shaped by rapid technological change.”
Limited access to retraining
Retraining opportunities are also limited, Baysal states. A federal programme launched in 2018 to support unemployed and mid‑career workers in upgrading their skills while on EI was time‑limited, and full‑time training under EI “remains largely out of reach.” In 2020–21, fewer than 1% of eligible recipients (613 workers) were approved, rising only to about 780 in 2023–24.
These figures, Baysal writes, “point to systemic barriers that keep retraining inaccessible for most workers.”
Most training is delivered through a patchwork of provincial programmes – including Better Jobs Ontario and the Skills Development Fund in Ontario, the B.C. Employer Training Grant, and supports via Services Québec. Baysal argues that complex eligibility rules and limited spaces “make it difficult to upgrade or learn new job skills” even as AI and other technologies rapidly change work.
AI‑linked layoffs and calls for reform
Baysal notes that in 2025 “several major employers globally cited AI as a factor behind mass layoffs and hiring freezes.” While the federal government has introduced temporary measures for tariff‑related unemployment, Baysal says these “do not reflect structural changes driven by technological transformation.”
She calls for two main reforms: extending labour protections related to technological change beyond federally regulated sectors, and redesigning EI so that “support for retraining is a core feature” rather than a marginal add‑on. EI, she argues, should fund full‑time skills training “without placing recipients at risk of losing income support.”
“New technologies are evolving at an unprecedented pace in almost every aspect of our day‑to‑day lives,” Baysal writes. “It is crucial that Canada provide workers with resources and protections to navigate a shifting labour landscape and better position themselves in a rapidly changing world.”
Here are some of the current AI‑related laws and regulatory frameworks in Canada:
| Instrument / Framework | Level & Status | Scope & Key AI Relevance | Sources |
| --- | --- | --- | --- |
| Personal Information Protection and Electronic Documents Act (PIPEDA) | Federal – in force | Core private‑sector privacy law governing collection, use and disclosure of personal information, including data used to train and operate AI systems in most provinces. | Parliament of Canada; Office of the Privacy Commissioner of Canada guidance on PIPEDA and AI; Justice Canada backgrounders. |
| Provincial private‑sector privacy laws (Alberta PIPA, B.C. PIPA, Quebec private‑sector law) | Provincial – in force | “Substantially similar” to PIPEDA and apply instead of PIPEDA for intra‑provincial activities; increasingly updated to address automated decision‑making and AI‑related risks. | Government of Alberta (PIPA); Government of British Columbia (PIPA); Government of Quebec privacy legislation summaries. |
| Digital Charter Implementation Act, 2022 (Bill C‑27) – incl. Artificial Intelligence and Data Act (AIDA) | Federal – proposed, did not pass | Would have created a risk‑based framework for “high‑impact” AI systems and overhauled federal privacy law; treated as a preview of likely future federal AI regulation. | Justice Canada “Digital Charter Implementation Act, 2022” materials; Parliament of Canada LEGISinfo on Bill C‑27; legal commentaries by Canadian law firms. |
| Online Harms Act (Bill C‑63) | Federal – proposed, stalled | Focused on harmful online content and platform duties; indirectly touches recommender systems and algorithmic amplification but not an AI‑specific statute. | Parliament of Canada LEGISinfo on Bill C‑63; Canadian Heritage fact sheets; academic and legal analyses of the Online Harms Act. |
| Treasury Board Directive on Automated Decision‑Making (ADM Directive) | Federal (public sector policy) – in force | Binding directive for federal departments using automated decision systems that affect rights or entitlements; requires Algorithmic Impact Assessments, documentation, transparency and human oversight. | Treasury Board of Canada Secretariat – “Directive on Automated Decision‑Making” and Algorithmic Impact Assessment documentation. |
| Quebec’s Law 25 (Act to modernize legislative provisions as regards the protection of personal information – formerly Bill 64) | Provincial (Quebec) – in force, phased in | Modernises privacy law and explicitly regulates decisions made “exclusively by an automated decision‑making system”; creates rights to be informed, obtain reasons and seek correction; high penalties for non‑compliance. | Government of Quebec – Law 25 texts and guidance; Quebec privacy regulator (CAI) materials; privacy law firm analyses. |
| Human‑rights codes (federal and provincial/territorial) | Federal & provincial/territorial – in force | Prohibit discrimination in employment; apply to algorithmic and AI‑driven decisions in hiring, promotion, discipline and termination just as they do to human decisions. | Canadian Human Rights Act; provincial/territorial human‑rights codes; guidance from human‑rights commissions on AI and discrimination. |
| Sector‑specific financial and insurance rules touching algorithms | Federal & provincial – in force / evolving | Supervisory expectations and guidelines on model risk, fairness and transparency in financial services and insurance; increasingly applied to AI‑driven credit, underwriting and fraud systems. | Office of the Superintendent of Financial Institutions (OSFI) guidance; provincial insurance regulators’ bulletins; industry risk‑management standards. |