The world's largest social media company has a nascent workers' movement on its hands — and the legal, psychological, and cultural questions it raises are landing squarely in Canadian boardrooms.
Meta Platforms employees at multiple United States offices staged an extraordinary workplace protest on Tuesday, distributing anonymous flyers urging colleagues to sign a petition against the company's recent installation of mouse-tracking software on their work computers — technology many workers believe is being used to train the artificial intelligence systems intended to replace them.
The pamphlets, photographed by Reuters, appeared in meeting rooms, atop vending machines, and balanced on toilet paper dispensers inside the Facebook parent company's offices. "Don't want to work at the Employee Data Extraction Factory?" they asked.
READ MORE: Meta to cut 10% of its workforce as Zuckerberg redirects billions toward AI
The action comes roughly a week before Meta is expected to begin notifying approximately 8,000 employees caught in the first wave of its 10 per cent workforce reduction. It is the most visible sign yet of a labour movement brewing inside one of the world's most powerful technology companies — and it arrives at a moment when every Canadian people professional faces versions of precisely the same pressures.
What Happened
Meta confirmed earlier this year that it would begin cutting approximately 8,000 roles in May, reorganising surviving employees into AI-focused "pods" under its Superintelligence Labs division. A second wave of cuts is planned for the second half of 2026.
Into this climate of deep job insecurity, Meta installed mouse-tracking software that captures cursor movements, clicks, and navigation patterns. The company's justification is straightforward: it is building AI agents that need to learn how humans operate computers. Spokesperson Andy Stone said the software captures "things like mouse movements, clicking buttons, and navigating dropdown menus."
For employees already watching their colleagues receive termination notices, the framing has landed badly. The flyers and an associated online petition cite the US National Labor Relations Act, informing signatories that workers are "legally protected when they choose to organize for the improvement of working conditions."
Meanwhile, in the United Kingdom, a group of Meta employees has launched a formal union drive with United Tech and Allied Workers (UTAW), a branch of the Communication Workers Union. The campaign website uses the URL "Leanin.uk" — a pointed reference to former chief operating officer Sheryl Sandberg's book encouraging women to seek equal footing in the workplace. "Meta's workers are paying the price for management's reckless and expensive bets," said UTAW organiser Eleanor Payne. "Staff are facing devastating job cuts, draconian surveillance, and the cruel reality of being forced to train the inefficient systems being positioned to replace them."
Why This Matters in Canada
Canada is not the United States. The legal, regulatory, and cultural context for workplace monitoring here is distinct — and in several respects, more protective of workers.
As HRD has reported, Ontario's Working for Workers Act (2022) requires any employer with 25 or more employees to have a written policy in place on electronic monitoring — specifying what is tracked, in what circumstances, for what purposes, and what the employer intends to do with the information. A policy that discloses monitoring is occurring does not, of itself, make the monitoring lawful; employers must separately satisfy themselves that it does not breach privacy law.
The federal privacy landscape is equally demanding. The Personal Information Protection and Electronic Documents Act (PIPEDA) prohibits the collection of personal information for purposes that are not disclosed, are not consented to, or could cause significant harm. British Columbia and Alberta have their own provincial privacy legislation. And Quebec, under Law 25, requires transparency in automated decision-making and mandates that employees be able to understand how any significant decision affecting them was reached. As employment lawyers have told HRD, monitoring is explicitly required to comply with privacy laws in British Columbia, Alberta, and Quebec, and is subject to employment standards consultation requirements in Ontario.
The practical question for HR is this: does deploying mouse-tracking software that captures employee workflow data and feeds it into AI model training — without the ability to opt out — satisfy those obligations? Meta's own framing makes the answer harder, not easier, to defend. The company has described the data as being used not for performance management but for AI product development. That distinction matters legally. Employees in Ontario, Quebec, or a federally regulated workplace who receive a monitoring disclosure that omits this use case may have grounds to challenge it.
The Anxiety Beneath the Flyers
To understand the depth of anger at Meta, it is necessary to appreciate what employees are experiencing. HRD has tracked the widening chasm between executives and workers on AI adoption: only four per cent of employers report employee resistance as a barrier to AI adoption, yet nearly a quarter of workers say they would consider leaving a job if forced to use AI in ways they did not support. Glassdoor reviewers have likened company monitoring systems to "AI Big Brother," citing screen-time tracking to the minute.
Research cited by HRD found that 90 per cent of organisations have already reduced or frozen hiring in anticipation of future AI productivity gains — despite leading analysts warning employers not to make workforce decisions based on "AI potential that hasn't been proven yet." And such bets can backfire: Gartner forecasts that by 2027, 50 per cent of companies that attributed customer service headcount reductions to AI will rehire staff to perform similar functions.
The Meta protest compresses these tensions into a single visible moment. Employees were not resisting monitoring per se. They were resisting the perceived combination of impending mass layoffs, opaque data collection, and the inference — reinforced by the company's own public statements — that their daily work patterns are being harvested to build the systems that will make their roles redundant.
The Employment Lawyer's View
HRD has previously spoken with employment lawyer Barry Levitt, who warned that AI use in the workplace is a "legal minefield." His counsel on monitoring is direct: "Employees need to know they are being monitored, and from a surveillance perspective they need to consent to that use." Monitoring that touches on protected categories under human rights law could itself be unlawful, he added.
READ MORE: AI in the workplace can be a legal minefield, warns employment lawyer
The legal landscape for AI in Canadian workplaces is still catching up. Bill C-27, which would introduce the Consumer Privacy Protection Act and the Artificial Intelligence and Data Act (AIDA), is still moving through Parliament. When enacted, it will impose significant new obligations on organisations using AI systems that affect individuals — including employees — and will carry penalties of up to the greater of five per cent of global revenues or $25 million for the most serious offences.
The message for HR is not to wait for the legislation. Quebec's Law 25 is already in force. Ontario's electronic monitoring disclosure requirements are already operative. PIPEDA's principles on collection, consent, and purpose limitation apply today.
What HR Leaders Should Do Now
The Meta situation is not primarily a legal compliance story. It is a trust story — and it is one that HR has both the authority and the responsibility to influence before it reaches the flyer-and-petition stage.
York University professor David Weitzner, speaking to HRD, put it plainly: "The job of the HR folks would be to really listen to the concerns of those who have not widely adopted AI. As long as we privilege the human aspect first, and we view AI as a tool that supports the human, and not let the tool drive the human, then you're on the right track."
READ MORE: Top 6 issues for employers in year of AI agents
The obligations are practical: disclose monitoring clearly and completely, including how data will be used downstream; consult employees before introducing technology that constitutes a significant change to working conditions; ensure any AI-related data collection satisfies the purpose limitation requirements under PIPEDA, Law 25, or applicable provincial legislation; and take seriously the psychosocial risk that intensive surveillance creates — particularly when layoffs are already underway or anticipated.
Meta's employees did not reach for flyers and labour law because the mouse-tracking software was technically impermissible. They reached for them because no one had made them feel the organisation was on their side.
That is, at its core, a people management failure — and it is one that Canadian HR leaders have every tool to prevent.