Inside Amazon's "tokenmaxxing" scandal lies a textbook warning about what happens when organisations measure AI adoption rather than AI value
Amazon employees are using an internal AI tool to run unnecessary, low-value tasks — not because the work needs doing, but because the activity inflates their scores on a company leaderboard tracking artificial intelligence usage.
The practice, which workers have taken to calling "tokenmaxxing," was reported yesterday by the Financial Times and has since spread as a case study across the technology industry. It exposes, in unusually vivid terms, what happens when a productivity metric becomes a target: the metric stops measuring productivity and starts measuring the human capacity for gaming it.
The episode is not merely a Silicon Valley curiosity. For HR leaders currently designing, implementing or being asked to justify AI adoption programmes — and in Singapore, that covers most of the profession — it is a live demonstration of a failure mode that is easy to build and hard to reverse.
What Amazon built, and what went wrong
Amazon had been widely deploying MeshClaw, an in-house agentic AI product that allows employees to create software agents capable of connecting to workplace tools and completing tasks on a user's behalf. The bot can initiate code deployments, triage emails and interact with applications including Slack, according to the Financial Times. An internal memo described it in terms that will be familiar to anyone who has sat through an AI all-hands: it "dreams overnight to consolidate what it learned, monitors your deployments while you're in meetings and triages your email before you wake up."
More than three dozen Amazon engineers worked on the tool. The company positioned it as empowering "thousands of Amazonians to automate repetitive tasks each day."
But Amazon had also introduced targets requiring more than 80% of its developer workforce to use AI tools each week, and had begun tracking token consumption — the units of data processed by AI models, essentially a meter of how much the tools are being run — on internal leaderboards. Team-wide statistics were initially visible to all staff before being restricted so that only employees and their managers could view them.
The result was predictable to anyone who has studied organisational behaviour. "There is just so much pressure to use these tools," one Amazon employee told the Financial Times. "Some people are just using MeshClaw to maximise their token usage." Another said the data was being watched regardless of official policy: "Managers are looking at it. When they track usage it creates perverse incentives and some people are very competitive about it."
Amazon told staff that token statistics would not be used in performance evaluations. Workers did not believe it.
A pattern across the technology sector
Amazon is not alone. Meta employees engaged in similar tokenmaxxing behaviour, competing on an internal leaderboard called "Claudeonomics" that ranked the company's roughly 85,000 workers by token consumption. In a 30-day window, total usage on the dashboard exceeded 60 trillion tokens. The leaderboard was taken down after reporting by The Information, but Meta's CTO Andrew Bosworth publicly endorsed the underlying logic — pointing to his best engineer spending the equivalent of their salary in AI tokens as evidence of a productivity multiplier.
At Microsoft, a senior leader sent an internal memo stating AI use was "no longer optional, it's core to every role and every level." A company spokesperson later clarified there was "no formal review of an employee's AI usage" — the kind of clarification issued when an original message lands harder than intended.
A May 2026 CNBC report noted that "almost every Fortune 500 is tracking overall AI usage," with tokens, prompt counts, licence activations and seat-utilisation rates becoming standard surveillance inputs alongside older metrics like badge-swipe and keyboard activity.
The financial stakes behind all this pressure are enormous. Combined 2026 capital expenditure from Amazon, Microsoft, Alphabet and Meta is tracking between $650 billion and $700 billion. Every executive who has signed off on those commitments has an investor relations problem if adoption numbers look weak. Token counts are the answer — unless employees are manufacturing the counts themselves.
Why this matters in Singapore
Singapore is not the United States, but the pressures being described here are entirely familiar.
As HRD Asia has reported, just last week Singapore's Ministry of Manpower released an inaugural report finding AI adoption across local firms remains "uneven" — with only 28.5% of companies having adopted AI at all, and of those, only 3.8% integrating it into core processes. In response, the government announced a new Tripartite Jobs Council, backed by NTUC, MOM and the Singapore National Employers Federation, specifically to help employers and employees navigate AI transformation in a fair and inclusive way.
That tripartite context is significant. Singapore's approach to workplace transformation has historically been built on consensus, transparency and trust between employers, employees and the government. The Amazon story is precisely what happens when a company skips those principles and goes straight to the scoreboard. The Tripartite Guidelines on Fair Employment Practices already require that employment decisions — including performance management — be merit-based and transparent. As Singapore data and tech lawyer Darren Grayson Chng told HRD: "If these risks eventuate, will the affected person even know that they have been subject to a decision made by AI that perhaps discriminates against them? Can they contest the decision? Who is accountable?"
Those are not abstract questions in the Amazon context. They are precisely what Amazon employees were asking when they saw a leaderboard they were told would not influence their careers, but widely believed would.
The Singapore legal dimension
There is a specific and growing legal dimension that HR leaders in Singapore need to internalise.
HRD has reported in detail on AI-driven dismissals and what HR must get right under Singapore's emerging fairness laws. Singapore's Personal Data Protection Act (PDPA) includes an evaluative purpose exception that allows employers to use personal data to assess employees without consent — but that does not remove the obligation to inform. "There's still an obligation to notify employees if AI systems are being used to monitor performance," says Zhao Yang Ng, principal at Baker McKenzie Wong & Leow. He recommends employers update employee handbooks to reflect how data is collected, what it is used for, and what tools may be deployed in performance appraisals.
The Workplace Fairness Act, due to come into force by 2026 or 2027, will explicitly prohibit discrimination based on 11 protected characteristics. While no AI-specific guidelines currently exist, Ng is clear: employers must still be transparent about how performance is assessed and how tools operate. "Employers should explain how the model works, what data it was trained on, and what checks are in place to catch bias." Disclosing reliance on AI while being unable to explain the system's output, he warns, significantly increases the risk of a wrongful dismissal claim.
Amazon's situation — telling employees their token data would not inform performance reviews while managers informally tracked the leaderboard — is precisely the transparency failure that creates that exposure. Singapore HR leaders building similar systems should treat this story as a rehearsal for conversations they may soon be having with MOM inspectors or the Tripartite Alliance for Fair and Progressive Employment Practices.
HRD has also reported on Singapore's push to upskill 40,000 tech workers in AI, announced just this week. The government's investment signals both the scale of the AI transition ahead and its commitment to managing that transition through structured skills development — a fundamentally different approach from chasing token leaderboards.
The measurement problem is also a security problem
Multiple Amazon employees told the Financial Times they were alarmed by the security profile of MeshClaw itself. The tool was granted permission to act on a user's behalf — initiating code deployments, interacting with internal systems, sending communications. One employee said: "The default security posture terrifies me. I'm not about to let it go off and just do its own thing."
This concern sits alongside the gaming problem rather than beneath it. An AI agent that employees are running on unnecessary tasks to inflate usage scores is an agent taking real actions in real systems — creating code deployments that did not need to happen, sending emails that did not need to be sent. The perverse incentive structure does not just produce misleading productivity data; it produces real operational noise.
As HRD Asia has reported, APAC employers have been urged by BCG to formalise AI governance before shadow AI takes over: "Leaders must establish clear policies, risk controls, and sanctioned platforms to ensure innovation happens safely." Tokenmaxxing is what happens when those policies do not exist, or exist only on paper — corporate-sanctioned shadow activity, running in the background, taking actions, with accountability belonging to no one.
With 56% of Singapore leaders already using AI agents to automate workstreams, the security surface described by Amazon employees exists here too. The difference in outcome will be determined by governance, not enthusiasm.
What Singapore HR leaders should do now
The Amazon episode arrives at a moment when the pressure on Singapore HR leaders to demonstrate AI adoption is real and intensifying. ADP reports that 51% of Singapore organisations view AI as important to improving productivity, and 82% of local leaders say they are ready to scale with AI agents. That readiness creates the conditions for exactly the incentive structure Amazon built. It also creates the opportunity to build something better.
Several practical observations for Singapore people professionals:
Adoption metrics are not productivity metrics. As HRD Asia has reported on Singapore organisations, 97% of the workforce use AI poorly or not at all by the standard of genuine value creation. Only 2.7% qualify as true AI practitioners. Adoption metrics — logins, usage rates, seat utilisation, token counts — tell you whether the tools are being used. They tell you nothing about whether the work has improved. Across corporate Singapore, as that HRD piece observed, "AI isn't failing because people refuse to use it. It's failing because they don't have the right kinds of use cases, support or expectations." A token leaderboard addresses none of those problems.
The disconnect between leaders and employees is widening. Singapore's own data reveals a striking gap: 80% of business leaders are highly familiar with AI agents, compared to only 41% of employees. Only 28% of individual contributors believe there is a clear, actionable AI policy — a 53-point gap from C-suite perception. HR's role is to close that gap with communication and genuine capability building, not to paper over it with a leaderboard.
Transparency changes behaviour; ambiguity creates gaming. Amazon's assurance that the data would not be used in performance reviews was not believed because the leaderboard contradicted it. ADP's guidance for Singapore is direct: AI governance frameworks must ensure "transparency, fairness, and human-centric design." "Human oversight provides purpose and guardrails," says ADP's chief data officer. "Together, they deliver scalable automation that's trustworthy, compliant, and resilient." That is not a description of a leaderboard.
The Tripartite approach exists for a reason. Singapore's industrial relations framework — grounded in consultation between employers, employees and government — was designed precisely to manage transitions of this kind. The Amazon story is a case study in what happens when an organisation skips consultation and goes straight to targets. Singapore's new Tripartite Jobs Council is the institutionalisation of that consultative approach. HR leaders who use it will build adoption programmes that last. Those who do not may find themselves managing the equivalent of flyers on the vending machine.
The Workers of the Future Fund and SkillsFuture signal the direction. The government's investment of more than S$1 billion in AI research and the plans to upskill 40,000 tech workers are not simply training programmes. They are a policy signal that Singapore's approach to AI adoption is capability-led, not metric-led. HR leaders who align their internal programmes with that signal are building with the current, not against it.
Amazon spent $200 billion this year to make AI central to how its employees work. The tokenmaxxing problem did not cost a dollar to build. It came free, with the leaderboard.