How to Prevent AI Tokenmaxxing and Avoid Goodhart’s Law

Companies now grade staff on AI token usage. Goodhart's Law predicts exactly what happens next: AI tokenmaxxing.

Silicon Valley has a new status symbol, and it costs $1.4m.

That’s roughly what the top token users at some tech companies spend on AI prompts every month. At Meta, an employee built an internal leaderboard called “Claudeonomics” that ranked all 85,000 staff by token consumption.

In a single 30-day window, total usage exceeded 60 trillion tokens, with the highest-ranked individual averaging 281 billion tokens. The leaderboard awarded titles like “Token Legend” and “Cache Wizard.”

It lasted two days after the press found out.

The leaderboard is gone, but the incentive structure behind it isn’t. Meta announced that “AI-driven impact” would be a core expectation baked into every employee’s performance review starting in 2026, regardless of role. 

Amazon set targets requiring more than 80% of its developers to use AI tools each week and tracked consumption on internal leaderboards. 

Nvidia CEO Jensen Huang proposed giving engineers an AI token budget on top of their base salary, calling tokens “one of the recruiting tools in Silicon Valley.” 

The message across every major tech company is the same: use more AI, get rewarded.

This is Goodhart’s Law in its most expensive form.

Beware Goodhart’s Law

Goodhart’s Law, named after the British economist Charles Goodhart, holds that when a measure becomes a target, it ceases to be a good measure. People stop chasing the underlying goal and start chasing the metric. 

The British colonial government, trying to reduce Delhi’s cobra population, paid a bounty for every dead snake. Hunters obliged. 

Then enterprising locals started breeding cobras to kill them for profit. When the government caught on and scrapped the programme, the breeders released their snakes, and the wild cobra population rose higher than before. 

The incentive, well-intentioned and sensibly designed, produced the opposite of what it sought.

Tokenmaxxing is the corporate equivalent.

Some Amazon employees admitted to using an in-house agent platform to maximise their token numbers, because managers were watching the data. Some Meta employees put AI agents to work for hours simply to inflate their usage figures. 

The cobras aren’t escaping into Delhi’s streets anymore. They’re burning through GPU capacity at $5 per million tokens while producing nothing.

Make measurement meaningful

None of this means AI isn't useful, or that tracking adoption is wrong. The problem isn't measurement; it's choosing the wrong thing to measure and then tying careers to it.

Salesforce has tried a different approach, measuring completed tasks rather than token volume, arguing that what agents finish tells a more honest story than how much they processed. Imperfect, but at least it points at outcomes rather than activity.

Every metric eventually gets gamed. The only question is what breaks first: the metric, the behaviour it shapes, or the organisation that built the incentive. 

The British Raj got more cobras. Silicon Valley is getting more tokens. The lesson hasn’t changed.

Get a free audit

Book a 30-minute call to see where AI could help your business.
