
Andrew Yang wants an AI tax, Bernie Sanders wants a robot tax, and Dario Amodei wants a token tax.
These instincts aren’t wrong.
Washington lawmakers, agencies, and policy thinkers are beginning to define what exactly AI economic activity is, how it should be measured, and whether it can even be taxed coherently.
Before Congress can seriously build tax policy around AI, it needs a much clearer view of what AI agents are actually doing inside organizations. Today, agents connect to all kinds of outside tools, rack up hidden costs, and often operate with no clear accounting of whether they are making money or quietly losing it. Right now, almost no one has that full picture.
Consider the scale. Global payroll runs roughly $40–50 trillion annually. That is the economic base that income taxes, payroll taxes, and social safety nets are built on. If AI agents displace even a fraction of that labor over the next decade, the fiscal implications are existential for government revenue models that have never had to account for a non-human workforce. Yang, Sanders, and Amodei are not being alarmist. They are doing the math. The question is whether Washington has the right data. Limit the conversation to tokens and you’ve already limited the policy.
The urgency is now backed by the industry itself. On April 6, OpenAI published “Industrial Policy for the Intelligence Age,” warning that as AI reshapes work and production, corporate profits and capital gains may expand while reliance on labor income and payroll taxes shrinks, potentially eroding the tax base that funds Social Security, Medicaid, SNAP, and housing assistance. When the companies building these systems are raising the alarm about fiscal erosion, Washington’s window to get sequencing right is narrowing fast.
Why AI taxation is harder than it looks
The logic seems clean: stop taxing labor, start taxing the economic value AI produces. Simple in principle. Nearly impossible in practice.
Traditional taxes are built around familiar categories: payroll, corporate income, asset sales. Policymakers already know that an effective AI tax may need a new approach entirely. Visibility into AI agents needs to come first.
We are talking to hundreds of companies about their AI agents, and it’s not a simple landscape. A single agentic workflow might involve a model from one vendor, an orchestration layer from another, external APIs from three more, a human reviewer somewhere in the loop, and a customer in a different jurisdiction. When that workflow generates value or displaces a worker, who owes what, to whom, and where?
Then layer on agent sprawl. Add the unresolved tool-vs-worker classification problem, where two functionally identical agents face radically different tax treatment based on structure rather than function.
Far from simple. And you cannot design policy for a transition this large without first understanding what is actually being displaced, by what, and at what cost.
Taxing agents is the right idea. We just need to collect the best data possible to have a serious conversation.
Policy collapses at implementation
While the components of the problem are new, the problem itself isn’t. In fact, I’ve seen this play out many, many times.
A large enterprise decides to get serious about cloud cost governance. It issues a mandate, for example, that any cloud resource not tagged with a valid cost center and project code within 30 days gets automatically terminated. Sounds reasonable.
In practice, mandates like this often create a policy before anyone has audited the existing infrastructure or built a reliable tagging foundation. Finance dumps the unallocated costs onto the IT department’s budget to make the invoices balance. Engineers start applying junk tags just to keep the lights on. Compliance reports 95% tagging coverage. The result of all that peanut-buttered compliance? The underlying data is completely useless.
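The gap between reported coverage and usable data is easy to see in miniature. Here is a minimal sketch, with entirely hypothetical resource names, tag values, and cost-center codes, of how a tag can be present for the compliance report yet useless for attribution:

```python
# Sketch: why "tagged" is not the same as "attributable".
# All resource IDs, tags, and cost-center codes below are made up.
VALID_COST_CENTERS = {"CC-1001", "CC-1002", "CC-1003"}

resources = [
    {"id": "vm-01", "tags": {"cost_center": "CC-1001", "project": "checkout"}},
    {"id": "vm-02", "tags": {"cost_center": "tbd", "project": "x"}},      # junk tag
    {"id": "vm-03", "tags": {"cost_center": "misc", "project": "misc"}},  # junk tag
    {"id": "vm-04", "tags": {}},                                          # untagged
]

# The compliance report only checks that the tag exists.
tagged = [r for r in resources if "cost_center" in r["tags"]]
# Usable data requires the tag to map to a real cost center.
valid = [r for r in tagged if r["tags"]["cost_center"] in VALID_COST_CENTERS]

print(f"coverage: {len(tagged) / len(resources):.0%}")  # looks healthy
print(f"usable:   {len(valid) / len(resources):.0%}")   # tells the real story
```

The dashboard sees 75% coverage; only 25% of the spend can actually be attributed. Scale the same dynamic up to an enterprise and you get 95% compliance on top of meaningless data.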
You can’t just create policies and hope they work. AI agent taxation is heading for exactly the same trap the cloud governance teams fell into. The framework is arriving before the data does.
What “seeing” AI actually requires
Here is the part that surprises most people: enterprises already can’t see what their agents actually cost. Token spend is the number everyone knows. It shows up on the invoice. But token cost is often the smallest part of the bill.
I’ve seen single agentic workflows call DocuSign, Stripe, Equifax, Experian, and Plaid. Each of those tool calls carries a cost. And for now, humans still review agent output, which also carries a cost. None of these API calls or human-review cycles shows up in the model invoice. To get taxation right, you need deep visibility into costs, integrations, and, ultimately, a full P&L of agent activity.
The full costs of agents aren’t always known. And how agents impact business outcomes is often a completely separate discussion. Those two things, what an agent costs and what it delivers, aren’t connected in most organizations today. That’s the gap. And it’s not a finance problem or an engineering problem. It’s both, and right now neither team has the full picture.
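To make the gap concrete, here is one possible shape for a per-agent P&L record. Every field name, rate, and number is an illustrative assumption, not a real billing schema; the point is simply that token spend, tool-call fees, and human-review time have to land in one place before cost can be compared to value:

```python
from dataclasses import dataclass, field

# Hypothetical per-agent P&L. Field names and dollar figures are
# illustrative assumptions, not any vendor's actual schema.
@dataclass
class AgentPnL:
    agent_id: str
    token_cost: float = 0.0                          # the model invoice everyone sees
    tool_costs: dict = field(default_factory=dict)   # per-API spend the invoice misses
    review_minutes: float = 0.0                      # human-in-the-loop time
    review_rate: float = 1.0                         # assumed cost per reviewer minute
    value_attributed: float = 0.0                    # revenue or savings credited to the agent

    def total_cost(self) -> float:
        return (self.token_cost
                + sum(self.tool_costs.values())
                + self.review_minutes * self.review_rate)

    def margin(self) -> float:
        return self.value_attributed - self.total_cost()

pnl = AgentPnL(
    agent_id="loan-intake-agent",
    token_cost=0.42,
    tool_costs={"credit_check": 1.50, "bank_link": 0.75, "e_signature": 0.80},
    review_minutes=3.0,
    value_attributed=12.00,
)
print(f"total cost: ${pnl.total_cost():.2f}, margin: ${pnl.margin():.2f}")
```

Note that in this sketch the token cost is the smallest line item, which matches what I see in practice: the tool calls and the human review dominate.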
And beyond the challenges of building a P&L for every agent, we have the Shadow AI problem. Agents are running inside company walls right now that no compliance team has logged or regulator could audit. Tax authorities cannot track what enterprises themselves cannot see. Any framework built today would capture only the compliant surface of a much larger iceberg.
The right question, asked first
The first question Washington should ask is also the most important: what does an AI economic record even look like? What is the end-to-end receipt for an autonomous agent workflow?
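Here is one possible shape such a receipt could take. Everything in it, the field names, vendors, jurisdictions, and dollar amounts, is a hypothetical illustration of what an end-to-end record might capture, not a proposed standard:

```python
import json

# Hypothetical end-to-end receipt for one agent workflow.
# All fields and figures are illustrative, not a proposed standard.
receipt = {
    "workflow_id": "wf-2025-0001",
    "agent_id": "invoice-processing-agent",
    "jurisdiction": {"operator": "US-VA", "customer": "DE"},
    "steps": [
        {"type": "model_call", "vendor": "model-vendor-a", "tokens": 18_500, "cost_usd": 0.37},
        {"type": "tool_call", "vendor": "payments-api", "cost_usd": 0.25},
        {"type": "human_review", "minutes": 2.0, "cost_usd": 2.00},
    ],
    "outcome": {"task": "invoice_approved", "value_attributed_usd": 9.50},
}

# The total is derivable from the steps, which is the point:
# a receipt should reconcile, line by line, like any other invoice.
receipt["total_cost_usd"] = round(sum(s["cost_usd"] for s in receipt["steps"]), 2)
print(json.dumps(receipt, indent=2))
```

Whatever the eventual format, a record like this answers the questions a tax authority would actually ask: who ran the workflow, where, at what cost, and to whose benefit.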
Technologists and policymakers need to sit down around the measurement problem first, not the governance question. Get measurement right, and an effective tax framework becomes possible. Skip it, and Washington will spend the next decade taxing the part of the iceberg it can see, while the rest of the economy runs underneath, ungoverned and uncounted.
Washington has a window to get the sequencing right. Build the ledger first. Then write the law.
Author
Bailey Caldwell is the Chief Strategy Officer at Revenium.ai, whose AI Economic Control System embeds financial governance directly in the execution path of autonomous AI agents. He spent five years at McKinsey & Company building the FinOps practice, where he also served on the FinOps Foundation board of governors. A strategist and operator in enterprise technology, he spent nearly a decade at RightScale, helping lead the company to its acquisition by Flexera.


