The crypto industry is building the financial system for non-humans, and it is doing so without a legal framework capable of handling the consequences.
As artificial intelligence agents grow more autonomous, developers are giving them crypto wallets — allowing software to hold assets, pay for services, trade tokens, and hire other agents without human approval at each step. The technical architecture is advancing rapidly. The accountability architecture is not. At a recent panel at NEARCON 2026, Electric Capital’s Avichal Garg put the problem plainly: “What happens if there’s not a human behind it at all? It’s some piece of code that owns a wallet, executing code to make more money… How does liability work in that case? I actually don’t know.” [1]
That admission — from one of the most respected voices in crypto venture capital — should be the headline. It is not a confession of ignorance. It is an honest acknowledgment that the industry has built something it does not yet know how to govern.
The Scale Is Already Here
This is not a theoretical concern about a distant future. VanEck predicted that over one million new AI agents would be created in 2025 alone [2]. The crypto AI market is projected to grow from $5.1 billion in 2025 to $55.2 billion by 2035 [3]. Multiple analysts project that AI agents could handle 30% of all on-chain transactions by late 2026 [4]. The autonomous wallet is not coming. It is already here, and it is already moving money.
What is not here is any coherent legal framework for what happens when these agents fail, are exploited, or are deliberately weaponised. If an AI agent drains a DeFi protocol through a misconfigured rule set, launders funds through a dozen chains in seconds, or executes a fraudulent transfer while its human operator sleeps, the question of who bears responsibility has no clear answer under existing law.
The Exploit Risk Is Not Hypothetical
In February 2026, researchers at Cyfrin reported that AI agents autonomously exploited 207 of the 405 smart contracts in a benchmark suite, more than half, extracting $550 million in simulated funds [5]. This was a controlled experiment. The next one may not be.
The speed advantage that makes AI agents valuable in legitimate finance is precisely what makes them catastrophic in adversarial scenarios. As TRM Labs documented in February 2026, “when software can transact independently, layering and cross-chain fund movement can occur in seconds, narrowing detection windows” [6]. In the $1.46 billion Bybit breach — the largest single crypto hack on record — the speed of post-compromise fund movement materially shaped investigative and recovery outcomes. Autonomous agents will make future incidents move faster still.
A compromised or misconfigured AI wallet manager can fragment funds across dozens of addresses, convert assets through multiple liquidity pools, and route value across blockchains before a human operator becomes aware of anomalous activity. What previously required coordinated manual effort can now be executed as preprogrammed logic. The window between compromise and irreversible dispersion is collapsing.
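To make the collapsing detection window concrete, here is a minimal sketch of the defensive side: a sliding-window monitor that flags the fragmentation pattern described above when outflows from a single wallet fan out to too many addresses, or move too much value, within a short interval. The `Transfer` record, the `DispersionMonitor` class, and all thresholds are illustrative assumptions for this column, not the API of any real monitoring product.

```python
# Illustrative sketch only: a sliding-window monitor that flags the
# rapid fund-fragmentation pattern described above. All names and
# thresholds are hypothetical, not a real vendor's API.
import time
from collections import deque
from dataclasses import dataclass

@dataclass
class Transfer:
    source: str       # wallet the agent controls
    destination: str  # where funds are being sent
    amount: float     # value in a common unit, e.g. USD
    timestamp: float  # Unix time of the transfer

class DispersionMonitor:
    def __init__(self, window_seconds=60, max_destinations=5, max_outflow=50_000.0):
        self.window_seconds = window_seconds
        self.max_destinations = max_destinations  # distinct addresses per window
        self.max_outflow = max_outflow            # total value per window
        self.recent = deque()                     # transfers inside the window

    def observe(self, transfer):
        """Record a transfer; return True if it trips an alert."""
        self.recent.append(transfer)
        cutoff = transfer.timestamp - self.window_seconds
        while self.recent and self.recent[0].timestamp < cutoff:
            self.recent.popleft()  # drop transfers that aged out of the window
        destinations = {t.destination for t in self.recent}
        outflow = sum(t.amount for t in self.recent)
        return len(destinations) > self.max_destinations or outflow > self.max_outflow

# Example: twenty small transfers to distinct addresses within seconds.
monitor = DispersionMonitor()
now = time.time()
for i in range(20):
    t = Transfer("agent-wallet", f"addr-{i}", 4_000.0, now + i)
    if monitor.observe(t):
        print(f"ALERT: dispersion pattern detected at transfer {i}")
        break
```

In practice such a monitor would consume node-level or mempool telemetry and trigger an automatic pause rather than a printed alert, which is precisely the kind of kill authority discussed below.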
The Accountability Gap Has No Easy Fix
Some have proposed granting AI agents a form of legal personhood: a digital equivalent of the limited liability company, the structure that unlocked industrial-scale capital formation in the 19th century. Garg himself drew this comparison at NEARCON, noting that blockchain enables AI agents to act independently “much like how limited liability companies” operate [1]. But the analogy breaks down at the point of enforcement. A corporation can be fined or dissolved, and its directors prosecuted. As Garg acknowledged, “You can’t punish an AI. You can turn them off, but they don’t care.”
The SEC’s Crypto Task Force has begun mandating that algorithmic agents operate under “explicit, examiner-ready mandates with defined risk limits and kill authority” [7], but these mandates do not address the deeper question of who bears civil and criminal liability when an autonomous agent causes harm. The regulatory framework is being written in real time, and the industry is deploying faster than the rules can follow.
TRM Labs offers the clearest framework currently available: responsibility rests with the developers who designed the system, the operators who deployed it, the beneficiaries who profited from it, and the infrastructure providers who knowingly enabled its use [6]. “Autonomy changes how actions occur,” TRM writes. “It does not remove the duty of care attached to those actions.”
This is the right principle. The problem is that it is a principle without enforcement teeth. Tracing delegated authority across distributed development teams, layered infrastructure providers, and algorithmic execution pathways that span multiple jurisdictions is an investigative challenge of a different order of magnitude. Jurisdiction does not disappear in an autonomous environment — it becomes layered and distributed in ways that existing cross-border enforcement frameworks were not designed to handle.
The Industry Must Stop Outsourcing This Problem to Regulators
The venture funds backing AI agent infrastructure have a direct financial interest in resolving this question before a catastrophic failure forces a blunt regulatory response. The pattern is familiar: the industry innovates, regulators react, and the resulting framework is written by people who do not fully understand the technology they are governing. The crypto industry has lived through this cycle with ICOs, DeFi, and NFTs. It cannot afford to repeat it with autonomous agents.
The responsible path requires concrete action from those closest to the technology. Developers must build governance architecture — kill switches, risk limits, escalation pathways — into AI agents as a baseline requirement, not an afterthought. Investors must demand that portfolio companies articulate clear liability frameworks before deploying agents into live financial environments. Exchanges and protocols must establish standards for what disclosures are required when an AI agent, rather than a human, is the counterparty to a transaction.
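What those baseline controls might look like in code is easy to sketch. The example below wraps an agent’s ability to spend behind three of the controls named above: hard risk limits, a human-escalation pathway, and a kill switch that halts spending outright. The class names, limits, and escalation flow are illustrative assumptions for this column, not drawn from any existing framework.

```python
# Hypothetical sketch of baseline governance controls for an agent wallet:
# hard risk limits, a human-escalation pathway, and a kill switch.
from dataclasses import dataclass

@dataclass
class GovernancePolicy:
    per_tx_limit: float = 10_000.0         # max value of a single transaction
    daily_limit: float = 50_000.0          # max cumulative outflow per day
    escalation_threshold: float = 5_000.0  # above this, require human sign-off

@dataclass
class GovernedWallet:
    policy: GovernancePolicy
    killed: bool = False        # kill switch state
    spent_today: float = 0.0    # running outflow total

    def kill(self):
        """Kill switch: immediately halt all further spending."""
        self.killed = True

    def request_transfer(self, destination, amount, human_approved=False):
        if self.killed:
            print("REJECTED: kill switch engaged")
            return False
        if amount > self.policy.per_tx_limit:
            print(f"REJECTED: {amount} exceeds per-transaction limit")
            return False
        if self.spent_today + amount > self.policy.daily_limit:
            print("REJECTED: daily outflow limit reached")
            return False
        if amount > self.policy.escalation_threshold and not human_approved:
            print(f"ESCALATED: {amount} to {destination} awaits human sign-off")
            return False
        self.spent_today += amount
        print(f"SENT: {amount} to {destination}")  # stand-in for a real on-chain call
        return True

# Example: the agent operates freely under the limits, escalates above
# the threshold, and stops entirely once the kill switch is thrown.
wallet = GovernedWallet(policy=GovernancePolicy())
wallet.request_transfer("addr-a", 1_000.0)                       # SENT
wallet.request_transfer("addr-b", 8_000.0)                       # ESCALATED
wallet.request_transfer("addr-b", 8_000.0, human_approved=True)  # SENT
wallet.kill()
wallet.request_transfer("addr-c", 100.0)                         # REJECTED
```

A production version would add persistent audit logs, a daily reset, and multi-party control of the kill switch, but the shape of the controls is as simple as it looks.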
None of this is technically difficult. The difficulty is institutional. It requires the industry to accept that the same transparency and accountability standards it has resisted applying to its own capital structures must now be applied to the software it is deploying into the financial system.
The Window Is Narrowing
The autonomous wallet is a genuine innovation. Programmable, borderless, instant settlement creates real economic value, and AI agents that can navigate that environment efficiently will unlock capabilities that human operators cannot match. The question is not whether to build this technology. It is whether to build it responsibly.
The industry has a narrow window to establish accountability norms before the first billion-dollar AI-driven heist makes the decision for it. When that event occurs — and the trajectory of both AI capability and crypto crime suggests it will — the response from regulators and legislators will not be nuanced. It will be fast, broad, and punitive.
The time to answer Avichal Garg’s question is now, while the answer can still be written by the people who understand the technology. The alternative is to wait until it is written by people who do not.
Note: The views expressed in this column are those of the author and do not necessarily reflect those of the publication or its affiliates.
About the Author: Romeo Kuok serves on the board of BGX, a Singapore-based venture capital firm, and has over a decade of experience in go-to-market strategy, brand development, and early-stage investing. He is also an active angel investor, backing high-potential startups including Puffer Finance, Sonic, Solv Protocol, and several crypto media ventures, and serves as Chairman of the Board of OT Inc.
References
[1] A. Garg, quoted in “Crypto wallets for AI agents are creating a new legal frontier, says Electric Capital,” CoinDesk, Feb. 24, 2026. [Online]. Available: https://www.coindesk.com/business/2026/02/24/crypto-wallets-for-ai-agents-are-creating-a-new-legal-frontier-says-electric-capital
[2] VanEck, “VanEck’s Top 10 Predictions for Crypto in 2025,” Seeking Alpha, Dec. 14, 2024. [Online]. Available: https://seekingalpha.com/article/4744289-vaneck-10-crypto-predictions-for-2025
[3] Jenova AI, “AI Crypto Market Intelligence Agent: Institutional-Grade Digital Asset,” Jan. 27, 2026. [Online]. Available: https://www.jenova.ai/en/resources/ai-crypto-market-intelligence-agent
[4] Kefpreneur, “The $200B Machine Economy Boom: How AI Agents Are Printing On-Chain,” Medium, Feb. 24, 2026. [Online]. Available: https://kefpreneur.medium.com/the-200b-machine-economy-boom-04a35ff1bae6
[5] Cyfrin, “AI Agents Exploit 207 Smart Contracts, Uncover Zero-Day,” LinkedIn, Feb. 18, 2026. [Online]. Available: https://www.linkedin.com/posts/cyfrin_website-leads-form-activity-7429872439637159938-GcJv
[6] TRM Labs, “Autonomous AI Agents and Financial Crime: Risk, Responsibility, and Accountability,” Feb. 26, 2026. [Online]. Available: https://www.trmlabs.com/resources/blog/autonomous-ai-agents-and-financial-crime-risk-responsibility-and-accountability
[7] U.S. Securities and Exchange Commission, “Crypto Task Force Written Input,” Mar. 2026. [Online]. Available: https://www.sec.gov/featured-topics/crypto-task-force/crypto-task-force-written-input

