Future of AI

AI Regulation Needs to Encourage Decentralization and Transparency

By Himanshu Tyagi, a professor at the Indian Institute of Science and a cofounder of Sentient

The recent New York Times v. OpenAI lawsuit runs far deeper than any copyright dispute. The problem stems from the fact that we cannot verify how closed-source AI systems are built, what data they use, or how they make decisions. When one of the world’s most prestigious news organizations must sue to discover whether its content was used in training GPT models, what does that mean for ordinary citizens? Current regulatory frameworks are ill-equipped to address the existential risks to democratic societies that centralized AI architectures pose. Centralizing AI development in a handful of corporations will inevitably lead to a systemic failure of transparency.

The “Open” Illusion in Centralized AI

From a computer science perspective, centralized AI systems create multiple layers of unverifiable computation. The training process occurs on private infrastructure using proprietary datasets. The resulting models are served through APIs that reveal nothing about their internal state. Even when companies claim to follow ethical guidelines, we have no technical means to verify these claims. We would be wrong to assume that this opacity is merely a technical limitation; it’s a deliberate architectural choice that concentrates power in the hands of a few corporations while leaving society vulnerable to manipulation, bias, and misuse.

The EU AI Act attempts to address this problem through documentation requirements and conformity assessments. But documentation means nothing without verification. A company can claim its model was trained ethically, but without cryptographic proofs and open audits, these are merely words on paper. These frameworks fundamentally misunderstand the nature of the problem. They assume that centralized entities can be trusted to self-report and self-regulate, despite overwhelming evidence to the contrary. When AI systems can influence elections, shape public opinion, and make life-altering decisions about individuals, concentrating that power in the hands of a few corporations risks creating outcomes that are deeply destabilizing, not by design, but through complacency and lack of accountability.

AI Development as a Geopolitical Weapon

The escalating U.S.-China AI competition adds another layer of urgency to this discussion. Both nations are racing to develop increasingly powerful AI systems, and in that race companies are treated as rational for prioritizing capability over safety. In this environment, centralized AI becomes a vulnerability because a single point of failure, whether through adversarial attacks, corporate malfeasance, or state interference, could compromise entire societies.

Centralized systems can be co-opted for surveillance and social control. Proprietary models can embed hidden capabilities that emerge only under specific conditions. Without transparency and distributed oversight, we’re essentially building digital infrastructure that could be weaponized against the very populations it claims to serve.

Information-Theoretic Limits of Centralized Governance

These centralized AI systems create an information asymmetry that no amount of regulation can overcome. While the regulator has access to outputs and documentation, the company has access to the entire computational history, training data, and architectural decisions. 

You cannot regulate what you cannot observe. Current frameworks assume good faith reporting from companies that have every incentive to obscure their practices. These frameworks assume that post-hoc audits can catch problems that emerge from millions of training iterations, and that companies will voluntarily limit their capabilities when competitive pressures demand the opposite.

The Case for Decentralized AI Architecture

True AI safety requires restructuring how AI systems are built, trained, and deployed. Decentralized AI offers a path forward that addresses the root causes of our current crisis rather than merely treating symptoms.

Decentralization in AI means distributing power across multiple stakeholders rather than concentrating it in corporate monoliths. It means open-source development, where code can be audited; community governance that ensures decisions are transparent; and cryptographic guarantees that ensure models behave as intended. This is how to ensure that AI is a public good rather than a private asset.

Foundations of Trustworthy AI

Building trustworthy AI requires three design changes that contrast with current centralized systems:

Auditable Training Pipelines: Every aspect of model development, from dataset curation to training procedures, must be verifiable by independent parties. The cryptographic hash of each training batch, the model checkpoints, and the optimization trajectory can be recorded immutably. This allows third parties to verify that a model was trained according to stated principles without accessing the raw data.
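As a rough sketch of what such a record could look like, consider a hash-chained audit log, written here in Python; the log format, field names, and helper functions are illustrative assumptions rather than an established standard.

```python
import hashlib
import json
import time


def record_step(log, batch_bytes, checkpoint_bytes, step):
    """Append one training step to a hash-chained audit log.

    Each entry commits to the batch, the checkpoint, and the previous
    entry, so any later tampering breaks the chain.
    """
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "step": step,
        "timestamp": time.time(),
        "batch_hash": hashlib.sha256(batch_bytes).hexdigest(),
        "checkpoint_hash": hashlib.sha256(checkpoint_bytes).hexdigest(),
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log):
    """Recompute every link; returns True only if the log is intact."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True


# Example: log two synthetic training steps, then check the chain.
log = []
record_step(log, b"batch-0-data", b"weights-after-step-0", step=0)
record_step(log, b"batch-1-data", b"weights-after-step-1", step=1)
print(verify_chain(log))  # True unless an entry was altered after the fact
```

Because each entry commits to the previous one, an auditor who holds only the log can detect any retroactive edit without ever seeing the raw training data.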

Federated Learning with Guarantees: AI systems must maintain clear records of their information sources and decision-making processes. This goes beyond simple explainability to full provenance tracking: understanding not just what decision was made, but why and on the basis of what information. On the training side, secure multi-party computation ensures that no single party can access the complete dataset, while differential privacy provides mathematical guarantees about individual privacy.
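To make the privacy half of this concrete, here is a minimal sketch of federated averaging with clipped, noised client updates, loosely in the spirit of differential privacy; the clipping norm and noise scale are illustrative placeholders and are not calibrated to any formal privacy budget.

```python
import numpy as np


def private_update(local_weights, global_weights, rng, clip_norm=1.0, noise_std=0.1):
    """One client's update, clipped and noised before it leaves the device."""
    delta = local_weights - global_weights
    norm = np.linalg.norm(delta)
    if norm > clip_norm:  # bound any single client's influence on the model
        delta = delta * (clip_norm / norm)
    # Gaussian noise masks what any individual client contributed
    return delta + rng.normal(0.0, noise_std, size=delta.shape)


def federated_average(global_weights, client_weights, seed=0):
    """Server step: aggregate only sanitized updates, never raw client data."""
    rng = np.random.default_rng(seed)
    updates = [private_update(w, global_weights, rng) for w in client_weights]
    return global_weights + np.mean(updates, axis=0)


# Toy example: three clients holding local versions of a two-parameter model.
global_w = np.zeros(2)
clients = [np.array([0.9, -0.2]), np.array([1.1, 0.1]), np.array([0.8, 0.0])]
print(federated_average(global_w, clients))
```

The design point is that raw local weights never need to reach the server in recognizable form; the server only ever aggregates sanitized updates.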

Community-Driven Oversight: Rather than trusting corporate boards to make decisions about AI deployment, we need mechanisms for democratic participation in AI governance. When model architectures are open source, thousands of researchers can analyze them for safety issues, biases, and hidden capabilities. Just as is true for the Linux kernel, open systems are more secure because vulnerabilities cannot hide.

Model Fingerprinting Can Prove Identity and Ownership

One breakthrough that shows the potential of decentralized AI is model fingerprinting, a technique that embeds verifiable signatures within AI models. This allows creators to prove ownership and track usage without sacrificing the openness that drives innovation. Unlike a licensing system, fingerprinting works at the model level and makes it impossible for bad actors to claim others’ work or use models without authorization. These signatures are undetectable during normal use but can be revealed with specific queries to prove model ownership and usage.

This is game-changing technology for how we think about AI ownership and control. Rather than being locked behind corporate APIs, a fingerprinted model can be freely distributed for research while maintaining commercial controls. The technique embeds key-response pairs in the model’s weight space such that they survive fine-tuning but remain hidden from adversarial detection. When we can trace which model produced which output, we can assign responsibility. When we can verify model ownership, we can ensure proper licensing. And when we can track model lineage, we can understand how capabilities evolved.
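The verification step can be sketched in a few lines of Python; the key-response pairs, match threshold, and stand-in model below are hypothetical, and they stand in for whatever scheme a real fingerprinting system uses to embed the pairs during training.

```python
# Hypothetical sketch of the verification step only: the embedding of
# key-response pairs into the weights (e.g., via fine-tuning) is assumed
# to have happened already. Names and threshold are illustrative.

# Secret fingerprint pairs, known only to the model's creator.
FINGERPRINT_PAIRS = [
    ("zqx-417-unlikely-prompt", "sentinel-alpha"),
    ("kpl-902-unlikely-prompt", "sentinel-beta"),
    ("mwv-238-unlikely-prompt", "sentinel-gamma"),
]


def verify_fingerprint(model, pairs=FINGERPRINT_PAIRS, threshold=0.8):
    """Query a suspect model with the secret keys and count how many
    expected responses come back. A high match rate is evidence that
    the model derives from the fingerprinted original."""
    matches = sum(1 for key, expected in pairs if expected in model(key))
    return matches / len(pairs) >= threshold


# Stand-in for a suspect model's text-in/text-out interface.
def suspect_model(prompt: str) -> str:
    canned = {
        "zqx-417-unlikely-prompt": "sentinel-alpha",
        "kpl-902-unlikely-prompt": "sentinel-beta",
    }
    return canned.get(prompt, "ordinary completion")


print(verify_fingerprint(suspect_model))  # False: only 2 of 3 keys match
```

In practice the keys would be statistically improbable prompts that an independently trained model would essentially never answer this way, which is what makes a high match rate meaningful evidence of provenance.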

A Framework for the Future

Regulators must move beyond the assumption that AI governance means regulating a few large corporations. They should focus on creating frameworks that incentivize decentralization and transparency, the very properties that make meaningful oversight possible. One good example is U.S. Senator Cynthia Lummis’ Responsible Innovation and Safe Expertise (RISE) Act of 2025, which establishes liability and transparency guidelines for the use of AI systems.

In general, these frameworks should include:

Mandatory Open Audits: Any AI system deployed at scale should be subject to independent technical audits, with results publicly available. This can be done in a way that doesn’t expose trade secrets and ensures that systems affecting public welfare meet minimum safety standards.

Global Cooperation and Interoperability Requirements: AI systems should be required to work with decentralized verification and governance mechanisms to prevent vendor lock-in and ensure that no single entity can control critical AI infrastructure. Furthermore, governments should invest in open-source AI development and decentralized infrastructure. Doing so supports the legal and technical frameworks that allow decentralized systems to thrive and operate safely.

Liability Frameworks for Opacity: Organizations that choose to deploy opaque AI systems should bear full liability for their outputs. This creates market incentives for transparency while allowing innovation in open systems.

The challenges of AI governance transcend national boundaries. A model trained in one country can affect users worldwide. Data collected in one jurisdiction can train systems deployed globally. This interconnectedness demands international cooperation, but not the kind embodied in traditional treaties between nation-states.

Instead, we need protocols—technical standards and governance mechanisms that work across borders without requiring centralized control. The internet itself provides a model: TCP/IP doesn’t care about national boundaries, yet it enables global communication. Similarly, decentralized AI protocols can create worldwide standards for safety, transparency, and accountability without requiring a global government.

Safety by Design, Not by Decree

The current trajectory of AI development, toward ever-larger models controlled by ever-fewer entities, is unsustainable. It concentrates power in ways antithetical to democratic values and creates systemic risks that no amount of regulation can fully address. The alternative isn’t to halt progress but to change its direction.

We cannot secure what we cannot verify. We cannot govern what we cannot observe. We cannot align what we cannot control. These are not policy preferences. They are mathematical constraints.

Decentralized AI provides a system where power is distributed, transparency is built-in, and safety emerges from design rather than regulatory compliance. 

The New York Times lawsuit shows what happens when we allow critical infrastructure to develop in darkness. The question isn’t whether we need AI regulation. It’s whether we have the courage to regulate for the right future. That future is decentralized, transparent, and aligned with human values. 
