
Why smart regulation is the key to unlocking AI’s true potential

By Jane Smith, Field Chief Data & AI Officer, EMEA, ThoughtSpot

There’s a prevailing narrative in the technology sector that regulation stifles innovation. Technology leaders often warn that regulatory frameworks will slow progress, constrain creativity, and ultimately hand a competitive advantage to less regulated markets. But if we look at history, this narrative does not hold up to scrutiny. 

Consider banking in the early 20th century. It was a risky business where depositors had no certainty that their money was safe. In the US, the Banking Act of 1933 introduced deposit guarantees for the first time, marking the beginning of regulation that would legitimise the industry.  

It did not kill banking. By giving savers confidence to trust banks with their money, regulation enabled the sector to scale into the trillion-dollar industry we know today. 

The pharmaceutical sector tells a similar story. Once a wild west of snake oil cures and unproven remedies, the industry was transformed by regulation demanding proof of safety: first the Federal Food, Drug, and Cosmetic (FD&C) Act in the 1930s, then the Kefauver–Harris Amendments of the 1960s, which mandated clinical trials. The assurance that medicines are not lethal became the foundation upon which the modern pharmaceutical industry was built. 

The pattern is clear: regulation does not kill innovation – it legitimises industries and fuels adoption. If history is our guide, the bigger risk for AI is not over-regulation, but rather the absence of it. 

The trust barrier holding AI back 

Today, AI represents perhaps the most significant catalyst for economic growth in generations. It has the potential to transform how organisations operate, how decisions are made, and how value is created across every sector. Yet, despite this promise, organisations remain hesitant to adopt the technology at scale. 

The primary barrier is trust. The black box nature of many AI systems, where decisions are made without clear visibility into how or why, creates uncertainty among both enterprises and consumers. Organisations worry about deploying systems they cannot explain.  

Consumers hesitate to embrace technologies they don’t understand. This trust deficit is a significant obstacle preventing AI from reaching its full transformative potential. 

Smart regulation addresses this challenge head-on. By establishing clear frameworks around data access, portability, and AI explainability, regulation creates the transparency that unlocks genuine adoption. When organisations have regulatory legitimacy and clear guidelines for responsible AI deployment, they gain the confidence to implement these systems at scale. 

Creating accountability and opportunity 

Recent regulatory developments in Europe, such as the EU Data Act, demonstrate how thoughtful policy can simultaneously drive accountability and create opportunity. New frameworks are mandating that organisations take responsibility for data access, sharing, and compliance.  

Device manufacturers must now make data generated by connected devices available to users. Cloud providers face requirements to ease customer switching and reduce vendor lock-in through improved portability and interoperability. 

These obligations create pressure, certainly. Manufacturers whose business models relied on data exclusivity must adapt. Cloud providers can no longer depend on lock-in as a retention tactic.  

But this pressure is productive. It breaks down the data silos that have stifled competition and innovation, particularly for smaller firms and startups that previously lacked access to the data economy. 

More importantly, these requirements create a foundation of high-quality, accessible data that AI needs to thrive. When data becomes more portable and interoperable, when its use must be explainable and transparent, AI systems can be deployed with confidence. Organisations can leverage AI to navigate complex datasets from multiple platforms and devices, generating insights with a complete view rather than fragmented glimpses. 

AI as both beneficiary and enabler 

The relationship between regulation and AI is particularly interesting: AI benefits from regulation while simultaneously helping organisations meet regulatory requirements. The technology becomes both more valuable and more necessary in a regulated environment. 

AI can be the tool that helps businesses reach new standards for data management, transparency, and compliance. It can analyse vast datasets while maintaining explainability. It can ensure that data sharing meets regulatory requirements while protecting privacy and security. In this way, regulation defines the problem that AI is uniquely positioned to solve. 

Meanwhile, the guardrails that regulation provides give enterprises what they need most: confidence. Instead of fearing opaque systems they cannot explain to stakeholders, organisations now have a framework for responsible adoption. Clear rules create both trust and scalability, enabling AI deployment that would otherwise seem too risky. 

Building for trillion-pound valuations 

The UK government has just announced an ambitious plan to build the UK’s first trillion-dollar company. Building transformative companies at that scale is not just about brilliant technology or ambitious entrepreneurs. It requires a regulatory foundation that enables massive scale, underpinned by public trust. 

The opportunity before us is significant. With AI adoption accelerating across Europe and beyond, we’re witnessing the emergence of a new data economy built on principles of access, portability, and transparency. Organisations that embrace both the opportunities and obligations of this new landscape will be positioned to lead. 

Some will continue to argue that regulation kills innovation. But the evidence suggests otherwise. Regulation legitimises industries and fuels adoption.  

For AI, the real risk is not too much regulation but too little. Smart regulation provides the trust and clarity that will ultimately determine whether AI reaches its transformative potential or remains perpetually constrained by uncertainty. 

The foundation is being laid. Now it’s time to build. 
