
Everyone seems to understand the power of AI to transform business and life as we know it.
The global ambition to develop AI responsibly is also widely shared.
But there's a growing global fracture over whether, when and how to regulate AI, which raises difficult questions about the future of innovation, sovereignty and sustainable growth.
On one side of this expansive divide are Canada, France, Germany, India and dozens of other countries that recently signed a declaration committing to open, ethical and inclusive AI.
On the other side of the chasm are the United States and the United Kingdom, two of the world's most powerful AI players, both of which have declined to take part in this new global accord.
Meanwhile, a provision in the "big, beautiful bill" making its way through the U.S. Congress aims to limit state power in regulating AI. This comes at a time when California, Colorado and Utah have passed sweeping AI laws, more than a dozen other U.S. states are working on similar laws, and no federal U.S. legislation or regulations governing the development and use of AI exist.
How and why did these different countries land on opposite sides of this regulatory divide?
For some nations, regulation is the foundation for long-term competitiveness, public trust and sovereignty in the digital economy. In the European Union, AI regulation has become a tool of economic strategy as much as ethics. This follows in the footsteps of other EU initiatives like the Digital Services Act and the General Data Protection Regulation, which reflect Europe's bid to shape the global digital rulebook to prioritize accountability, human rights and transparency.
Yet some believe that even the most well-intentioned frameworks could create risk by slowing technological advancement and adoption. Speaking at the AI Action Summit in Paris, where other leaders signed the AI declaration, U.S. Vice President J.D. Vance remarked that to restrict AI's development at this time "would mean paralyzing one of the most promising technologies we have seen in generations" and could "kill a transformative industry just as it's taking off." Echoing this sentiment, the U.K. government in February issued a brief statement indicating that it didn't sign the declaration due to concerns about national security and global governance.
The fundamental disagreement about when regulation fits in the AI lifecycle raises questions like: Is it best to regulate before large-scale adoption to prevent harm? Or is the most beneficial approach to address AI safety when risks are better understood but may be more entrenched?
However, our customers are proving it's not a binary choice. With the right data infrastructure, sector-specific insights, sustainable practices and a system-wide understanding of consequence, impact and use case, it is possible to govern AI responsibly and scale it ambitiously.
Building an Infrastructure of Trust Drives Innovation and Compliance
Regardless of where they fall on the regulatory spectrum, nations and companies must all grapple with the fact that AI is only as effective and as safe as the data and platforms behind it.
More than a third (38%) of IT leaders believe data quality is the most important factor in AI success, according to a recent Hitachi Vantara report. Yet many organizations still operate using fragmented and siloed data. This doesn't just create a technical bottleneck; it can erode trust. Without clean, reliable data, AI decisions become opaque, error-prone and difficult to audit.
By partnering with a supplier with expertise in hybrid cloud platforms, industry-specific AI use cases and digital services, organizations get a blueprint for success. Disjointed datasets turn into actionable intelligence, helping organizations meet both innovation and governance goals. As a result, AI doesn't just get deployed; it performs reliably in some of the world's most high-stakes environments, from mining and energy to transportation and manufacturing.
Addressing Sustainability is Smart Ethically and Economically
Whatever the sector, organizations need to address AI's growing environmental footprint. AI models are power-hungry, consuming far more energy than traditional computing. The AI explosion is a key reason global data center electricity use is poised to double by 2026.
To scale AI affordably and responsibly, organizations need to adopt new approaches and infrastructure. Yet only about a third of organizations factor sustainability into their AI strategy.
That's a troubling gap, whether you are working to grow your AI opportunities while containing costs, must comply with AI regulations, or want to get ahead of the growing trend as more governments implement environmental reporting requirements and net-zero mandates.
Either way, you will want to adopt a Sustainable Operations (SustOps) approach, which enables your organization to embed carbon monitoring and optimization directly into digital systems. Embedding green coding practices that improve software efficiency, along with AI-driven cooling systems that reduce power usage in data centers, into your operations isn't just smart ethics; it's smart economics. Building AI that's sustainable by design will lower your energy costs, increase your operational resilience and future-proof you against emerging regulations.
Collaborating is Key to Ensuring Responsible Innovation
At their core, the AI Action Summit and declaration in Paris were diplomatic events. But they spotlighted the fact that no government, organization or sector can build safe AI in isolation.
Collaboration around AI is essential, as many organizations seem to understand. Our research reveals that nearly half of IT leaders (46%) now cite partnerships as critical to integrating AI into their operations. That's why we are partnering with Nvidia and other AI industry leaders and applying our deep sector-specific experience and operational technology capabilities for energy, finance, healthcare, manufacturing, media and entertainment, and transportation customers.
Rather than simply selling tools and leaving customers to do the rest, or pushing a one-size-fits-all platform, we drive AI innovation and ease of use by working with other AI ecosystem leaders, aligning with best practices, infusing our own innovation and SLAs, and ensuring optimal simplicity, support and ROI for enterprises.
This involves understanding the landscape and working with stakeholders in business, government and the broader community to ensure that AI delivers value and earns trust.
Looking Ahead: A Shared Future or Fragmented Fates?
The real risk in light of the global divide on whether, when and how to regulate AI isn't that countries disagree; it's that their divergence becomes irreversible. If major economies continue to carve out incompatible AI standards, the global ecosystem could fracture, limiting interoperability, delaying cross-border innovation and making governance harder for everyone.
Finding common ground without compromising national priorities requires a new kind of leadership that acknowledges the value of regulation and innovation, sovereignty and cooperation. The smartest path forward is to think big but start small, building understanding and trust through use and transparency; to prioritize sustainability upfront as a business enabler; and to ground AI in real-world use cases that solve real-world problems.



