
Everyone seems to understand the power of AI to transform business and life as we know it.
The global ambition to develop AI responsibly is also widely shared.
But there’s a growing global fracture about whether, when and how to regulate AI, which raises difficult questions about the future of innovation, sovereignty and sustainable growth.
On one side of this expansive divide are Canada, France, Germany, India and dozens of other countries that recently signed a declaration committing to open, ethical and inclusive AI.
On the other side of the chasm are the United States and the United Kingdom, two of the world’s most powerful AI players, both of which have declined to take part in this new global accord.
Meanwhile, a provision in the “big, beautiful bill” making its way through the U.S. Congress aims to limit state power in regulating AI. This, at a time in which California, Colorado and Utah have passed sweeping AI laws, more than a dozen other U.S. states are working on similar laws, and federal U.S. legislation and regulations governing the development and use of AI do not exist.
How and why did these different countries land on opposite sides of this regulatory divide?
For some nations, regulation is the foundation for long-term competitiveness, public trust and sovereignty in the digital economy. In the European Union, AI regulation has become a tool of economic strategy as much as ethics. This follows in the footsteps of other EU initiatives like the Digital Services Act and the General Data Protection Regulation, which reflect Europe’s bid to shape the global digital rulebook to prioritize accountability, human rights and transparency.
Yet some believe that even the most well-intentioned frameworks could create risk by slowing technological advancement and adoption. Speaking at the AI Action Summit in Paris, where other leaders signed the AI declaration, U.S. Vice President J.D. Vance remarked that to restrict AI’s development at this time “would mean paralyzing one of the most promising technologies we have seen in generations” and could “kill a transformative industry just as it’s taking off.” Echoing this sentiment, the U.K. government in February issued a brief statement indicating that it didn’t sign the declaration due to concerns about national security and global governance.
The fundamental disagreement about when regulation fits in the AI lifecycle raises questions like: Is it best to regulate before large-scale adoption to prevent harm? Or is the most beneficial approach to address AI safety when risks are better understood but may be more entrenched?
However, our customers are proving it’s not a binary choice. With the right data infrastructure, sector-specific insights, sustainable practices and a system-wide understanding of consequence, impact and use case, it is possible to govern AI responsibly and scale it ambitiously.
Building an Infrastructure of Trust Drives Innovation and Compliance
Regardless of where they fall on the regulatory spectrum, nations and companies must all grapple with the fact that AI is only as effective and as safe as the data and platforms behind it.
More than a third (38%) of IT leaders believe data quality is the most important factor in AI success, according to a recent Hitachi Vantara report. Yet many organizations still operate with fragmented and siloed data. This doesn’t just create a technical bottleneck; it can undermine trust. Without clean, reliable data, AI decisions become opaque, error-prone and difficult to audit.
By partnering with a supplier with expertise in hybrid cloud platforms, industry-specific AI use cases and digital services, organizations gain a blueprint for success. Disjointed datasets become actionable intelligence, helping organizations meet both innovation and governance goals. As a result, AI doesn’t just get deployed; it performs reliably in some of the world’s most high-stakes environments, from mining and energy to transportation and manufacturing.
Addressing Sustainability is Smart Ethically and Economically
Whatever the sector, organizations need to address AI’s growing environmental footprint. AI models are power-hungry, consuming far more energy than traditional computing. The AI boom is a key reason global data center electricity use is poised to double by 2026.
To scale AI affordably and responsibly, organizations need to adopt new approaches and infrastructure. Yet only about a third of organizations factor sustainability into their AI strategy.
That’s a troubling gap, whether you are working to grow your AI opportunities while containing costs, must comply with existing AI regulations or want to get ahead of the growing number of governments implementing environmental reporting requirements and net-zero mandates.
Either way, you will want to adopt a Sustainable Operations (SustOps) approach, which enables your organization to embed carbon monitoring and optimization directly into digital systems. Embedding green coding practices that improve software efficiency, along with AI-driven cooling that reduces power usage in data centers, isn’t just smart ethics; it’s smart economics. AI that’s sustainable by design will lower your energy costs, increase your operational resilience and help you stay ahead of emerging regulations.
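As a rough illustration of what embedding carbon monitoring into a workload can look like, here is a minimal sketch in Python. The node power draw, grid carbon intensity and the wrapped workload are placeholder assumptions for illustration only; in practice these figures would come from power meters, cloud telemetry or your grid operator’s published data.

```python
import time
from dataclasses import dataclass

# Placeholder figures for illustration only -- replace with metered power
# draw and your grid operator's published carbon-intensity data.
AVG_NODE_POWER_KW = 0.7      # assumed average draw of one GPU node under load
GRID_KG_CO2_PER_KWH = 0.4    # assumed regional grid carbon intensity


@dataclass
class WorkloadFootprint:
    hours: float
    energy_kwh: float
    co2_kg: float


class CarbonMonitor:
    """Minimal context manager that attributes energy and CO2 to one workload."""

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):
        hours = (time.monotonic() - self._start) / 3600
        energy_kwh = hours * AVG_NODE_POWER_KW
        self.footprint = WorkloadFootprint(
            hours, energy_kwh, energy_kwh * GRID_KG_CO2_PER_KWH
        )
        return False  # never swallow exceptions from the workload


# Wrap a training or inference job so its footprint is logged alongside
# cost and performance metrics in the same dashboards.
with CarbonMonitor() as monitor:
    time.sleep(1)  # stand-in for the actual AI workload

fp = monitor.footprint
print(f"energy: {fp.energy_kwh:.4f} kWh, emissions: {fp.co2_kg:.4f} kg CO2e")
```

The point of the sketch is the pattern, not the numbers: once energy and emissions are captured per workload, they can be optimized and reported just like cost and latency.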
Collaborating is Key to Ensuring Responsible Innovation
At their core, the AI Action Summit and declaration in Paris were diplomatic events. But they spotlighted the fact that no government, organization or sector can build safe AI in isolation.
Collaboration around AI is critical, as many organizations seem to understand. Our research reveals that nearly half of IT leaders (46%) now cite partnerships as critical to integrating AI into their operations. That’s why we are partnering with Nvidia and other AI industry leaders and applying our deep sector-specific experience and operational technology capabilities for energy, finance, healthcare, manufacturing, media and entertainment, and transportation customers.
Rather than simply selling tools and leaving it to customers to do the rest, or trying to sell people on the idea of a one-size-fits-all platform, we drive AI innovation and ease of use by working with other AI ecosystem leaders, aligning with best practices, infusing our own innovation and SLAs, and ensuring optimal simplicity, support and ROI for enterprises.
This involves understanding the landscape and working with stakeholders in business, government and the broader community to ensure that AI delivers value and earns trust.
Looking Ahead: A Shared Future or Fragmented Fates?
The real risk in light of the global divide on whether, when and how to regulate AI isn’t that countries disagree, it’s that their divergence becomes irreversible. If major economies continue to carve out incompatible AI standards, the global ecosystem could fracture – limiting interoperability, delaying cross-border innovation and making governance harder for everyone.
Finding common ground without compromising national priorities requires a new kind of leadership, one that acknowledges the value of both regulation and innovation, sovereignty and cooperation. The smartest path forward is to think big but start small, building understanding and trust through use and transparency; to prioritize sustainability upfront as a business enabler; and to ground AI in real-world use cases that solve real-world problems.



