
AI now decides who gets a loan, who gets hired, and which patients get prioritized for care. It’s no longer experimental – it’s embedded in the core of business: McKinsey reports that 78% of organizations use AI in at least one business function, and PwC found that 49% of technology leaders say AI is fully integrated into their companies’ core business strategy. What started as pilots is now daily practice in critical sectors like financial services and healthcare, where I’ve seen firsthand how companies implement AI, test it, and build audit trails to prove its reliability.
The appeal is undeniable: Who doesn’t love the idea of automating complex decisions to make the most of scarce employee skills? The economics are compelling – deploy AI, streamline infrastructure, boost productivity. But in the rush to transform, some companies moved too quickly and put livelihoods at risk. In finance, algorithms now decide who gets credit. In hiring, AI screens resumes, ranks candidates, and even conducts interviews.
Governments have started to catch up. The NIST AI Risk Management Framework offered early guidance, and ISO/IEC 42001 added a voluntary set of controls. Europe went further. The EU AI Act, passed in 2024, was the first law to classify AI systems by risk and attach real penalties – up to 7% of worldwide revenue or €35M, whichever is higher.
In the US, states are filling the gap left by uneven federal action. New York City’s Local Law 144 requires bias audits of automated employment decision tools. Colorado has enacted its own AI Act. And California is weighing Senate Bill 53, which would fine companies up to $1M for transparency failures.
Faced with a patchwork of new rules, most firms treat AI governance as a compliance hurdle – something that slows them down. In the very best companies, however, AI controls and governance are treated as a value differentiator and a source of competitive advantage, driven by several factors:
1. Financial
Good governance pays for itself. The penalties for getting it wrong are steep: Clearview AI was fined a combined €70.5M across three jurisdictions for unlawful facial recognition, and the EU AI Act allows regulators to fine up to 7% of worldwide revenue. The damage goes beyond fines.
Bad press makes investors and lenders skittish, raising the cost of capital. Procurement is shifting too. Government tenders increasingly demand proof of AI governance, and corporate clients are writing disclosure clauses into contracts. Firms that can demonstrate strong controls don’t just avoid losses, they win business.
2. Speed to Market
Good governance accelerates the flywheel of innovation across your AI initiatives. Once the governance is in place and the board is aligned, two factors come into play to free up resources: (1) new projects slot into the existing framework, so approvals become routine and faster; (2) compliance with regulators becomes easier.
Instead of scrambling to justify decisions, firms can show evidence up front and build trust. Google set up its Responsible AI and Human Centered Technology (RAI-HCT) team to pre-empt risks. Microsoft created an Office of Responsible AI with clear principles and sub-goals. On IBM’s AI Ethics Board, where I served, we regularly reviewed use cases before launch.
By addressing issues early, we kept projects moving forward and ensured the company had a defensible posture with regulators – avoiding the delays that stall speed to market.
3. Brand Value
In crowded markets, clients and customers have endless alternatives, and reputation separates winners from also-rans. Clients won’t gamble on a partner with a record of privacy failures or opaque practices. If you compete internationally, it’s even harder because you’re up against home-market firms. A single misstep, such as mishandled personal data, biased decisions, or one bad headline, can stall expansion.
The temptation is to move fast and deal with risks later. But we’ve seen this movie before, and it doesn’t end well. In 1999, Napster upended the music business by using new technology to distribute songs illegally. Apple saw the opportunity and launched iTunes with governance built in: clear rules, licensing, and accountability. Apple turned disruption into a growth engine, and the same holds true for AI.
So what can you do, today, to start driving your own competitive advantage?
First, inventory and classify your AI systems – official and shadow. Build an inventory of every AI tool in use, with a named owner for each major application and a risk rating based on potential impact to customers and operations. Don’t forget informal employee AI usage. Your aim should be a comprehensive view of all the AI in use across the organization.
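As a concrete starting point, here is a minimal sketch of what one inventory record could look like, assuming a simple three-tier risk model; the fields, names, and classification rule are illustrative, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g., automated credit, hiring, or care-prioritization decisions
    MEDIUM = "medium"  # customer-facing but with human review
    LOW = "low"        # internal productivity tools

@dataclass
class AISystemRecord:
    """One inventory entry: what the system is, who owns it, how risky it is."""
    name: str
    owner: str                        # a named accountable person, not a team alias
    vendor: str | None                # None for in-house models
    affects_customers: bool
    makes_automated_decisions: bool
    shadow_it: bool = False           # found via surveys or network logs, not procurement
    last_reviewed: date | None = None

    @property
    def risk_tier(self) -> RiskTier:
        # Illustrative rule: unreviewed automated decisions about customers rank highest.
        if self.affects_customers and self.makes_automated_decisions:
            return RiskTier.HIGH
        if self.affects_customers:
            return RiskTier.MEDIUM
        return RiskTier.LOW

inventory = [
    AISystemRecord("resume-screener", owner="J. Smith", vendor="VendorX",
                   affects_customers=True, makes_automated_decisions=True),
    AISystemRecord("meeting-summarizer", owner="A. Lee", vendor=None,
                   affects_customers=False, makes_automated_decisions=False,
                   shadow_it=True),
]

for record in inventory:
    print(f"{record.name}: {record.risk_tier.value} risk, owner {record.owner}")
```

A spreadsheet works just as well; what matters is a named owner and an explicit risk rating for every system, shadow tools included.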
Second, publish clear principles and escalation paths. You need a documented AI policy that sets out the principles guiding your company’s use of AI, with practical guideposts, defined escalation paths, and a named group accountable for ethical AI use. Don’t forget to evaluate third-party AI tools if you rely heavily on them.
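Escalation paths work best when they are unambiguous enough to encode. Below is a minimal policy-as-code sketch of risk-based routing; the review bodies, tiers, and vendor step are hypothetical placeholders for whatever your own policy defines:

```python
# Illustrative mapping from risk tier to required approval step.
ESCALATION_PATH = {
    "high":   "AI Ethics Board review before launch",
    "medium": "business-unit risk officer sign-off",
    "low":    "self-certification against the published AI policy",
}

def escalate(use_case: str, risk_tier: str, third_party: bool) -> str:
    """Return the required approval step for a proposed AI use case."""
    step = ESCALATION_PATH[risk_tier]
    if third_party:
        # Third-party tools also get vendor due diligence under this sketch policy.
        step += " + vendor due-diligence questionnaire"
    return f"{use_case}: {step}"

print(escalate("AI interview scoring", "high", third_party=True))
# -> AI interview scoring: AI Ethics Board review before launch
#    + vendor due-diligence questionnaire
```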
Third, audit outputs, test for bias, and brief your board. Your AI processes and tools need to be evaluated and tested for compliance, with results documented. That includes verifying outputs behave as expected (testing for bias and data leakage), running periodic audits, and updating the board regularly.
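One simple, widely used output audit is a selection-rate check. The sketch below applies the “four-fifths rule” heuristic from US EEOC guidance to hiring-style decisions; the sample data and group labels are illustrative:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag groups whose selection rate falls below 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}, rates

# Illustrative data: (applicant group, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
passes, rates = four_fifths_check(sample)
print(rates)   # approval rates: A ~0.67, B ~0.33
print(passes)  # {'A': True, 'B': False} -> group B fails the four-fifths check
```

A failed check is a trigger for investigation, not a verdict; document the result, the follow-up, and the remediation, and include them in your board updates.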
Large firms may manage this in-house; smaller firms may need outside help to cover gaps without overloading internal teams. Either way, the goal is the same: a defensible, practical governance model that drives competitive advantage.
The AI landscape isn’t standing still. Multi-modal AI systems that process text, images, and video simultaneously are already entering enterprise workflows, creating new blind spots around data lineage and output verification. Agentic AI, systems that can take actions independently, will soon handle everything from contract negotiations to customer service escalations, demanding governance frameworks that can oversee decision-making in real time.
Meanwhile, the regulatory environment remains fragmented but active. The EU’s General Purpose AI obligations under the AI Act took effect in August 2025, establishing comprehensive lifecycle requirements and systemic risk assessments for foundation models. In the US, the federal government has stepped back from comprehensive AI regulation, but states continue pushing forward.
Companies need to build flexible governance now to avoid falling behind. But governance isn’t just about compliance – it’s the mechanism that lets you accelerate with control to drive competitive advantage. It’s the seatbelt that lets you drive faster. In the AI era, the companies that master it won’t just keep up, they’ll set the pace.


