
Any UK business selling into the EU should be well aware by now of its regulatory obligations. But with the EU AI Act, things are rarely as simple as they seem on paper. The requirements differ significantly between an organisation developing AI models from scratch, one that uses them with minor tweaks, and one that imports AI into the region. And it’s not always easy to predict where regulators will draw the line between them.
Yet with potential fines of €15m (£13m) or 3% of global annual turnover in the offing from 2 August 2026, this is no time to hope for the best. AI is not just another software tool. Its risks must be managed continuously with a dedicated governance strategy.
Where we’ve come from
The EU AI Act has been a long time coming. Since it entered into force on 1 August 2024, various deadlines have come and gone. The first major tranche arrived six months later, on 2 February 2025. That’s when new rules on employee AI literacy came into force and AI systems posing an “unacceptable risk” were officially banned, with the AI Office and AI Board standing up at EU level to oversee the regime.
These systems sit at one end of the tier-based risk approach adopted by the European Commission to regulate the technology. Use cases like social scoring, “emotion recognition in workplaces” and real-time facial recognition for law enforcement are considered beyond the pale and are prohibited. At the other end are minimal or no-risk applications like spam filters, which the commission claims account for the majority of systems. These are effectively left unregulated.
One tier above these are “limited risk” cases like chatbots and deepfakes, where regulators want users to be clearly informed that they are being exposed to AI, in order to “preserve trust”. Then comes the most complex area: high-risk systems. Assuming a belated attempt by the commission to minimise the compliance burden on businesses doesn’t take effect, new rules governing these systems will come into force on 2 August 2026. This is where the challenges begin.
Why risk matters
High-risk AI could cover anything that poses “serious risks to health, safety or fundamental rights”. What does this mean in practice? This is where Annex III of the legislation comes in. It lists eight broad areas of coverage: biometrics, critical infrastructure management and operation, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and criminal justice/democracy.
These are use cases where AI decision making could have a major impact on an individual’s life – such as CV selection during a job application, or robot-assisted surgery. It’s important to recognise that these classifications are not theoretical. Concrete steps must be taken to ensure an AI product or service passes a conformity assessment before it can be introduced to the market.
Regulators demand risk assessments, human oversight, logging to ensure traceability, data governance controls to minimise AI bias, detailed documentation, and robust security and accuracy, among other things. It’s not enough to merely claim compliance. Organisations must be able to demonstrate it.
How UK firms could be caught out
For UK organisations providing products and services for the EU market, there’s a second dimension to compliance – their role in the AI value chain. It pays to first understand how the regulation defines these roles.
“Providers” are saddled with the greatest compliance burden. These are firms that develop AI systems and sell them in the EU. They must complete the steps listed above as part of a conformity assessment and appoint an Authorised Representative to serve as a first point of contact for EU regulators.
“Deployers” are organisations that use third-party AI systems. An organisation will be classed as a deployer even if its AI runs in the UK, so long as the system’s output is used in the EU. The good news is that the compliance obligations are much reduced – amounting to human oversight, risk monitoring, and ensuring that the provider’s “instructions for use” are followed.
Other roles include “importers”, which bring AI systems from outside the EU into the region – such as an EU subsidiary of a UK firm – and “distributors”, most likely resellers or consultancies that make third-party tools available within the bloc. Both have obligations to ensure the AI system in question carries the correct CE marking, indicating conformance.
The difficulty comes with understanding which grouping the organisation falls into. A deployer could be classed as a provider if it substantially modifies a system or rebrands the product as its own. Increasingly, organisations also occupy more than one role simultaneously. A SaaS business, for example, might integrate a third-party foundation model, fine-tune it, and deploy it to customers across multiple jurisdictions. This complexity makes informal governance unsustainable.
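To make that classification logic concrete, here is a minimal triage sketch in Python. It is an illustration of the reasoning only – the profile fields and the triage rules are assumptions made for this example, not the Act’s legal tests, which sit in its definitions and value-chain provisions and demand proper legal analysis.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative facts about one AI system (hypothetical fields)."""
    developed_in_house: bool       # built the system from scratch
    substantially_modified: bool   # e.g. heavy fine-tuning or repurposing
    rebranded_as_own: bool         # offered under the organisation's own name
    placed_on_eu_market: bool      # sold or put into service in the EU
    used_in_own_operations: bool   # the organisation itself runs the system
    output_used_in_eu: bool        # the system's results are used in the EU

def triage_roles(profile: AISystemProfile) -> set[str]:
    """First-pass triage of likely EU AI Act roles for one system.

    A deliberate simplification for illustration: the real tests need
    legal review against the Act's definitions and value-chain rules.
    """
    roles: set[str] = set()

    # Developing a system – or substantially modifying or rebranding
    # someone else's – and supplying it to the EU market points to
    # "provider", the role with the heaviest compliance burden.
    if (profile.developed_in_house
            or profile.substantially_modified
            or profile.rebranded_as_own) and profile.placed_on_eu_market:
        roles.add("provider")

    # Operating a system whose output is used in the EU points to
    # "deployer", even when the system itself runs in the UK.
    if profile.used_in_own_operations and profile.output_used_in_eu:
        roles.add("deployer")

    return roles or {"review needed: no role matched on these facts"}

# The SaaS example from above: a fine-tuned third-party foundation
# model, run as a service for customers in several jurisdictions.
saas = AISystemProfile(
    developed_in_house=False,
    substantially_modified=True,
    rebranded_as_own=True,
    placed_on_eu_market=True,
    used_in_own_operations=True,
    output_used_in_eu=True,
)
assert triage_roles(saas) == {"provider", "deployer"}
```

Run against the SaaS example, the sketch returns both “provider” and “deployer” – exactly the dual-role scenario that makes informal governance unsustainable.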
Building a framework for solid AI governance
So where do UK businesses go from here?
The first step is visibility. Build an AI inventory to understand where the technology is being used across the organisation, and which products/services will be directly sold into the EU or impact EU citizens. Second, assess each system’s risk exposure against those Annex III use cases. Third, define the organisation’s role in the value chain for each discrete system. A simple sketch of what such an inventory might capture follows.
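As a rough illustration, a single inventory entry might record the facts needed for the second and third steps. Everything here – the field names, the category labels, the example system – is hypothetical, and the Annex III areas are paraphrased rather than quoted from the Act.

```python
from dataclasses import dataclass, field

# Annex III high-risk areas, paraphrased for illustration only.
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential services",
    "law enforcement",
    "migration and border control",
    "justice and democratic processes",
}

@dataclass
class InventoryEntry:
    """One row in the organisation's AI inventory (hypothetical schema)."""
    system_name: str
    business_owner: str
    sold_into_eu: bool
    affects_eu_individuals: bool
    annex_iii_areas: set[str] = field(default_factory=set)
    value_chain_role: str = "unclassified"  # provider/deployer/importer/distributor

    @property
    def potentially_high_risk(self) -> bool:
        # A triage flag for formal assessment, not a final determination.
        in_eu_scope = self.sold_into_eu or self.affects_eu_individuals
        return in_eu_scope and bool(self.annex_iii_areas & ANNEX_III_AREAS)

# Example: a CV-screening tool used for EU job applicants.
cv_screener = InventoryEntry(
    system_name="cv-screener",
    business_owner="HR",
    sold_into_eu=False,
    affects_eu_individuals=True,
    annex_iii_areas={"employment and worker management"},
    value_chain_role="deployer",
)
assert cv_screener.potentially_high_risk
```

The point of the `potentially_high_risk` flag is triage, not judgment: anything it catches goes forward for formal risk classification and, where needed, a conformity assessment.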
After establishing this foundation, it’s time to start mapping controls to ensure each high-risk system meets the regulation’s prescriptive requirements. This is where ISO 42001 can be invaluable. It offers a management system with which organisations can operationalise AI governance.
As the world’s first international standard for AI management systems (AIMS), ISO 42001’s Annex A controls map closely to the AI Act’s requirements. It requires organisations to identify AI systems, assess risk and define accountability. It includes controls on human oversight and documenting decisions. And it requires organisations to monitor performance and continuously improve controls.
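To illustrate what “mapping controls” can look like in practice, here is a hedged sketch: the requirement labels are taken from the summary earlier in this piece, and the governance themes are paraphrased descriptions rather than official Annex A control numbers, which a real gap analysis would cite.

```python
# Illustrative mapping of AI Act high-risk requirement areas (as
# summarised above) to AI governance themes of the kind ISO 42001
# formalises. Paraphrased for illustration, not official control IDs.
REQUIREMENT_TO_GOVERNANCE_THEME = {
    "risk assessment":         "AI impact and risk assessment processes",
    "human oversight":         "defined oversight roles and intervention points",
    "logging / traceability":  "life-cycle records and decision logs",
    "data governance":         "controls on training and input data quality",
    "technical documentation": "documented system design and intended use",
    "security and accuracy":   "performance monitoring and continual improvement",
}

def gap_report(implemented_themes: set[str]) -> list[str]:
    """List requirement areas with no supporting governance theme in place."""
    return [requirement
            for requirement, theme in REQUIREMENT_TO_GOVERNANCE_THEME.items()
            if theme not in implemented_themes]
```

Even a simple report like this shows where controls already exist and where the gaps are – the raw material for demonstrating, rather than merely claiming, compliance.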
In short, ISO 42001 provides the scaffolding that enables compliance with the EU AI Act, and delivers a repeatable process to help streamline compliance for new services or regulations. For UK firms facing difficult questions from regulators, prospective customers or partners, it’s a powerful signal of trust and best practice.
The bigger picture
There’s still a chance that the AI Act’s 2 August 2026 deadline could be pushed back to December 2027. But there’s no guarantee. Risk-averse businesses would be better off acting now to meet the original date. Those that already operate mature governance systems, particularly those aligned with ISO standards, will find this relatively straightforward.
In any case, this is the general direction of travel for AI compliance. As the technology matures and proliferates, it cannot be left to technical teams alone. It requires structured oversight, documentation, supplier governance and board-level accountability. That’s what ISO 42001 was built for.
Managing AI risk and gaining supply chain visibility aren’t simply about ticking EU compliance boxes. They’re essential for building accountability, resilience and long-term trust. Organisations that understand their AI systems – where they sit, what they affect, and how they’re controlled – will be in a far stronger position as global regulations mature. It’s time to turn compliance into competitive advantage.