
AI adoption in trading is accelerating faster than surveillance capabilities can keep pace. Data shows 11% of UK financial institutions already use AI in trading, with another 9% planning adoption by 2027. This creates a fundamental challenge: firms are deploying AI to gain trading edges, but their surveillance systems – and often their compliance teams – aren’t equipped to detect the manipulation patterns AI-driven trading can create.
The irony is that regulators expect firms to use AI as part of their surveillance strategy, particularly in cases of technically sophisticated activity such as cross-market manipulation, microsecond timing strategies, and adaptive trading behaviour that learns from market responses. These all fall outside the capabilities of traditional rules-based detection. Meanwhile, regulators remain concerned about what AI might miss and are intensifying scrutiny of surveillance system failures, repeatedly citing firms’ inability to detect sophisticated manipulation schemes in enforcement actions.
This is AI’s double-edged reality: it’s both the problem and the solution. Firms need AI-powered surveillance to match AI-powered trading, but deploying it creates new compliance obligations around explainability, auditability and human oversight. The firms navigating this successfully aren’t choosing between AI and traditional surveillance. Instead, they’re deploying hybrid systems where AI enhances detection whilst rules-based frameworks maintain regulatory defensibility.
The many layers of AI in trade surveillance
While AI hasn’t created new typologies of abuse, it has made existing forms significantly more complex and harder to detect. Traditional rules-based trade surveillance systems rely on fixed thresholds for detecting abuse and typically look at single instruments or platforms. But AI-driven trading can conduct subtle cross-market and cross-asset manipulation, spreading small, seemingly unrelated trades across multiple products or venues. These forms of manipulation are the Achilles’ heel of many trade surveillance systems, as their dynamic, distributed patterns are incredibly difficult, if not impossible, for static rules to detect.
At the same time, rigid thresholds leave surveillance systems prone to generating streams of false positive alerts, swamping compliance teams and creating so much ‘noise’ that genuine threats become easier to miss. These shifts reveal why AI-driven surveillance systems with dynamic thresholds have become increasingly necessary – and why regulators have warmed to AI-enhanced surveillance tools. However, the use of such regulatory technology is lagging behind the ongoing advancement of AI-powered trading.
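To illustrate the difference between the two approaches, here is a minimal Python sketch. The function names, thresholds and figures are illustrative assumptions, not any vendor’s implementation: a static rule flags only trades above a fixed limit, while a dynamic check compares each trade against a rolling statistical baseline of recent activity.

```python
import statistics

def fixed_threshold_alert(volume, limit=10_000):
    """Classic rules-based check: flags any trade above a static volume limit."""
    return volume > limit

def dynamic_threshold_alert(volume, recent_volumes, k=3.0):
    """Adaptive check: flags volume more than k standard deviations
    above the mean of recent activity for this instrument."""
    mean = statistics.mean(recent_volumes)
    stdev = statistics.pstdev(recent_volumes)
    return volume > mean + k * stdev

# A trade of 1,500 units against a recent baseline of ~1,000 per trade.
history = [950, 1_020, 980, 1_100, 1_005, 990, 1_050]
print(fixed_threshold_alert(1_500))             # False: well under the static limit
print(dynamic_threshold_alert(1_500, history))  # True: anomalous vs the baseline
```

The same trade passes the static rule untouched but is caught by the adaptive check, which is the core argument for dynamic thresholds; equally, the adaptive baseline suppresses alerts on volumes that are normal for that instrument, which is how false positives fall.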
Our survey of 300 compliance professionals around the world revealed that only 16% of firms have AI fully deployed as part of their trade surveillance strategy – and a further 29% have neither a formal strategy in place nor any plan to use it. This AI gap could create notable risks for firms. Regulators are becoming more skilled at understanding complex cross-venue and cross-product behaviours and therefore expect firms to provide a fuller picture of this activity. Firms that fail to account for these shifts could face penalties and fines, as well as the damage caused by any abuse that goes undetected.
The regulatory paradox
When firms were asked what they saw as the most significant compliance risk over the next 12 months, the most selected response was the use and adoption of AI (69%). This came above regulatory uncertainty and geopolitical instability. But why is this the case?
Even though regulators like the FCA now advocate AI use, they also demand strict auditability and human oversight. Yet many systems on the market operate as an ‘AI black box’. In these systems, AI tooling is deployed in a number of ways – such as automated risk scoring and alert clearing – but the rationale behind those decisions is often unclear or not readily accessible to compliance teams. This is a particular problem in compliance, where firms need to show not just what decision was made, but how that conclusion was reached.
Without this audit trail, firms lack the explainability that regulators now expect, which puts them at risk of enforcement action regardless of whether abuse has taken place. Regulatory bodies expect AI to be deployed as a tool to identify risk more efficiently, but they also reject ‘black box’ approaches. This paradox is a core reason why more firms haven’t fully adopted AI – they want to hand over tasks to AI to take advantage of the undoubted benefits it offers, but they are also concerned about falling short of regulatory expectations.
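One practical answer to the black-box problem is to record, alongside every AI-scored alert, the inputs and rationale that produced it. The sketch below is a hypothetical audit record in Python – the field names and attribution values are illustrative assumptions, not a real product schema – showing the kind of trail that lets a firm demonstrate not just what was decided, but why.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AlertAuditRecord:
    """Links an AI surveillance decision back to its inputs and rationale,
    so a reviewer (or regulator) can reconstruct how it was reached."""
    alert_id: str
    model_version: str           # pins the exact model that scored the alert
    source_trade_ids: list       # the raw data the score was derived from
    risk_score: float
    feature_contributions: dict  # per-feature rationale, e.g. attribution weights
    decision: str                # 'escalate' or 'close'
    reviewed_by: Optional[str] = None  # human oversight: who signed off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Illustrative example record
record = AlertAuditRecord(
    alert_id="ALR-0001",
    model_version="risk-scorer-2.3.1",
    source_trade_ids=["T-991", "T-992", "T-997"],
    risk_score=0.87,
    feature_contributions={"order_cancel_ratio": 0.41, "cross_venue_timing": 0.33},
    decision="escalate",
)
print(json.dumps(asdict(record), indent=2))
```

The design choice that matters here is that the record is immutable evidence rather than a transient log line: pinning the model version and source trade IDs means the same inputs can later be replayed against the same model to verify the outcome.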
So, how do firms build AI surveillance systems that satisfy both detection and explainability requirements?
Re-shaping the build vs buy debate
The regulatory paradox has reignited the build-versus-buy debate. In the last few years, trade surveillance systems and control deficiencies have been a core focus of regulatory scrutiny. Simultaneously, integrating trade and eComms surveillance and detecting cross-market abuse is becoming more complex. As such, the ability to maintain, let alone build, in-house trade surveillance infrastructure at regulatory-grade standards is becoming harder to justify.
Theoretically, AI means firms can build their own proprietary tech more efficiently. But the practical reality of enterprise-grade deployment means the need for specialist partners who can deliver capability, assurance and control is growing in importance. So, if firms are looking to buy, what are some of the key requirements they should look for?
As outlined earlier, regulatory explainability is non-negotiable. Any trade surveillance system should allow firms to trace each output back to its source data and demonstrate how conclusions were reached. It should also offer dynamic thresholds that adapt to changing trading behaviours and spot cross-market manipulation, while reducing false positives.
Crucially, it should provide a robust data foundation for AI to work from. The consolidation of trade and eComms data is pivotal for managing the modern world of AI-driven trading and abuse. This combination has become a regulatory expectation as it provides detailed reasoning around cases.
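In practice, consolidating trade and eComms data means a reviewer can see a trader’s communications in context around an alerted trade. The Python sketch below is a simplified illustration under assumed, hypothetical record structures – real systems would work over governed data stores, not in-memory lists – of correlating an alert with the same trader’s messages in a surrounding time window.

```python
from datetime import datetime, timedelta

# Hypothetical consolidated records; field names are illustrative, not a vendor schema.
trade_alert = {"trader": "TR42", "instrument": "XYZ",
               "ts": datetime(2025, 3, 4, 10, 15)}
ecomms = [
    {"trader": "TR42", "ts": datetime(2025, 3, 4, 10, 10),
     "text": "watch XYZ before the fix"},
    {"trader": "TR42", "ts": datetime(2025, 3, 3, 9, 0), "text": "lunch?"},
    {"trader": "TR07", "ts": datetime(2025, 3, 4, 10, 12),
     "text": "client order inbound"},
]

def related_comms(alert, messages, window=timedelta(minutes=30)):
    """Return the alerted trader's messages within +/- window of the trade,
    giving reviewers conduct context alongside the market data."""
    return [
        m for m in messages
        if m["trader"] == alert["trader"]
        and abs(m["ts"] - alert["ts"]) <= window
    ]

for m in related_comms(trade_alert, ecomms):
    print(m["ts"], m["text"])
```

Only the message from the same trader inside the window survives the filter, which is the point of the consolidation: an alert arrives with the communications that explain it, rather than as an isolated data point.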
Balancing efficiency with control
While our research suggests some hesitancy among firms to adopt AI for trade surveillance, the data also tells another story. When firms actively rolling out solutions or planning deployment within the next two years are included, more than three quarters (77%) of the market is either already using AI or moving toward it. What matters now is how that deployment takes place.
Regulators’ stance towards AI and trade surveillance technology has become increasingly positive where it is used responsibly to improve detection outcomes and strengthen governance. The challenge for firms is how to govern it effectively and deploy it safely. Given the complexities involved in achieving this balance, working with a specialist technology vendor is emerging as the only viable route for the vast majority of firms. In doing so, they can rely on their technology partner to provide a trade surveillance system that combines AI efficiency, the latest technological advances, and robust explainability and control.



