
Using AI to strengthen surveillance against market abuse

By Alexander Parker, CTO, eflow Global

AI is reshaping how compliance teams spot potentially manipulative trading activity. Used effectively, it offers those teams the chance to work smarter, not harder. Behavioural analysis, dynamic risk scoring, and smarter alert tuning mean unusual trading patterns can be flagged faster and more accurately than ever before, helping teams focus on the cases that matter most.

That said, AI isn’t a magic wand. While it isn’t creating entirely new types of market abuse, there are potential risks to be aware of. Misaligned priorities or poorly coded systems could, in theory, allow AI platforms to collude, which makes ethical development and clear accountability critical. It’s not about AI being inherently risky; it’s about understanding the limits of what it can do and ensuring it’s used responsibly. 

Perception is another challenge. Many firms don’t fully realise how much AI is already embedded in their surveillance systems. This lack of awareness can leave gaps in oversight, which is why bridging that gap through training, explainability, and closer collaboration is so important. Teams need to know not only what AI is doing, but why it is doing it – and where human judgement must intervene. 

How AI can strengthen surveillance  

Applied correctly, AI gives surveillance teams the ability to identify and prevent market abuse more effectively. Bad actors are themselves using AI, which makes abuse harder to spot, so its use in compliance is becoming increasingly necessary. Legacy trade surveillance systems typically rely on static rules and thresholds to detect abuse, which generates a lot of noise and false-positive alerts, potentially overwhelming compliance teams and pushing them into a reactive, rather than proactive, strategy.

AI, on the other hand, can spot subtle deviations in trading behaviour or communications that rules-based systems miss – think layering, spoofing or insider trading indicators. It allows ‘normal’ behaviour to be defined dynamically for each trader, desk or client, rather than against a static threshold.
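To make that contrast concrete, here is a minimal Python sketch – not a description of any particular product – comparing a legacy-style fixed limit with a per-trader behavioural baseline that flags orders several standard deviations outside that trader’s own recent history. The names, window sizes and limits are all illustrative assumptions.

```python
import random
from collections import defaultdict, deque
from statistics import mean, stdev

STATIC_THRESHOLD = 10_000  # legacy-style fixed limit, applied to every trader alike

class BehaviouralBaseline:
    """Flags orders that deviate sharply from a trader's own recent history."""

    def __init__(self, window: int = 100, z_limit: float = 3.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.z_limit = z_limit

    def check(self, trader_id: str, order_size: float) -> bool:
        past = self.history[trader_id]
        flagged = False
        if len(past) >= 30:  # require enough history for a stable baseline
            mu, sigma = mean(past), stdev(past)
            flagged = sigma > 0 and abs(order_size - mu) / sigma > self.z_limit
        past.append(order_size)
        return flagged

baseline = BehaviouralBaseline()
random.seed(1)
for _ in range(50):  # typical activity builds this trader's baseline
    baseline.check("trader_42", random.gauss(9_000, 400))

# The static rule treats a 9,500 order and a 14,000 order the same way for
# every trader; the behavioural check fires only when the size is unusual
# relative to this particular trader's own history.
print(9_500 > STATIC_THRESHOLD, baseline.check("trader_42", 9_500))    # False False
print(14_000 > STATIC_THRESHOLD, baseline.check("trader_42", 14_000))  # True True
```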

What’s more, it gives teams the ability to perform dynamic risk scoring. AI systems can assess not only the activity itself, but also contextual factors such as trader history, time of day and cross-market activity. These risk scores evolve as behaviour does, allowing compliance teams to prioritise the highest-risk cases.

When it comes to alert tuning, this means AI can reduce false positives by filtering out cases whose risk scores fall below a set threshold. Ultimately, the efficiency achieved allows smaller teams to handle higher alert volumes without burnout: rather than spending their time clearing alerts, analysts can conduct deeper investigations of the cases that remain.
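As a rough illustration of how scoring and triage might fit together, the sketch below combines a detection model’s base severity with contextual factors – disciplinary history, off-hours activity, cross-market signals – into a single score, then filters and ranks the alert queue. The weights, fields and threshold are invented for the example and would need calibration and governance review in practice.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    base_severity: float   # 0..1 score from the detection model
    prior_incidents: int   # trader's disciplinary history
    off_hours: bool        # activity outside normal trading hours
    cross_market: bool     # correlated activity on another venue

def risk_score(a: Alert) -> float:
    """Blend model severity with contextual factors (illustrative weights)."""
    score = a.base_severity
    score += 0.1 * min(a.prior_incidents, 3)   # history matters, but caps out
    score += 0.15 if a.off_hours else 0.0
    score += 0.20 if a.cross_market else 0.0
    return min(score, 1.0)

def triage(alerts: list[Alert], threshold: float = 0.5) -> list[Alert]:
    """Drop low-score alerts and surface the riskiest cases first."""
    kept = [a for a in alerts if risk_score(a) >= threshold]
    return sorted(kept, key=risk_score, reverse=True)

queue = triage([
    Alert("A-1", 0.30, prior_incidents=0, off_hours=False, cross_market=False),
    Alert("A-2", 0.35, prior_incidents=2, off_hours=True, cross_market=False),
    Alert("A-3", 0.40, prior_incidents=1, off_hours=True, cross_market=True),
])
for a in queue:
    print(a.alert_id, round(risk_score(a), 2))  # A-3 0.85, then A-2 0.7; A-1 filtered out
```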

Where AI could amplify risks  

If priorities and accountability for AI implementation aren’t clearly established, AI can create risks of its own. Misaligned priorities, for example, can undermine surveillance processes: a firm that prioritises speed over accuracy may introduce new forms of compliance risk – perhaps greater than those it faced previously. There is also a world in which two AI systems communicate in ways that mimic collusion without any human intent. So, who holds accountability when AI systems interact in unintended ways?

Last year, European financial markets regulators “warned that trading algorithms that use artificial intelligence (AI) models can engage in collusive behaviour, which constitutes market abuse”. This risk could extend to compliance models that use AI as well.

This is why over-reliance on AI presents one of the biggest risks to financial markets. If compliance professionals depend entirely on AI tools, they may lose the ability to independently assess which behaviour amounts to potential abuse, or to verify the accuracy of the system’s insights for quality assurance purposes. Placing trust wholly in AI reduces vigilance and underestimates the value of human judgement.

Crucially, AI models are also known to ‘hallucinate’ incorrect information, so automating the process completely, without human oversight, could lead to significant errors. And if a firm is unable to explain why a trade was flagged – or, indeed, ignored – it will quickly run into trouble with regulators.

Explainability, training and collaboration are essential 

In recent years, there has been a noticeable shift in expectations from regulators. Firms are now expected to be able to understand and explain the output of their surveillance systems, rather than simply demonstrate that they have controls in place. Regulators have made it clear that they expect compliance teams to articulate the decision-making and rationale behind system-generated processes – simply stating ‘the AI sets the alert parameters’ won’t cut it during an investigation.  
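One way to meet that expectation is to record, alongside every alert decision, the model version and the concrete factors that drove it, so an investigator can reconstruct the rationale later. The sketch below is a minimal, assumed shape for such an audit record; the field names and version tag are hypothetical, not taken from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class AlertExplanation:
    """Audit-ready record of why an alert was escalated or suppressed."""
    alert_id: str
    decision: str                 # "escalated" or "suppressed"
    model_version: str            # pin the exact model that made the call
    contributing_factors: list[str] = field(default_factory=list)

record = AlertExplanation(
    alert_id="A-3",
    decision="escalated",
    model_version="surveillance-model-2024.06",  # hypothetical version tag
    contributing_factors=[
        "order size far above the trader's 100-trade baseline",
        "activity outside the trader's usual hours",
        "correlated orders on a second venue",
    ],
)
print(record)
```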

Training compliance teams is therefore the first step. Teams need a deep understanding of AI’s capabilities, which means upskilling staff in areas such as data literacy and interpreting AI outputs and insights.

Collaboration is also key to using AI to its full potential. Compliance, IT and business departments must all work together to build company-wide knowledge and awareness of best practice: AI engineers can explain the mechanics, while compliance professionals outline regulatory expectations.

What’s crucial is that firms embed governance structures that outline who owns aspects like model updates, quality assurance, oversight or alert escalation. This all helps to build a culture of accountability.  

A(I) tool to augment human judgement 

AI can strengthen many areas of compliance, even allowing teams to identify instances of market abuse they might otherwise have missed altogether. But the technology only works effectively when the right priorities, human oversight and governance are wrapped around its use.

Above all, AI should be seen as a powerful tool to augment, not replace, human decision-making. By combining advanced technology with clear accountability, ethical coding, and strong governance, firms can create a more proactive and resilient approach to market surveillance. 

AI doesn’t eliminate the need for human judgement, but it helps ensure it is applied where it matters most. 
