
Still Holding Back: Why Financial Services Can’t Afford to Hesitate on AI

By Donald McElligott, VP, Compliance Supervision at Global Relay

Over the past few years, we've witnessed an extraordinary acceleration in artificial intelligence capabilities, with AI now enhancing the way employees work across multiple industries. Yet despite its advantages, the financial services sector remains hesitant to fully adopt AI – even when this reluctance comes at the cost of compliance. Our State of AI in Surveillance Report 2025 revealed that only 31% of industry respondents are currently using AI or plan to do so within the next 12 months.

This is surprising given the advantages that AI-powered tools can offer communications surveillance teams. In the US, a record-breaking wave of enforcement actions related to data management and off-channel communications has hit financial institutions over the last four years – and the penalties have been eye-watering. In January alone, the SEC issued more than $63 million in combined fines to 12 firms for recordkeeping failures. There's a strong argument that these breaches might have been detected – and prevented – with the help of AI-powered surveillance tools.

Across the pond, the FCA has so far taken a comparatively hands-off approach to off-channel communications compliance, reflecting pressure to balance risk mitigation with economic growth. However, its recently published Five Year Strategy highlights AI's potential to "transform financial services." While the roadmap acknowledged AI's "potential to bring increased volatility and market abuse," it also pointed to the technology's ability to deliver "greater efficiency" and "faster reactions." The regulator isn't just paying lip service to this opportunity, though – it's actively investing in it through its AI Lab, a platform designed to support the safe and responsible use of AI in financial markets.

With firms reluctant to fully embrace AI until they've seen it prove its value, one way to overcome this hesitancy is to demonstrate how forward-looking surveillance teams are already integrating these tools – and the tangible benefits of doing so.

The machines are listening — and it’s a good thing

The top reason firms are adopting AI, according to our report, is its ability to reduce false positives (23%). Generative AI, working in tandem with other AI models, is proving to be a force multiplier for compliance teams – not only filtering out "noise" from irrelevant words but also identifying genuine risks by understanding the context behind communications.

As well as reducing false positives and enhancing risk identification, our report found that financial services firms cite voice transcription (14%) as the third biggest reason for implementing AI in their surveillance operations. This likely comes as no surprise to those following regulatory developments closely.

In January 2025, the FCA stated in its ‘Dear CEO’ letter that firms are expected to maintain “effective and comprehensive risk and control oversight frameworks to detect and prevent harm from occurring and penalize undesirable behaviour” – explicitly noting that this includes trade and communication surveillance. Meanwhile, in the U.S., the CFTC fined a firm $650,000 in September 2024 for “recordkeeping deficiencies and failure to obtain customer authorizations.”

To mitigate the risk of future fines, firms must rigorously follow internal procedures across all operations. AI is a clear enabler in this space. Leveraging solutions that accurately capture all communications, and securely store them for regulatory access, will be a game changer for firms, especially as AI continues to evolve.

Is explainability holding back accuracy?

There are also still valid concerns keeping firms cautious about integrating AI into their operations, namely budgetary restrictions and data security.

On the data security front, a key barrier is explainability – the ability of AI systems to clearly articulate the rationale behind their decisions in a way that users (and regulators) can understand. Firms need to be assured that AI and generative models are not covertly storing or processing information when analyzing data. Yet with recent leaps in capability producing significantly better outcomes than traditional methods of surveillance, it may be time for firms to reconsider whether prioritizing explainability at the expense of accuracy is still the right approach.

Moving forward with AI — carefully

Of course, firms using AI models within their surveillance function need to feel confident that they are carefully assessed and authenticated. Building trust in the technology starts with understanding how it works. To feel secure in their investments, organizations must have a clear grasp of how their data is processed, where it is stored, and ultimately who owns it. With ongoing uncertainties and ethical questions surrounding AI, a cautious approach is needed.

Regulators and financial examiners have made it clear: firms adopting AI must understand the technology they are using and demonstrate that they have the knowledge and skills to manage it effectively. This also places responsibility on AI vendors to provide more than just software. They must also offer robust training and documentation to enable firms to use their solution in line with regulatory expectations.

What does the future hold?

For communications surveillance, AI has proven its credentials as an efficiency-enhancing and time-saving tool, reliably reducing false positives and successfully identifying risks by understanding the context behind written communications.

Governments on both sides of the Atlantic are already responding. In early 2025, the U.S. administration issued an executive order rescinding specific AI policies, with the aim of retaining the country's competitive advantage in innovation. Meanwhile, in the UK, Prime Minister Keir Starmer recently laid out a vision for AI that includes the creation of dedicated AI growth zones designed to reinvigorate former industrial areas, signalling a broader ambition to use the technology as a driver of regional economic renewal. These moves highlight a growing political consensus around AI's transformative potential and the need for firms across all sectors to keep pace.

While the risks and rewards of AI continue to be weighed, one thing is clear – it’s here to stay. The financial services industry must begin preparing for its broad adoption by equipping the workforce with the knowledge and skills to use AI safely and effectively, ahead of its integration into everyday operations.
