Risk and reward: The use of AI in communications monitoring

By Shaun Hurst, Principal Regulatory Advisor

The use of artificial intelligence (AI) within organisations has increased rapidly across industries. Now that AI is firmly part of the world's lexicon, businesses are focused on how they can use it to improve their operations, and a key priority is how they use it in their communications.

There are many ways AI can be used to enhance a business's communications, from bolstering customer service interactions to overseeing internal stakeholder engagement. It's clear that AI is positioned to be central to everyday business communications, particularly in industries such as financial services. But as regulation inevitably lags behind AI's development, it's vital that businesses learn how to navigate this gap whilst ensuring compliance with rapidly changing requirements.

A fast-evolving regulatory landscape

The UK is currently at a crossroads when it comes to AI regulation, with the financial services sector closely monitoring the potential for significant changes. Historically, the UK hasn’t developed specific legislation for AI, opting instead for a pro-innovation, principles-based approach that utilises the expertise of existing sector-specific regulators. This method has allowed for a degree of flexibility and has encouraged innovation within the space.

However, the Labour Government elected last year has given early indications of a new approach to AI regulation. References to legislation in the King's Speech and ongoing discussions around an AI Bill, as well as the issue's inclusion in the Data (Use and Access) Bill, suggest that a more structured regulatory framework may be fast approaching.

While we still lack specific AI guidance from the FCA, Prime Minister Sir Keir Starmer’s AI Opportunities Action Plan emphasises the Government’s commitment to fostering innovation while ensuring responsible AI development.

Despite this, industry advocacy groups such as SIFMA and AFME continue to argue against additional regulation, favouring a principles-based, technology-agnostic approach that would see existing regulatory obligations extended to cover AI use cases. They maintain that this would prevent innovation being stifled by overzealous regulation.

In this environment of regulatory uncertainty, financial services firms must anticipate and prepare for potential regulatory changes. By staying abreast of AI regulatory developments and ensuring that their risk controls are robust and adaptable, firms can not only comply with new obligations as they arise but also gain a competitive edge.

Getting ahead of the game

In the financial services sector, getting ahead of AI regulation is not just about compliance; it's a strategic necessity for the future of the business. The integration of cutting-edge communication technologies into business processes is vital for swift and effective collaboration in today's fast-moving market. The challenge is to adopt these technologies while ensuring current and future risks are mitigated.

Firms should cultivate a culture that leverages the potential of AI Agents, drawing lessons from previous experiences with communication technologies. The financial sector’s struggle with encrypted messaging services like WhatsApp, WeChat and Signal is instructive. Organisations hesitated to monitor these platforms due to concerns over data security and operational complexity, leading to regulatory scrutiny and significant penalties.

With AI Agents, firms might show similar hesitancy, deterred by increased regulatory scrutiny, the complexities of managing additional data storage and the risk of exposing non-compliant behaviour. Finding technology solutions that securely and effectively capture the necessary communications is key to mitigating these issues.

The breadth of risks to be managed has expanded, encompassing regulatory, data privacy, information security and intellectual property concerns. Importantly, firms may need to create jurisdiction-specific AI systems or services to comply with these regulations and uphold global standards.

The need for proactive AI governance

Developing a structured AI governance framework is essential for managing risk, ensuring transparency and maintaining high data quality, with appropriate human oversight and thorough documentation at its core. AI development processes should be updated to embed ethical principles, including transparency and bias mitigation, which are fundamental to responsible deployment. Robust data governance is also crucial to ensure the data used for AI training is representative and reliable.

Human oversight must be clearly defined, with responsibility assigned at key decision points where AI is used in communications. Firms should ensure that communications data is properly archived and monitored to provide auditable records of AI-driven interactions, supporting both operational oversight and regulatory compliance. This enables appropriate human judgement to intervene where necessary and ensures alignment with regulations such as the FCA’s Consumer Duty and Operational Resilience requirements.
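
As an illustration of what auditable records of AI-driven interactions might look like in practice, the Python sketch below captures one hypothetical record structure and a simple rule for routing flagged messages to a human reviewer. The field names, the requires_human_review helper and the "pricing-language" flag are illustrative assumptions only, not a reference to any specific archiving product or regulatory requirement.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AICommunicationRecord:
    """Hypothetical audit record for a single AI-driven client interaction."""
    agent_id: str                       # which AI agent produced the message
    channel: str                        # e.g. chat, email, voice transcript
    content: str                        # the message as sent, or a reference to it
    model_version: str                  # traceability of the underlying model
    risk_flags: list = field(default_factory=list)  # surveillance hits, if any
    reviewed_by: Optional[str] = None               # human reviewer, once assigned
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def requires_human_review(record: AICommunicationRecord) -> bool:
    """Route flagged, unreviewed AI communications to a named human reviewer."""
    return bool(record.risk_flags) and record.reviewed_by is None

# Example: archive every AI-driven message, then escalate flagged ones for review.
record = AICommunicationRecord(
    agent_id="client-advisory-agent",
    channel="chat",
    content="Indicative pricing shared with client ...",
    model_version="2025-01",
    risk_flags=["pricing-language"],
)
if requires_human_review(record):
    print(f"Escalate {record.agent_id} message from {record.timestamp} for review")

In practice, records of this kind would feed a firm's existing archiving and surveillance workflows rather than a standalone script, but they show how auditability and defined points of human intervention can be built into AI-driven communications from the outset.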

Engaging with regulators and industry bodies allows firms to remain at the forefront of compliance and best practices, and educating stakeholders on the importance of proactive AI governance is key to mitigating regulatory and reputational risks.

As AI becomes more prevalent in financial communications, it’s necessary to adapt compliance frameworks to ensure AI-driven communications are rigorously overseen. Financial institutions should establish clear policies, conduct regular risk assessments and implement effective monitoring and reporting systems.

Compliance should be viewed as a strategic asset, essential for sustainable business practice and not just a means to avoid penalties. Proactive AI governance enables innovation, safeguards integrity and ensures competitiveness, and conveying this to stakeholders is vital. Such an approach ensures that AI use in communications is prepared for current and future regulatory challenges, thereby fostering responsible AI integration within the financial sector.
