While generative AI has dominated much of the public conversation in recent years, a new AI frontier is emerging in fintech: Agentic AI. Unlike passive systems that respond only when prompted, Agentic AI can act autonomously on behalf of users, planning and executing tasks in pursuit of defined objectives. Use cases in financial services include stakeholder mapping and onboarding, portfolio monitoring and fraud detection. These systems are often integrated into existing platforms such as Salesforce, Microsoft Dynamics or Experian, and powered by third-party LLMs. Their autonomous nature allows them to act in near-real time, delivering substantial operational efficiencies across a wide range of business use cases.
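At its core, an agentic system runs a loop: the model proposes the next action, a tool executes it, and the observation feeds back into the model until the objective is met. The sketch below is a minimal, hypothetical illustration of that loop; the call_llm stub, the tool names and the onboarding scenario are all invented for the example and do not reflect any particular vendor's API.

```python
# Minimal sketch of an agentic plan-act-observe loop (hypothetical names).

def check_sanctions_list(name: str) -> str:
    """Placeholder tool: a real system would query a screening provider."""
    return f"no matches found for {name}"

def flag_for_review(name: str) -> str:
    """Placeholder tool: escalate a case to a human compliance officer."""
    return f"{name} queued for human review"

TOOLS = {
    "check_sanctions_list": check_sanctions_list,
    "flag_for_review": flag_for_review,
}

def call_llm(objective: str, history: list[str]) -> dict:
    """Stand-in for a third-party LLM call that proposes the next action.
    A real agent would send the objective and history to a model API."""
    if not history:
        return {"tool": "check_sanctions_list", "arg": "Acme Ltd", "done": False}
    return {"tool": None, "arg": None, "done": True}

def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # hard cap so the agent cannot loop indefinitely
        action = call_llm(objective, history)
        if action["done"]:
            break
        observation = TOOLS[action["tool"]](action["arg"])
        history.append(f"{action['tool']} -> {observation}")
    return history

print(run_agent("screen the counterparty 'Acme Ltd' for onboarding"))
```

The step cap and the explicit tool registry are typical guardrails: they bound what the agent can do and keep its actions auditable, without sacrificing the speed that makes these systems attractive.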
However, to reap these rewards, businesses must ensure that Agentic AI is deployed in a way that is legally robust, operationally resilient and aligned with the evolving regulatory landscape.
Navigating a patchwork of regulations
Navigating the evolving landscape of UK and EU AI-related law is a significant challenge for UK businesses. For a start, the extraterritorial effect of the EU’s Artificial Intelligence Act (the “EU AI Act”) means it will govern UK-based firms that offer AI systems within the EU or whose systems produce outputs used there. The EU AI Act introduces a risk-based classification framework, with high-risk systems subject to strict requirements. Some applications of Agentic AI in finance, such as those used for credit assessments, anti-money laundering and compliance monitoring, may fall into this high-risk category, triggering obligations relating to transparency, human oversight, documentation and risk management.
Domestically, the UK is also updating its regulatory approach through reforms such as the Data (Use and Access) Bill (the “Bill”), which is set to become law later in 2025. Whilst ostensibly designed to update the UK’s data protection framework, it contains many new provisions that directly affect the use of AI. For example, the Bill clarifies the rules on automated decision-making, making it easier for AI models to make decisions using personal data without human oversight.
The Bill also establishes a new “smart data” scheme, which will make it easier for businesses to access and share customer data to build AI systems (similar to the open banking regime). These changes are widely perceived as a relaxation of the existing regime, one that will make it easier for AI firms to develop and deliver innovative AI systems in the UK market, but they must still be reviewed carefully.
In addition, obligations under the EU’s Digital Operational Resilience Act (DORA) apply to financial entities serving the EU market. DORA imposes strict operational resilience standards, including on business continuity, ICT security and incident reporting. These standards are especially relevant to Agentic AI, which may make decisions or take actions without human input, potentially increasing the risk of unexpected or harmful outcomes.
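One oversight control commonly paired with agentic deployments is a human-in-the-loop gate: actions above a risk threshold are held for human approval rather than executed autonomously, with each routing decision logged to support incident reporting. The sketch below is a simplified illustration under those assumptions, not a compliance recipe; the threshold and risk scores are invented for the example.

```python
# Hypothetical human-in-the-loop gate for agent-proposed actions.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (severe), assigned upstream

APPROVAL_THRESHOLD = 0.7  # illustrative policy value, not a standard

def dispatch(action: ProposedAction) -> str:
    # Route the proposal: autonomous below the threshold, human-approved
    # above it. A production system would also write an audit log entry.
    if action.risk_score >= APPROVAL_THRESHOLD:
        return f"held for human approval: {action.description}"
    return f"executed: {action.description}"

print(dispatch(ProposedAction("refresh a watchlist cache", 0.1)))
print(dispatch(ProposedAction("freeze a customer account", 0.9)))
```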
The FCA’s “Supercharged Sandbox”
To help firms develop and test innovative models without bearing the full weight of regulatory risk, the UK Financial Conduct Authority (FCA) has launched a new initiative called the “Supercharged Sandbox”. The programme, scheduled to open for testing in October 2025, is designed to support experimentation with cutting-edge AI technologies (particularly Agentic systems). It offers participants access to NVIDIA’s advanced computing infrastructure and a supervised environment in which to test proof-of-concept tools with fewer regulatory burdens than a full market launch would typically require.
The Supercharged Sandbox is a welcome initiative. It presents a valuable opportunity to engage regulators during the development phase, assess legal and compliance risks in a controlled environment, and better understand supervisory expectations for autonomous AI. Firms can apply through the FCA AI Lab website.
Ensuring watertight contracts for your Agentic AI tools
As businesses increasingly build, procure or license in Agentic AI systems, the relevant customer/vendor contracts must reflect new operational and legal realities. IP rights in AI-generated outputs should be clearly defined: customers will often expect ownership or broad usage rights, particularly where the system produces custom reports, risk analyses or marketing materials. Vendors, meanwhile, must ensure they are not overpromising outcomes. Given the probabilistic, non-deterministic nature of AI systems, warranties relating to performance, accuracy or reliability should be drafted carefully to avoid unintended liability.
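That non-determinism is not a defect to be drafted around so much as a property of how LLMs generate text: outputs are typically sampled from a probability distribution over tokens rather than chosen deterministically. The toy example below illustrates this with temperature-scaled softmax sampling; the vocabulary and scores are invented.

```python
# Toy illustration of sampling-based non-determinism in LLM outputs.

import math
import random

def sample_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    # Softmax with temperature: higher values flatten the distribution,
    # increasing run-to-run variability for identical inputs.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"approve": 2.0, "refer": 1.5, "decline": 0.5}
print([sample_token(logits) for _ in range(5)])  # differs between runs
```

Because identical inputs can legitimately produce different outputs, warranties are better framed around service levels, error rates and remediation than around guaranteed results.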
Security and data handling obligations must also be clearly articulated. Contracts should specify cyber security standards, data anonymisation practices and each party’s responsibilities in relation to UK GDPR compliance. Customers should also consider including audit rights and explainability provisions in AI-related contracts, requiring vendors to provide tools or interfaces that allow the customer to understand, monitor and interrogate the rationale behind automated decisions.
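In practice, an explainability clause often translates into a requirement that the vendor expose a structured decision record for each automated outcome. The sketch below shows the kind of record such a clause might contemplate; the field names and values are hypothetical and not drawn from any particular standard.

```python
# Hypothetical decision record supporting audit and explainability clauses.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str    # which model produced the decision
    inputs_summary: str   # what data the decision relied on
    outcome: str          # the automated decision itself
    rationale: str        # explanation offered for the outcome
    human_reviewed: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = DecisionRecord(
    decision_id="D-1042",
    model_version="credit-risk-v3.1",
    inputs_summary="bureau score; 12 months of transaction history",
    outcome="refer to underwriter",
    rationale="income volatility above model tolerance",
)
print(record)
```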
Ultimately, Agentic AI is both a huge opportunity and a substantial legal challenge for the financial services industry. Engagement with regulatory requirements, early legal reviews of product development and careful contractual drafting will all be critical in the coming years.