Innovation Spotlight

The 95%: Why Most AI Projects in Banks Fail at the Pilot Stage

By Susie MacKenzie, Head of Legal Analysis at Corlytics

Across every industry, AI has left its mark. Smaller, nimbler firms are leveraging it to compete at levels previously reserved for industry giants. Yet at larger firms, the story is strikingly different.

Recent MIT research suggests that 95% of generative AI pilots fail to scale. So the question arises: when AI promises efficiency and speed, why can't the majority of banking projects make it past the starting line?

The answer shouldn't be too surprising. Banks face a unique, complex mix of regulatory, legal and operational constraints, where the cost of a single failure is materially higher than in most other sectors. One error can trigger regulatory enforcement, litigation, remediation obligations and significant reputational harm. As a result, AI projects can become bogged down in extensive risk assessments, model governance reviews and compliance processes, long before they approach deployment.

Stuck between innovation and regulation

Banks are under pressure to modernise and adopt AI-driven solutions at speed, while operating in one of the most tightly regulated environments. The introduction of any technological system must withstand rigorous and intense scrutiny.

The first barrier is data. Banks possess vast amounts of it, but much of it sits in legacy systems and inconsistent formats, and requires significant work before it can be used in AI models. At the same time, financial institutions are bound by strict rules around data accuracy, completeness and reliability in any decision-making.

Feeding fractured or inconsistent data into AI models puts them at risk of breaching duties under consumer protection laws, anti-discrimination rules, anti-money laundering (AML) and fraud risk requirements, as well as record-keeping and audit standards. Tasks such as data cleansing, lineage tracking, metadata management and preparing data for model ingestion are not merely operational hygiene; they are legal safeguards.

All in all, this slows implementation, as banks must ensure defensibility at every stage.

The challenge of explainability

The second barrier is explainability. Financial regulators require firms to understand and demonstrate how a model arrives at a particular outcome. This is not simply best practice; it is essential for meeting obligations under consumer credit rules, anti-bias safeguards, prudential modelling standards, and the broader legal principle that firms must treat customers fairly and avoid opaque decision-making.

This creates tension, as AI systems may produce highly accurate outputs, but their decision-making logic is often opaque. That opacity translates directly into legal risk: the risk of enforcement action, consumer redress, litigation, or findings of unfair or discriminatory treatment. Many projects flounder when they encounter this hurdle.

Outsourcing doesn't remove accountability

The final barrier is governance. Large banks operate across multiple jurisdictions, each with evolving and fragmented AI regulatory positions. This regulatory divergence creates uncertainty, leading some institutions to delay deployment until expectations become more harmonised or supervisory guidance becomes clearer.

At the same time, banks rely on external vendors such as cloud providers, data aggregators and specialist AI firms to supply infrastructure or sophisticated models. However, outsourcing does not transfer accountability.

Regulators require banks to maintain stringent oversight of third-party arrangements, including due diligence, contractual controls, audit and access rights, contingency planning, and ongoing monitoring. If an external system produces unlawful, discriminatory or erroneous outcomes, the bank remains fully accountable.

As a result, institutions often cannot onboard AI vendors at the pace they would like, simply because the legal and governance requirements are so demanding.

How can banks break the deadlock?

Despite these challenges, AI adoption can still deliver the returns predicted, but only for institutions willing to take a different approach from the outset.

  • Implement ensemble approaches with human oversight. As AI adoption accelerates, financial institutions must prioritise accuracy and precision alongside speed. The most effective will be those that understand AI alone is not enough. Ensemble AI models combined with careful model validation help strike this balance, allowing institutions to harness automation while maintaining accountability.
  • Empower subject matter experts, not just central AI labs. Successful AI adoption in any sector requires subject matter experts to work alongside data scientists to maintain accuracy. This is especially true for industries where mistakes have materially larger consequences and accuracy must be exceptionally high.
  • Invest in line managers. Firms need to invest in mentoring and training to ensure line managers who understand specific business processes and regulatory requirements drive deployment decisions, with central teams providing governance frameworks and technical support.

Joining the 5%

AI may represent the future of financial services. However, for banks, the journey to deployment is less about technological capability and more about navigating a complex matrix of legal obligations, supervisory expectations and cross-border regulatory uncertainty. Until those tensions ease or frameworks become clearer, many AI projects will remain stuck in pilot mode, waiting for the regulatory green light required to move ahead.
