Closing the trust gap: When it comes to AI, the banking industry is solving the wrong problem

By Julien Villemonteix, CEO of UpSlide

In banking, trust is not an abstract value—it is a commercial asset, earned slowly and lost quickly. 

Yet as artificial intelligence rapidly reshapes financial services, that trust is facing a new kind of stress test.  

The industry’s enthusiasm for AI is understandable. Generative models promise faster analysis and unprecedented productivity gains.  

But in the rush to deploy these tools at scale, banks risk overlooking a more fundamental question: not whether AI can make work faster, but whether it can be trusted to safeguard the relationships and reputations on which the industry depends. 

Recent warnings from global institutions underline this concern. The World Economic Forum has cautioned that financial services could lose public confidence if AI adoption outpaces governance. Similarly, a joint report from the Chartered Institute for Securities & Investment (CISI) and the Association of Chartered Certified Accountants (ACCA) warns that unchecked use of generative AI could undermine the trust at the heart of client relationships.

These are not abstract, theoretical risks. They are already materialising inside banks today. 

The uncomfortable truth is that when it comes to AI, much of the banking industry is solving the wrong problem. 

Trust is the real currency of banking 

Every day, investment banks produce vast amounts of client-facing and public content: pitch books, valuation reports, market updates, and IC memos. These documents are not merely informational. They are statements of competence. A single error, whether an incorrect EBITDA figure, a miscalculation, or a citation that does not exist, can call into question the credibility of the entire institution.

Nowhere is this more evident than in mergers and acquisitions. In M&A, trust is both fragile and decisive. Billion-dollar deals can be derailed by a flawed financial model, inconsistent assumptions, or poor attention to detail. Against this backdrop, placing complete faith in generative AI is not innovation—it is a risk. 

And yet, that is precisely what many firms may be tempted to do. 

The productivity obsession 

The dominant narrative around AI in banking is one of speed. Faster drafting. Faster analysis. Faster turnaround times. In a highly competitive market, where responsiveness is often linked to excellence, these benefits are compelling. 

In a recent survey we conducted among M&A professionals, 92% said they believe AI tools will shorten deal cycles over the next two years. By automating early-stage analysis, accelerating document preparation, and generally reducing friction across workflows, many believe AI could fundamentally reshape how deals are done. 

There is truth in this. AI can undoubtedly remove friction from repetitive tasks. It can summarise documents, extract data, and support analysts under intense time pressure. Used correctly, it can free up human expertise for higher-value work. 

But there is a crucial caveat. 

The same survey revealed that 82% of those who believe AI will shorten deal cycles also believe it increases the risk of errors reaching clients. This tension between speed and reliability is at the heart of the trust gap facing financial services.

When speed becomes a liability 

We are already seeing the consequences of prioritising speed over trust. Inside major financial institutions, stories of AI-related errors are becoming increasingly common. 

Often these involve documents circulated internally with hallucinated citations: references to reports, studies, or data sources that simply do not exist or are out of date.

At first glance, these decks and reports look polished and professional. Only closer inspection reveals the flaws. 

Even firms with the strongest governance and professional standards have hit the headlines for submitting reports containing fabricated citations generated by AI. If such organisations can fall victim to these errors, banks are certainly not immune.

Clients may not always spot these errors immediately. But when they do, the damage is lasting. Trust, once broken, is difficult to rebuild. 

This is why speed, on its own, is a vanity metric. 

Whether an AI tool can summarise a document in three seconds instead of ten is largely irrelevant if the output cannot be trusted. Productivity gains that increase the likelihood of errors are not progress; they are added risk.

The future of AI in banking will not be decided by marginal improvements in turnaround time. It will be decided by whether institutions can deploy AI in a way that enhances accuracy, consistency, and reliability. 

Governance is necessary, but not sufficient 

Much of the current debate around AI in financial services focuses on governance. And rightly so. Clear policies, regulatory oversight, and ethical frameworks are essential. Banks must understand where data comes from, how models are trained, and how outputs are used. 

But governance alone will not close the trust gap. 

The real challenge lies in operational reality: how AI is embedded into everyday workflows, how outputs are checked, and how responsibility is assigned. Too often, AI tools are introduced as standalone solutions—bolted onto existing processes without sufficient thought about quality control or human oversight. 

This creates a dangerous dynamic. Under pressure to deliver faster, teams may rely too heavily on AI-generated outputs, assuming they are “good enough.” Over time, this normalises a lower standard of scrutiny, precisely when scrutiny should be increasing. 

AI should not replace human judgment in banking. It should augment it. But augmentation only works if roles are clearly defined. 

Humans are responsible for context, judgment, and accountability. AI excels at pattern recognition, automation, and scale. Problems arise when these roles blur—when AI outputs are treated as authoritative rather than provisional. 

In high-stakes environments like M&A, there must always be a clear line of accountability. Someone must be responsible for every number, every statement, every assumption. If that responsibility is implicitly shifted to an algorithm, trust is undermined by design. 

Banks therefore need to rethink not just which AI tools they use, but how they use them. 

Building trust into AI workflows 

Closing the AI trust gap will require a combination of tools, talent, and processes. 

First, tools must be purpose-built for the realities of financial services. Generic generative AI models, trained on broad internet data, are not designed for the precision and accountability that banking demands. AI systems need guardrails: controlled data sources, auditability, and integration with existing quality controls. 

Second, people matter more than ever. Skilled professionals who understand both finance and the finance-specific AI technology used to automate elements of review are essential. Banks need teams who can interrogate AI outputs, understand their limitations, and intervene when necessary. Training is critical: not just on how to use AI, but on when not to use it.

Third, processes must evolve. Quality assurance cannot be an afterthought. Review, validation, and version control need to be embedded into AI-enabled workflows from the start. The goal is not to slow teams down, but to ensure that speed does not come at the expense of reliability. 

When these elements work together, AI can genuinely enhance productivity—without compromising trust. 

Protecting the most valuable asset 

At its core, this is about protecting what matters most to banks: client relationships. 

Clients do not choose advisors simply because they are the fastest to produce a deck. They choose them because they trust their judgment, their strategic advice, and their attention to detail. AI should strengthen these qualities, not weaken them.

Used responsibly, AI can free bankers from manual tasks and allow more time for strategic thinking and client engagement. But this future is only achievable if trust is treated as a first-order design principle, not a secondary concern. 

The banking industry is at a crossroads. One path leads to faster workflows but fragile trust. The other leads to sustainable productivity grounded in accuracy, accountability, and confidence. 

Closing the AI trust gap will not be easy. It requires resisting the temptation of quick wins and focusing instead on long-term value. It means recognising that in finance, trust is not a by-product of innovation—it is its prerequisite. 

As we move through 2026, the institutions that succeed will not be those that deploy AI the fastest, but those that deploy it most thoughtfully. They will understand that when it comes to AI, the real challenge is not speed. It is trust. 

 
