Operational risks: Why the banking industry is still nervous about AI

By Rowena Rix, Head of Innovation and AI for Dentons UK, Ireland and Middle East

Artificial intelligence has huge potential to boost accuracy and efficiency in banking, but the sector harbours deep-seated reservations about adopting the technology for some applications, writes Rowena Rix.

Artificial intelligence (AI) is regarded as an increasingly important technology for the banking sector.

Proponents argue that, when used as a tool to power internal operations, AI can help banks and other financial service providers improve customer service, fraud detection and operational management, among other areas.

According to a recent survey by Dentons, 74% of financial services sector respondents said they were already using AI for IT and cybersecurity purposes, while 72% had deployed it for customer service and support functions.

The other major areas for AI adoption, according to the survey, were sales and marketing; research and development; and finance and accounting – all at 69%.

However, beyond these predominantly back-office functions, many banks still have concerns about deploying AI in key operational areas of their businesses, particularly their transactional departments, resulting in a lack of clear direction on the technology.

Dentons’ survey found that just 29% of financial services sector respondents had a formal AI roadmap or internal strategy in place as of mid-2024.

Broadly, the banking sector is working at varied stages and speeds of AI adoption.

Where banks are reluctant to deploy AI in core business areas, the anecdotal reasons put forward include limitations on the technology’s capabilities in certain use cases and organisational risk tolerances that are not flexible enough to accept the margins of error associated with AI.

Dentons’ survey also revealed some more specific apprehensions that banks and other financial service providers harbour about AI.

The biggest concern, expressed by 57% of sector respondents, was that a lack of human oversight of certain tasks would lead to errors, which in turn raises questions about liability for AI-generated mistakes.

The next most prominent concerns were the weakening of the human talent pipeline through reliance on technology (52%) and skills gaps (49%), the latter suggesting that banks do not yet feel they have sufficient in-house expertise to deploy the technology safely and effectively.

There is an acceptance that these fears will be allayed – partially or fully – through the development of internal frameworks, including strong governance structures, that ensure visibility of AI usage and enable appropriate risk assessment and mitigation.

These are areas where, again, anecdotal evidence indicates there is significant variation in progress – partly because of the novelty and complexity of the task at hand.

There is a shared sense of challenge for banks and other financial service providers in creating unified governance approaches that span different business areas and address different tools deployed for diverse use cases, each with varying legal issues and associated risk tolerances.

One solution to this, more easily suggested than executed, is to implement processes flexible enough to evolve over time, as the extent and nature of AI usage within an organisation develops and as regulation of the technology inevitably changes.

Setting up triage processes for reviewing and risk-rating AI tools before they are adopted is increasingly regarded as best practice for AI implementation among businesses in general, and financial services is following suit.

Factors that feed into risk-rating methodologies include the extent of external usage and visibility of an AI tool’s output; the commercial and/or regulatory sensitivity of the input data needed to produce valuable output; reputational factors; and the broadest possible consequences of AI-driven decisions.
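By way of illustration only, the sketch below shows one way such factors might be combined into a simple scoring rubric during triage. The factor names, 0–3 scales and risk bands are hypothetical assumptions made for the example; they are not a methodology drawn from the survey or prescribed by any regulator.

```python
from dataclasses import dataclass

# Hypothetical triage rubric: the factor names, 0-3 scales and band
# thresholds are illustrative assumptions, not a prescribed methodology.
@dataclass
class AIToolAssessment:
    external_visibility: int   # 0-3: how widely the tool's output is seen outside the firm
    data_sensitivity: int      # 0-3: commercial/regulatory sensitivity of the input data
    reputational_impact: int   # 0-3: potential reputational harm if the tool errs
    decision_consequence: int  # 0-3: worst-case impact of decisions the tool drives

    def risk_score(self) -> int:
        """Sum the factor scores; a higher total means more scrutiny before adoption."""
        return (self.external_visibility + self.data_sensitivity
                + self.reputational_impact + self.decision_consequence)

    def risk_band(self) -> str:
        """Map the score to a triage band that determines the review route."""
        score = self.risk_score()
        if score >= 9:
            return "high"    # e.g. full legal, compliance and model-risk review
        if score >= 5:
            return "medium"  # e.g. targeted review of the flagged factors
        return "low"         # e.g. lightweight sign-off

# Example: a customer-facing chatbot answering regulated-product queries
chatbot = AIToolAssessment(external_visibility=3, data_sensitivity=2,
                           reputational_impact=3, decision_consequence=2)
print(chatbot.risk_band())  # -> "high"
```

In practice an institution would weight the factors to its own risk appetite and route each band to a defined governance body; the essential point is that the rating is recorded before adoption, not after.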

In tandem with triage and risk-rating processes, many large financial institutions are setting up internal technology ‘sandboxes’, which cater for AI and other types of financial technology (fintech), for testing, feedback and informed decision-making.

One advantage of a cautious approach to AI adoption is that it presents an opportunity to build an AI inventory as tools are adopted, which is advisable in any case and likely to become a legal requirement under developing AI regulations.
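To illustrate what building such an inventory might look like in practice, the sketch below records one entry per approved tool. The fields, tool name and vendor are assumptions made for the example, not a schema mandated by any existing or pending regulation.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative inventory record: the fields and example values are assumptions
# about what a regulator-ready register might capture, not a mandated schema.
@dataclass
class AIInventoryEntry:
    tool_name: str
    vendor: str
    business_area: str
    use_case: str
    risk_band: str     # e.g. the output of a triage rubric like the one above
    approved_on: date
    review_due: date   # a review date keeps the register current over time

inventory: list[AIInventoryEntry] = []
inventory.append(AIInventoryEntry(
    tool_name="DocSummariser",   # hypothetical tool name
    vendor="ExampleVendor Ltd",  # hypothetical vendor
    business_area="Customer service",
    use_case="Summarising complaint correspondence",
    risk_band="medium",
    approved_on=date(2024, 6, 1),
    review_due=date(2025, 6, 1),
))
```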

There are also broad and extremely thorny change management considerations and risks, not least the pressing need to handle the impact of AI on workforces, including job replacement and effective training.

Despite its nervousness about the technology, the banking and financial services sector seems very sure of one thing: according to Dentons’ survey, 78% of sector respondents believe that organisations that fail to embrace AI will become increasingly unviable as time goes on.

This sets in sharp relief the pressure on banks to close the gap between their AI ambitions and the actions they are taking to achieve them.
