
Steering Clear of AI Black Boxes in Modernisation Plans

By Steve Morgan, Banking Industry Market Lead at Pegasystems

Generative AI (GenAI) has taken the market by storm, helping sectors modernise and streamline their processes. Despite the many benefits the technology offers, though, errors still occur, and their impact varies in severity from industry to industry.

For example, imagine leveraging GenAI to create a new advertisement for, say, a car. The image is great and the headline eye-catching, but you realise too late that the car depicted is in a colour that is not available and the copy mentions a feature that is clearly not there. The advertisement must be pulled and fixed, but the damage is limited.

In marketing, being only 98% accurate might be fine (though probably not!) and can be treated as part of a learning curve – learning from your mistakes is what you might expect. This is not the case in the banking sector, however. Even small errors in how you process a mortgage lending request or a credit card dispute can be significantly damaging for a financial services firm, causing financial losses, reputational damage and customer dissatisfaction.

Customer expectations are high, and customers are prone to understandably high levels of stress when it comes to major financial commitments such as a home, retirement savings or time-sensitive payments for goods and services. This increases the pressure on service to be timely and of high quality. Applying GenAI to, say, customer service in banking raises the bar further: the experience must be seamless and frictionless, and customers must be able to reach a person easily whenever they want to, for any reason.

With this in mind, and as the sector starts to make further use of advanced technologies, concerns about generating even the smallest inaccuracies are leading some businesses to push the pause button on investing in new technologies, including agentic AI – or, if not a pause, then an internal or 'friends and family' launch test before wider release.

What you need to know about agentic AI   

Agentic AI is an interesting topic. In the past, agents were only successful in narrowly defined domains; with the power of GenAI, however, agents can now comprehend goals and context. Think of them as being much like a human agent. They need to make decisions and take action in the context of guidelines, policies and procedures. Just as with a human agent, they need to be able to explain and justify why a certain decision or action was taken. They can develop and execute plans using various tools, effectively transforming GenAI services from passive entities into active agents. This means they must be transparent about how a goal or conclusion was reached, what steps were taken, and on the basis of what data and intermediate conclusions.
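One way to picture this transparency requirement is as a structured audit trail attached to every decision an agent takes: each step records the action, the data relied on and the intermediate conclusion drawn. The sketch below is purely illustrative – the class names, fields and policy details are assumptions for the example, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class AuditStep:
    """One step in an agent's reasoning: what was done, on what data, with what result."""
    action: str
    data_used: str
    intermediate_conclusion: str

@dataclass
class AgentDecision:
    """A decision plus the full trail needed to explain and justify it later."""
    goal: str
    outcome: str
    steps: list[AuditStep] = field(default_factory=list)

    def explain(self) -> str:
        """Render the trail as an auditor-readable justification."""
        lines = [f"Goal: {self.goal}", f"Outcome: {self.outcome}"]
        for i, s in enumerate(self.steps, 1):
            lines.append(f"  {i}. {s.action} (data: {s.data_used}) -> {s.intermediate_conclusion}")
        return "\n".join(lines)

# Hypothetical credit card dispute handled by an agent.
decision = AgentDecision(goal="Assess credit card dispute", outcome="Refund approved")
decision.steps.append(AuditStep("Verified transaction", "card ledger", "charge duplicated"))
decision.steps.append(AuditStep("Checked dispute policy", "policy v4.2", "duplicate charges auto-refundable"))
print(decision.explain())
```

The point of the structure is that the justification is produced from the same record the agent acted on, so an auditor, regulator or manager can replay exactly why the outcome was reached.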

Agentic AI Apprehension   

Numerous leaders in the banking sector are rightly reconsidering the existing rule-based and policy-based frameworks their teams work within. What is needed to justify a decision to an internal audit team, a regulator or escalated management levels is no different for agentic AI than for a human agent. While agents can make discretionary decisions, such as approving mortgages or business loans, they operate within predefined parameters and escalation paths. The same structured approach is needed for successful agentic AI adoption.

Those who are enthusiastic about agentic AI proudly claim they are seeing agent accuracy of 95% or more. But a 95% success rate would certainly not be acceptable to a CIO – would you trust a bank that got 5% of your transactions wrong? The real benchmark for any process to which agentic AI is applied is the accuracy of human decisions and action completion before the change. One major global bank has achieved accuracy levels above 98% and service level attainment of 99.5% in a complex area of legal operations. Before GenAI and workflow automation were applied, accuracy was at best 95% and service level attainment between 40% and 50%. There is no question the new process is an improvement, with humans in the loop for certain levels of risk checking and quality assurance.

Regulating Agentic AI 

To ensure agentic AI is not spread aimlessly across the sector but instead properly supports businesses, financial firms need to familiarise themselves with the first flush of agentic AI technology available. It can work, but expert change management and testing are needed to ensure success.

But how do you achieve this? The key is harnessing the powerful new cognitive and active abilities of agentic AI within the structured approach of workflow software. These platforms ensure that regulated and complex processes comply with predetermined guidelines, working in concert with established rules, policies and escalation frameworks that serve as protective boundaries against error. Humans are also susceptible to mistakes; the goal is to minimise such occurrences through reasonable checks and balances.
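In workflow terms, those protective boundaries often amount to simple routing rules: an agent's proposed action is only executed automatically if it passes predefined policy checks, and otherwise it escalates to a human. The sketch below illustrates the idea; the limits, thresholds and function names are hypothetical examples, not a real product's configuration:

```python
# Hypothetical guardrails for an agent handling loan decisions.
APPROVAL_LIMIT = 25_000   # assumed discretionary limit for the agent
MIN_CONFIDENCE = 0.98     # assumed confidence required to act unaided

def route_loan_decision(amount: float, confidence: float) -> str:
    """Route a proposed decision per predefined escalation parameters:
    auto-approve only when both policy checks pass, otherwise escalate."""
    if amount > APPROVAL_LIMIT:
        return "escalate: amount exceeds agent's discretionary limit"
    if confidence < MIN_CONFIDENCE:
        return "escalate: confidence below quality-assurance threshold"
    return "auto-approve"

print(route_loan_decision(10_000, 0.995))  # within limits -> auto-approve
print(route_loan_decision(40_000, 0.999))  # over limit -> human review
```

The design choice mirrors how human agents already work: discretion within limits, with anything outside those limits routed up the escalation chain rather than acted on.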

There has always been apprehension about opaque, black-box AI solutions that cannot be opened up to correct errors or make the essential edits that would align them with updated rules and regulations. Predictive AI, by contrast, can be clear about how it assessed a task and how it reached its conclusion.

The same must apply to agentic AI. For the technology to work successfully and support the banking sector, agentic AI solutions must be transparent, with every agent demonstrably predictable, thoroughly audited and optimised for success.
