FinanceAI Leadership & Perspective

Explainable AI: Revolutionizing Lending and Personalized Financial Planning 

By Jay Nair, EVP, Industry Head, Financial Services and Public Sector, Infosys

Nowhere is the AI era more evident than in the financial world. When a major Australian bank sought an AI-based solution to its credit decisioning challenges, it went beyond the typical credit bureau data to enable more comprehensive credit ratings, drawing on social media and live data feeds for public sentiment analysis and a holistic understanding of a given entity.

This is the power of AI at work for financial institutions. Combined with the ability to ask nuanced questions, AI-driven insights arm a bank's credit officers with a more complete picture for making their final credit decisions. Overall, AI helps reach a credit outcome for a commercial entity more efficiently, while integrated features such as chatbots elevate the experience for credit officers.

AI in the realm of financial services 

As the financial services landscape continues to shape-shift in response to the many dynamic political, economic, and market forces at work, AI is becoming pivotal.  

Already, AI is enriching areas ranging from high-net-worth-individual relationship management and bundle-contract negotiation and management to proactive payments fraud detection and leakage prevention (see Juniper Research: https://www.juniperresearch.com/press/ai-enabled-financial-fraud-detection-spend/). AI-enabled portfolio analysis gives relationship managers a well-rounded understanding of a portfolio in a cost-effective manner that was not possible before.

The scope of AI can be extended to any other area that is data-intensive and requires quick, comprehensive consolidation of knowledge, summarisation, explanation, and so on.

Shortcomings of the current approach to AI 

Initially, a black-box approach was the norm for AI implementation. However, a lack of clarity, transparency, and fairness around decisions is inherent to this approach. Clients would lose trust in their bankers if denied a loan or given a low credit score without a convincing rationale. 

Moreover, regulators demand an audit trail for decisions and attribution to sources, which can be difficult to produce. These requirements push the need for explainability to the fore. Source data availability is another issue: much of the data is confidential and protected, which restricts what the AI model can draw on. A risk decision system that cannot explain the reasons behind its decisions will simply not be acceptable.
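One common way to make a credit decision auditable is to attach per-feature contributions to every score, so a credit officer can see exactly what drove the outcome. The sketch below illustrates the idea with a toy linear scoring model; the feature names and weights are entirely hypothetical, and a real institution would derive them from a trained, validated model.

```python
import math

# Hypothetical, illustrative coefficients for a toy credit-scoring model.
# A real model's weights would come from training on validated data.
WEIGHTS = {"debt_to_income": -2.0, "years_in_business": 0.3, "payment_delinquencies": -1.5}
BIAS = 1.0

def score_with_explanation(applicant: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return an approval probability plus per-feature contributions (the audit trail)."""
    contributions = [(name, WEIGHTS[name] * applicant[name]) for name in WEIGHTS]
    logit = BIAS + sum(c for _, c in contributions)
    prob = 1 / (1 + math.exp(-logit))
    # Sort so the strongest drivers of the decision appear first.
    contributions.sort(key=lambda kv: abs(kv[1]), reverse=True)
    return prob, contributions

prob, reasons = score_with_explanation(
    {"debt_to_income": 0.6, "years_in_business": 8, "payment_delinquencies": 1}
)
print(f"approval probability: {prob:.2f}")
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```

The ranked contribution list is precisely the kind of attribution an audit trail needs: it records not just the score but why the score came out that way.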

While AI’s applicability in the core business operations of financial institutions is currently limited, applying advanced techniques to solve the explainability question will expand its scope.

However, the regulatory demands are trickier. Within the black-box approach, AI systems can introduce biases and transparency challenges, which might lead to inadvertent regulatory violations. Therefore, moving from the black-box AI model to an explainable AI model is the logical way forward. 

AI for AI – the game-changer 

Making AI models more explainable is pivotal to solving different challenges that financial institutions face, be it in fraud prevention, payments, cybersecurity, or asset allocations. Auditability and traceability are crucial in keeping with core values of integrity and ethics, and a bank’s role in upholding the social fabric. 

It’s also vital for financial institutions to understand the complex relationships between variables – asset allocations, mortgages, lending and so on. While AI algorithms are powerful at unearthing these correlations, it is equally important that the models can also diagnose poor performers.

AI for AI is a powerful idea, central to achieving explainable AI in practice. Its framework relies on transparency, clarity, fairness, bias elimination, human rights protection, equal access, and inclusivity to bridge the digital divide. 

AI for AI as the basis for building explainable AI models will engender confidence while eliminating biases, opening up the path to customer trust and regulatory compliance. Moreover, it lays the foundations of responsible AI adoption, empowering ethical innovation.

Building explainable AI into the flow 

Rather than being a separate exercise, explainable AI must be integrated proactively into model development so that it is explainable by design. Implementation can be bolstered by technical guardrails at each stage of the engineering lifecycle, backed by ecosystem support.

It’s heartening to see many financial institutions overcoming the limited outlook of black-box AI. Thanks to techniques such as reinforcement learning and fine-tuning, today’s Retrieval-Augmented Generation (RAG) systems can continuously improve and adapt based on user interactions and feedback.
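The feedback-driven adaptation described above can be sketched in miniature: retrieval scores are nudged by accumulated user feedback, so documents that users rate as helpful rise in future rankings. This is an illustrative toy only; the corpus, scoring by keyword overlap, and the fixed 1.2/0.8 multipliers are all assumptions, and production RAG systems retrieve with vector embeddings and generate answers with an LLM.

```python
# Toy corpus standing in for a bank's knowledge base (illustrative only).
documents = {
    "doc1": "mortgage lending rates and credit risk",
    "doc2": "payments fraud detection patterns",
    "doc3": "asset allocation for portfolios",
}
feedback_weight = {doc_id: 1.0 for doc_id in documents}  # learned from users over time

def retrieve(query: str) -> str:
    """Rank documents by keyword overlap, scaled by accumulated user feedback."""
    q = set(query.lower().split())
    def score(doc_id: str) -> float:
        overlap = len(q & set(documents[doc_id].split()))
        return overlap * feedback_weight[doc_id]
    return max(documents, key=score)

def record_feedback(doc_id: str, helpful: bool) -> None:
    """A thumbs up/down nudges future rankings: the adaptation step."""
    feedback_weight[doc_id] *= 1.2 if helpful else 0.8

print(retrieve("fraud detection in payments"))  # → doc2
```

The point is the loop, not the scoring function: each interaction updates the weights that shape the next retrieval, which is what lets the system "improve and adapt" rather than stay frozen at deployment.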

Here, the role of data scientists, responsible-AI consultants, domain experts, legal and regulatory experts, and ethicists becomes critical. The effectiveness of the processes used to clean data, address biases, and meet regulatory compliance requirements will also depend on the technology used to train and build the model.

Existing open-source models may not be powerful enough, prompting financial institutions to build custom models, either in-house or through a consortium. For example, when a Danish multinational bank implemented key guardrails and data governance measures in line with explainable AI standards, it put an LLM proxy layer at the core: a gatekeeper monitoring data entering and leaving the AI system. With built-in telemetry and data security measures, this layer was crucial to safeguarding data transmitted to OpenAI APIs and ensuring responsible AI usage.
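A gatekeeper of this kind typically does two things on every call: redact sensitive data before it leaves the institution, and log both directions for telemetry. The sketch below shows that shape under stated assumptions: the redaction patterns, the in-memory audit log, and the stubbed `forward` callable (standing in for the external LLM API call) are all hypothetical simplifications, not the bank's actual implementation.

```python
import re

AUDIT_LOG = []  # in-memory telemetry; a real deployment would persist and secure this

def redact(text: str) -> str:
    """Mask account-number-like and email-like patterns before data leaves the bank."""
    text = re.sub(r"\b\d{8,16}\b", "[ACCOUNT]", text)
    text = re.sub(r"\b[\w.]+@[\w.]+\b", "[EMAIL]", text)
    return text

def proxy_request(prompt: str, forward) -> str:
    """Gatekeeper: redact outbound data, log both directions, then forward.

    `forward` stands in for the external LLM API call (e.g. to a hosted model);
    it is injected here so the sketch runs without network access."""
    safe_prompt = redact(prompt)
    AUDIT_LOG.append(("outbound", safe_prompt))
    response = forward(safe_prompt)
    AUDIT_LOG.append(("inbound", response))
    return response

# Stubbed model call: echoes the (already redacted) prompt back.
reply = proxy_request(
    "Summarise risk for account 1234567890 (contact: jan@example.com)",
    forward=lambda p: f"summary of: {p}",
)
print(reply)
```

Because every request and response passes through one choke point, the proxy is also the natural place to enforce rate limits, block disallowed prompt categories, and feed the audit trail regulators expect.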

Importantly, considering the heavy compliance mandate, financial institutions must engage with regulators early in the process of developing advanced AI applications.

The path to explainability 

To get explainability out of AI, institutions must be ready. As models advance, requirements around investment, infrastructure, and data availability will evolve too. AI applications will become more affordable and accessible, in turn driving more innovation and more powerful benefits, provided responsible AI guardrails are in place and explainability increases.

Shifting to explainable AI models may not be straightforward, but it is necessary, and the groundwork for robust, explainable AI frameworks must be laid now. Documentation is especially important for financial services institutions: how the AI system functions, what data was used to train it, and which algorithms were applied. This promotes explainability, knowledge transfer, accountability, and regulatory compliance.
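One lightweight way to keep that documentation consistent and machine-readable is a structured "model card" recording purpose, training data, algorithm, and review status alongside the model. The fields and values below are hypothetical placeholders, not a prescribed standard, sketched to show the shape such a record might take.

```python
import json

# Hypothetical model card covering the documentation points above:
# purpose, training data, algorithm, and fairness/compliance review status.
model_card = {
    "model": "commercial-credit-scoring-v2",  # illustrative name
    "purpose": "explain and support credit decisions for commercial entities",
    "training_data": ["credit bureau records", "repayment histories"],
    "algorithm": "gradient-boosted trees with per-feature attributions",
    "fairness_checks": {"last_reviewed": "2024-01-15", "bias_audit_passed": True},
    "regulatory_contact": "model-risk@bank.example",  # hypothetical address
}

# Serialised alongside the model, the card travels with every deployment.
print(json.dumps(model_card, indent=2))
```

Versioning such a card in the same repository as the model makes the audit trail, knowledge transfer, and regulatory evidence a by-product of normal development rather than an afterthought.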

Finally, continuous monitoring is essential to keep pace with the rapidly advancing technology and models to ensure they adhere to the parameters of fairness, unbiasedness, and compliance amid the change. 
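A minimal form of such monitoring is drift detection: comparing the distribution of live inputs against the training baseline and alerting when they diverge. The toy check below uses a simple mean-shift threshold on illustrative numbers; production monitoring would use proper statistical tests (e.g. population stability index or Kolmogorov–Smirnov), and the threshold and data here are assumptions.

```python
import statistics

def drift_alert(baseline: list[float], recent: list[float], threshold: float = 0.25) -> bool:
    """Flag when a feature's recent mean drifts from its training-time baseline.

    A toy check: alerts if the mean shifts by more than `threshold` (25%)
    relative to the baseline mean."""
    base_mean = statistics.mean(baseline)
    shift = abs(statistics.mean(recent) - base_mean)
    return shift > threshold * abs(base_mean)

# Debt-to-income ratios seen at training time vs. in live traffic (illustrative).
training = [0.30, 0.35, 0.40, 0.32]
live = [0.55, 0.60, 0.58, 0.62]
print(drift_alert(training, live))  # → True: the model's inputs have shifted
```

An alert like this does not say the model is wrong, only that it is now operating on data unlike what it was validated on, which is exactly the trigger for a fairness and compliance re-review.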
