Why Model Risk Management Needs to Be a 2026 Priority

By Andrew Pery, AI Ethics Evangelist, ABBYY

The rapid pace of AI adoption in financial services has created new, complex risks that traditional Model Risk Management (MRM) frameworks struggle to address. 

Over the past two years, generative AI (GenAI) and advanced machine learning (ML) models have transformed trading, credit scoring, and fraud detection, with a PwC study revealing that over 70% of financial institutions have now integrated advanced AI models into their business-critical applications.

While advanced AI models offer transformative benefits, the Bank for International Settlements (BIS) suggests that AI use may heighten some existing risks, such as model risk (e.g., a lack of explainability makes it challenging to assess the appropriateness of AI models) and data-related risks (e.g., privacy, security, and bias).

Unlike deterministic models that produce predictable and explainable results, advanced AI models are probabilistic, generating plausible but sometimes inaccurate outputs that can lead to fair lending risks and adverse impacts. 

For example, a January 2025 Consumer Financial Protection Bureau Supervisory Highlights report, which analysed the underwriting practices of financial services organisations using more advanced AI algorithms, found a disproportionately high rate of adverse outcomes from models that used more than a thousand variables, a scale that led to overfitting.
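
To make the overfitting mechanism concrete, the sketch below (illustrative only, not drawn from the CFPB report; it assumes Python with numpy and scikit-learn) trains a model on far more variables than informative signals and surfaces the problem the way a validation team would: by comparing in-sample and out-of-sample performance.

```python
# Minimal sketch: detecting overfitting in a high-dimensional credit model.
# Hypothetical data and parameters; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_applicants, n_variables = 5_000, 1_200   # >1,000 variables, as in the CFPB finding

X = rng.normal(size=(n_applicants, n_variables))
# The outcome depends on only a handful of variables; the rest are noise.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n_applicants) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_train, y_train)

auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_test = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# A wide gap between in-sample and out-of-sample AUC is a classic overfitting
# signal that "effective challenge" under SR 11-7 is meant to catch.
print(f"train AUC={auc_train:.3f}  test AUC={auc_test:.3f}  gap={auc_train - auc_test:.3f}")
```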

Converging Model Risk Management with Enterprise-Wide AI Governance  

Both the Basel Committee on Banking Supervision (BCBS) and the U.S. Federal Reserve have underscored the importance of integrating AI into existing MRM frameworks. The Federal Reserve’s long-standing SR 11-7 guidance on model risk explicitly applies to advanced algorithms, including AI and ML, emphasising governance, validation, and “effective challenge.”  

Likewise, BCBS publications, such as its 2022 AI and Machine Learning Newsletter and its 2024 Digitalisation of Finance report, warn that AI introduces new dimensions of model, governance, and financial-stability risk, particularly through opacity and data bias.

These developments signal a shift in supervisory expectations: firms must modernise their MRM frameworks to address the complexity and scale of AI-driven systems or risk falling behind both technologically and regulatorily. Regulators now expect financial institutions to treat AI models as in-scope for MRM, applying robust testing, documentation, and oversight to ensure reliability and accountability.  

Global Regulatory Developments in AI and Model Risk 

Regulators are converging around a unified principle: AI risks must be managed with the same rigour as traditional model risk, but with additional focus on transparency, bias mitigation, and accountability. While the Federal Reserve’s SR 11-7 and SR 13-19 continue to provide the backbone for MRM, newer guidance places added emphasis on AI model interpretability and third-party oversight.

The EU AI Act, adopted in 2024, classifies AI systems used for credit scoring and similar financial decisions as “high-risk”, mandating continuous monitoring, documentation, and human oversight. The Bank of England’s AI Model Governance Principles (2025) now require firms to maintain AI model inventories, bias assessment logs, and validation reports similar to traditional MRM documentation. Regulators in Singapore and Hong Kong have introduced AI Ethics and Governance Codes requiring firms to demonstrate MRM alignment for any AI-based decision-making process.
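
What a “bias assessment log” entry might contain can be illustrated with a simple fairness metric. The sketch below computes the adverse impact ratio, the approval rate of each group relative to a reference group; the group labels, data, and the four-fifths (0.80) threshold are hypothetical illustrations rather than anything a specific regulator prescribes.

```python
# Minimal sketch of a bias-assessment log entry: the adverse impact ratio (AIR),
# i.e., a group's approval rate divided by the reference group's approval rate.
# Group labels, decisions, and the 0.80 threshold are hypothetical illustrations.
from collections import defaultdict

def adverse_impact_ratio(decisions, reference_group):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Hypothetical lending decisions: group A approved 80%, group B approved 55%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)

for group, air in adverse_impact_ratio(decisions, reference_group="A").items():
    flag = "REVIEW" if air < 0.80 else "ok"   # "four-fifths" rule of thumb
    print(f"group {group}: AIR={air:.2f} [{flag}]")
```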

These developments are driving a shift from model validation to model assurance, where MRM not only tests technical soundness but ensures ethical and regulatory compliance across AI lifecycles. 

MRM’s New Role: From Validation to Continuous AI Assurance 

Traditional MRM has long mitigated risk by validating models at discrete points in time and ensuring explainability. But as models become more dynamic, retrained in near real time on streaming data, MRM must evolve into a continuous assurance function.
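
Continuous assurance is often operationalised through distribution-shift monitoring. The sketch below computes the Population Stability Index (PSI), a common drift statistic, between a model’s validation-time score distribution and live production scores; the data and the 0.10/0.25 alert thresholds are illustrative conventions, not regulatory requirements.

```python
# Minimal sketch of continuous drift monitoring via the Population Stability
# Index (PSI). Sample data, bin count, and alert thresholds are illustrative.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline (validation-time) distribution and live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) / division by zero on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # scores at validation time
live = rng.normal(loc=0.4, scale=1.2, size=2_000)       # drifted production scores

score = psi(baseline, live)
# Common rule of thumb: <0.10 stable, 0.10-0.25 monitor, >0.25 investigate/retrain.
status = "stable" if score < 0.10 else "monitor" if score < 0.25 else "investigate"
print(f"PSI={score:.3f} -> {status}")
```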

Modern MRM must include Explainable AI (XAI) to show how black-box models produce their outputs. AI-driven tools should monitor model drift, bias, and data quality issues in real time. AI models need third-party oversight to manage vendor risks. Internal governance, cybersecurity, and data privacy measures, including data lineage, encryption, and security validation, must be integrated into all MRM processes. 
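
One simple, model-agnostic XAI technique is permutation importance, which measures how much held-out accuracy degrades when a single feature is shuffled. The sketch below assumes Python with scikit-learn and uses hypothetical feature names; production XAI stacks typically pair such checks with richer attribution methods such as SHAP.

```python
# Minimal XAI sketch: permutation importance for an otherwise opaque model.
# Data and feature names are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = ["income", "utilisation", "delinquencies", "tenure"]
X = rng.normal(size=(2_000, len(features)))
# Hypothetical ground truth driven mainly by the first two features.
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature 10 times and record the average drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>14}: {imp:.3f}")   # larger = more influence on model outputs
```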

Elevating MRM 

Now, an integrated response is needed to ensure MRM remains fit for purpose in an era of rapid technological change. Some enterprises are already collaborating to minimise risk and make explainable AI for MRM practical.

These partnerships are vital to ensure finance firms avoid these risks and remain viable as AI continues to develop apace. Leveraging technology to augment MRM is not just about compliance; it’s about building resilience, ensuring trust, and preparing financial institutions for the next generation of intelligent, data-driven decision-making.

Companies should also consider adopting a Risk Management Policy for AI, which can promote the responsible and ethical use of the technology. Such a policy helps identify and mitigate potential risks while taking account of developing regulations and aligning with ethical and legal standards.

Model Risk Management is evolving from a compliance checkpoint into a strategic differentiator. Firms that treat MRM as a dynamic, AI-enabled framework rather than a static control function will be best positioned to build resilience, trust, and competitive advantage. 

As regulators, investors, and customers demand greater transparency in AI decision-making, MRM will become the cornerstone of responsible AI adoption in finance. 
