Future of AI

AI is your new underwriter – but can you trust it?

By Scott Weller, CTO, EnFi

Ask any lender what makes a good underwriter, and they’ll likely point to human instinct and experience. However, finding those experts is becoming increasingly difficult, which is why many lenders are now turning to generative AI. 

In a recent McKinsey survey of 24 financial institutions, including nine of the top ten US banks, 20 percent had already implemented at least one gen AI use case, and a further 60 percent expected to do so within a year. But while AI promises to boost efficiency, are lenders placing too much trust in their AI underwriter? 

Risk Revolution 

AI is now embedded in nearly every phase of the lending process, from relationship management to underwriting to ongoing monitoring. Lenders use traditional machine learning to assess credit risk, natural language processing (NLP) to extract insights from financial documents, and, increasingly, generative AI to support analysts in drafting memos or interpreting borrower disclosures. While not all AI in lending is “generative,” the broader category of AI and trusted analytics has been part of credit decision-making for over a decade. 
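To make the non-generative layers concrete, here is a minimal sketch of the traditional machine learning piece: a small, inspectable logistic regression estimating probability of default from structured borrower features. The feature names, synthetic data, and threshold are invented for illustration and don’t reflect any production model.

```python
# A minimal, illustrative credit-risk scorer: logistic regression over
# hypothetical structured features (all data here is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Invented features: debt-to-income ratio, years in business, current ratio.
X = rng.normal(loc=[0.35, 8.0, 1.5], scale=[0.10, 4.0, 0.5], size=(1000, 3))
# Synthetic default labels loosely tied to debt-to-income, for illustration only.
y = (X[:, 0] + rng.normal(0, 0.1, 1000) > 0.45).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score one hypothetical applicant; the coefficients stay readable,
# which is part of why simple models persist in credit decisioning.
applicant = [[0.42, 3.0, 1.1]]
print(f"P(default) = {model.predict_proba(applicant)[0, 1]:.2f}")
print("Coefficients:", dict(zip(["dti", "years_in_business", "current_ratio"],
                                model.coef_[0].round(2))))
```

Simple, inspectable models like this remain common in credit precisely because their coefficients can be read, challenged, and documented.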

It isn’t just speed or cost efficiency that’s driving adoption; it’s the ability to process massive volumes of data, ensure more consistent decisions, and stay competitive in fast-moving markets. But hype does outpace reality in some areas. Generative AI, for instance, still requires a tremendous amount of supporting infrastructure to achieve explainability and factual accuracy, which keeps it on the boundary of automated decision-making. The real cost of this shift is both technical and ethical. As AI takes on a bigger role in financial decision-making, one thing’s clear: we need explainable models, industry standards, and people in the loop every step of the way. 

Fast Decisions = Faster Mistakes 

“Move fast and break things” was Meta’s motto during the early days of Facebook, championing the need for rapid innovation and experimentation. But as Meta learned, faster doesn’t always mean better. 

The same is true of AI in lending: it allows lenders to make real-time or near-instant risk decisions, but faster isn’t always better. Speed becomes a liability when it outpaces judgment, especially in edge cases or unfamiliar borrower profiles. Commercial credit decisions aren’t just math problems; they’re trust exercises that require context, nuance, and sometimes a healthy dose of skepticism. 

On the flip side, there are times when AI spots something human teams miss. In one case, an AI model flagged subtle inconsistencies between a borrower’s stated income and its transactional behavior, discrepancies a human might have missed without digging. That insight led to a deeper review and ultimately prevented a bad loan. 
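The kind of consistency check described above can be structurally simple. In the sketch below, the borrower, the deposit figures, and the 20 percent tolerance are all invented for illustration; the point is that a large gap routes the file to an underwriter rather than triggering an automatic decline.

```python
from dataclasses import dataclass

@dataclass
class Borrower:
    name: str
    stated_annual_income: float
    monthly_deposits: list[float]  # observed inflows from bank transaction data

def income_discrepancy(b: Borrower, tolerance: float = 0.20) -> dict:
    """Compare stated income to deposit-inferred income; flag big gaps."""
    inferred = sum(b.monthly_deposits) / len(b.monthly_deposits) * 12
    gap = abs(b.stated_annual_income - inferred) / b.stated_annual_income
    return {
        "inferred_annual_income": round(inferred),
        "relative_gap": round(gap, 2),
        # Route to an underwriter for review; never auto-decline on this alone.
        "flag_for_review": gap > tolerance,
    }

borrower = Borrower("Acme LLC", stated_annual_income=480_000,
                    monthly_deposits=[22_000, 25_000, 19_000, 24_000])
print(income_discrepancy(borrower))
# {'inferred_annual_income': 270000, 'relative_gap': 0.44, 'flag_for_review': True}
```

Note that the flag triggers review, not rejection; the threshold is a routing rule, not a credit decision.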

In another, AI conducted deep research on a private company borrower, surfacing a litigation risk buried in an obscure regulatory filing, something a busy credit team may never have had time to investigate. These moments highlight AI’s strength: revealing what’s hidden, overlooked, or too time-intensive to catch manually. But the bottom line is that a human should be in the loop to make the final call. 

Automation Without Accountability 

AI-led decision-making still requires accountability. In commercial credit, explainability is both qualitative and quantitative, often rooted in institutional knowledge that takes years to fully internalize. It’s not just about surfacing a model’s decision path; it’s about designing workflows that keep humans in control. This means building systems where underwriters can review, provide feedback, and make final judgment calls, preserving critical thinking and ethical oversight while compressing repetitive or mechanical tasks. 
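In code, that control point can be as simple as refusing to finalize a decision without a named reviewer attached. The sketch below is a hypothetical, in-memory version of such a review queue; the class names, statuses, and fields are assumptions for illustration, not EnFi’s product or any standard API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Decision(Enum):
    APPROVE = "approve"
    DECLINE = "decline"
    ESCALATE = "escalate"

@dataclass
class ReviewItem:
    application_id: str
    model_recommendation: Decision
    model_rationale: str                      # surfaced decision path, for explainability
    final_decision: Optional[Decision] = None
    reviewer: Optional[str] = None
    override_note: str = ""

def finalize(item: ReviewItem, reviewer: str, decision: Decision,
             note: str = "") -> ReviewItem:
    """Only a named human reviewer can turn a recommendation into a decision."""
    item.reviewer = reviewer
    item.final_decision = decision
    if decision != item.model_recommendation:
        # Overrides are recorded as feedback for audit trails and retraining.
        item.override_note = note or "override reason required"
    return item

item = ReviewItem("APP-1042", Decision.DECLINE,
                  "DTI above segment norm; thin deposit history")
finalize(item, reviewer="j.alvarez", decision=Decision.APPROVE,
         note="Seasonal business; deposits concentrated in Q4")
print(item.final_decision, "by", item.reviewer, "|", item.override_note)
```

The design choice that matters is structural: the model’s recommendation and the human’s final decision are separate fields, and any divergence between them is captured as feedback for audit and retraining.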

If a regulator asks what shouldn’t be automated, the answer is this: the act of evaluating creditworthiness and assessing risk. These are processes that must remain human-led. AI can enhance visibility and surface risks faster, but final accountability must rest with people. The future lies in AI-powered “systems of action”: tooling that brings risk to light continuously while enforcing review, context, and judgment at every step. 

Regulatory frameworks are largely keeping pace with AI adoption in credit, not just because of the regulators themselves, but thanks to a growing community of thoughtful practitioners who understand that trust isn’t a feature you add later. It’s fundamental. From explainability requirements to fairness audits, we’re seeing a concerted effort to build responsible AI into the core of underwriting systems. That said, the work is far from done, especially as generative and autonomous systems introduce new complexities around traceability and oversight. 

Will AI in credit cause a “model collapse” moment? Possibly, but not in the way people expect. The greatest risks are less about automating underwriting decisions and more about using AI to obscure intent or exploit scale. If a scandal happens, it’s more likely to stem from misuse, such as bypassing regulations or committing fraud with AI’s help, than from well-governed credit models making bad calls. 

Responsible Risk 

Responsible AI in underwriting should function like an aircraft’s autopilot: a valuable aid, but never a substitute for human oversight. Just as pilots must be able to monitor, understand, and intervene when necessary, credit professionals need full visibility into AI models. That means ensuring transparency, auditability, and robust human-in-the-loop controls. Responsible AI begins with clear benchmarks for accuracy and fairness and is sustained by workflows that keep experts actively involved, such as reviewing outputs, applying judgment, and making final decisions. Warning signs that a model isn’t ready for lending include undetected model drift, lack of performance tracking, narrow training data, no override procedures, absent validation frameworks, missing feedback loops, and inconsistent performance across borrower profiles. If a bank can’t explain how or when a model is reviewed, challenged, or corrected, the system simply isn’t ready to be trusted. 
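To make one of those warning signs concrete: a widely used guard against undetected model drift is the Population Stability Index (PSI), which compares the score distribution a model was validated on against the scores it produces in production. The sketch below runs on synthetic data; the decile bucketing and the 0.10/0.25 alert bands are industry rules of thumb, not regulatory thresholds.

```python
# Population Stability Index: how far has the live score distribution
# shifted from the distribution the model was validated on?
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline and a live score distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))  # decile cut points
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)       # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(7)
validation_scores = rng.beta(2.0, 5.0, 10_000)   # distribution at model sign-off
live_scores = rng.beta(2.6, 4.0, 10_000)         # production population has shifted

value = psi(validation_scores, live_scores)
status = "stable" if value < 0.10 else "watch" if value < 0.25 else "investigate"
print(f"PSI = {value:.3f} -> {status}")          # rule-of-thumb bands, not regulation
```

A rising PSI doesn’t say the model is wrong, only that the population it sees no longer matches the one it was validated on, which is exactly the moment a human review should be forced.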

AI Is Moving Finance from Friction to Flow 

AI is on track to take over the bulk of underwriting, not to replace human judgment but to remove the friction that slows it down. Tasks like data entry, document parsing, and early-stage analysis can be done faster and with fewer errors. The result? Quicker decisions. Greater accuracy. A smoother experience for customers. And credit access for businesses that once slipped through the cracks. 

Still, speed means nothing without trust. Just as customer service lines offer the choice to speak to a real person, AI in lending must preserve a human path. Opting out of fully automated decisions shouldn’t be a luxury. It should be a given. Transparency and choice aren’t just best practice. They are the foundation of responsible credit. 

But here’s the part no one wants to admit. The problem isn’t just AI. It’s the systems around it. Many lenders are still running on spreadsheets, PDFs, and broken workflows. Delays, errors, and inconsistencies are already baked in before a model ever makes a call. Years of transformation have promised change but delivered complexity instead. 

AI won’t fix everything, but it can help the banking industry finally catch up to the future it keeps talking about. 
