
Ask any lender what makes a good underwriter, and they'll likely say human instinct and experience. However, finding those experts is becoming increasingly difficult, which is why many lenders are now turning to generative AI.
In a recent McKinsey survey of 24 financial institutions, including nine of the top ten US banks, 20 percent had already implemented at least one gen AI use case, and a further 60 percent expect to do so within a year. But while AI promises to boost efficiency, are lenders placing too much trust in their AI underwriter?
Risk Revolution
AI is now embedded in nearly every phase of the lending process, from relationship management to underwriting to ongoing monitoring. Lenders use traditional machine learning to assess credit risk, natural language processing (NLP) to extract insights from financial documents, and increasingly, generative AI to support analysts in drafting memos or interpreting borrower disclosures. While not all AI in lending is "generative," the broader category of AI and trusted analytics has been a part of credit decision-making for over a decade.
It isn't just speed or cost efficiency that's driving adoption; it's the ability to process massive volumes of data, ensure more consistent decisions, and stay competitive in fast-moving markets. But hype does outpace reality in some areas. Generative AI, for instance, still requires a tremendous amount of supporting infrastructure to achieve explainability and factual accuracy, which keeps it on the boundary of automated decision-making. The real cost of this shift is both technical and ethical. As AI takes on a bigger role in financial decision-making, one thing's clear: we need explainable models, industry standards, and people in the loop every step of the way.
Fast decisions = faster mistakes
"Move fast and break things" was Meta's motto during the early days of Facebook, championing the need for rapid innovation and experimentation. But as Meta learned, faster doesn't always mean better.
Similarly, AI allows lenders to make real-time or near-instant risk decisions, but that isn't always a good thing. Speed can become a liability when it outpaces judgment, especially in edge cases or unfamiliar borrower profiles. Commercial credit decisions aren't just math problems; they're trust exercises that require context, nuance, and sometimes a healthy dose of skepticism.
But on the flip side, there are times when AI might help spot something human teams miss. In one case, an AI model flagged inconsistencies in a borrower's financials. These were subtle discrepancies between stated income and transactional behavior that a human might have missed without digging. In turn, that insight led to a deeper review and ultimately prevented a bad loan.
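The kind of discrepancy described here can be expressed as a simple screening rule. The sketch below is purely illustrative: the function name, the 25 percent shortfall threshold, and the use of deposit totals as a proxy for income are all assumptions, not a description of any lender's actual model.

```python
# Illustrative sketch: flag borrowers whose observed deposit activity
# diverges sharply from stated income, so a human can take a closer look.
# The 25% threshold and all names here are hypothetical.

def flag_income_discrepancy(stated_annual_income, monthly_deposits, tolerance=0.25):
    """Return True when annualized deposits fall short of stated income
    by more than `tolerance` (a fraction), suggesting a deeper review."""
    if not monthly_deposits:
        return True  # no transaction history at all is itself a red flag
    annualized = sum(monthly_deposits) / len(monthly_deposits) * 12
    shortfall = (stated_annual_income - annualized) / stated_annual_income
    return shortfall > tolerance

# A borrower stating $120k in income with ~$6k/month in deposits
# ($72k annualized) shows a 40% shortfall and would be routed to review.
print(flag_income_discrepancy(120_000, [6_000] * 12))  # True
```

Note that the flag only routes the file to a person; consistent with the article's argument, the rule recommends review rather than deciding the loan itself.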
In another, AI conducted deep research on a private company borrower, surfacing a litigation risk buried in an obscure regulatory filing. This is something a busy credit team may never have time to investigate. These moments highlight AI's strength: revealing what's hidden, overlooked, or too time-intensive to catch manually. But the bottom line is that a human should be in the loop to make the final call.
Automation Without Accountability
AI-led decision-making still requires accountability. In commercial credit, explainability is both qualitative and quantitative, often rooted in institutional knowledge that takes years to fully internalize. It's not just about surfacing a model's decision path; it's about designing workflows that keep humans in control. This means building systems where underwriters can review, provide feedback, and make final judgment calls, preserving critical thinking and ethical oversight while compressing repetitive or mechanical tasks.
If a regulator asks what shouldn't be automated, the answer is this: the act of evaluating creditworthiness and assessing risk. These are processes that must remain human-led. AI can enhance visibility and surface risks faster, but final accountability must rest with people. The future lies in AI-powered "systems of action": tooling that brings risk to light continuously, while enforcing review, context, and judgment every step of the way.
Regulatory frameworks are largely keeping pace with AI adoption in credit, not just because of the regulators themselves, but thanks to a growing community of thoughtful practitioners who understand that trust isn't a feature you add later. It's fundamental. From explainability requirements to fairness audits, we're seeing a concerted effort to build responsible AI into the core of underwriting systems. That said, the work is far from done, especially as generative and autonomous systems introduce new complexities around traceability and oversight.
Will AI in credit cause a "model collapse" moment? Possibly, but not in the way people expect. The greatest risks are less about automating underwriting decisions and more about using AI to obscure intent or exploit scale. If a scandal happens, it's more likely to stem from misuse, such as bypassing regulations or committing fraud with AI's help, than from well-governed credit models making bad calls.
Responsible Risk
Responsible AI in underwriting should function like an aircraft's autopilot: a valuable aid, but never a substitute for human oversight. Just as pilots must be able to monitor, understand, and intervene when necessary, credit professionals need full visibility into AI models. That means ensuring transparency, auditability, and robust human-in-the-loop controls. Responsible AI begins with clear benchmarks for accuracy and fairness and is sustained by workflows that keep experts actively involved, such as reviewing outputs, applying judgment, and making final decisions. Warning signs that a model isn't ready for lending include undetected model drift, lack of performance tracking, narrow training data, no override procedures, absent validation frameworks, missing feedback loops, and inconsistent performance across borrower profiles. If a bank can't explain how or when a model is reviewed, challenged, or corrected, the system simply isn't ready to be trusted.
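One warning sign above, undetected model drift, is routinely monitored in credit modeling with the population stability index (PSI), which compares a model's score distribution at development time with its live distribution. The sketch below is a minimal, assumed implementation; the common rule of thumb that PSI above roughly 0.25 triggers review is a convention many lenders tune, not a regulatory threshold.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare a model's development-time score distribution (`expected`)
    with its live distribution (`actual`) by binning both over a shared
    range and summing (a - e) * ln(a / e) across bins."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against all-identical scores

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int((s - lo) / width), bins - 1)] += 1
        # small floor keeps ln() defined when a bin is empty
        return [max(c / len(scores), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score ~0; a shifted live population scores high
# and, under the assumed 0.25 convention, would be escalated for review.
dev_scores = [i / 100 for i in range(100)]
live_scores = [0.5 + i / 200 for i in range(100)]
print(population_stability_index(dev_scores, dev_scores))   # ~0.0
print(population_stability_index(dev_scores, live_scores))  # well above 0.25
```

In the spirit of the autopilot analogy, a PSI breach should page a person, not silently retrain the model.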
AI is moving finance from friction to flow
AI is on track to take over the bulk of underwriting. Not to replace human judgment, but to remove the friction that slows it down. Tasks like data entry, document parsing, and early-stage analysis can be done faster and with fewer errors. The result? Quicker decisions. Greater accuracy. A smoother experience for customers. And credit access for businesses that once slipped through the cracks.
Still, speed means nothing without trust. Just as customer service lines offer the choice to speak to a real person, AI in lending must preserve a human path. Opting out of fully automated decisions shouldn't be a luxury. It should be a given. Transparency and choice aren't just best practice. They are the foundation of responsible credit.
But here's the part no one wants to admit. The problem isn't just AI. It's the systems around it. Many lenders are still running on spreadsheets, PDFs, and broken workflows. Delays, errors, and inconsistencies are already baked in before a model ever makes a call. Years of transformation have promised change but delivered complexity instead.
AI won't fix everything, but it can help the banking industry finally catch up to the future it keeps talking about.

