
The Emperor Has No Clothes. Or: The Robot Is Naked.

By Michelle Johnson

The AI Governance Gap in Financial Services

JD Vance recently remarked that “the AI future will not be won by handwringing about safety.” He was speaking in the context of the international race to produce the fastest generative AI models: a race run by OpenAI, Anthropic, Google, and others. Yet, intriguingly, trustworthy, explainable, or risk-resilient AI models are not part of the finish line many are racing toward. Safety, it seems, is a second-order concern.

That may be acceptable when deploying chatbots for benign tasks like FAQs or menu planning. But when those same models are deployed inside compliance- and regulation-heavy industries such as financial services, the stakes change instantly. Banks and insurers must be able to explain how their systems work. Increasingly, they cannot.

Even the builders acknowledge the risk. Sam Altman has warned that biometric authentication is already broken, and banks face a deepfake-driven fraud crisis. Yet safety is too often cast as “slowing innovation.”

In financial services, ignoring AI governance isn’t naïve. It’s career-ending. Reputational fallout can be swift and significant. Fraud, mis-selling, AML breaches: these are not hypothetical. They are already here.

The Critical vs. Non-Critical Myth

Traditional governance divides systems into “critical” (like fraud detection, AML, credit scoring) and “non-critical” (chatbots, marketing automation, HR tools). That distinction is fast becoming obsolete.

Take the Air Canada chatbot incident in 2024. A bereaved customer interacting with the airline’s chatbot was misled about refund eligibility. The chatbot promised a policy that did not exist. When Air Canada refused to honour it, the British Columbia Civil Resolution Tribunal ruled the airline was responsible and ordered it to pay damages.

Now ask: if this had been a financial institution, would the regulator have let it slide?

Closer to home, the FCA has issued fines linked to automated marketing tools mis-selling investment products. Generative AI in marketing, often seen as “non-critical,” can cross into regulated territory in an instant.

Moral: Any AI system can go critical the moment it touches customers, compliance, or fraud vectors, especially when that system lacks context, guardrails, or oversight.

Invisible Risks: Drift, SaaS, and Poisoned Context

Even for systems meant to be “critical,” control is slipping:

  • Credit risk drift: A retrain at JPMorgan reportedly led to a 7% mis-scoring spike before being caught.
  • AML drift: National Australia Bank’s AML systems produced false negatives that led to millions in remediation costs. (A minimal drift check is sketched below.)
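
Drift of this kind is catchable with routine monitoring. What follows is a minimal Python sketch, not any bank’s actual pipeline: a population stability index (PSI) check that flags when a model’s score distribution shifts away from an approved baseline. The 0.2 threshold is a common industry rule of thumb, not a regulatory standard, and the data here is synthetic.

import numpy as np

def population_stability_index(baseline, current, bins=10):
    # Quantile cut points from the baseline define the comparison grid,
    # so both score distributions are measured against the same bins.
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    base_counts = np.bincount(np.searchsorted(cuts, baseline), minlength=bins)
    curr_counts = np.bincount(np.searchsorted(cuts, current), minlength=bins)
    # Clip to avoid log(0) on empty bins.
    base_pct = np.clip(base_counts / len(baseline), 1e-6, None)
    curr_pct = np.clip(curr_counts / len(current), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Synthetic stand-ins: last quarter's approved scores vs. post-retrain scores.
baseline_scores = np.random.beta(2, 5, 10_000)
current_scores = np.random.beta(2.4, 5, 10_000)

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:
    print(f"PSI = {psi:.3f}: score drift exceeds threshold; hold the release")

The point is not this particular statistic. The point is that drift only stays invisible when nobody is measuring.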

Then there are the hidden threats:

  • Silent SaaS updates: OpenAI shifted all users onto a new ChatGPT model without notice in 2025, altering behaviour overnight before partially reversing course. (A version-pinning sketch follows this list.)
  • Context poisoning: Microsoft 365 Copilot was recently patched after a zero-click prompt injection vulnerability (dubbed “EchoLeak”) allowed attackers to exfiltrate sensitive data.
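
One partial defence against silent updates is to pin a dated model snapshot instead of a floating alias, so any behaviour change must pass through your own change-control process. Here is a minimal sketch using the OpenAI Python SDK; the snapshot name and the classification task are illustrative only.

from openai import OpenAI

PINNED_MODEL = "gpt-4o-2024-08-06"  # dated snapshot, not a floating alias like "gpt-4o"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_complaint(text: str) -> str:
    # With a pinned snapshot, a vendor-side upgrade cannot silently change
    # behaviour overnight; you decide when to migrate, and you re-test first.
    response = client.chat.completions.create(
        model=PINNED_MODEL,
        temperature=0,  # reduce run-to-run variance for audit purposes
        messages=[
            {"role": "system",
             "content": "Classify this complaint as fraud, service, or other."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

Pinning does not remove the dependency on the vendor (deprecated snapshots eventually disappear), but it converts a silent change into a scheduled one.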

Put drift, SaaS opacity, and poisoned context together, and you have a compliance nightmare.

Black Box vs. Glass Box

Modern generative AI often functions as a black box: results appear without clear logic trails or auditability. Regulators will not accept “the AI told us so.”

Some institutions are experimenting with explainability:

  • Capital One has invested in in-house tools for credit scoring transparency.
  • HSBC has launched AML explainability initiatives.

Methods like SHAP and LIME help, but they are post-hoc fixes, not governance. In highly regulated industries, the only defensible path is glass box AI: traceable, explainable, and reproducible.
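
For concreteness, here is roughly what a post-hoc explanation looks like in practice: a minimal Python sketch using the shap library on a toy risk model. The data and feature names are synthetic and illustrative; the point is that each individual score comes with per-feature contributions a reviewer can inspect.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic, illustrative credit data; feature names are hypothetical.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "utilisation": rng.uniform(0, 1, 1_000),
    "missed_payments": rng.poisson(0.5, 1_000),
})
# Toy target: a risk score driven by utilisation and missed payments.
y = X["utilisation"] + 0.3 * X["missed_payments"] + rng.normal(0, 0.05, 1_000)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes one prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]

for feature, value in zip(X.columns, contributions):
    print(f"{feature}: {value:+.3f}")

Useful, but note what this does not give you: it explains a single output after the fact, and says nothing about data lineage, version control, or who signed off. That is why post-hoc tools alone do not amount to governance.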

If you can’t explain what your AI did, you can’t defend it.

Context as Governance (and Performance)

Generative AI rarely fails from lack of computing power. More often it fails from lack of context.ย 

Jennifer Bemert highlights “context engineering” as the missing discipline:

  • Static: rules and policies
  • Dynamic: transaction feeds
  • Latent: customer histories
  • Temporal: shifting fraud patterns

Without orchestrated context (filtered, versioned, and secured), AI systems fail. One corrupted document or poisoned feed can tip the balance. Governance collapses without explainable context flows.
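
To make “orchestrated” concrete, here is a hypothetical Python sketch of a context assembler: every layer above is filtered before use and versioned by content hash, so an auditor can later see exactly which context a decision was made against. All names are illustrative, not a real system.

import hashlib
import json
from datetime import datetime, timezone

def fingerprint(payload) -> str:
    # A content hash gives each context layer a verifiable version.
    raw = json.dumps(payload, sort_keys=True, default=str).encode()
    return hashlib.sha256(raw).hexdigest()[:12]

def build_context(policies, transactions, customer_history, fraud_patterns):
    layers = {
        "static": policies,            # rules and policies
        "dynamic": transactions,       # transaction feeds
        "latent": customer_history,    # customer histories
        "temporal": fraud_patterns,    # shifting fraud patterns
    }
    # Filtering: drop layers that fail a basic integrity check. A real system
    # would also validate provenance and scan for prompt-injection payloads.
    clean = {name: data for name, data in layers.items() if data}
    manifest = {
        "assembled_at": datetime.now(timezone.utc).isoformat(),
        "layer_versions": {name: fingerprint(data) for name, data in clean.items()},
    }
    return clean, manifest  # the manifest is what you can show a regulator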

Your Organisation and AI

Ask yourself (and be honest):

  • Do you know every AI model in use (including SaaS or employee-adopted tools)?
  • Are those models version-locked and monitored for drift?
  • Could you explain every AI decision to a regulator tomorrow?
  • Who owns AI governance? And do they have authority across your stack?

Few financial services firms can answer “yes” to all four. That alone should spark urgent action. (A minimal sketch of where that action starts, an AI inventory, follows.)
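
The first step toward four “yes” answers is usually an inventory. Below is a hypothetical Python sketch of the minimum fields such a record might carry; every field maps to one of the questions above, and none of the names come from a real framework.

from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str                   # internal model or SaaS tool
    supplier: str               # team or vendor responsible for the model
    pinned_version: str         # exact production version, never a floating alias
    owner: str                  # the person accountable to the regulator
    last_drift_check: date      # a stale date means question two is already a "no"
    decision_log_enabled: bool  # can every output be reconstructed later?

inventory = [
    ModelRecord("card-fraud-scorer", "internal", "v4.2.1",
                "j.smith", date(2025, 6, 1), True),
    ModelRecord("marketing-copy-bot", "SaaS vendor", "unknown",
                "unassigned", date(2024, 11, 3), False),
]

# Anything "unknown" or "unassigned" is an open governance gap.
for record in inventory:
    if record.pinned_version == "unknown" or record.owner == "unassigned":
        print(f"Governance gap: {record.name}")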

Vance may be right that the AI race isn’t won with handwringing. But in financial services, it will be lost through hand-waving.

Perhaps a wiser note belongs to DeepMind’s Demis Hassabis: “I would advocate not moving fast and breaking things.”

In financial services, that isn’t caution. It’s survival.
