
In February 2024, Klarna announced that its AI assistant had handled 2.3 million customer service conversations in a month, about two-thirds of its chats, and claimed it was doing the equivalent work of 700 full-time agents.
By 2025, Klarna announced it was rebuilding human capacity in customer service, with the CEO acknowledging quality issues after pushing too far towards an AI-heavy service model. Whatever the exact internal metric was, the external signal was that absorbing volume is not the same as holding a service standard.
That’s the real point of the story. The question is not whether AI can produce output at scale. The question is what makes AI-assisted work dependable in the moments that matter.
The mistake is treating headcount reduction as the operating model
Some tasks will be automated and some roles will shrink. Those are outcomes that most organisations will see.
But labour substitution becomes fragile when it becomes the whole strategy, because then the organisation tries to remove the very layer that currently makes probabilistic systems workable: judgement, verification, escalation and clear ownership.
AI can be fluent and wrong at the same time. So reliability is not a property of the model alone. It is a property of the surrounding system: how work is framed, what gets checked and who is accountable when it goes wrong.
Once you accept that usage will happen and errors will happen, the next question becomes sharper: What happens when you remove the people who would normally catch those errors before they become customer reality?
“Speed versus safety” is usually the wrong framing
Leaders often talk about AI as a trade-off: move fast or stay safe. In practice, that framing pushes organisations towards brittle extremes.
If you try to govern AI before people have enough lived competence, you get one of two outcomes. Either the rules block delivery or the rules are so abstract that teams ignore them.
People do not stop working because leadership is uncertain. They route around friction to meet deadlines.
The UK National Cyber Security Centre makes the same point about shadow IT: workarounds indicate that policy and user needs are misaligned. They go on to say that the fix is to refine the rules and address the underlying need so that activity can be brought above board.
To put that into context, Salesforce reports that more than half of generative AI adopters use unapproved tools at work, tied to unclear policies and guidance. So the first leadership task is not “write stricter rules”. It’s “create a safe lane that is easier than the workaround”.
Governance that holds starts with empowerment
Many AI programmes build governance artefacts and mistake them for governable behaviour. Policies, approvals, catalogues and training completion can all exist while real work stays uneven.
You’ll see one team using AI carefully, another using it sloppily and a third staying quiet because disclosure feels politically risky. That variance is where quality drops and risk goes quiet.
AI empowerment is the alternative posture. Build human capability in a real work “playground” first, then lock it in through systems and governance so the organisation can rely on it.
This is the congruence point that many programmes miss: governance becomes executable only when it is embedded in how work is actually done. Until then, “governance” becomes either theatre or friction.
The unit of durable change is the workflow
Tool rollouts create activity, but they do not create organisational reliability.
If AI stays a personal productivity trick, you get pockets of excellence and pockets of failure. Capability fragments instead of compounding.
Durable change happens at the workflow level. A workflow has defined inputs, a quality standard, explicit gates and an accountable owner. That is where verification stops being an individual virtue and becomes a built-in step.
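To make that concrete, here is a minimal sketch, in Python, of what a workflow defined this way could look like when written down. The names, fields and example gate are hypothetical illustrations, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Gate:
    """An explicit check a draft must pass before it moves on."""
    name: str
    check: Callable[[str], bool]  # returns True if the draft passes

@dataclass
class Workflow:
    """Makes verification a built-in step rather than a personal virtue."""
    name: str
    inputs: list[str]        # what the workflow consumes
    quality_standard: str    # the standard the output is held to
    gates: list[Gate]        # explicit checks before anything ships
    owner: str               # who is accountable for the output

    def release(self, draft: str) -> str:
        # Every gate must pass; any failure escalates to the named owner.
        for gate in self.gates:
            if not gate.check(draft):
                raise RuntimeError(
                    f"Gate '{gate.name}' failed; escalate to {self.owner}"
                )
        return draft

# Hypothetical example: an AI-drafted customer reply with one explicit gate.
reply_workflow = Workflow(
    name="customer-reply",
    inputs=["customer message", "account context"],
    quality_standard="checked against the account record before sending",
    gates=[Gate("no-absolute-promises", lambda d: "guaranteed" not in d.lower())],
    owner="service-team-lead",
)
```

The value is not the code itself. It is that inputs, standard, gates and owner are named explicitly, so quality no longer depends on which individual happens to be careful.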
This is also where the “substitute first” narrative often collapses. Work that looks automatable from a distance often contains edge cases, judgement calls and accountability moments that only become visible when you map the workflow properly. If you automate before you understand the workflow, you scale misunderstandings instead of outcomes.
Workflow design is not bureaucracy. It is how you make AI-assisted work repeatable, defensible and safe under load.
The practical deliverable is an approved pathway
Organisations need more than a manifesto. They need an approved way to use AI that is faster than the workaround.
An approved pathway removes guesswork. It answers where AI is encouraged and where it is refused, what must be verified, who owns the output and what happens when stakes are high or uncertainty appears.
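As an illustration only, here is one way an approved pathway could be written down as an explicit decision table, again in Python. The task categories, rules and names are hypothetical; the point is that every answer is recorded rather than improvised.

```python
# Hypothetical decision table: each task category gets an explicit answer to
# "is AI encouraged here, what must be verified, and who owns the output?"
APPROVED_PATHWAY = {
    "draft-marketing-copy": {
        "ai_use": "encouraged",
        "must_verify": ["brand claims", "pricing figures"],
        "owner": "marketing-lead",
    },
    "customer-refund-decision": {
        "ai_use": "refused",  # a judgement call with real stakes
        "must_verify": [],
        "owner": "service-team-lead",
    },
}

def route(task: str, high_stakes: bool) -> str:
    """Return the sanctioned next step for a task, escalating when unsure."""
    rule = APPROVED_PATHWAY.get(task)
    if rule is None or high_stakes:
        return "escalate: no approved lane, or the stakes are high"
    if rule["ai_use"] == "refused":
        return f"human-only: owned by {rule['owner']}"
    return f"AI-assisted: verify {rule['must_verify']}, owned by {rule['owner']}"
```

A team member, or the tooling around them, can then ask route("draft-marketing-copy", high_stakes=False) and get the sanctioned lane with its checks and owner attached, instead of guessing.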
This pathway is not a guardrail bolted onto the side of work. It is the fastest route to consistent output because it gives teams a default operating framework that already includes checks and accountability.
It also restores leadership visibility. When the approved path is usable, people disclose usage instead of hiding it and managers can reinforce standards instead of improvising local rules.
This is why shadow AI data matters, but only as a diagnostic tool. If large numbers of people are using unapproved tools, the system is telling you the sanctioned path is unclear, too slow or not fit for real work. An approved pathway addresses that by meeting both the delivery need and the risk need simultaneously.
“But automation is inevitable”
Yes, some automation is inevitable. The question is whether you automate from strength or from fragility.
The sustainable sequence is simple: empower first, then automate what is stable. Stability means the workflow is understood, quality gates are explicit and owners are accountable. Until that is true, automation increases your blast radius instead of reducing your burden.
That is the operating lesson from Klarna. AI can absorb volume, but business is more than volume. It is about trust, continuity and the ability to handle exceptions without compromising quality. Those outcomes do not appear by subtracting the control layer. They appear by designing it.
The principle that survives contact with reality
The future is not “AI everywhere” or “humans everywhere”. The question that matters is whether AI-assisted work becomes dependable.
The decision rule is the one most organisations try to skip: automation is downstream of empowerment. If you cannot rely on how AI is being used by people today, scaling it will scale inconsistency tomorrow.
The organisations that do well will not be the ones that pursue headcount reduction fastest. They will be the ones that build an approved pathway, embed AI in workflows and make governance executable in lived work. That is what “AI through people” means. And it’s why empowerment beats substitution as a strategy.