
The resignation of West Midlands Police Chief Constable Craig Guildford has brought renewed attention to a question many organisations have been reluctant to confront: as artificial intelligence becomes more embedded in decision-making, who carries responsibility when its output influences outcomes?
This particular controversy followed the use of AI-generated material in a report connected to discussions around the attendance of Maccabi Tel Aviv supporters at a football match in Birmingham. Elements of that material were later found to be inaccurate, prompting political pressure, public scrutiny and a rapid loss of confidence in the force's leadership. Beyond the local story, though, the episode prompted a wider examination of judgment, oversight and accountability.
The most unsettling aspect was how easily the technology entered a process where context, sensitivity and public trust were fundamental. Information produced by an automated system was able to influence decision-making without sufficient challenge, raising uncomfortable questions about review, verification and ownership. Yet in situations like this, accountability should remain with the individual who chooses to rely on what is presented, particularly when decisions carry social or political consequences.
When adoption moves faster than education
Across both public and private sectors, AI tools are being introduced at speed as organisations respond to pressure to modernise and improve efficiency. In fact, almost 70% of UK businesses are already using AI in some form or actively exploring it. While initial use has often been informal, with employees turning to AI to prepare for a meeting or quickly summarise materials, over time these practices become embedded, shaping workflows before organisations have agreed where the boundaries should sit.
In many organisations, the rules and expectations around AI tend to arrive after people have already started using the tools. By that point, habits are in place. In heavily regulated workplaces, leaders don’t get to say ‘the AI said so’. Accountability still sits with the leader who chooses to rely on it.
AI can produce content that sounds convincing even when key context is missing or details are wrong. If no one is clearly responsible for questioning it, that material can move through further review processes simply because everyone assumes someone else has already checked it.
The same pressure is building inside organisations, particularly in HR teams. AI is being used to help with recruitment screening, interview preparation, performance reviews and employee relations work. With HR functions often stretched thin, anything that brings structure or saves time quickly becomes appealing.
When system-generated recommendations appear well-reasoned and well-presented, decision-makers may accept them without a second thought. AI-generated documents may then be fed into further AI processes, compounding the problem. Over time, the original facts, and with them responsibility, become harder to trace. But the impact of decisions remains deeply personal.
Technology has no awareness of organisational history, power dynamics or emotional context. It cannot anticipate how decisions will be received by employees or how they might influence trust across teams. Those considerations remain human responsibilities, regardless of how advanced the tool appears.
Why education matters more than policy
Many organisations respond to AI risks by writing specific policies. That’s understandable, and in many cases necessary. But paperwork on its own doesn’t change how people behave when they’re under pressure or short on time.
The missing element is education. Leaders are being encouraged to use AI tools as part of everyday work, yet very few have had any real guidance on how those systems produce their results. We've all seen headlines about AI bias, but when something sounds confident, it's tempting to trust it. Familiarity breeds confidence, and confidence can turn into reliance, particularly under time pressure.
This becomes especially problematic when decisions involve data privacy, security or reputation. In those situations, efficiency provides little protection. Discernment matters far more. Leaders need the confidence to pause, interrogate outputs and decide when automated assistance should be set aside entirely.
Leadership in an AI-enabled environment
As AI becomes embedded in organisational life, leadership itself continues to evolve. Senior leaders may no longer hold the deepest technical expertise in the room, but their influence remains central. In practice, this means spending less time having answers to everything and more time clarifying how decisions should be reviewed, challenged and signed off when AI is involved. Much of this is shaped by example. When leaders show they're comfortable questioning what comes back from a system, teams feel able to do the same. When they don't, it becomes easier for outputs to pass through without much discussion.
This is essentially how trust is built. Employees and stakeholders want to understand how decisions are made, particularly when technology is involved. When explanations feel thin, confidence weakens. To sustain trust, leaders need to make their ownership of decisions visible.
The way the West Midlands incident unfolded demonstrates how quickly confidence can collapse when judgment appears to fail. Public institutions experience this through media headlines; organisations generally experience it through disengagement, scepticism and attrition. In both cases, recovery is slow.
There is no doubt that AI will continue to influence how information is produced, summarised and presented. That change is already embedded in modern work. What remains, though, is the expectation placed on those in leadership roles. When outcomes carry organisational, social or reputational consequences, responsibility rests with the individual who chose to act.