
Generative AI has reached most workplaces. Co-pilots, agents, assistants – they're available, licensed and sitting in toolbars across HR, finance, operations and more. But in many organisations, adoption has flatlined.
The technology is in place, but real-world use is patchy. Some employees explore its capabilities while others ignore it entirely. In most cases, expensive tools are underused, and the promised productivity gains are nowhere near being realised.
So, where's the disconnect?
When adoption doesn't follow investment
Enterprise-grade AI platforms aren't cheap. Licences often cost hundreds of pounds per person per year. Multiply that across large teams, and the total spend quickly runs into the millions. That cost is easier to justify when people are using the tools regularly, and harder to explain when uptake stalls after the initial launch.
In one case, a major business invested in AI co-pilots for a large part of its workforce. But among the staff expected to use them, hardly anyone had opened the app. No one knew what it was for, and no one had been shown how it could help. As a result, the tools sat unused, while work continued as before.
That situation is far from rare. It's becoming common across sectors: the licences are active, but the habits haven't changed. The gap between potential and reality is widening, and for many businesses, the numbers no longer add up.
The issue isn't the software. In most cases, the tools function exactly as promised. The problem lies in how they're introduced. Too often, AI is treated as something to layer on top of existing processes. There's no rethinking of how work gets done, who does what or how tasks flow from one team to another. While roles and deadlines stay the same, AI becomes just another tab – one that many people quietly ignore.
There's also a tendency to treat all departments the same. But different functions have very different needs. What works in HR won't necessarily suit finance, and what helps one team might create friction in another. Without time to understand those differences and design solutions that reflect them, it's hard to make AI stick.
Some teams are open to trying new tools, but they aren't given the right support. Others are unsure whether they're even allowed to experiment. The result is slow, uneven uptake and rising internal questions about whether the tools are worth the money.
Making progress means going function by function
The organisations seeing the most meaningful results aren't rushing in with all-company rollouts. They're starting with specific teams, identifying clear problems and building tailored solutions. That might involve working with HR to reduce admin-heavy tasks or helping operations staff automate low-value data entry. What matters is that each intervention is shaped around actual work rather than what a vendor demo suggests is possible.
Yet it's important to remember that progress won't come fast. This approach requires internal discovery, practical testing and a willingness to work iteratively. But it avoids the trap of rolling out tools that look impressive on paper and deliver very little in practice. It also forces teams to be honest about capacity. Developing AI solutions, even simple ones, takes time, and each step depends on access to skilled people who understand both the technology and the business. Without that resource in place, AI projects are left unfinished or quietly abandoned.
Coordinating people, tools and process
In most businesses, AI tools have been added faster than they've been integrated. That creates friction: people don't know where tools fit into their daily work, and systems don't talk to each other.
The other missing link is coordination. No one is responsible for linking tools together in a way that supports actual workflows. That kind of coordination work takes time, but it pays off. It's the difference between isolated pilots and meaningful change. Where organisations have made progress, it's usually because someone has taken the time to map out processes, reassign tasks and make sure the right inputs and outputs flow to the right places.
In the early days of GenAI, many companies avoided committing to a strategy, either blocking tools entirely or allowing widespread experimentation with minimal control. Neither approach has worked particularly well. Where oversight is too tight, adoption grinds to a halt. But where there's no structure at all, risks multiply, along with inconsistency, confusion and duplicated effort.
More recently, some organisations have started building internal forums that can review new tools quickly, share learning between teams and offer guidance on how to deploy AI safely and effectively. These groups often sit across functions, pulling in voices from IT, legal, HR and the wider business.
With this kind of structure in place, teams can move faster, decisions become clearer and risks are easier to manage. As a result, employees feel more confident that they're using the tools the right way.
Training doesn't need to be complex, but it does need to be practical. Instead of generic tutorials, the focus should be on everyday tasks such as writing reports, cleaning data and planning meetings. Once people see where AI helps with something they already do, confidence grows. And once a few team members start to see value, others follow.
It's no secret that the hype around GenAI has faded. The tools are out there, and most employees have heard of them, if not used them. However, across the board, the challenge remains the same: turning availability into adoption. That means slowing down enough to ask better questions: What are we trying to improve? What's not working today? Where do people need help?
The answers won't be found in dashboards or demo reels. They'll be found in calendars, shared drives and team check-ins – in places where real work is getting done.
Once that's understood, AI stops being a feature and starts becoming useful.


