Future of AI

Why generative AI tools are stalling at work, and how to fix it

By Martin Colyer, Director of Innovation & AI, and Aaron Lynch, Manager, Innovation & AI, LACE Partners

Generative AI has reached most workplaces. Co-pilots, agents, assistants – they’re available, licensed and sitting in toolbars across HR, finance, operations and more. But in many organisations, adoption has flatlined. 

The technology is in place, but real-world use is patchy. Some employees explore its capabilities while others ignore it entirely. In most cases, expensive tools are underused, and the promised productivity gains are nowhere near being realised. 

So, where’s the disconnect? 

When adoption doesn’t follow investment 

Enterprise-grade AI platforms aren’t cheap. Licences often cost hundreds of pounds per person per year. Multiply that across large teams, and the total spend quickly runs into the millions. That cost is easier to justify when people are using the tools regularly, and harder to explain when uptake stalls after the initial launch.  

In one case, a major business invested in AI co-pilots for a large part of its workforce. But among the staff expected to use them, hardly anyone had opened the app. No one knew what the tools were for, and no one had been shown how they could help. As a result, they sat unused, while work continued as before.

That situation is far from rare. It’s becoming common across sectors: the licences are active, but the habits haven’t changed. The gap between potential and reality is widening, and for many businesses, the numbers no longer add up. 

The issue isn’t the software. In most cases, the tools function exactly as promised. The problem lies in how they’re introduced. Too often, AI is treated as something to layer on top of existing processes. There’s no rethinking of how work gets done, who does what or how tasks flow from one team to another. While roles and deadlines stay the same, AI becomes just another tab – one that many people quietly ignore. 

There’s also a tendency to treat all departments the same. But different functions have very different needs. What works in HR won’t necessarily suit finance, and what helps one team might create friction in another. Without time to understand those differences and design solutions that reflect them, it’s hard to make AI stick. 

Some teams are open to trying new tools, but they aren’t given the right support. Others are unsure whether they’re even allowed to experiment. The result is slow, uneven uptake and rising internal questions about whether the tools are worth the money. 

Making progress means going function by function   

The organisations seeing the most meaningful results aren’t rushing in with all-company rollouts. They’re starting with specific teams, identifying clear problems and building tailored solutions. That might involve working with HR to reduce admin-heavy tasks or helping operations staff automate low-value data entry. What matters is that each intervention is shaped around actual work rather than what a vendor demo suggests is possible. 

Yet, it’s important to remember that progress won’t come fast. It requires internal discovery, practical testing and a willingness to work iteratively. But it avoids the trap of rolling out tools that look impressive on paper and deliver very little in practice. It also forces teams to be honest about capacity. Developing AI solutions, even simple ones, takes time, and each step depends on access to skilled people who understand both the technology and the business. Without that resource in place, AI projects are left unfinished or quietly abandoned. 

Coordinating people, tools and process 

In most businesses, AI tools have been added faster than they’ve been integrated. That creates friction: people don’t know where the tools fit into their daily work, and systems don’t talk to each other.

The other missing link is coordination. No one is responsible for linking tools together in a way that supports actual workflows. That kind of coordination work takes time, but it pays off. It’s the difference between isolated pilots and meaningful change. Where organisations have made progress, it’s usually because someone has taken the time to map out processes, reassign tasks and make sure the right inputs and outputs flow to the right places. 

In the early days of GenAI, some companies took a wait-and-see approach, either blocking tools entirely or allowing widespread experimentation with minimal control. Neither approach has worked particularly well. Where oversight is too tight, adoption grinds to a halt. But where there’s no structure at all, risks multiply, along with inconsistency, confusion and duplicated effort. 

More recently, some organisations have started building internal forums that can review new tools quickly, share learning between teams and offer guidance on how to deploy AI safely and effectively. These groups often sit across functions, pulling in voices from IT, legal, HR and the wider business. 

With this kind of structure in place, teams can move faster, make clearer decisions and manage risks more easily. As a result, employees feel more confident that they’re using the tools the right way.

Training doesn’t need to be complex either, but it does need to be practical. Instead of generic tutorials, the focus should be on everyday tasks such as writing reports, cleaning data and planning meetings. Once people see where AI helps with something they already do, confidence grows. And once a few team members start to see value, others follow.

It’s no secret that the hype around GenAI has faded. The tools are out there, and most employees have heard of them, if not used them. However, across the board, the challenge remains the same: turning availability into adoption. That means slowing down enough to ask better questions: What are we trying to improve? What’s not working today? Where do people need help?

The answers won’t be found in dashboards or demo reels. They’ll be found in calendars, shared drives and team check-ins – in places where real work is getting done. 

Once that’s understood, AI stops being a feature and starts becoming useful. 