AI Business Strategy

Businesses are buying AI but not teaching employees how to use it

By Charles Ross, APAC head of policy and insight at Economist Impact

Companies are spending billions on AI technology while failing to capture repeatable, scalable business value from it. New Economist Impact research explores the gap between AI ambition and workforce readiness.

Corporate AI spending hit US$252 billion in 2024. Boardrooms have signed off on the budget to invest in AI. What most have not signed off on is one of the key determinants of whether the technology captures business value: teaching their people to use it.

Boards want to know that the expensive technology they have approved is actually making money. Yet new Economist Impact research supported by Kyocera Document Solutions — drawing on a survey of 639 senior executives across London, New York, Singapore, Sydney and Tokyo — finds that only 4% of organisations have embedded AI into core business processes to achieve repeatable and measurable value at scale. That is a striking failure rate for a technology billed as transformative. 

Arthur C. Clarke famously observed that sufficiently advanced technology is indistinguishable from magic. And it is magic that most business leaders appear to be hoping for. But there is a fundamental problem with how most executives are approaching AI: if you do not teach your people how to use new technology effectively, you should not expect transformative results. 

The ambition–investment paradox 

The data reveals a paradox at the heart of corporate AI strategy. According to the Economist Impact report, From intent to action: the leaders’ guide to building AI-powered workplaces, 88% of executives view AI skills development as a key source of competitive advantage. The ambition is clearly there. The investment in human capability is not. 

Despite that near-universal confidence in AI’s strategic importance, only 38% of companies have dedicated, sufficient budgets for AI-related training — a 50-point gap between stated belief and financial commitment.  

Where training does exist, it overwhelmingly takes informal or ad hoc forms: 54% of firms rely on mentorship and 52% on self-directed online platforms. Only 21% say they offer external partnerships with specialist training providers and 16% offer structured internal training. 

The consequence is that nearly half of executives report that fewer than 10% of their employees received any formal AI training in the past year. At those organisations, more than nine in ten workers are being handed powerful new tools with no meaningful instruction in how to use them.

Ticking the box rather than building the skill 

The report’s most troubling finding is not ignorance, but complacency. Almost all executives (99%) say their organisation has some kind of approach to developing AI skills. The problem is that most of those approaches are cosmetic. Optional online courses are convenient but rarely transformative.  

In our report, Muneaki Goto, chief reskilling officer of Japan’s Reskilling Initiative, describes what he calls the ‘learn-and-forget’ problem: employees complete a course but have no opportunity to apply what they have learned, and the knowledge quickly fades. 

Making AI skills development a requirement rather than a ‘nice-to-have’ is the difference between organisations that struggle to implement AI meaningfully and those that genuinely build capability.  

Our report finds that organisations doing this well share a common pattern: they treat AI training as a mandate, not a benefit; they measure skill acquisition rigorously; and they build role-specific learning pathways that connect directly to day-to-day work. Most organisations are still far from this standard. 

There is also a short-termism problem. Nearly eight in ten executives (79%) cite increased employee productivity as the primary ROI signal for AI investment. Far fewer track longer-term indicators: only 27% measure employee retention and engagement, and just 25% look at customer feedback. By focusing narrowly on near-term output, firms risk missing AI’s broader potential — and undervaluing the cultural and reputational returns of investing in their people. 

The middle-management bottleneck 

Even where senior leaders are aligned on AI strategy, as nearly 60% of executives say their leadership team is, that commitment rarely translates throughout the organisation.

Our research identifies a structural problem stalling progress: nearly half of executives (48%) say their managers have only minimal responsibility for the AI skill development of their teams. A further 8% say managers have no accountability at all. 

One in three executives cites resistance to change from employees and middle managers as a key barrier to aligning talent strategy with AI goals. This matters because it is frontline managers — not the C-suite — who often determine whether, and how, training actually happens.  

And looking ahead, 44% of leaders expect overcoming employee fear and resistance to AI-driven role changes to be among their most important cultural challenges over the next three to five years. 

Kian Katanforoosh, co-founder and CEO at Workera, put it aptly: “Leaders hesitate to block off half a day for their teams to learn AI when there’s so much else to do, but they should. It might slow them down for a day, but over a month or a year, it pays dividends.”

The governance gap nobody is talking about 

The skills deficit compounds a second, equally serious problem: governance. Every organisation surveyed by Economist Impact has at least discussed or planned a framework for responsible and ethical AI use. Yet only 8% have implemented a comprehensive, actively enforced system. Just 24% are even in early implementation stages. 

The gaps between the importance executives place on AI safety skills and the actual proficiency of their workforces are particularly pronounced. Cybersecurity for AI is rated essential by 96% of respondents, but only 20% say their teams are highly proficient in this skill: a 76-point gap. Data privacy shows a 68-point gap; bias detection in AI outputs, a 71-point gap. Without clear standards or oversight, employees are left to manage AI risk alone, often aware of the dangers but ill-equipped to handle them.

The consequences are not hypothetical. The greatest risks to responsible AI adoption, as the report notes, often arise not from malicious external actors but from internal errors: poor data handling, weak oversight, and the unguarded use of sensitive information. A workforce that has never been trained in governance is a governance risk in itself. 

The cost of inaction — and who pays for it most  

The stakes are not equally distributed across firm types. Small businesses face the same AI transition with a fraction of the resources. Our data shows that executives at small firms are four times more likely than those at global organisations to cite insufficient training and development budgets as a significant barrier (18% versus 4%). Nearly two-thirds (64%) of small business leaders lack the funds to hire specialist AI talent at all. Just 2% of small businesses report having a robust governance framework. 

Without targeted public-private support, these disparities will deepen. The report points to examples of what good looks like: Singapore’s SkillsFuture programme, New York City’s AI Action Plan, and London’s Data for London Library all represent robust, city-level commitments to embedding AI capability across industries and firm sizes. In the UK, government funding mechanisms can cover up to 70% of AI implementation costs for qualifying businesses. These programmes are available, but too few firms are making use of them. 

Research cited in the report finds that businesses at the earliest stage of AI maturity recorded growth 13% below their industry average, while the most advanced saw growth 17% above it. The gap between firms that invest in human capability alongside technology and those that do not is not a future risk; it is already impacting the bottom line. 

Time to fund the other half 

For every dollar spent on AI technology, a proportionate investment must follow in the skills development needed to use it. That means structured, role-specific training that reaches not 10% but the majority of employees. It means making manager accountability for team learning a formal, measured responsibility. It means treating governance capabilities (cybersecurity, data privacy and bias detection) with urgency. And it means reframing training as the essential enabler of the AI transformation.

Our research offers a clear-eyed verdict: successful AI adoption is a question of human readiness. Technology is advancing exponentially. Workforce readiness is not keeping pace. Widening gaps in skills, confidence and governance threaten to slow progress exactly when the potential rewards of AI are becoming ever more apparent. The organisations that lead the next wave of AI maturity will not be those with the largest technology budgets. They will be those that match technological ambition with skills and governance investment.

The ‘magic’ of AI is theoretically available to all. The tools are increasingly powerful and increasingly accessible. But realising the meaningful, lasting business value of this magic requires upskilling employees and putting governance guardrails in place. 

Until organisations fund the human half of this equation with the same commitment they bring to the technical half, that elusive repeatable ROI from AI remains out of reach. 

About the author 

Charles Ross is APAC head of policy and insight at Economist Impact, the research and thought leadership division of The Economist Group. He supervised and directed the research programme for From intent to action: the leaders’ guide to building AI-powered workplaces (2026), supported by Kyocera Document Solutions. 

Report reference 

Data referenced in this article is drawn from From intent to action: the leaders’ guide to building AI-powered workplaces, Economist Impact, 2026. The report surveyed 639 senior executives across London, New York, Singapore, Sydney and Tokyo, conducted October–November 2025. 
