
AI Investment Is Growing but It’s Not Necessarily Strategic
If there’s been one story dominating the business pages this year, it’s been companies channelling investment into AI – and often talking this up as a rationale to scale back jobs. Recently, UK law firm Clifford Chance cited increased AI use as a factor behind reducing London staffing numbers by 10%.
Fifteen years ago, companies touted their focus on social media; today it’s all about AI. Our research, which surveyed business leaders in eight countries six months apart and assessed 123 annual reports, found that businesses primarily see AI as a tool to increase productivity and efficiency and to cut costs. Nearly two thirds (62%) planned to increase investment in the next year, and a majority (59%) said they consider AI crucial to their organization’s growth, underlining the integral role executives see AI playing in future success.
It brings to mind Jurassic Park’s caution: just because you can, doesn’t mean you should. I am completely convinced of AI’s potential to be a force for good. But if it is introduced without a foundation of good governance or a long-term view of the business’s strategic needs, the benefits may not be realised. At best, money is wasted on duplication or on tools that don’t work. At worst, a business – and its customers and supply chain – is exposed to new risk.
Investment Momentum Without Strategic Clarity
In a challenging economy, with sluggish growth, there is a clear belief that AI will solve challenges for businesses. Indeed, many say they are seeing value from AI already: two thirds of businesses (65%) said AI has delivered tangible benefits, such as growth, innovation or efficiencies, in the last year. This falls to 58% for small businesses. How value for money or return on investment is being determined is less certain. After all, this summer MIT researchers suggested 95% of organizations are getting zero return on AI investments, and in our study more than two fifths (43%) of business leaders said AI investment has taken resources that could have been used on other projects – raising the question of whether the opportunity cost of forgoing those projects is being considered.
What becomes apparent is that businesses are so busy investing that they are not always stepping back to consider whether AI is right for them or meets a strategic need. Fewer than one in three have a process for avoiding duplication of AI services between departments; without that assurance, how can you know AI is worth the money?
Governance and Risk Blind Spots
Whether AI is adding value comes down to a business’s definition of value, and is therefore subjective, but perhaps the greater risks lie in how the AI performs. Our study found a striking absence of safeguards, with risk and security considerations apparently neglected. Just one in three (33%) have a standardized process for employees to follow when introducing new AI tools, and only one in four report that their organization has an AI governance programme in place. Although that figure rose modestly to just over a third (34%) in large enterprises – a pattern repeated across the research – the takeaway is still that most companies aren’t prioritizing oversight of AI. This holds across all areas of governance and management of AI risk: just three in ten (30%) have a formal risk assessment process to evaluate where AI may be introducing new vulnerabilities, while just one in five restrict employees from using unauthorized AI.
What it comes down to is business leaders not asking questions where perhaps they should. A key component is the data that sits behind the AI: how it is collected, stored and used to train AI models. Yet only 28% of business leaders know what sources of data their business uses to train or deploy its AI tools, down from 35% in February. Just two fifths (40%) said their business has clear processes in place around the use of confidential data for AI training.
Consequences of Poor Oversight and Rising Dependency
The rebuttal might be that if the AI is working, imposing processes and policies just adds unnecessary bureaucracy. But what happens when the AI doesn’t work as planned, or when there’s an unintended consequence? For all that technology offers solutions, it can also open businesses up to new vulnerabilities; look at the recent Amazon Web Services outage as evidence of the importance of business continuity. Even with the most targeted and value-driven approach to AI investment, a business can only be truly resilient if it has appropriate controls, including a back-up in place.
AI not working, or not working as it should, is not a hypothetical situation. Consider the issues with Deloitte’s AI-produced report for the Australian government, which contained significant errors. There have been incidents where chatbots gave incorrect or dangerous customer advice, where algorithmic bias crept in, or where personal data was misused. We know that for all its capabilities, AI is not flawless. Worse, AI’s tendency to ‘hallucinate’ (inventing answers when uncertain) makes errors difficult to detect. Yet just a third (32%) say their organization has a process for logging where issues arise or flagging concerns with AI tools, while just 29% have a process for managing AI incidents and ensuring a timely response.
Already, around a fifth (18%) admit that generative AI is so deeply embedded that if tools were unavailable for a set period, their business could not continue operating. That risk is only likely to grow as AI becomes a more central part of day-to-day operations. If AI tools replace a human, will there be someone to step back in if systems go down? With cyber preparedness, over time we’ve seen business leaders recognize the need to have provisions in place for when, not if, an attack happens. The same approach is critical for AI: continuity planning that looks at what is in place both to identify issues and to subsequently restore services.
Avoiding an AI Governance Crisis
As with the emergence of any new technology, we are in uncharted waters. Thinking about AI risk is critical; otherwise, businesses are just ‘sleepwalking’ into an AI governance crisis. Now is the moment to ask questions: about how data is being managed, whether the right (or any) formal processes are in place, and whether the balance is being struck between innovating and managing risk.
Ultimately, this comes back to two things: one, thinking beyond quick announcements about AI-driven success to what AI will mean for the business in the long term; and two, good governance as the foundation for success. AI will not be a panacea for poor growth, low productivity and high costs without strategic oversight and robust governance – and indeed without these, new risks could emerge.
Overconfidence, coupled with fragmented and inconsistent governance, risks leaving many organizations vulnerable to avoidable failures and reputational damage. Ultimately, smart business leaders will be those who move beyond reactive compliance to proactive, comprehensive AI governance.



