Future of AI Cyber Security

Unlocking the full value of AI-powered Cyber Defence

By Dan Jones, Senior Security Advisor, Tanium

No one doubts the potential that Artificial Intelligence (AI) has to transform cybersecurity. The challenge is that, in far too many organisations, that potential is being stifled by rules, regulations and inertia. The reason? Cybersecurity isn’t just a technology problem. When it comes to AI, it’s an operational one. Unless organisations address their internal blockers – obstacles like entrenched bureaucracy, outdated infrastructure and resistance to automation – they’ll never get close to maximising their investment in AI.

Bureaucracy: the invisible enemy?

The truth is, many organisations still operate in a way that ties the hands of their security teams. Whether it’s protracted decision-making, overly complex approval processes or convoluted reporting structures, bureaucracy adds friction at every level.

We see it in organisations of all shapes and sizes, and often in the public sector. A recent National Audit Office (NAO) report into the government’s cyber resilience highlighted this concern, warning that bureaucracy can create confusion, duplication and inefficiency – both within and across departments. As a result, security teams are often caught in the middle – tasked with maintaining digital defences while mired in paperwork and constrained by manual processes that require extensive human decision-making. This problem is magnified whenever new technology is deployed, and it is more acute still in the context of AI.

By nature, AI tools need a constant supply of data and continuous fine-tuning. They also need both time and room to evolve and adapt. If every tweak or insight must pass through layers of approval before it gets the green light, any progress promised by AI is likely to be delayed or disrupted by administrative complexity.

To be fair, bureaucracy has its positive aspects: it ensures consistency and accountability, and it can provide a structured framework for decision-making that helps maintain order.

Reducing bureaucracy doesn’t mean removing governance. It means designing workflows that are fast, clear and fit for purpose. It means giving IT teams more autonomy and empowering them to act in real time. Then – and only then – can AI start to make a meaningful difference.

Legacy tech: the attack surface no one wants to talk about

Addressing bureaucracy is just the start. Legacy infrastructure is another major impediment to progress. Organisations tend to accumulate systems and tools that, over time, are superseded and no longer up to the job.

We know that it’s not as simple as just upgrading them. In truth, these systems and tools are often too complex, too embedded or too costly to replace, which means these ageing platforms become ever harder to patch, harder to secure and harder to integrate with modern solutions.

The result is a fragmented estate with duplicated tools, conflicting systems and inconsistent coverage. This not only increases the attack surface – it increases the workload on already-stretched IT teams. Simply adding AI to the tech stack and hoping it will work is wishful thinking.

Investments in AI might have to be accompanied by a period of consolidation. Why? Because the smaller and more unified your estate, the easier it becomes to deploy AI with impact. That doesn’t mean starting from scratch – but it might mean retiring what’s no longer needed, rationalising toolsets, and building a foundation that’s ready for automation. Doing so brings other benefits too: lower costs and less overhead for engineering teams and service owners.

Building trust in automation to strengthen cyber defence

Reluctance to automate remains a major obstacle. Despite the availability of advanced tools, many IT teams continue to rely on manual processes for critical yet repetitive tasks such as patching, vulnerability scanning and routine monitoring. While these tasks are essential, handling them manually not only consumes time and resources but also slows the entire security operation, introducing delays that leave vulnerabilities unaddressed for longer and significantly increase risk.

AI-powered automation can take on much of this heavy lifting. It can accelerate detection, triage and even response – freeing up IT teams to focus on strategic analysis and decision-making. But only if it’s allowed to. Even today, organisational caution and scepticism towards automation can lead to a preference for manual oversight, limiting the value AI can deliver.

Addressing this challenge requires organisations to build trust in automation by keeping humans in the loop and designing processes that support collaboration between people and technology. Trust grows when teams and individuals understand how and when to intervene – resisting the urge to override systems unnecessarily, but remaining actively engaged where human judgment adds the most value. The more we rely on people to do what machines can handle, the less capacity we have for the uniquely human work that drives progress. Strong leadership is essential, with a clearly accountable individual owning the return on investment – this is not just about those doing the day job.
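
As a rough illustration of what keeping humans in the loop can look like in practice, the short Python sketch below shows one common pattern: routine, low-risk remediation runs automatically, while anything above a defined risk threshold pauses for human sign-off. The names used here (RemediationAction, request_human_approval, APPROVAL_THRESHOLD) are hypothetical and the logic is a generic pattern, not a description of any particular product’s workflow.

from dataclasses import dataclass

# Hypothetical action record produced by an automated triage step.
@dataclass
class RemediationAction:
    endpoint: str
    action: str        # e.g. "apply_patch", "isolate_host"
    risk_score: float  # 0.0 (routine) to 1.0 (high impact)

# Guardrail: anything at or above this score needs a human decision.
APPROVAL_THRESHOLD = 0.7

def request_human_approval(action: RemediationAction) -> bool:
    # Placeholder for a real approval workflow (ticket, chat prompt, change board).
    answer = input(f"Approve {action.action} on {action.endpoint}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: RemediationAction) -> None:
    # Placeholder for the actual automated remediation call.
    print(f"Executing {action.action} on {action.endpoint}")

def handle(action: RemediationAction) -> None:
    # Low-risk, routine work runs automatically; higher-risk changes pause
    # for the human judgement that should stay in the loop.
    if action.risk_score < APPROVAL_THRESHOLD:
        execute(action)
    elif request_human_approval(action):
        execute(action)
    else:
        print(f"Deferred {action.action} on {action.endpoint} for review")

handle(RemediationAction("laptop-042", "apply_patch", risk_score=0.2))
handle(RemediationAction("db-server-01", "isolate_host", risk_score=0.9))

The point of the pattern is that the threshold and the approval step are explicit and owned by the team, so automation extends human capacity rather than bypassing it.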

Building trust in automation with AEM

One area that’s gaining traction is Autonomous Endpoint Management (AEM). Rather than relying on manual processes to maintain visibility, control and compliance across thousands of endpoints, AEM enables these tasks to happen automatically and around the clock. By combining real-time visibility with built-in automation, AEM ensures that systems stay updated, secure and consistent without requiring constant human oversight. It reduces the operational burden on security and IT teams, speeds up response to vulnerabilities and allows businesses to scale their defences without scaling their headcount.

Importantly, AEM doesn’t just automate tasks – it builds trust in automation by giving teams the ability to set policies, define guardrails and maintain control. For me, this is a crucial step in helping organisations let go of manual processes and unlock the full value of AI-powered security. Ultimately, organisations are right to be excited about what AI can bring to cybersecurity. But its success depends on more than just deployment. It depends on a change of culture, structure and mindset.
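
To make the idea of policies and guardrails concrete, here is a minimal sketch of how such guardrails might be expressed as data and checked before any unattended change is made. The policy structure, group names and maintenance window are assumptions for illustration only – this is a generic pattern in Python, not the configuration model of Tanium’s AEM or any other product.

from datetime import datetime, time

# Hypothetical guardrail policy: which routine actions may run autonomously,
# on which endpoint groups, and only inside an agreed maintenance window.
POLICY = {
    "allowed_actions": {"apply_patch", "restart_service"},
    "autonomous_groups": {"workstations", "test-servers"},
    "maintenance_window": (time(22, 0), time(5, 0)),  # overnight, local time
}

def within_window(now: datetime, window: tuple) -> bool:
    start, end = window
    t = now.time()
    if start > end:  # window wraps past midnight, e.g. 22:00-05:00
        return t >= start or t <= end
    return start <= t <= end

def permitted(action: str, group: str, now: datetime) -> bool:
    # True only if the guardrails allow the action to run unattended;
    # anything outside the policy stays under human control.
    return (
        action in POLICY["allowed_actions"]
        and group in POLICY["autonomous_groups"]
        and within_window(now, POLICY["maintenance_window"])
    )

print(permitted("apply_patch", "workstations", datetime(2024, 5, 1, 23, 30)))   # True
print(permitted("isolate_host", "workstations", datetime(2024, 5, 1, 23, 30)))  # False

Because the guardrails are defined up front and checked on every action, teams keep control of what the automation is allowed to do while still letting routine work run around the clock.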
