
AI trends that will shape the cybersecurity market in 2026

By Gregory Statton, Vice President, AI Solutions at Cohesity

2025 was a turbulent year for British businesses. The spate of cyberattacks on the UK’s retail sector left major brands reeling – from supermarkets to high-street retailers to car manufacturers. In the past 12 months alone, 71% of UK businesses have paid a ransom. And, as things stand, the situation is only going to get more volatile, as AI-augmented cyberattacks, deepfakes, and breaches of third-party supply chains become increasingly common.

Adversaries are leveraging AI and automation to execute attacks with startling precision and speed. This requires organisations to rethink traditional approaches to AI implementation. Leaders should focus not only on what they can do with the technology, but on what AI can do for them. That means stepping away from seeing AI as a product and instead viewing it as a real-world problem-solver.

We’ll take a look at the AI trends set to shape cybersecurity over the next twelve months.   

The bursting of the AI bubble 

We could see the AI bubble burst this year. As with any new technology, the industry is due for a reset. The biggest issue is that many still view AI as a product rather than a problem solver. This applies equally to those developing AI and those trying to use it.

To make the most of the technology, we need to pivot towards business value-based solutions. Those who don’t will either run out of funds and fold, or exhaust internal resources and terminate their projects. When the AI industry starts listening more to what organisations and their people want and need, that’s when AI will become truly exciting and innovative.

AI applications are becoming multilingual 

This year we’ll see the growing importance of internationalisation in AI applications. It’s limiting for global businesses to build AI tools in English first and treat other languages as an afterthought. In today’s global economy, our mindset must be to ensure that everything we do – externally and internally – is available in all mainstream languages from the outset. If not, we’re leaving much of the world in the dark and lagging behind.

Sovereign AI: taking control of data’s future 

The rise of sovereign AI is poised to transform how companies, governments, and countries manage, secure, and leverage their data. 

Increasingly, organisations and governments recognise the advantages of keeping data within their own corporate or geographic borders, using sovereign AI platforms to maintain control over what types of data are accepted and how they are processed. This offers clear benefits in terms of privacy, compliance, and strategic autonomy, especially as regulations around data sovereignty become more complex and far-reaching. But it also brings challenges. In its extreme form, it can lead to fragmentation, limit the flow of data, and discourage innovation, creating new barriers to multinational operations rather than liberating them. 

While I expect the adoption of sovereign AI to accelerate in 2026, its long-term impact will depend on how it is implemented, regulated, and balanced against the need for collaboration and innovation. 

Aligning precision with LLM performance 

All data is meaningless without trust. And we must build trust in Large Language Models (LLMs). 

LLM quality and accuracy are becoming increasingly critical, yet we cannot truly evaluate the accuracy of AI-generated data. At present, the AI industry lacks a standardised way to evaluate results. While traditional metrics, such as precision and recall, have served us well for years, we must innovate and change how we manage the complexities of LLMs.

Imagine a police officer investigating criminals and asking them if they’re dishonest or not. Chances are they would all say no! In the same way, asking an LLM if it’s accurate is equally unreliable; these systems are prone to hallucinations and errors. 

Users are also feeding the inaccuracy by failing to flag outputs and results that are obviously wrong. This feedback is essential to improving LLMs; without it, we are hindering progress.

The solution lies in using human-annotated inputs and outputs, tested against traditional data science metrics, to benchmark and improve models. Only through such innovations and efforts can we hope to set strong benchmarks that will help us move to an era of better AI accuracy and quality. 
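The human-annotation approach described above can be sketched in a few lines. In this hypothetical example, annotators mark each claim in an LLM’s answer as factually correct (1) or hallucinated (0), and the model’s own self-assessments are then scored against those gold labels using classic precision and recall. The function name and all data here are illustrative, not a real benchmark.

```python
# A minimal sketch of benchmarking LLM outputs against human-annotated
# gold labels with traditional data science metrics. All names and data
# are hypothetical, for illustration only.

def precision_recall(gold: list[int], predicted: list[int]) -> tuple[float, float]:
    """Compute precision and recall for binary labels (1 = correct claim)."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Human annotators judged eight claims; the model asserted seven of them
# as true. Scoring the model's assertions against the human gold labels:
human_gold = [1, 1, 0, 1, 0, 1, 1, 0]
model_claims = [1, 1, 1, 1, 0, 1, 0, 1]

p, r = precision_recall(human_gold, model_claims)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.80
```

Run at scale over a curated, human-labelled test set, metrics like these give a model-independent benchmark – the external check that asking an LLM to rate its own accuracy cannot provide.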

Micro SaaS: small apps, big impact 

AI will revolutionise how organisations access and use their data, making it effortless to build custom micro-SaaS applications tailored to specific departments and business challenges. 

The democratisation of data will empower teams to stitch together agents and data sources, creating bespoke solutions that address niche problems with speed and precision. 

Instead of relying on generic, one-size-fits-all products, businesses will use solution-led AI tools designed around their unique needs and workflows. This shift will drive greater efficiency, innovation, and agility, enabling organisations to unlock the full potential of their data and deliver meaningful results across every part of the business. 

Making AI work for everyone  

2026 will be pivotal for AI – not just in terms of technological advancement, but in how we approach its role in our businesses and society. The lessons from recent cyberattacks highlight the urgent need for robust standards, smarter regulations, and a renewed focus on trust and transparency.  

As AI continues to evolve, those who embrace it as a problem-solving partner – rather than a mere productivity tool – will be able to reap the benefits of the technology and thrive.  
