
As artificial intelligence becomes more embedded in daily life, we can no longer paint all bots with the same brush.
For years, cybersecurity teams have focused on drawing a clear line between human and automated activity. The mission was relatively simple: detect bots and block them. But that line has blurred – and in the age of AI, the more pressing question is no longer what is visiting your digital platforms, but why.
As AI agents become more capable, the binary lens of bot vs. human is no longer fit for purpose. Sophisticated automation now powers essential business functions, from search engine crawlers to price comparison bots. At the same time, human threat actors continue to develop ever more deceptive techniques. The result is a cybersecurity paradigm that demands more than just a binary approach of detecting ‘bot or not’.
Intent is the new gold standard
In today’s AI-fuelled landscape, automation is not inherently malicious; nor is human interaction automatically benign. A bot visiting your website could be a legitimate AI agent, or it could be a threat actor launching a credential stuffing attack. In the same vein, a human user might be engaging with your site using stolen credentials and malicious intent.
Traditional security methods that rely on rule-based filtering or static rate limits are ill-equipped to manage this nuance. They can’t reliably differentiate between a good bot and a bad one, nor can they identify harmful human activity masquerading as legitimate. What’s needed is a shift in mindset, one that prioritises intent analysis over mere identity.
Beating the AI bot boom
Businesses are grappling with a surge of AI agent activity that often bypasses detection entirely. Many LLM-powered crawlers and tools ignore traditional safeguards like robots.txt, making them difficult to identify using conventional methods.
While largely non-threatening, AI crawlers and scrapers are causing visibility problems for businesses. AI-generated previews increasingly answer queries that once drove pageviews, so sites see rising bot traffic alongside measurable declines in legitimate on-site engagement. These shifts obscure actual user interest and undermine site experiences.
An intent-based approach can compensate for this by focusing on the behaviour and patterns of access, rather than assuming compliance with legacy standards. By centralising cybersecurity processes on intent, teams can pay attention to what the traffic is doing – not just whether it’s there.
The case for behavioural intelligence
Understanding intent begins with understanding behaviour. Behavioural analytics offers a path forward by analysing usage patterns across sessions, devices, and geographies. For example, while both a legitimate price comparison tool and a data scraper might access thousands of product pages, their browsing behaviour, speed, and interaction patterns are very different. Machine learning models can learn these differences over time and distinguish between the two with high accuracy.
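As a minimal sketch of the idea, a session can be scored on a handful of behavioural features rather than its identity. The feature names and thresholds below are illustrative assumptions, not a production model; a real system would learn these boundaries from labelled traffic:

```python
# Sketch: separating a legitimate price-comparison bot from a scraper
# using coarse per-session behavioural features. Thresholds are
# illustrative assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Session:
    pages_per_minute: float   # request rate
    avg_dwell_seconds: float  # time spent per page
    path_entropy: float       # variety of URL paths visited, 0 (focused) to 1 (sweeping)

def classify(session: Session) -> str:
    """Label a session from its behaviour, not its identity."""
    # A scraper sweeps the catalogue at high speed with near-zero dwell time.
    if session.pages_per_minute > 60 and session.avg_dwell_seconds < 1:
        return "scraper"
    # A comparison tool revisits a known set of product URLs at a steady, moderate pace.
    if session.path_entropy < 0.3 and session.pages_per_minute <= 60:
        return "comparison-bot"
    return "unknown"

print(classify(Session(pages_per_minute=120, avg_dwell_seconds=0.2, path_entropy=0.9)))
```

In practice a trained model replaces the hand-written rules, but the inputs stay the same: what the traffic does, not what it claims to be.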
This same approach can be applied to detect anomalies in user behaviour. Consider an authenticated user on a banking platform who suddenly attempts high-value transfers to new recipients in rapid succession: activity well outside their historical pattern. Despite passing all credential checks, their behaviour raises a red flag. AI-powered anomaly detection can pause such transactions before damage is done.
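The simplest version of that red flag is a statistical outlier test against the user's own history. The three-standard-deviation cutoff below is an illustrative assumption; a production system would model many more signals (recipients, timing, device), but the principle is the same:

```python
# Sketch: flagging a transfer whose amount sits far outside a user's
# historical pattern, via a z-score test. The 3-sigma threshold is an
# illustrative assumption, not a recommended setting.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Return True if the amount is a statistical outlier for this user."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A user who normally moves small sums suddenly attempts a large transfer:
history = [60.0, 75.0, 120.0, 90.0, 110.0, 55.0]
print(is_anomalous(history, 9000.0))  # flagged despite valid credentials
```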
Adapting to context in real time
Intent-based security also means adapting to context. Static rules, like hardcoded rate limits, can misfire in dynamic scenarios. Take the example of an e-commerce retailer facing a surge in traffic during Boxing Day sales: a standard rate limit may mistakenly block genuine customers while stealthy scalper bots, designed to fly just under the radar, are allowed in.
By adopting dynamic intent-based detection models, security systems can better identify genuine browsing patterns – marked by natural pauses, exploratory clicks, and occasional cart abandonment – as opposed to the precise, high-speed navigation we typically see from bots.
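One cheap signal for that distinction is the variance of a session's request timing: genuine browsing pauses irregularly, while scripted navigation is near-uniform. The variance cutoff below is an illustrative assumption:

```python
# Sketch: scoring how "human" a session's pacing looks. Near-constant
# gaps between requests suggest automation; irregular pauses suggest a
# person. The min_variance cutoff is an illustrative assumption.
from statistics import pvariance

def looks_scripted(inter_request_seconds: list[float], min_variance: float = 0.5) -> bool:
    """Return True when request gaps are suspiciously uniform."""
    if len(inter_request_seconds) < 3:
        return False  # too few requests to judge
    return pvariance(inter_request_seconds) < min_variance

print(looks_scripted([1.0, 1.0, 1.1, 0.9, 1.0]))    # metronomic pacing -> True
print(looks_scripted([2.5, 14.0, 0.8, 31.0, 6.2]))  # natural pauses -> False
```

On its own this signal is easy to game, which is why it would sit alongside many others in a dynamic model rather than act as a standalone rule.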
This approach also enables pattern recognition at scale. A sudden spike in traffic might be distributed across hundreds of IPs, each staying within individual rate limits. But if all these sessions exhibit the same resource consumption patterns – say, zero cart abandonment and laser focus on a handful of high-demand items, like the PS5 Pro, or an exclusive trainer – it becomes clear that something is amiss.
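A rough sketch of that detection: group sessions by a coarse behavioural fingerprint and flag any fingerprint shared by an unusually large number of distinct IPs. The fingerprint fields and cluster threshold here are illustrative assumptions:

```python
# Sketch: spotting a coordinated swarm that stays under per-IP rate
# limits. Sessions sharing one behavioural fingerprint across many IPs
# are suspicious even though each IP individually looks fine. The
# fingerprint fields and min_ips threshold are illustrative assumptions.
from collections import defaultdict

def suspicious_clusters(sessions, min_ips: int = 50):
    """Return fingerprints shared by at least min_ips distinct IPs."""
    ips_by_fingerprint = defaultdict(set)
    for ip, target_item, cart_abandoned in sessions:
        # Coarse fingerprint: which item was targeted, and whether the
        # session ever abandoned a cart.
        ips_by_fingerprint[(target_item, cart_abandoned)].add(ip)
    return {fp: ips for fp, ips in ips_by_fingerprint.items() if len(ips) >= min_ips}

# 200 IPs, each within its own rate limit, all focused on one listing
# with zero cart abandonment, versus a little organic traffic:
swarm = [(f"10.0.{i // 256}.{i % 256}", "ps5-pro", False) for i in range(200)]
organic = [("192.168.1.5", "ps5-pro", True), ("192.168.1.9", "trainers", True)]
flagged = suspicious_clusters(swarm + organic)
print(list(flagged))  # only the swarm's fingerprint is flagged
```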
Rather than relying on predefined rules, intent models allow systems to understand the “why” behind behaviour in real time, even across highly distributed or obscured access. This shift makes defences more adaptive and resilient, especially as adversaries exploit loopholes that traditional systems overlook.
AI’s only worthy opponent is itself
As AI becomes a more powerful enabler of both productivity and exploitation, security technologies must rise to the challenge. Static defences and reactive policies will always lag behind adaptive threats. Only proactive, AI-powered systems capable of discerning intent in real time can offer sustainable protection.
The AI arms race in cybersecurity is already well underway. Intent-based detection provides a structured way to evaluate purpose, enabling cybersecurity teams to make nuanced, streamlined decisions without the need to definitively identify each visitor.
The way forward
Intent analysis isn’t a silver bullet, but it’s a valuable framework that enables organisations to better interpret user behaviour, contextualise anomalies, and adapt their defences in real time.
In the age of AI, cybersecurity needs to be just as intelligent, just as agile, and just as intent-driven as the technologies it seeks to defend against. To outpace fraud and automation abuse, businesses must adopt defences that learn, adapt, and, above all, understand.