Artificial intelligence is no longer a question of science fiction – it’s something organisations are using right now to address all kinds of challenges. But like any tool, AI can be used for positive or malicious ends, and cybercriminals are adopting it just as readily as the organisations they target, using it to dial up both the speed and scale of their attacks.
From automating convincing phishing scams through to using bots to overload systems, buy up inventory or scrape sensitive data, these AI-driven threats are becoming harder to detect and easier to execute, and businesses of all sizes should sit up and take note.
For organisations already dealing with complex cyber threats, this signals a dangerous shift: AI isn’t just helping businesses; it’s also making cybercrime far easier to carry out at scale.
The Rise of Machine-Driven Cybercrime
Thanks to AI, and specifically large language models (LLMs), attackers don’t even need to know how to code to carry out their schemes. Provided they know how to prompt an LLM correctly, they can use it to craft convincing scams, automate bot attacks and evolve their approach much faster than before. What might once have taken days or weeks to build can now happen in minutes.
Advanced tools like ChatGPT, ByteSpider Bot, ClaudeBot, and others are changing how attacks are carried out. According to the Thales Bad Bot Report, ByteSpider Bot alone is responsible for 54% of all AI-enabled attacks. Other significant contributors include AppleBot at 26%, ClaudeBot at 13%, and ChatGPT User Bot at 6%. As attackers become more adept at utilising AI, they can execute a variety of cyber threats, ranging from DDoS attacks to custom rules exploitation and API violations.
Even tools originally designed for harmless tasks, such as ByteSpider’s web crawling, can be hijacked and misused. Hackers can use them to scrape websites for sensitive information such as pricing, user data and proprietary content, which can then be used to train AI models and tailor future attacks. In some cases, these tools help attackers reverse-engineer defences and find weak points in a company’s IT stack.
Sneaky Bad Bots
What makes these AI-powered attacks especially dangerous is how sneaky they are. As many of the bots come from known entities – or can mimic the behaviours of legitimate human users – they often fly under the radar of traditional security systems.
The Bad Bot Report also revealed that automated bot traffic surpassed human-generated traffic for the first time in a decade, constituting 51% of all web traffic in 2024. Businesses relying on legacy threat intelligence may be unknowingly letting bad bots in because they look like trusted web crawlers.
Many bots serve useful purposes, like crawling the web to keep search engine results up to date, gathering data to train AI models, or supporting a wide array of testing use cases. However, hackers have started impersonating these harmless bots to slip past basic security defences. The real challenge is figuring out what a bot is really doing. Is it just collecting data for legitimate AI use, or is it quietly doing reconnaissance for a phishing attack? Even if it’s the former, such bots still exert a heavy load on bandwidth and IT resources that an enterprise may want to curb. Ultimately, without detailed analysis and current threat information, businesses often can’t be sure.
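One practical check for self-identified crawlers is to confirm that the traffic really originates from the operator it claims. The Python sketch below illustrates the reverse-then-forward DNS verification that major bot operators publish for this purpose; the hostname suffixes shown are illustrative assumptions rather than a definitive allow list.

```python
# Sketch: reverse-then-forward DNS check for a visitor claiming to be a
# known crawler. The hostname suffixes are illustrative assumptions; use
# each bot operator's published verification guidance in practice.
import socket

ALLOWED_SUFFIXES = {
    "googlebot": (".googlebot.com", ".google.com"),
    "applebot": (".applebot.apple.com",),
}

def verify_crawler(claimed_bot: str, client_ip: str) -> bool:
    suffixes = ALLOWED_SUFFIXES.get(claimed_bot.lower())
    if not suffixes:
        return False  # unknown bot name: treat as unverified
    try:
        # Reverse lookup: which hostname does this IP resolve to?
        hostname, _, _ = socket.gethostbyaddr(client_ip)
        if not hostname.endswith(suffixes):
            return False
        # Forward lookup: does that hostname resolve back to the same IP?
        return client_ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
```

A request whose User-Agent claims to be a well-known crawler but fails this kind of check is a strong candidate for rate limiting or blocking.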
The Challenges and What Businesses Can Do to Defend Themselves
To keep up, businesses need to move beyond outdated security tools that can’t tell legitimate bots from malicious imposters.
In today’s fast-changing threat landscape, it’s essential to adopt a proactive and adaptive approach – using advanced bot detection and behavioural analysis as part of the wider suite of cybersecurity tools they have in place to stay protected and resilient. Some actions security teams can take include:
- Find and Prioritise Hotspots: Organisations must locate the areas of their site that attract bot traffic. Product launch pages, login portals, checkout forms, and pages with gift cards or exclusive inventory are good places to start when evaluating high-risk hotspots.
- Introduce MFA and Strengthen Credential Security: Enforce phishing-resistant MFA on login and admin portals, and prevent credential stuffing and carding by integrating credential intelligence and rejecting known-breached credentials (a simple credential check is sketched after this list).
- Implement Intelligent Bot Mitigation and Adaptive Traffic Controls: Use AI-driven systems to spot stealthy, human-like bots in real time, and apply dynamic rate limiting, behaviour-aware CAPTCHAs and anomaly-based traffic analysis to contain suspicious activity without degrading the experience for legitimate users (a basic rate-limiting sketch follows this list).
- Maintain Regular Threat Surveillance and Proactive Testing: Establish a baseline for normal failed-login activity and watch for irregularities or sudden spikes (see the spike-detection sketch below). Use real-time bot monitoring solutions and regularly probe your own systems with simulated attacks to stay ahead of evolving threat tactics and adapt your defences accordingly.
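On the credential side, one widely used option is the Have I Been Pwned “Pwned Passwords” range API, which lets you check a password against known breach corpuses without sending the full hash off-site. The Python sketch below is illustrative only; a production integration would add caching, retries and a descriptive User-Agent header.

```python
# Sketch: reject passwords found in known breaches using the Have I Been
# Pwned "Pwned Passwords" range API (k-anonymity: only the first five
# characters of the SHA-1 hash ever leave your infrastructure).
import hashlib
import urllib.request

def password_is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; any match means the
    # password has appeared in at least one known breach.
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```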
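For rate limiting, the sketch below shows a simple token-bucket limiter keyed by client IP. The RATE and BURST values are placeholder assumptions; in practice, thresholds are tuned per endpoint and combined with the behavioural signals from a dedicated bot-management platform.

```python
# Sketch: token-bucket rate limiting keyed by client IP. RATE and BURST are
# placeholder values; real deployments tune limits per endpoint and pair
# them with behavioural bot detection.
import time
from collections import defaultdict
from dataclasses import dataclass, field

RATE = 5.0    # tokens refilled per second (sustained requests/second allowed)
BURST = 20.0  # bucket capacity (short bursts tolerated)

@dataclass
class Bucket:
    tokens: float = BURST
    last: float = field(default_factory=time.monotonic)

buckets: dict[str, Bucket] = defaultdict(Bucket)

def allow_request(client_ip: str) -> bool:
    bucket = buckets[client_ip]
    now = time.monotonic()
    bucket.tokens = min(BURST, bucket.tokens + (now - bucket.last) * RATE)
    bucket.last = now
    if bucket.tokens >= 1.0:
        bucket.tokens -= 1.0
        return True
    return False  # over the limit: serve a CAPTCHA, delay, or block
```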
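And for the failed-login baseline, a lightweight starting point is to compare the current period against a rolling history and alert when it sits several standard deviations above the mean. The hourly bucketing and 3-sigma threshold below are illustrative assumptions.

```python
# Sketch: flag an unusual spike in failed logins against a rolling baseline.
# The hourly bucketing and 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def failed_login_spike(history: list[int], current: int, sigma: float = 3.0) -> bool:
    """Return True if this hour's failures exceed mean + sigma * stdev of history."""
    if len(history) < 2:
        return False  # not enough history to form a baseline yet
    threshold = mean(history) + sigma * max(stdev(history), 1.0)
    return current > threshold

# Example: with a history of roughly 35-50 failures per hour, a sudden hour
# with 400 failures would be flagged for investigation.
```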
Final Word
The message is simple: AI isn’t just a possible future problem in cybersecurity; it’s already a real and active threat. As attackers become more sophisticated and the tools more accessible, businesses must assume that AI-enabled attacks are not a matter of if, but when. Cybersecurity is now a race between AI for offence and AI for defence. Only by embracing innovation, strengthening visibility and adopting a mindset of continuous vigilance can enterprises hope to stay ahead of this rising threat.