Cybersecurity has been on the leading edge of AI adoption. According to a 2025 survey conducted by Sophos, two-thirds of organisations already use cybersecurity solutions with AI capabilities, including GenAI. While many of these use cases are automation enhancements, some organisations are exploring more sophisticated applications such as predictive security, real-time threat analysis and advanced AI-driven security forecasting.
But as AI use grows more sophisticated, several issues become more pressing: transparency around its use, regulatory compliance, and the need for AI-aware incident response strategies.
From Detection to Forecasting
Perhaps AI’s biggest impact in cybersecurity so far is anomaly detection. The ability to monitor vast amounts of data, spot patterns or outliers, and generate alerts when discrepancies arise saves a huge amount of time.
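As a minimal sketch of the underlying idea (not any particular vendor's implementation), the example below flags spikes in a stream of hourly event counts using a rolling z-score; the window size, threshold and sample data are illustrative assumptions.

```python
import numpy as np

def flag_anomalies(counts, window=24, threshold=3.0):
    """Flag points that deviate strongly from the recent rolling baseline.

    counts: sequence of hourly event counts (e.g. failed logins).
    window: number of preceding points used as the baseline.
    threshold: z-score above which a point is reported as anomalous.
    """
    counts = np.asarray(counts, dtype=float)
    alerts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std == 0:
            continue  # flat baseline: skip rather than divide by zero
        z = (counts[i] - mean) / std
        if z > threshold:
            alerts.append((i, counts[i], round(z, 1)))
    return alerts

# Example: a quiet baseline with a sudden burst of failed logins at the end.
history = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5, 7, 6,
           5, 4, 6, 5, 6, 7, 5, 6, 4, 5, 6, 5, 48]
print(flag_anomalies(history))  # the final spike of 48 is reported
```

Production tools use far richer features and models, but the workflow is the same: learn a baseline, score deviations, and notify only when the deviation is significant.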
Automating cybersecurity responses is another area where AI is making a difference. Automation allows organisations to quickly neutralise threats by isolating compromised systems or blocking malicious activity without human intervention. A study by ReliaQuest showed that organisations that fully leverage AI and automation are able to respond to security incidents in seven minutes or less.
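A rough sketch of such an automated containment rule is shown below. The isolate_host and block_ip helpers are hypothetical stand-ins for whatever EDR or firewall API an organisation actually uses, and the severity threshold is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    source_ip: str
    severity: int  # 0-100 risk score from the detection layer

# Hypothetical placeholders for real EDR / firewall API calls.
def isolate_host(host: str) -> None:
    print(f"[containment] isolating {host} from the network")

def block_ip(ip: str) -> None:
    print(f"[containment] blocking traffic from {ip}")

def auto_respond(alert: Alert, isolation_threshold: int = 80) -> None:
    """Contain high-severity alerts automatically; queue the rest for review."""
    if alert.severity >= isolation_threshold:
        isolate_host(alert.host)
        block_ip(alert.source_ip)
    else:
        print(f"[triage] {alert.host}: severity {alert.severity}, awaiting analyst review")

auto_respond(Alert(host="ws-042", source_ip="203.0.113.7", severity=92))
```

The value is in the speed: a rule like this fires in seconds, while a human-only process might take hours to reach the same containment decision.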
Time matters when responding to an attack: early detection reduces the amount of data and the number of systems compromised. A better understanding of threats also helps with prioritisation, so high-risk incidents are addressed first based on severity. AI can also turn formerly manual tasks into automated ones, handling them more efficiently and with greater accuracy.
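A minimal sketch of that kind of severity-based triage is below; the risk-scoring weights and incident fields are illustrative assumptions rather than a standard scoring model.

```python
# Order open incidents so the highest-risk ones reach an analyst first.
incidents = [
    {"id": "INC-101", "severity": 7, "asset_criticality": 9, "confidence": 0.8},
    {"id": "INC-102", "severity": 9, "asset_criticality": 4, "confidence": 0.6},
    {"id": "INC-103", "severity": 5, "asset_criticality": 8, "confidence": 0.9},
]

def risk_score(incident):
    # Weight severity by how critical the affected asset is and how
    # confident the detection model is in the alert.
    return incident["severity"] * incident["asset_criticality"] * incident["confidence"]

for inc in sorted(incidents, key=risk_score, reverse=True):
    print(inc["id"], round(risk_score(inc), 1))
```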
We also see new techniques in the market that extend beyond threat detection into threat forecasting, where AI is used to anticipate future risks based on historical data and attack patterns. In some instances, with a historical record of known network or data patterns, AI can anticipate future attack trends and build new detection and prevention rules, giving businesses time to strengthen their defences against the most likely next threats.
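As an oversimplified sketch of what trend-based forecasting can look like (a real system would draw on far richer signals such as threat intelligence and campaign indicators), the example below extrapolates weekly phishing volumes with a simple linear fit; the data and one-step projection are illustrative.

```python
import numpy as np

def forecast_next_period(weekly_counts):
    """Project the next period's attack volume from a simple linear trend."""
    y = np.asarray(weekly_counts, dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)       # least-squares linear fit
    return max(0.0, slope * len(y) + intercept)  # extrapolate one step ahead

# Example: weekly phishing detections trending upward over two months.
phishing_per_week = [120, 135, 128, 150, 162, 158, 175, 190]
print(round(forecast_next_period(phishing_per_week)))  # roughly 195
```

The forecast itself is less important than what it enables: a projected rise in a particular attack category is a prompt to tighten the corresponding detection and prevention rules before the trend materialises.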
But while AI can significantly speed up reaction times and improve decision making, it still relies on the quality of the data it is trained on. Flaws and biases in that data can lead to wrong decisions. AI can be an excellent guide and make some choices on its own, but human input will always be required for complex situations and vital decisions.
Integrating AI into incident response plans
AI can improve the speed and accuracy of incident response, triaging incidents and prioritising them in order of risk. But it is not enough to simply add AI; it needs to be widely integrated and understood as part of incident preparation.
For example, tabletop exercises are used by security teams to “wargame” ransomware attacks and plan responses to possible issues before they arise. Teams need to include AI as a source of attacks in these planning sessions. Examples may include AI-driven phishing or social engineering attacks, in which AI mimics electronic communications and impersonates employees or vendors. Newly adopted AI products also expose new vulnerabilities that need response plans.
Organisations need to treat AI-related security incidents as seriously as any other breach, ensuring they are prepared and have clear measures in place. It is important that companies practise AI scenarios so that a game plan is in place for responding while an attack is unfolding and managing the various stakeholders involved. The saying ‘fail to prepare, prepare to fail’ is just as relevant as we look to integrate AI and respond when threat actors do the same.
Concerns in AI cybersecurity
A common concern with AI, especially GenAI, is that the output will only be as accurate as the input: AI is less effective when trained on unrepresentative or flawed data. If AI is to make decisions in a cybersecurity context, its models must be trained on solid, representative data to build trust in the output and to ensure that security measures are appropriate and effective.
Governments and regulatory bodies are introducing new rules to govern the use of AI, particularly when handling sensitive data. In August 2024, the European Union’s AI Act came into force, imposing strict regulations on high-risk AI systems to ensure they respect fundamental rights, safety and ethical principles. While the legislation is now in effect, its requirements will be phased in, with full applicability expected by 2026. Organisations must make sure that their AI models, and whatever data and systems they are part of, comply with applicable privacy regulations such as GDPR. Businesses will need to prioritise data privacy and security to stay compliant with evolving regulatory frameworks.
This isn’t just a regulatory issue. The transparent use of AI matters to customers, who expect any business using AI to do so responsibly. Published guidelines for the ethical use of AI can help reassure customers and provide transparency when questions arise.
The future of threat detection
Organisations have quickly adopted AI for detection, using it to identify anomalies and reduce the administrative load on security professionals. Looking ahead, predictive security work, including the forecasting of security threats, is the next step cybersecurity professionals can take. But as AI becomes more integrated into cybersecurity strategies, it is important to address the regulatory challenges. As AI evolves, businesses will need to balance its potential with responsibility and compliance, especially as partners and customers demand more transparency.