
AI and Cybersecurity: The New Frontline in Organisational Resilience

By Anne-Marie Tierney-Le Roux, Senior Vice President of Technology, IDA Ireland

Artificial intelligence has become both a target and a tool in cybersecurity. Attackers now use intelligent systems to scale and disguise intrusions, while defenders rely on the same technology for faster detection and response. Anne-Marie Tierney-Le Roux, Senior Vice President of Technology at IDA Ireland, explores the balance between innovation and control, and how it will define digital resilience in the years to come.  

The shifting nature of cyber risk 

AI is redefining the threat landscape at scale. The global AI-in-cybersecurity market is expected to exceed £22 billion in 2025 and reach nearly £70 billion by 2030. As AI systems are integrated into every aspect of business activity, new vulnerabilities emerge through model manipulation, data corruption, and the abuse of open-source code.

A recent report by Arctic Wolf found that 29% of security professionals worldwide now rank AI as their top concern, displacing ransomware from the number one spot for the first time. Irish companies share that view: the Hiscox Cyber Readiness Report 2025 found that six in ten consider the abuse of AI to be a high risk over the next five years. As Ireland continues to be a magnet for global tech companies, the risk from AI-driven attacks is expected to rise.

The dual role of intelligent systems 

The symmetry between attack and defence in AI-driven cybersecurity is striking. The same capabilities that make AI valuable to businesses are now being leveraged by attackers. Between 2023 and 2025, researchers recorded a 223% increase in the trade of deepfake-based tools on dark-web forums. Machine-learning algorithms can now generate convincing voice clones and phishing material and disguise malware, rendering traditional pattern-based detection ineffective.

However, AI also enables faster and more intelligent defence. In Security Operations Centres, 57% of analysts use AI to clear alerts more quickly, 56% report improved threat prioritisation, and more than half use it to take pre-emptive security action. Predictive analytics and anomaly detection are transforming how organisations assess risk and contain breaches.
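To illustrate how anomaly detection can support that kind of triage, the short sketch below scores login events with an isolation forest so that the most unusual activity surfaces first. It is a minimal illustration only: the use of scikit-learn, the event features (hour of day, data transferred, failed attempts) and the sample values are assumptions made for the example, not details drawn from the reports cited above.

```python
# Minimal sketch: anomaly-based alert triage with an isolation forest.
# Library choice (scikit-learn), features, and sample data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a login event: [hour_of_day, megabytes_transferred, failed_attempts]
baseline = np.array([
    [9, 12.0, 0], [10, 8.5, 1], [14, 20.0, 0],
    [11, 15.2, 0], [16, 9.8, 0], [13, 11.1, 1],
])

# Train on routine activity so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# New events: a routine daytime login and a 3 a.m. bulk transfer with repeated failures.
new_events = np.array([[10, 10.0, 0], [3, 950.0, 7]])
scores = model.decision_function(new_events)  # lower scores are more anomalous

for event, score in zip(new_events, scores):
    label = "escalate to analyst" if score < 0 else "routine"
    print(f"event={event.tolist()} score={score:.3f} -> {label}")
```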

Human oversight remains the key protection. Explainable AI makes automated decisions auditable and trustworthy. Trinity College Dublin’s AI Accountability Lab and CeADAR, Ireland’s national centre for AI, are leading research into making machine-learning models understandable and resilient to manipulation. Both reflect Ireland’s growing role in building ethical, secure AI systems that can be applied across industries worldwide.

Regulating the future by building trust and accountability 

The EU Artificial Intelligence Act sets out strict requirements for transparency, risk categorisation, and cybersecurity. Organisations are now required to document how their AI systems have been trained and tested to demonstrate appropriate levels of reliability and human control.

Ireland is one of the first EU member states fully prepared for enforcement, having designated 15 authorities responsible for checking compliance. This forward-thinking approach makes Ireland a regulatory benchmark within the single market, particularly for companies with European headquarters in Dublin. Boards are already beginning to recognise that AI risk management is not a technical issue but a matter of corporate governance. In the near term, AI oversight embedded into audit and compliance structures will become as standard as data-protection reporting.

The National Cyber Security Bill, which will transpose the EU’s NIS2 Directive, promises to bring AI and cybersecurity governance into closer alignment, creating a more cohesive framework for regulators, industry, and academia alike. National initiatives such as the National Cyber Security Centre (NCSC) and Cyber Ireland are central to this effort. Cyber Ireland, which unites over 500 firms and research bodies, will see its work strengthened by this legislative clarity, empowering companies to innovate, scale, and meet compliance standards. Together, these initiatives bolster innovation and digital resilience, helping the country remain at the forefront of the global cybersecurity and AI landscape.

From risk to readiness 

To manage AI-driven threats, organisations need a systematic approach. First, they should embed AI threat modelling into software security, mapping out how machine-learning systems could be deceived or attacked. They should then combine zero-trust architecture with explainable AI, verifying every access request and ensuring that every AI-driven decision is understandable and auditable.
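As a concrete illustration of that combination, the sketch below shows one way a zero-trust style check might record the factors behind each automated access decision so it can be reviewed later. The policy rules, field names, and risk threshold are hypothetical examples for illustration, not part of any specific framework referenced in this article.

```python
# Minimal sketch: a zero-trust style access check that records, for every
# automated decision, the factors that drove it so the outcome can be audited.
# The rules, fields, and risk threshold below are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    user: str
    device_trusted: bool
    mfa_passed: bool
    geo_risk: float          # 0.0 (expected location) .. 1.0 (high risk)
    resource: str

@dataclass
class Decision:
    allowed: bool
    reasons: list = field(default_factory=list)  # human-readable audit trail
    timestamp: str = ""

def evaluate(request: AccessRequest) -> Decision:
    """Verify every request explicitly and explain the outcome."""
    reasons = []
    allowed = True
    if not request.device_trusted:
        allowed = False
        reasons.append("device not enrolled in management")
    if not request.mfa_passed:
        allowed = False
        reasons.append("multi-factor authentication not completed")
    if request.geo_risk > 0.7:
        allowed = False
        reasons.append(f"geolocation risk {request.geo_risk:.2f} above 0.70 threshold")
    if allowed:
        reasons.append("all zero-trust checks passed")
    return Decision(allowed, reasons, datetime.now(timezone.utc).isoformat())

request = AccessRequest("a.byrne", device_trusted=True, mfa_passed=False,
                        geo_risk=0.85, resource="payroll-db")
decision = evaluate(request)
print("ALLOW" if decision.allowed else "DENY", decision.reasons)
```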

Finally, skills and collaboration should be treated as strategic priorities. Ongoing training in AI security, supported by partnerships between industry and universities, will be crucial for preparedness. Ireland already offers examples of this in practice. The Tyndall National Institute is developing AI-security and photonics technologies that improve threat detection and safeguard data privacy. Such research enhances Ireland’s international reputation as a reliable centre for secure digital innovation.

Preparing for the next cyber frontier 

The coming years are expected to bring an accelerating cyber arms race. Offensive AI models will learn faster and adapt autonomously, exploiting vulnerabilities before human defenders can react. In response, new defensive approaches are emerging, including agentic AI systems that operate independently to identify and neutralise threats in real time. At the same time, organisations are beginning to prepare for quantum-safe cryptography in anticipation of the next leap in computational risk.

Long-term resilience will depend on how businesses strike a balance between accountability and automation. Artificial intelligence will not replace human defenders, but it will demand smarter interaction between people and machines. Ireland’s growing pool of cybersecurity and AI expertise positions it to take a leading European role in striking that balance. Businesses that invest in governance, skills, and transparent AI systems will set the benchmark for the next decade of innovation.
