Future of AI in Cyber Security

Emerging AI Trends and Their Impact on Cyber Security

By Joel Latto, Threat Advisor, F-Secure

Artificial intelligence (AI) is likely to be the most important technological development of our lifetime. Its applications seem endless, and it is being harnessed by many industries, including cyber security, where it is becoming a critical frontline defence against increasingly sophisticated threats.

However, AI is a double-edged sword. While it enhances our ability to detect and respond to attacks, it also arms criminals with new capabilities. As with any new digital technology, the development of AI and related technology will create new cyber security and privacy concerns and complicate existing ones.

It’s crucial for organisations to implement AI tools trained on diverse, high-quality datasets: reducing biases and blind spots improves an AI’s ability to detect anomalies, including AI-generated malware. However, training data alone doesn’t prevent adversaries from exploiting Large Language Models (LLMs). Robust model architecture, regular updates, and security measures such as adversarial testing are critical to countering abuse.

Attackers who are skilled at social engineering, the art of manipulating people for malicious ends, will likely find ways to exploit any LLM. Take OpenAI’s ChatGPT as an example: it is the most popular and widely recognised AI tool among consumers. As a closed-source LLM, it allows users to interact via a web interface but does not permit downloading the model to a PC, and its architecture and training data remain fully controlled by OpenAI.

However, this hasn’t stopped cyber criminals from using ChatGPT to enhance their scams. While the automated use of LLMs for scams remains somewhat niche, we’ve observed instances where OpenAI’s Application Programming Interface (API) was used to generate misleading or fraudulent messages on X. We know OpenAI’s technology was behind these scam messages because the automated scripts the scammers used accidentally posted the company’s ‘error’ message instead of the intended scam content, which promoted a fake crypto investment.

We’re seeing attackers leverage AI to create convincing deepfake scams, automate phishing campaigns, and develop adaptive malware that can outsmart traditional security systems. Money-saving expert Martin Lewis was a victim of this after a fake advert featuring a computer-generated likeness of him circulated online.

To keep pace, cyber security professionals are turning to AI-powered detection tools that identify threats by analysing behavioural anomalies, that is, deviations from normal system or network behaviour that might indicate an attack, and by detecting subtle signs of deception.
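To make the idea of behavioural anomaly detection concrete, here is a minimal sketch (not any vendor’s actual detection logic) that flags data points deviating sharply from a baseline using a z-score test. The traffic figures and threshold are invented for illustration:

```python
import statistics

def find_anomalies(samples, threshold=2.5):
    """Flag values whose z-score (distance from the mean, in standard
    deviations) exceeds the threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Hypothetical hourly outbound connection counts; the spike could
# indicate data exfiltration or beaconing malware.
traffic = [102, 98, 110, 95, 105, 99, 101, 103, 970, 100]
print(find_anomalies(traffic))  # → [970]
```

Real systems model many signals at once and learn the baseline continuously, but the principle is the same: define “normal”, then surface what falls outside it.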

A major area where AI shows promise is in detecting deepfake content and analysing non-AI-generated content, such as text messages. At F-Secure, our smishing protection blocks between 5,000 and 10,000 malicious text messages by analysing them with AI.
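As a toy illustration of how message analysis can work (far simpler than a production smishing filter, and not F-Secure’s actual model), the sketch below scores a text message against phrases common in scam messages; the phrase list and threshold are made up:

```python
# Phrases that frequently appear in smishing messages (illustrative only).
SUSPICIOUS_PHRASES = [
    "verify your account", "click this link", "urgent", "you have won",
    "suspended", "confirm your details", "crypto", "limited time",
]

def smishing_score(message: str) -> float:
    """Return the fraction of known scam phrases found in the message."""
    text = message.lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    return hits / len(SUSPICIOUS_PHRASES)

def is_suspicious(message: str, threshold: float = 0.2) -> bool:
    return smishing_score(message) >= threshold

print(is_suspicious(
    "URGENT: your account is suspended, click this link to verify your account"
))  # → True
```

Production filters use trained language models rather than fixed keyword lists, which lets them catch novel phrasings, but the input and output are the same: a message in, a risk verdict out.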

According to a report by Europol, advances in generative AI are making it harder for traditional detection tools to differentiate between real and synthetic media. AI-driven systems that continuously learn and adapt can help uncover these forgeries by identifying flaws invisible to the human eye, such as unnatural blinking patterns or inconsistencies in lip movements in deepfakes.

As cyber criminals evolve their tactics, AI’s ability to analyse vast amounts of data in real time will be essential in preventing widespread damage. However, this is easier said than done: it is an ongoing challenge and a tremendous undertaking.

One of the significant challenges with integrating AI into cyber security is the ‘black box’ nature of many models. Often, even the developers cannot fully explain why an AI system has made a particular decision. This opacity can lead to mistrust, particularly when the stakes involve data breaches or business continuity.

Enter Explainable AI (XAI). XAI is designed to make AI decision-making more transparent, providing clear reasoning that human analysts can understand and trust. In cyber security, this transparency is vital not only for compliance with regulations such as GDPR but also for enabling quicker and more informed decision-making during active threats.

By integrating XAI into cyber security platforms, teams can better validate alerts, refine response strategies, and foster greater collaboration between human analysts and automated systems.

XAI also supports better threat intelligence sharing across organisations. When AI-driven findings are clear and verifiable, information can be shared confidently with industry partners, bolstering collective defence efforts against emerging threats.

While AI is currently reshaping cyber security, a new frontier looms: quantum computing. Quantum computing holds the potential to revolutionise AI by providing significantly faster processing speeds and the ability to solve complex problems currently beyond the reach of classical computers.

As highlighted at the Quantum World Congress 2024, the gap between quantum potential and quantum security is wide and most organisations are woefully unprepared for the quantum-powered future, particularly in terms of cyber security.

Quantum computers have the potential to break widely used cryptographic protocols, rendering much of today’s internet security obsolete. This raises urgent questions about how AI will function in a post-quantum world.

Quantum-safe AI algorithms are already being explored to address this looming challenge. These algorithms are designed to be resistant to quantum attacks, ensuring that AI-driven cyber security measures remain robust even when quantum computers become mainstream. The U.S. National Institute of Standards and Technology (NIST) is leading initiatives to standardise post-quantum cryptography, but integrating these standards into AI frameworks is an additional layer of complexity that cyber security teams must prepare for now.

According to the World Economic Forum, organisations that start adapting their cyber security strategies for quantum resilience today will have a significant advantage. Future-proofing AI systems means building flexibility into their architecture so they can incorporate quantum-resistant algorithms as they become available. Failing to address this risk could mean that otherwise cutting-edge AI defences become obsolete overnight.
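One common way to build that flexibility is the “crypto-agility” pattern: isolate the cryptographic algorithm behind a single interface so a quantum-resistant scheme can later be swapped in without touching application code. The sketch below is illustrative only; HMAC-SHA256 stands in for a classical signing scheme, and the registry and names are hypothetical:

```python
from abc import ABC, abstractmethod
import hashlib
import hmac
import os

class Signer(ABC):
    """One interface for all signing schemes, classical or post-quantum."""
    name: str

    @abstractmethod
    def sign(self, data: bytes) -> bytes: ...

    @abstractmethod
    def verify(self, data: bytes, sig: bytes) -> bool: ...

class HmacSigner(Signer):
    """Stand-in classical scheme, used until a quantum-safe signer lands."""
    name = "hmac-sha256"

    def __init__(self, key: bytes):
        self._key = key

    def sign(self, data: bytes) -> bytes:
        return hmac.new(self._key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(data), sig)

REGISTRY: dict[str, Signer] = {}

def register(signer: Signer) -> None:
    REGISTRY[signer.name] = signer

# Today: register the classical scheme. Tomorrow: register, say, an
# ML-DSA (NIST FIPS 204) signer under a new name and flip one config
# value, with no other code changes.
register(HmacSigner(os.urandom(32)))
active = REGISTRY["hmac-sha256"]
sig = active.sign(b"alert: anomaly detected")
print(active.verify(b"alert: anomaly detected", sig))  # → True
```

The point of the pattern is that the choice of algorithm becomes configuration rather than code, which is exactly the flexibility needed to adopt quantum-resistant standards as they mature.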

Furthermore, quantum computing may eventually empower AI models themselves, increasing their processing power exponentially. This prospect offers opportunities for more sophisticated threat detection but also necessitates a cautious approach to ensure that such powerful tools are not weaponised by bad actors.

To conclude, the rapid evolution of AI presents both immense opportunities and serious challenges for cyber security. Deploying AI tools to combat sophisticated threats like deepfakes and AI-driven malware is no longer optional; it’s essential. At the same time, transparency through XAI will be key to building trust in automated systems and ensuring effective threat response.

Preparing for the quantum era adds another critical dimension to AI strategy. Quantum-safe AI algorithms must become a priority to protect against tomorrow’s existential cyber security risks.

As we move forward, we must stay ahead of the curve by embracing these emerging trends, building resilient systems that not only defend against today’s threats but also anticipate the risks of the future.
