Future of AIAI

AI in Cybersecurity: A Double-Edged Sword

By Richard Lindsay, Principal Advisory Consultant at Orange Cyberdefense

Artificial intelligence (AI) is transforming the landscape of cybersecurity. Businesses are adopting this technology to identify threats more swiftly, analyse vast datasets, and automate tasks that previously took hours. However, as with any new technology, malicious actors are also discovering ways to use AI for their gain. From ā€œdark LLMsā€ to deepfakes and AI-driven phishing, AI is increasingly becoming a tool in the arsenals of cybercriminals.

While organisations seek ways to integrate AI into their tech stacks and workflows, they must also safeguard these systems and mitigate the threats emerging from the more sinister perceived aspects of AI for businesses.

Leveraging AI as Part of Defences

For cybersecurity professionals, AI can enhance defences by streamlining threat detection, incident response, and risk management. One domain where it is already demonstrating value is in identifying ā€œbeaconingā€ behaviour; the communication between compromised systems and external command-and-control servers, often employed by malware to receive instructions or exfiltrate data. While traditional detection methods typically struggle to recognise this activity, AI can assist in its identification through pattern recognition. Security teams can then be alerted to detected anomalies via real-time notifications, enabling them to take prompt action.

AI can also assist with more routine elements of system security, such as documenting security intelligence and event information, as well as analysing potentially harmful emails and files. In this regard, the technology alleviates some of the more manual tasks, thus allowing professionals to focus on higher-priority responsibilities.

However, for all the benefits AI brings to security, when threat actors find their uses for it, it can also cause more harm than good for legal businesses.

AI in the Wrong Hands

Generative AI (GenAI) has already established itself as a powerful tool for cybercriminals. For instance, some criminals are using legitimate chatbots to disrupt businesses. OpenAI regularly reports on perceived abuses of its ChatGPT platform, ranging from debugging and developing malware to disseminating misinformation, evading detection, and executing spear-phishing attacks. Hackers can also employ prompt injections to elicit unexpected behaviour in an LLM, circumventing its alignment policy and potentially generating unwelcome or compromising responses. Methods of attack include context switching, as well as concealing harmful code and prompts within input data, such as images or audio, all of which can lead to unauthorised content generation or service disruption.

Cybercriminals are also developing their own ā€œdark LLMsā€, such as FraudGPT, WormGPT, and DarkGemini. These tools are used to automate and enhance phishing campaigns, assist low-skilled developers in creating malware, and generate scam-related content.

These are just a few examples of how cybercriminals can leverage AI for their gain, and this trend is on the rise. Given the threats posed by AI, it is crucial that organisations do not hastily implement an AI-driven security strategy without first establishing a robust security posture to build upon.

Securing Systems, From Technology to People

To adequately secure AI systems and defend against the misuse of AI, organisations must deploy essential security technologies, along with people and processes, throughout the enterprise.

ā€˜Secure by Design’ is a phrase we are encountering more frequently, and the UK government recognises the cybersecurity practices of delivery teams and security professionals. Ensuring everything is secure by design entails integrating security measures from the outset of AI system development. This involves threat modelling, risk assessment, and designing systems to be resilient against attacks. Such an approach sets the stage for the secure implementation and deployment of AI systems. Following this, ongoing maintenance of AI systems must also be prioritised, which should be conducted through regular security audits, effective patch management, and the development and implementation of a robust incident response plan for attacks.

An often-overlooked aspect of securing systems is the human factor. Businesses should establish training and coaching programmes to assist employees in critically evaluating the opportunities and risks associated with AI implementation, encouraging them to engage with AI judiciously and select tools that align with their enterprise values and security expectations. In parallel, organisations should implement training, technologies, and processes that reduce the risk of employees unintentionally or deliberately disclosing sensitive data through GenAI or other LLM applications.

As an increasing number of organisations begin to utilise AI, both for security and for operational productivity, stringent processes must be executed to assess new AI capabilities and understand the true impact of adoption.

Consider the Past and Assess the Future

This is not the first instance in which technology has outpaced our ability to keep up with it. We have witnessed this with web applications and APIs. Each time, the rush to innovate has left security scrambling to catch up. However, AI may represent the most significant shift we have encountered to date, and it is already positioned on the frontline between the open internet and private systems.

While AI can aid businesses in their security efforts, particularly in the face of evolving threats, it cannot entirely eliminate circumvention, regardless of how overwhelming the AI discourse may become.

Author

Related Articles

Back to top button