
AI in Cybersecurity: Does Opportunity Outweigh Risk? 

Highly specialized AI is already starting to be used in cybersecurity solutions around the globe. But we are only at the beginning. AI has the potential to drastically change the way networks are kept secure. However, like any new technology, AI comes with its own set of risks and these risks need to be taken into account when evaluating the benefits AI will bring. 

Historically, cybersecurity solutions have depended primarily on signature-based detection techniques to ward off attackers. These systems compare incoming network traffic against known threat signatures in their database and create an alert when suspicious behavior on the network is detected. These alerts are often manually logged and reviewed by a security analyst, who may have to deal with hundreds of alerts a day. Not only is this a laborious process due to the large number of false positives, it also means that novel cyber threats that don’t match previous patterns can slip through the cracks undetected. And for the majority of organizations, this is still standard practice.
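For illustration, here is a minimal sketch of the signature-matching idea described above. The signature patterns, payloads, and function names are hypothetical placeholders, not drawn from any real product or threat feed.

```python
# Hypothetical sketch of signature-based detection: payloads are compared
# against a fixed list of known threat signatures, and any match raises an
# alert for an analyst to review. Signatures and packets are placeholders.
KNOWN_SIGNATURES = {
    "suspicious-command": b"cmd.exe /c whoami",
    "test-pattern":       b"MALICIOUS-TEST-STRING",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of any known signatures found in the payload."""
    return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in payload]

alerts = []
for packet in (b"GET /index.html HTTP/1.1", b"... cmd.exe /c whoami ..."):
    hits = match_signatures(packet)
    if hits:
        alerts.append({"payload": packet, "signatures": hits})

print(f"{len(alerts)} alert(s) queued for manual review")
```

Even in this toy version the limitation is visible: any payload that does not contain a known pattern passes silently, which is exactly how novel threats evade signature-based tools.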

Benefits of using AI  

Increasingly, however, organizations are turning to AI to detect cybersecurity threats. AI-based security models can analyze huge amounts of data in a short period, spotting patterns and flagging any activity that deviates from the norm (a minimal anomaly-detection sketch follows the list below). AI can also be used to scan the entire network for vulnerabilities. So, what are the main benefits of using AI in these ways?

Reduced Workload – AI-based cybersecurity software saves experts time because it greatly reduces the number of alerts they must review. The team is not constantly overwhelmed by false positives and, as a result, can stay vigilant. It also reduces the number of manual tasks for IT teams to complete, freeing them to focus on more complex, strategic work.

Lower Costs – Because the security operations center (SOC) becomes more efficient, an AI cybersecurity solution can also reduce operating costs. And because the AI can respond to alerts much more quickly, other teams within the organization no longer need to wait for access to different systems, which increases efficiency across the organization.

Better Protection – The signature-based approach has proven inadequate against zero-day threats, which by definition do not match any known threat in the database. AI, by contrast, is more likely to pick up new cyberattacks through pattern recognition. Detection and response also happen as close to real time as possible, leaving hackers less time to perform malicious activity if they do succeed in accessing the system.

Greater Adaptability and Scalability – AI-based platforms allow the cybersecurity team to respond quickly to an increase in potential threats or to new behavior on a network without the need for additional staff.
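To make the anomaly-detection idea mentioned earlier concrete, here is a minimal sketch using scikit-learn's IsolationForest. The flow features (bytes sent, packets per second, distinct ports) and the synthetic data are assumptions chosen for illustration, not a recommended feature set or a description of any vendor's model.

```python
# Minimal anomaly-detection sketch: train on "normal" traffic features, then
# flag new flows that deviate from the learned baseline. The feature choice
# and data are synthetic assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows = network flows; columns = [bytes_sent, packets_per_sec, distinct_ports]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[50_000, 40, 3], scale=[5_000, 5, 1], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)  # learn what "normal" looks like

new_flows = np.array([
    [52_000, 42, 3],      # close to the baseline
    [900_000, 800, 60],   # deviates sharply: possible exfiltration or scan
])
for flow, label in zip(new_flows, model.predict(new_flows)):  # 1 = normal, -1 = anomaly
    print(flow, "ALERT" if label == -1 else "ok")
```

The point of the sketch is simply that the model raises an alert because the second flow deviates from the learned baseline, not because it matches a previously catalogued signature.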

Risks of using AI  

There are several important benefits to using AI as part of cybersecurity. Yet there are also substantial risks that come with adding an AI system to your existing portfolio of cybersecurity software. 

Lack of Accurate Data – AI models inherently rely on the quantity and quality of the data they use to ‘learn’ patterns of activity. The system will only perform as accurately as expected if there is sufficient high-quality, unbiased training data. A model trained on inaccurate or incomplete data may produce false positives or a false sense of security, and threats may go undetected, resulting in significant losses. This problem can be avoided if organizations thoroughly vet the data given to their AI models.

Privacy Concerns – AI systems process real-world data to learn normal traffic patterns and to detect abnormal ones. This data should be protected with sufficient encryption or masking of sensitive fields to prevent its misuse. Furthermore, if bad actors gain access to this data or to the model itself, they may perform a model inversion attack, using the model’s output to gain insight into the security solution and the data it was trained on. Teams should document, manage, and protect the data used by AI-based tools, and delete it when it is no longer required.
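As one illustration of the masking idea, the sketch below replaces sensitive fields with salted hashes before a record is handed to an AI-based tool. The field names and the salted-hash approach are assumptions made for the example, not a prescribed standard or a description of any particular product.

```python
# Minimal sketch of masking sensitive fields before records are used to
# train or evaluate an AI-based security tool. Field names and the
# salted-hash approach are illustrative assumptions.
import hashlib
import os

SALT = os.urandom(16)  # keep the salt secret; discard it per retention policy

def mask(value: str) -> str:
    """Replace a sensitive value with a salted, truncated hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

record = {"src_ip": "203.0.113.7", "user": "jdoe", "bytes_sent": 48213}
masked = {
    "src_ip": mask(record["src_ip"]),
    "user": mask(record["user"]),
    "bytes_sent": record["bytes_sent"],  # non-sensitive fields pass through
}
print(masked)
```

Salting makes simple dictionary reversal of the hashes harder, and destroying the salt once the data is no longer required is one way to honor the deletion step described above.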

Resource Intensive – AI can be resource intensive because it consumes large amounts of energy and water to power and cool the systems performing the data processing, which gives it a larger carbon footprint. Consumption of computing resources can be reduced by adjusting how often the AI models are retrained, but overall, organizations should expect AI-based systems to have higher energy consumption.

One Final Consideration 

Beyond the pros and cons of defensive AI, one other important factor plays into the decision of whether to adopt AI cybersecurity software: many cybercriminals are adopting AI to create more sophisticated attacks that can avoid detection. The technology is likely to assist with malware and exploit development, vulnerability research, and lateral movement by making existing techniques more efficient. It has already been demonstrated that GenAI can be used to develop an exploit from the content of a public vulnerability notice.

This will intensify cyber resilience challenges and increase the number of threats faced by organizations. One way for organizations to defend themselves against these attacks is to fight fire with fire and adopt AI to deal with new techniques and the increased number of attacks.  

At the end of the day, AI cybersecurity solutions hold enormous potential but also bring risks. They can reduce the workload for IT teams and enhance network security because they can discover novel cyberattacks. As the number of AI-powered cyberattacks increases, organizations should adopt AI themselves to make their defenses equally sophisticated.

However, this does not mean that all AI models will be appropriate for cybersecurity purposes. When a new technology’s pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase of an AI system, but throughout its lifecycle. IT teams must remain vigilant and ensure that the AI itself has not been compromised and that the analysis it is performing remains accurate.  

Author

  • Michael See

    Michael See is CTO of the Network Business Division at Alcatel-Lucent Enterprise. In this role, he is responsible for the technology underpinning ALE networking solutions and establishing strategic technology partnerships. Prior to his position as Network Business Division CTO, Michael held multiple technology and architecture leadership roles in the areas of networking and communications solutions at Alcatel-Lucent (now Nokia), starting in 1999 when he joined the company, named Alcatel at the time, through the acquisition of Xylan. Michael started his career at IBM where he held system design and architecture roles in IBM’s Networking Group.
