AI+ZT: The alphabet of cybersecurity for Gen AI threats

By Anant Adya, Executive Vice President and Head of Americas Delivery, Infosys

Everything about Generative AI is fantastic, fast and furious: 1 million users in five days, 100 million in 60; projected annual economic value potential between $2.6 trillion and $4.4 trillion; $1.3 trillion global market by 2032. 

No wonder CEOs are gung-ho. In a late-2024 global survey of 2,300 gen AI decision-makers and influencers – 70 percent of them at the C-level – almost all (98 percent) said their organizations would invest in the technology over the next two years, and 61 percent said those investments would be significant. Among C-suite respondents, 64 percent believe large gen AI investments will significantly transform their industries as early as 2025, and 97 percent of CEOs expect the technology to create a material impact.

The flip side is that optimism about gen AI is leading organizations to gloss over its risks. While nine out of ten C-suite respondents expressed serious concern about gen AI’s security risks, the majority said the benefits outweighed them. A mere 25 percent agreed that they understood and managed these risks adequately.

Lack of security readiness is not just cause for concern, it’s a recipe for unmitigated disaster. Generative AI’s risks are as spectacular as its rewards. According to a research and insights provider, fraud enabled by gen AI could cost the United States a whopping $40 billion in 2027, sharply up from the $12.3 billion lost in 2023. Criminals are exploiting the same advances in gen AI as enterprises to mount sophisticated forms of attack, from deepfakes to ransomware to jailbreaking to adaptive malware, and are even offering AI-powered phishing-as-a-service. Worse, they’re using the technology to prey on the minds of unsuspecting victims – a case in point being the attack on U.K. engineering firm Arup, where fraudsters used an AI-generated deepfake video and psychological manipulation to get an employee to transfer $25 million to their accounts.

The good news is that artificial intelligence also allows the world to fight back. Zero Trust (ZT) is a cybersecurity principle that shifts the focus from safeguarding large network perimeters to defending small user groups and individual resources, and assumes that every request for access to enterprise resources is malicious until verified. Applied to zero trust, AI goes beyond perimeter security to protect organizations against the most sophisticated attacks, including those riding on generative AI. With an AI-driven zero trust approach, enterprises can continuously validate users and devices in real time to reduce the attack surface, mitigate the risk of attack, and elevate their security posture.

AI plus zero trust is equal to the gen AI challenge

Specifically, this is how AI enhances zero trust security: 

With AI-automated ZT operations, organizations can accurately detect and disable potential threats, such as malicious applications or unauthorized users, before they cause real damage. AI continuously verifies access requests and responds immediately to a detected threat by isolating the affected system or device, blocking access rights, triggering an alert, and recommending remediation.
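The verify-then-contain loop described above can be sketched as a simple policy function. This is a minimal illustration, not a real product API: the `AccessRequest` fields, the threat score (assumed to come from an upstream AI model), and the action labels are all assumptions for the sake of example.

```python
# Illustrative sketch of an automated zero-trust response loop.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device: str
    resource: str
    threat_score: float  # 0.0 (benign) to 1.0 (malicious), from an AI model

def respond(request: AccessRequest, threshold: float = 0.8) -> list[str]:
    """Verify a request and return the containment actions taken."""
    actions = []
    if request.threat_score >= threshold:
        actions.append(f"isolate:{request.device}")   # quarantine the device
        actions.append(f"revoke:{request.user}")      # block access rights
        actions.append("alert:soc")                   # notify the security team
        actions.append("recommend:password-reset")    # suggest remediation
    else:
        actions.append(f"grant:{request.resource}")   # verified, allow access
    return actions

# High score: isolate, revoke, alert, and recommend remediation in one pass.
print(respond(AccessRequest("alice", "laptop-7", "payroll-db", 0.95)))
```

The point of the sketch is that every request passes through the same verification gate, and containment is automatic rather than waiting on a human analyst.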

In addition, AI studies security data to provide real-time insights, adjust access controls, and advance its own learning. For example, AI-based behavioral analytics solutions analyze network and user activity to identify abnormal patterns, while AI-powered access control tools dynamically adjust individual users’ access rights based on the perceived level of risk. Even authorized usage can be regulated by allowing only “just in time” or “just enough” access, in line with the zero trust principle of least privilege.
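Risk-adaptive, just-in-time access can be pictured as a mapping from a behavioral risk score to a time-boxed grant: the higher the risk, the narrower the scope and the shorter the window. The thresholds, scope names, and grant durations below are illustrative assumptions, not recommendations.

```python
# Sketch of risk-adaptive, just-in-time ("just enough") access.
from datetime import datetime, timedelta, timezone

def grant_access(risk_score: float, requested_scope: str):
    """Return (scope, expiry): tighter scope and shorter window as risk rises."""
    now = datetime.now(timezone.utc)
    if risk_score < 0.3:
        return requested_scope, now + timedelta(hours=8)   # low risk: full workday grant
    if risk_score < 0.7:
        return "read-only", now + timedelta(minutes=30)    # elevated: degraded, short-lived
    return None, now                                       # high risk: deny, require step-up auth

scope, expiry = grant_access(0.5, "read-write")
print(scope)  # read-only
```

Because every grant expires, a user never holds standing privileges: the zero trust system re-evaluates risk at each renewal, which is what "continuous validation" means in practice.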

Integrating AI into a ZT security architecture helps block threats, such as generative AI-led phishing attacks, that can fool traditional security infrastructure. Further, combining predictive AI with zero trust security provides visibility across an organization’s security setup, predicts the likelihood of risk based on data patterns, and identifies correlations between anomalous activities observed in different systems and interfaces. Examples of AI-powered security tools that can be integrated with a zero trust approach include intelligent threat detection systems that block credential stuffing, and AI-enabled deception technologies that trap criminals with “cyber honeypots” to deflect them from legitimate targets.
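The cross-system correlation mentioned above can be sketched in a toy form: anomaly events observed on different systems, tied to the same identity within a short time window, are grouped into one incident – a crude proxy for spotting lateral movement. The event fields and the five-minute window are assumptions for illustration.

```python
# Toy sketch of correlating anomalies observed across different systems.
from collections import defaultdict

def correlate(events, window_secs=300):
    """Group (timestamp, user, system) anomaly events; flag users seen on
    two or more systems within the window as a single incident."""
    by_user = defaultdict(list)
    for ts, user, system in sorted(events):
        by_user[user].append((ts, system))
    incidents = {}
    for user, obs in by_user.items():
        latest = obs[-1][0]
        systems = {s for ts, s in obs if latest - ts <= window_secs}
        if len(systems) >= 2:
            incidents[user] = sorted(systems)
    return incidents

events = [(100, "bob", "vpn"), (160, "bob", "fileserver"), (200, "eve", "vpn")]
print(correlate(events))  # {'bob': ['fileserver', 'vpn']}
```

A single anomaly on one system may be noise; the same identity tripping alarms on two systems in quick succession is the pattern worth escalating.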

Micro-segmentation, that is, dividing a network into small, isolated segments to limit the impact of a cyberattack, is a core concept in zero trust security. However, traditional micro-segmentation techniques are complex and effort-intensive. AI addresses these challenges by automating the process, dynamically segmenting the network based on real-time information and changing security requirements to improve access control. For instance, it groups applications, users, and devices by parameters such as risk profile and behavior; upon detecting suspicious activity in a particular segment, it isolates that segment or tightens the security ring around it to contain the threat. It even updates network segments as new devices or users are added.
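The grouping-and-quarantine idea can be sketched in a few lines: devices are bucketed into segments by risk profile, and when one device is compromised, its whole segment is isolated. The device tuples and segment keys here are illustrative assumptions, not a real segmentation engine.

```python
# Minimal sketch of dynamic micro-segmentation and segment-level quarantine.
def build_segments(devices):
    """Group (name, role, risk_tier) devices; each (role, risk_tier) pair
    becomes one network segment. Re-running this absorbs new devices."""
    segments = {}
    for name, role, risk in devices:
        segments.setdefault((role, risk), set()).add(name)
    return segments

def quarantine(segments, device):
    """Return the segment(s) to isolate when a device shows suspicious activity."""
    return {key for key, members in segments.items() if device in members}

devices = [
    ("plc-01", "ot", "high"), ("plc-02", "ot", "high"),  # shop-floor controllers
    ("hr-laptop", "corp", "low"),                        # end-user device
]
segs = build_segments(devices)
print(quarantine(segs, "plc-01"))  # {('ot', 'high')}
```

Note that the compromised controller’s segment is isolated while the corporate laptop segment stays untouched, which is precisely the blast-radius limit micro-segmentation is meant to provide.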

Real-world use-cases 

There are several areas where organizations can apply AI-driven zero trust to beat AI-generated threats. A bank can encrypt the proprietary data in its large language models, monitor how the models are used, and apply controls to allow only authorized access, preventing threats such as model poisoning and data exfiltration. Businesses handling sensitive or personal data – healthcare institutions, for example – can leverage AI-driven zero trust security to monitor the network, detect suspicious behavior, and arrest threats in time to avoid ransomware attacks. For manufacturers, there is great value in using AI plus ZT to segment the shop-floor network from the end-user network, preventing infiltration of critical operational systems.

Summary 

Many leaders believe generative AI is the most transformational technology yet. Many more are placing their faith in it and quickly ramping up investments. But very few organizations are focusing on managing gen AI’s significant risks.

That needs to change, and soon. Every advance in generative AI brings new risks, ranging from spoofed identities to model poisoning to data leakage and prompt injection. Only a zero trust approach – where no device or user is implicitly trusted – can protect organizations against these threats. Integrating AI within a ZT framework amplifies that protection by ensuring continuous verification, enforcing strong, adaptive access controls, and micro-segmenting networks with speed and efficiency. By automating threat identification, triggering real-time responses, and learning from past risks to adapt to future threats, AI takes zero trust security to a new level.
