
Rethinking Identity in the Age of AI

By Jacob Ideskog, CTO of Curity

AI is reshaping how we think about identity, access, and trust. Deepfakes and synthetic content are breaking traditional verification methods, while autonomous AI agents, acting as machine identities, are increasingly gaining access to systems they do not need, often without proper human oversight. These shifts expose serious gaps in current Identity and Access Management (IAM) frameworks. To stay secure, organisations must modernise how they define, verify, and govern both human and machine actors.

Synthetic Identities Are Breaking Traditional Verification

In the past, seeing was believing, but today, AI-generated deepfakes make it easy to fake a face or voice with frightening realism.

Biometric fraud is skyrocketing. In particular, deepfake-related identity fraud grew over 2,100% in the past three years. Attackers can now convincingly mimic users, tricking face and voice recognition systems that were once trusted gatekeepers. 

Recent research has shown that attackers are already using deepfake technology to target facial recognition software, combining realistic face-swapping tools with screen-sharing malware to bypass the selfie verification of an existing account. These attacks are especially dangerous because they exploit legitimate device permissions, allowing fraudsters to interact with apps in real time while impersonating victims. The result is a seamless flow of operations that appears authentic, while the user being impersonated has no idea their identity is being used. Ultimately, deepfakes, when paired with other tools, can turn basic mobile verification into a major vulnerability.

This is no longer a niche threat: free and open-source tools allow even low-skilled attackers to produce deepfakes at scale. These are often paired with malware to target weak onboarding flows in mobile apps, bypassing liveness checks, which are meant to confirm that a real person, rather than a photo, video, or deepfake, is present during biometric verification.

Visual and audio verification is no longer a guarantee of authenticity either. Trust in digital identity is eroding, and traditional biometric systems are falling behind.

Hard-to-Fake Authentication is Needed

To effectively counter deepfakes, authentication systems must move beyond methods that can be easily spoofed. We need approaches that are secure from the very start and difficult to fake. One such method is physical onboarding, where identity is verified in person before any digital credentials are issued. This could involve visiting a government office to present ID documents, having biometric data captured, or undergoing a face-to-face check before receiving something like a government-backed eID. These steps create a strong and trustworthy link between a person and their digital identity by ensuring the individual is real and matches their claimed identity.

Passkeys are also a promising advancement. They offer a secure, passwordless login experience and are highly resistant to phishing. However, like any digital credential, they rely on the initial onboarding process to establish trust. If that process is compromised, for example, through the use of a convincing deepfake, then even the most secure authentication method becomes vulnerable. Ultimately, no identity system is stronger than the foundation it’s built on.
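
To make the passkey point concrete, here is a minimal sketch of the browser-side registration step using the standard WebAuthn API. The endpoint paths, challenge handling, and option values are illustrative assumptions; a real flow would also serialise the full attestation response and verify it on the server with a FIDO2 library, and the trust it establishes is only as strong as the onboarding that precedes the call.

```typescript
// Minimal sketch of passkey (WebAuthn) registration in the browser.
// Endpoint paths and option values are illustrative assumptions; the server
// must supply the challenge and later verify the attestation response.

async function registerPasskey(username: string): Promise<void> {
  // 1. Fetch a one-time challenge from the server (hypothetical endpoint).
  const res = await fetch("/webauthn/registration-options", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username }),
  });
  const options = await res.json();

  // 2. Ask the authenticator to create a new key pair bound to this site.
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge: Uint8Array.from(atob(options.challenge), c => c.charCodeAt(0)),
      rp: { id: window.location.hostname, name: "Example RP" },
      user: {
        id: new TextEncoder().encode(options.userId),
        name: username,
        displayName: username,
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "required" },
    },
  })) as PublicKeyCredential;

  // 3. Send the new credential back for server-side verification and storage
  //    (a real flow also serialises and verifies the attestation response).
  await fetch("/webauthn/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: credential.id, type: credential.type }),
  });
}
```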

Understanding Authentication Architecture is Critical

Addressing deepfakes isn’t just about adding filters or detection layers. It requires rethinking how trust flows through your authentication systems.

In the short term, developers should audit login and onboarding flows to identify weak trust points. It’s also important to make biometric checks smarter by ensuring they can tell the difference between a real person and something fake, like a deepfake or a photo. At the same time, critical parts of the verification process should move off the user’s device, where they are easier to manipulate, and be handled on the server instead.
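
As an illustration of that last point, the sketch below shows a server endpoint that refuses to trust any client-supplied verdict and re-checks the raw capture itself. The endpoint path, the verifyLiveness helper, and the thresholds are hypothetical stand-ins for whatever liveness and face-match service an organisation actually uses.

```typescript
import express from "express";

// Sketch of moving verification off the device: the server never trusts a
// client-supplied "livenessPassed" flag and re-checks the raw capture with a
// server-side liveness/face-match service.

const app = express();
app.use(express.json({ limit: "10mb" }));

app.post("/onboarding/selfie-check", async (req, res) => {
  const { sessionId, selfieJpegBase64 } = req.body;

  // Reject anything the client claims to have verified itself.
  if ("livenessPassed" in req.body) {
    return res.status(400).json({ error: "client-side verdicts are ignored" });
  }

  // Server-side liveness check plus a match against the document photo on file.
  const result = await verifyLiveness(sessionId, Buffer.from(selfieJpegBase64, "base64"));

  if (result.livenessScore < 0.9 || !result.matchesDocumentPhoto) {
    return res.status(403).json({ error: "verification failed" });
  }
  return res.json({ status: "verified" });
});

// Hypothetical server-side check; a real deployment would call a dedicated
// presentation-attack-detection service here.
async function verifyLiveness(sessionId: string, selfie: Buffer) {
  return { livenessScore: 0.0, matchesDocumentPhoto: false }; // placeholder
}
```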

Layered security is also a critical feature. Combine biometrics with behavioural analytics and multi-factor authentication. Monitor users continuously after authentication, not just at login.
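
A layered decision might look something like the sketch below, where no single factor is decisive on its own. The signal names, weights, and thresholds are illustrative assumptions, not tuned values.

```typescript
// Minimal sketch of a layered authentication decision: a strong biometric
// score alone is not enough, because it could be a deepfake; behaviour and a
// second factor must agree before access is allowed.

interface AuthSignals {
  biometricConfidence: number; // 0..1 from the face/voice matcher
  behaviouralAnomaly: number;  // 0..1 from typing/navigation analytics
  mfaSatisfied: boolean;       // e.g. passkey or one-time code completed
}

type Decision = "allow" | "step-up" | "deny";

function decide(signals: AuthSignals): Decision {
  if (!signals.mfaSatisfied) return "step-up";
  if (signals.biometricConfidence < 0.6) return "deny";

  // Even convincing biometrics are challenged if behaviour looks wrong.
  if (signals.behaviouralAnomaly > 0.8) return "step-up";
  if (signals.biometricConfidence > 0.9 && signals.behaviouralAnomaly < 0.3) {
    return "allow";
  }
  return "step-up";
}
```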

Over the long term, identity systems need to be built with synthetic threats in mind, treating them as part of the environment rather than rare edge cases. This involves moving beyond static checks and developing backend validation that can assess context, not just credentials. Authentication should become a continuous process, using behavioural patterns to confirm identity throughout a session instead of relying on a single login. Just as importantly, these systems must remain explainable and auditable so that teams can trace decisions and respond quickly when things go wrong.
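
One way to picture that is a small evaluator that re-scores the session on every sensitive action and writes an auditable record of each decision. The event shape, the anomaly threshold, and the in-memory log below are simplifying assumptions for illustration.

```typescript
// Sketch of continuous, auditable session evaluation: identity is re-scored
// on every sensitive action, not only at login, and every decision is logged
// with a human-readable reason so it can be traced later.

interface SessionEvent {
  sessionId: string;
  action: string;              // e.g. "transfer-funds"
  behaviouralAnomaly: number;  // 0..1 from ongoing analytics
  timestamp: Date;
}

interface AuditRecord extends SessionEvent {
  decision: "allow" | "re-authenticate";
  reason: string;
}

const auditLog: AuditRecord[] = []; // a real system would persist this

function evaluate(event: SessionEvent): AuditRecord {
  const decision = event.behaviouralAnomaly > 0.7 ? "re-authenticate" : "allow";
  const record: AuditRecord = {
    ...event,
    decision,
    reason:
      decision === "allow"
        ? "behaviour consistent with session baseline"
        : `anomaly score ${event.behaviouralAnomaly} exceeded threshold 0.7`,
  };
  auditLog.push(record); // every decision remains explainable after the fact
  return record;
}
```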

Unchecked AI Agents Are a Governance Black Hole

While external threats dominate headlines, internal machine identities present a silent but growing risk. Machine identities such as AI agents are increasingly autonomous, making critical decisions without human review.

These agents now triage incidents, approve requests, and even conduct transactions. In doing so, they often interact with systems containing customer data, payment details, internal communications, and operational controls. Despite this level of access, many of these agents operate without clearly defined roles or dedicated credentials, and their actions often go unmonitored. To make matters worse, they can be manipulated through subtle techniques like prompt injection or model poisoning, which can quietly alter their behaviour without triggering alarms or attracting the attention of overseers.

Without real-time oversight, high-stakes actions are taken without visibility. This lack of accountability makes AI agents a dangerous blind spot.

Governing Machine Identities

We urgently need to apply the same, if not stricter, governance standards to machine identities as we do to human users. Every AI agent should have a clearly defined role, limited access to only what it needs, and a set of expected behavioural patterns. Without this structure, we risk letting autonomous systems operate in the dark, with powerful permissions and no accountability. It’s not enough to deploy AI agents; we must actively monitor their actions, flag unusual behaviour, and make sure they are only doing what they’re meant to do, nothing more.
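
In practice, that structure can be as simple as giving each agent a first-class identity record: a role, an explicit allow-list of scopes, an expected behavioural envelope, an accountable owner, and an expiry. The field names below are illustrative, not a schema from any particular IAM product.

```typescript
// Sketch of treating an AI agent as a first-class identity with least
// privilege and a behavioural baseline that monitoring can compare against.

interface AgentIdentity {
  agentId: string;          // dedicated credential subject, never shared
  role: string;             // e.g. "incident-triage"
  allowedScopes: string[];  // only what the role needs, nothing more
  expectedBehaviour: {
    maxRequestsPerMinute: number;
    allowedHours: { startUtc: number; endUtc: number };
  };
  owner: string;            // the human team accountable for the agent
  expiresAt: Date;          // access is never open-ended
}

const triageAgent: AgentIdentity = {
  agentId: "agent-7f3a",
  role: "incident-triage",
  allowedScopes: ["tickets:read", "tickets:comment"],
  expectedBehaviour: {
    maxRequestsPerMinute: 60,
    allowedHours: { startUtc: 0, endUtc: 24 },
  },
  owner: "sre-team",
  expiresAt: new Date("2026-06-30T00:00:00Z"),
};
```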

To do this, IAM systems must monitor machine identities continuously, and sudden deviations in behaviour or access patterns should raise immediate red flags. A new kind of interface is also emerging quickly: the Model Context Protocol (MCP), which provides a uniform way for AI agents to access data and services. Its rapid proliferation requires immediate attention to prevent unchecked access and misuse of privileges. By clearly defining what data, constraints, and permissions are passed into a model at any given moment, organisations can ensure AI agents operate within strict and transparent boundaries. Agents’ actions are harder to predict than those of human users, so any access given to an agent should be time-constrained, granted on a per-need basis, and backed by at least partial human-in-the-loop controls.
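
A rough sketch of that access pattern: grants are issued per task, expire quickly, and anything outside the agent’s baseline scopes requires a human decision. The token shape, the 15-minute lifetime, and the requestHumanApproval hook are hypothetical, not part of MCP or any vendor API.

```typescript
// Sketch of per-need, time-boxed access for an agent, with a human approval
// step for anything outside its baseline scopes.

interface ScopedGrant {
  agentId: string;
  scope: string;
  expiresAt: Date;
}

async function grantForTask(
  agentId: string,
  requestedScope: string,
  baselineScopes: string[],
): Promise<ScopedGrant> {
  // Anything beyond the agent's baseline needs an explicit human decision.
  if (!baselineScopes.includes(requestedScope)) {
    const approved = await requestHumanApproval(agentId, requestedScope);
    if (!approved) throw new Error(`scope ${requestedScope} denied for ${agentId}`);
  }

  // Grants are short-lived by construction; nothing persists past the task.
  return {
    agentId,
    scope: requestedScope,
    expiresAt: new Date(Date.now() + 15 * 60 * 1000), // 15 minutes
  };
}

// Hypothetical human-in-the-loop hook (e.g. a ticket or chat approval).
async function requestHumanApproval(agentId: string, scope: string): Promise<boolean> {
  return false; // placeholder: default-deny until a person says yes
}
```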

Ultimately, policy must evolve to account not just for what an AI does, but for the context in which it operates. A robust security structure around the Model Context Protocol may enable this by making intent and boundaries explicit, helping ensure AI agents remain focused, predictable, and secure in how they handle sensitive data and decisions.

It’s also critical to adopt zero trust for machines: no access by default, and continual verification of every request. More importantly, when an agent is decommissioned, its access should be removed with the same discipline as for a departing employee.
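
Decommissioning can be made routine rather than ad hoc. The sketch below assumes a hypothetical identity-store interface; the point is the sequence, i.e. revoke credentials, expire grants, disable the identity, and leave an audit trail, applied every time without exception.

```typescript
// Sketch of decommissioning an agent with the same discipline as offboarding
// an employee. The store and its methods are hypothetical for illustration.

interface IdentityStore {
  revokeCredentials(agentId: string): Promise<void>;
  expireAllGrants(agentId: string): Promise<void>;
  disableIdentity(agentId: string): Promise<void>;
  audit(entry: { agentId: string; action: string; at: Date }): Promise<void>;
}

async function decommissionAgent(store: IdentityStore, agentId: string): Promise<void> {
  await store.revokeCredentials(agentId); // no lingering keys or tokens
  await store.expireAllGrants(agentId);   // no dormant permissions left behind
  await store.disableIdentity(agentId);   // the identity can no longer authenticate
  await store.audit({ agentId, action: "decommissioned", at: new Date() });
}
```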

The Proliferation of AI-Accessible APIs

On top of this, AI-compatible APIs are multiplying rapidly. Their permission models often lack clarity, and it’s difficult to predict how an AI will use the access it’s given. As users link more AI agents to APIs, dormant access becomes a liability. Permissions granted today may be exploited tomorrow, or shared across agents without the user’s awareness.

We’re nearing a future where AI agents interconnect freely, seeking and using APIs on the fly. This means access granted to one agent may be leveraged by others without oversight. API providers must respond to this growing security threat: access must be user-specific, and permissions should be transparent and revocable. Dormant access must auto-expire and, ideally, APIs should apply zero standing privilege by default, a security approach in which users and systems are given no ongoing or permanent access to resources; instead, access is granted only when needed, for a limited time, and then automatically revoked.
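
On the provider side, that can be as simple as tracking expiry and last use on every grant and sweeping anything stale. The data shapes and the 30-day dormancy window below are illustrative assumptions.

```typescript
// Sketch of zero standing privilege on the API-provider side: every grant is
// user-specific, carries an expiry, and dormant grants are swept automatically.

interface ApiGrant {
  userId: string;   // the human the agent acts for; never a shared account
  agentId: string;
  scope: string;
  expiresAt: Date;
  lastUsedAt: Date;
}

const DORMANCY_LIMIT_MS = 30 * 24 * 60 * 60 * 1000; // 30 days unused => revoked

function sweepGrants(grants: ApiGrant[], now: Date): ApiGrant[] {
  return grants.filter(g => {
    const expired = g.expiresAt <= now;
    const dormant = now.getTime() - g.lastUsedAt.getTime() > DORMANCY_LIMIT_MS;
    return !expired && !dormant; // keep only live, recently used grants
  });
}
```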

The Future of Identity is Hybrid

AI is reshaping how we define and verify identity, and who or what we grant access to. Traditional identity models no longer suffice. Human and machine identities must both be verified, governed, and monitored continuously. The assumption that identity equals a person no longer holds. Biometric systems must evolve. Onboarding must be hardened. And access controls must account for autonomous, adaptive actors. Ignoring these changes isn’t just a technical oversight. It’s a vulnerability with real-world consequences. The future of identity is hybrid, and it demands a new kind of trust architecture.
