
A woman answers a call from her “bank.” The voice knows her name, her postcode, her recent transactions. It’s calm, convincing… and yet, AI.
The caller tells her that her account has been compromised and her funds are at risk. She hesitates, suspicion flickering. Yet the voice continues, smooth and steady, drawing on the cadences of a thousand calls it has conducted over the past months and on the millions of pages of behavioral psychology it was trained on.
A security procedure, it explains: she will soon receive a verification code by email. Moments later, her inbox pings. The sender looks right, the familiar name of her bank, except for a single dot buried in the address, invisible in the rush of panic.
She verifies the code with the caller. The email also contains a link; she is told she must update her login details “for her own protection.” In doing so, she hands over everything the scammers need to take control of her account. The voice is still there, calm and reassuring, guiding her step by step.
This isn’t the lost opening chapter of an Orwell novel. We’re entering a new era where AI agents will be an integral part of life. The AI agents market is projected to grow at a compound annual growth rate (CAGR) of 45.8%, rising from its current valuation of $5.3–5.7B to an estimated $47.1B by 2030.
Deloitte’s 2025 Global Predictions Report projects that 25% of enterprises using generative AI will adopt AI agents in 2025, with adoption expected to rise to 50% by 2027. Meanwhile, it will become increasingly challenging to recognise AI-generated content.
Some might still believe that they’re better at recognising AI agents than the average person, but Google’s 2018 Duplex demo, in which the Assistant phoned a salon to book a haircut appointment without the receptionist ever suspecting a machine, should have convinced most readers otherwise.
When every voice can be faked, who will you believe?
Before the AI boom, receiving phone calls was simple. We might not always have known who was on the other end of the line, or what their intentions were, but we could be sure they were another human.
The rise of AI has changed that. Tools that once promised convenience and efficiency, from AI-powered chatbots that handle customer service to large language models that can generate convincing human-like speech, are now being exploited by criminals.
The result? Impersonation scams are exploding, surging by 148% year over year. Fraudsters armed with AI can mimic loved ones, banks, or government agencies so convincingly that even cautious people fall for them. And as these voices become indistinguishable from real ones, the usual advice, “be vigilant”, becomes increasingly meaningless.
The real issue is that we’re relying on individuals to outwit machines. We don’t need more warnings; we need a system that proves whether the caller is real in the first place. Just as web browsers show us when a site is secure, our phones and digital platforms should give us a clear, instant signal of who is on the line: a real, trustworthy person, or a machine.
Without that, trust in digital communication will collapse under the weight of deepfakes.
Trust Needs Infrastructure, Not Vigilance
For far too long, our defense against scams has rested on the weakest link: human judgement. Banks urge people to “pause before sharing personal details”. Phone carriers advise customers to “hang up if a call feels suspicious”. Unfortunately, in the age of AI, suspicion is no longer a reliable guide.
The danger is higher for older adults, who have historically been more vulnerable to fraud. According to the FBI’s annual Internet Crime Report, senior citizens lost nearly $5 billion in 2024, much of it to scams that preyed on urgency, fear, or family ties. Now imagine those same scams supercharged by AI: a fraudster doesn’t just claim to be your grandchild in trouble; they sound exactly like them, and they can send you a photo to prove it.
McAfee research found that a quarter of adults have experienced an AI voice scam in some form, and 77% of those victims lost money as a result. The answer isn’t telling senior citizens to “be more careful”. The answer is infrastructure.
Just as the Internet could never have scaled without Hypertext Transfer Protocol Secure (HTTPS) proving a website’s legitimacy, our digital communications need a comparable layer of authentication. Every caller, human or AI, should carry a verifiable credential that proves who they are and whether they are authorised to act on behalf of an institution.
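What might such a credential contain? A minimal sketch follows, loosely modelled on the shape of the W3C Verifiable Credentials data model; every DID, date, and field value below is invented for illustration and is not any real bank’s schema.

```python
# A hypothetical credential a bank might issue to one of its calling agents.
# The structure loosely follows the W3C Verifiable Credentials data model;
# all DIDs, dates, and values below are illustrative.
agent_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AuthorizedCallerCredential"],
    "issuer": "did:example:acme-bank",           # the institution vouching for the caller
    "issuanceDate": "2025-01-15T09:00:00Z",
    "expirationDate": "2025-07-15T09:00:00Z",    # credentials should expire and be renewed
    "credentialSubject": {
        "id": "did:example:agent-4731",          # the caller, human or AI
        "role": "customer-support-agent",
        "isAI": True,                            # machine callers disclose themselves
        "authorizedTopics": ["account-security", "card-services"],
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:acme-bank#key-1",
        "proofValue": "<signature over the credential body>",
    },
}
```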
Essentially, the responsibility shouldn’t lie with the individual on the other end of the line. Trust must be built into the system itself.
Digital Identity Verification Becomes the Infrastructure for Trust
The good news is that solving this problem doesn’t require reinventing the wheel; it requires applying tools that already exist. Decentralized identity (DID) systems can give us an easy way to verify who’s on the other end of a call or message before we engage.
When an AI agent calls your phone, the system should instantly check the agent’s credential against the issuing institution, confirming its legitimacy before the conversation even begins.
At the core of these systems is authentication. Instead of relying on surface details, like a familiar phone number or a bank’s name in an email, they use cryptographic techniques to prove the origin of a call or message. That means a scammer spoofing a number or creating a lookalike email address can’t simply slip through. 
Once authentication is complete, the result is translated into a clear, human-readable signal. Just as a browser shows a padlock icon to indicate that a site is secure, these platforms provide a simple visual or audible cue that a caller or sender is verified.
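As a minimal sketch of that translation, assuming the handset has already obtained the bank’s public key (for example, from the ledger described below): the institution signs a short call assertion, the handset verifies the signature, and the result becomes the cue the user actually sees. The assertion format, DIDs, and function names here are hypothetical.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- Institution side: the bank's telephony gateway signs a call assertion ---
bank_key = Ed25519PrivateKey.generate()  # in practice a long-lived, registered key

call_assertion = json.dumps({
    "caller": "did:example:agent-4731",     # hypothetical agent DID
    "onBehalfOf": "did:example:acme-bank",  # hypothetical institution DID
    "callee": "+44 20 7946 0000",           # illustrative number
    "timestamp": "2025-06-01T10:32:00Z",    # limits replay of old assertions
}, sort_keys=True).encode()
signature = bank_key.sign(call_assertion)

# --- Handset side: verify the signature, then show a simple cue ---
def call_indicator(assertion: bytes, sig: bytes, issuer_key: Ed25519PublicKey) -> str:
    """Return the human-readable signal shown next to the incoming call."""
    try:
        issuer_key.verify(sig, assertion)   # raises InvalidSignature on failure
        return "Verified: Acme Bank"
    except InvalidSignature:
        return "Warning: unverified caller"

print(call_indicator(call_assertion, signature, bank_key.public_key()))
```

A real deployment would also need key rotation, revocation, and replay protection; the point here is only that a single signature check can drive the padlock-style cue.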
Under the hood, the system uses a blockchain to ensure that a caller is both verified and authorised to speak on behalf of an institution like a bank. A blockchain fits this role because one of its main properties is an immutable ledger. We could trust a third party to confirm that the caller represents an organisation, but what happens when that third party’s database gets hacked?
Blockchain technology creates a secure, tamper-proof way to validate identities at the source. For the end user, none of this complexity needs to be visible. Just as most people don’t understand the inner workings of the Internet yet use it daily, they don’t need to grasp the mechanics of blockchain. What matters is the simple outcome: a clear indicator next to a phone number confirming that the caller has been verified by the institution.
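To make the lookup concrete, here is a hedged sketch with an ordinary dictionary standing in for the on-chain registry; on a real ledger the institution would write this record once and anyone could audit it, which is what makes the answer tamper-proof. The registry layout and DIDs are, again, invented for illustration.

```python
# A simplified stand-in for an on-chain DID registry. On a real ledger this
# mapping would be written by the institution and be immutable and publicly
# auditable; a plain dict illustrates only the read path.
DID_REGISTRY = {
    "did:example:acme-bank": {
        "publicKey": "ed25519:...",          # the institution's registered key
        "authorizedAgents": {"did:example:agent-4731"},
    },
}

def is_authorized(issuer_did: str, agent_did: str) -> bool:
    """Confirm the issuer exists on the ledger and lists this agent as its own."""
    record = DID_REGISTRY.get(issuer_did)
    if record is None:
        return False                         # unknown institution: treat as unverified
    return agent_did in record["authorizedAgents"]

print(is_authorized("did:example:acme-bank", "did:example:agent-4731"))  # True
print(is_authorized("did:example:acme-bank", "did:example:agent-9999"))  # False
```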
The goal shouldn’t be to make individuals experts in security, but to give them an instant, trustworthy indication of whether a call is from a reliable source.
The next time a woman answers a call from her “bank,” the voice might know her name, her postcode, her recent transactions. It might be calm, convincing… and yet, this time, an exclamation mark on her screen should give it away: unverified AI agent.