
A year ago, the easiest way to spot a fake online profile was to look for obvious tells: stock photos, broken grammar, a refusal to hop on a video call. A good fake was expensive to produce, and most scammers couldn’t afford to get every detail right, especially at scale.
Now AI can generate a face that has never existed, clone a voice from a few seconds of audio, and hold a conversation that adapts to your personality in real time. And it can do all of that at a fraction of the cost.
According to the 2026 Norton Insights Report, 74% of current online daters in the US who were targeted by a dating scam fell victim to one. Americans reported $1.16 billion stolen through romance scams in 2025, and financial institutions saw a 63% increase in romance scam attempts between 2024 and 2025.
None of this will slow down on its own, because the AI tools that make it possible are only getting cheaper and easier to use, and the companies and regulators who should be stepping in haven’t come close to keeping up.
Trust has become a manufactured product
Say you match with someone on Tinder. They’re attractive, but not suspiciously so. Their bio references your city, they ask about your weekend, and they can name their favorite coffee shop not far from where you live.
The conversation goes on for weeks. They send voice notes, bring back things you said days earlier, and stay so present and steady that there’s no reason left to question any of it.
Then you find out that none of it was human. The face was AI-generated, the voice was cloned, and the personality was a script fine-tuned on relationship psychology.
This is a real case: a team of six at Humanity Protocol ran a social experiment on Tinder using nothing but publicly available AI tools.
The team used Reve AI, ChatGPT, Nanobanana, and Midjourney to build four fake profiles from scratch, with photos, bios, and voice content. They deployed TinderGPT, an open-source tool on GitHub, to hold conversations with 296 real users from multiple countries. Forty of those users agreed to go on a date.
The experiment ended ethically, at a restaurant in Lisbon, where all participants were informed and treated to dinner. But the takeaway was hard to ignore. A small team with consumer-grade AI tools bypassed every verification system a major dating platform had in place.
AI has outpaced every tool we had for detecting fakes
The models behind AI-generated photos are now trained on composite faces and produce images with consistent lighting, natural imperfections, and the kind of casual framing that reads as authentic. The scale of this shift is hard to overstate. Deepfake fraud surged 700% in early 2025, and the number of deepfakes circulating online grew from roughly 500,000 in 2023 to over 8 million by 2025.
Chatbots fine-tuned on conversational psychology mirror tone, build rapport, and escalate emotional intimacy on a deliberate schedule that most real people wouldn’t notice. Voice cloning has crossed what researchers now call the “indistinguishable threshold,” where a few seconds of sample audio is enough to produce a full synthetic voice, complete with natural pauses and breathing.
Real-time video deepfakes have followed close behind. In Hong Kong, police arrested 27 people who used AI face-swapping technology during live video calls on dating platforms, and the fakes held up even when victims specifically asked for a call to verify their match.
Every tool used in the Tinder experiment was publicly available and either free or cheap, which means this wasn’t something that required state-level resources or a well-funded criminal operation to pull off. A small team did it in their spare time over a few weekends. And that’s the part worth sitting with, because AI has made the cost of running a convincing scam so low that even a small return on a single target is enough to justify the effort.
KYC was built for a different internet
Most Know Your Customer systems were designed for an internet where almost everyone behind an account was human, fake documents took real skill to produce, and identity was something you confirmed once at the point of sign-up. None of that is true anymore, but the systems built around those assumptions have barely changed.
US lenders found themselves exposed to $3.3 billion worth of suspected synthetic identity fraud in the first six months of 2025 alone, spread across auto loans, credit cards, and personal loans. A fraud prevention researcher at iSolved showed how far things have slipped by building a fully working synthetic identity of himself, with documents that cleared standard verification in seven minutes. Separately, research from Sumsub found that AI-generated fake IDs can now be made for as little as $15 in under an hour.
Global regulators have started to respond, and financial penalties for KYC failures hit $1.23 billion in just the first half of 2025, a 417% increase over the same period the year before.
Meanwhile, centralized identity databases keep making the problem worse by piling up massive stores of personal data that become obvious targets for breaches. The more data platforms collect under this model, the more raw material they feed to the next wave of synthetic identity fraud.
Asking people to prove who they are is the wrong starting point
The natural response to AI that can fake a human identity is to pile on more verification. More documents, more biometrics, more liveness checks. But every new layer of proof also creates a new surface for AI to attack, and the arms race between verification and forgery has no finish line. The harder we make it to prove identity, the more personal data we force people to hand over, and the bigger the target we paint on the databases that store it.
A better path forward is to stop asking people to prove who they are and start asking whether specific facts about them can be confirmed in a specific context. A dating platform has no reason to ask for a passport number. All it needs to know is that a real, verified human controls the account.
This is where portable, privacy-preserving credentials come in. The user holds the credential on their own device, the platform checks the claim against a trusted issuer, and no central database ever stores the underlying personal data. Users share only the specific fields that a given situation calls for and keep everything else private. Verification becomes something that works across contexts without asking people to give up more personal information every time a new platform wants to check their identity.
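To make the idea concrete, here is a minimal sketch of that selective-disclosure flow in Python. It is illustrative only: the field names, the issue_credential/present/verify functions, and the use of an HMAC as a stand-in for the issuer’s asymmetric signature are assumptions made for the example, not any particular standard’s API.

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for the issuer's signing key. A real credential would use an
# asymmetric signature (e.g., Ed25519), so the verifier would only ever
# hold the issuer's public key.
ISSUER_KEY = secrets.token_bytes(32)


def issue_credential(fields: dict) -> dict:
    """Issuer side: salt and hash each field, then sign only the digests."""
    salted = {k: (secrets.token_hex(16), str(v)) for k, v in fields.items()}
    digests = {k: hashlib.sha256((salt + val).encode()).hexdigest()
               for k, (salt, val) in salted.items()}
    signature = hmac.new(ISSUER_KEY,
                         json.dumps(digests, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    # The holder keeps the salted values on their own device; no central
    # database ever stores the underlying personal data.
    return {"digests": digests, "signature": signature, "salted_fields": salted}


def present(credential: dict, disclose: list) -> dict:
    """Holder side: reveal only the requested fields, keep the rest private."""
    return {
        "digests": credential["digests"],
        "signature": credential["signature"],
        "disclosed": {k: credential["salted_fields"][k] for k in disclose},
    }


def verify(presentation: dict) -> bool:
    """Verifier side (the platform): check the signature, then the disclosed fields."""
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(presentation["digests"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False  # digests were not signed by the trusted issuer
    return all(
        hashlib.sha256((salt + val).encode()).hexdigest() == presentation["digests"][k]
        for k, (salt, val) in presentation["disclosed"].items()
    )


cred = issue_credential({"is_unique_human": "true", "over_18": "true",
                         "passport_no": "X1234567"})
claim = present(cred, disclose=["is_unique_human"])  # passport number never leaves the device
print(verify(claim))  # True: the platform learns one confirmed fact and nothing else
```

Production schemes such as W3C Verifiable Credentials and SD-JWT follow the same salted-hash pattern, except the issuer signs with a private key and the platform verifies against the issuer’s public key, so no shared secret and no central store of personal data are needed.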
We don’t need to prove we’re human. We need to prove what’s true.
For most of the internet’s history, the default assumption was that the person on the other end of a conversation was real, and that assumption held up because the cost of faking it was high enough to keep most bad actors out or at least make them easy to spot. That cost has now dropped to almost nothing, and the people and platforms still relying on that old default are the ones paying for it.
The tools to build something better already exist and have been accessible for a while. Verifiable credentials, selective disclosure, and decentralized trust frameworks all offer ways to confirm specific facts about a person without forcing them to hand over their entire identity.
The technology is there, but most platforms still treat verification as something they’ll upgrade eventually, and “eventually” keeps getting pushed back until something forces the issue.