The United States faces a perfect storm: sophisticated AI-powered scams are surging just as our fragmented reporting systems fail to capture their true scope. According to the FTC, Americans lost $12.5 billion to fraud in 2024. Yet experts estimate that only 2% to 7% of scams are ever reported to any government agency, and a 2025 AARP report found that only 21% of scam victims reported the crime to anyone. The gap between reported and actual fraud has become a critical vulnerability, and artificial intelligence is making it exponentially worse.
AI Transforms the Scam Landscape
The integration of artificial intelligence into scam operations has fundamentally changed the fraud ecosystem. Voice cloning technology now enables criminals to impersonate loved ones using just three seconds of audio, pulled, for example, from a video posted online. Deepfake videos create convincing impersonations of CFOs authorizing fraudulent wire transfers. Large language models generate thousands of personalized phishing emails that can bypass filters.
While the Federal Trade Commission does not have specific data on voice-cloning scams, over 845,000 imposter scams were reported in the U.S. in 2024. A study by Consumer Reports found that for four of the six AI products in their test set, researchers were able to “easily create” a voice clone using publicly accessible audio.
These AI tools enable a massive scale-up of extremely convincing scam “bait.” Where a traditional scam operation might target dozens of victims daily, AI-powered operations can simultaneously engage thousands of potential victims with highly effective, personalized lures.
The Reporting Maze
Yet, amid this tsunami of scams, the United States lacks a unified fraud reporting system. Instead, victims must traverse a discouraging labyrinth of agencies and paperwork. A victim of an AI-powered scam might need to navigate:
- The FBI’s IC3 for internet crimes
- The FTC’s ReportFraud.ftc.gov for consumer fraud
- Local police departments with varying levels of cybercrime expertise
- State attorneys general offices
- The Consumer Financial Protection Bureau for financial fraud
- The Federal Communications Commission for phone-based scams
- Private entities like banks, credit card companies, and social media platforms
- Media and other watchdog groups
This fragmentation means no single organization has a complete picture of the AI fraud landscape. It may also reinforce victims' belief that reporting fraud will mean more work for them with only marginal benefit.
The Data Black Hole
The truth is, without comprehensive reporting, the agencies that could be empowered to battle AI-enhanced fraud are instead hamstrung. Machine learning systems designed to detect and prevent fraud require vast amounts of data to identify patterns, but this data never materializes when fraud is so severely underreported.
The underreporting crisis also hampers law enforcement’s ability to identify and prosecute AI scam operations, and governments and companies will struggle to appropriately fund countermeasures.
Improved fraud reporting would also enable banks, law enforcement, and regulatory agencies to take action against perpetrators, preventing further victimization. Reporting directly helps protect others from falling victim to similar scams.
Scam reporting also aids recovery efforts. Unless the victim reports the fraud, the chances of recovering stolen funds are minimal, and a quick report can make the difference between saving at least some of the money and losing everything.
Finally, given that many scams operate across borders, comprehensive reporting data would facilitate better international law enforcement cooperation and information sharing.
A Unified Response to AI Scams
Breaking the cycle requires fundamental reforms:
- Single Point of Entry: The U.S. needs a unified fraud reporting portal that both stores consumer reports in a centralized database and routes reports to appropriate agencies for action.
- AI-Ready Infrastructure: Reporting systems must be designed to capture the details of each scam or fraud, including evidence of AI-specific elements of the scam.
- Real-Time Data Sharing: Banks, telecom providers, tech companies, and law enforcement need automated systems to share their own fraud indicators in real-time and to receive an appropriate level of data from the victim reporting systems.
- Public Education 2.0: Traditional fraud awareness campaigns haven't yet adapted to the AI era. Because which educational strategies prevent which fraud scenarios is a moving target, an all-hands-on-deck approach is a reasonable starting point, prioritizing efforts that blunt the damage done by AI-enhanced scams.
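To make the "single point of entry" idea concrete, the sketch below shows how one centralized intake could both store every report and fan it out to the relevant agencies. This is purely illustrative: the routing table, category names, and `UnifiedPortal` class are hypothetical, not an existing system or official taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical routing table: scam category -> agencies that should receive it.
# Categories and agency labels are illustrative assumptions, not an official scheme.
ROUTING = {
    "internet": ["FBI IC3", "FTC"],
    "phone": ["FCC", "FTC"],
    "financial": ["CFPB", "FTC", "State AG"],
}

@dataclass
class FraudReport:
    category: str
    description: str
    ai_elements: list = field(default_factory=list)  # e.g. ["cloned voice"]

class UnifiedPortal:
    """Single point of entry: store each report centrally, then fan out."""

    def __init__(self):
        self.database = []  # centralized store for pattern analysis
        self.outbox = []    # (agency, report) pairs queued for delivery

    def submit(self, report: FraudReport) -> list:
        self.database.append(report)
        # Unrecognized categories still default to a consumer-fraud route,
        # so no report falls through the cracks.
        agencies = ROUTING.get(report.category, ["FTC"])
        for agency in agencies:
            self.outbox.append((agency, report))
        return agencies

portal = UnifiedPortal()
routed = portal.submit(
    FraudReport("phone", "robocall demanding gift cards",
                ai_elements=["cloned voice"])
)
print(routed)  # the victim files once; the portal handles the fan-out
```

The key design point is that the victim interacts with exactly one system, while the AI-specific evidence (`ai_elements`) is captured in a structured field that downstream agencies and detection models can aggregate.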
The Stakes Are High
As AI capabilities advance exponentially, the window for establishing an effective, nationwide fraud reporting infrastructure narrows. Some analysts predict that generative AI could enable U.S. financial losses from fraud to reach $40 billion by 2027. This is an emerging threat that our current systems cannot adequately track or combat.
The message to policymakers must be clear: The reporting gap is a critical vulnerability in America’s economic infrastructure. Every unreported AI scam provides criminals with consequence-free training data to refine their attacks.
Until America builds a unified, AI-ready fraud reporting system, we’re bringing a knife to a gunfight. Threat actors wield the most sophisticated technology ever created, while victims are left largely defenseless and with limited, confusing recourse.