
The AI-Scam Crisis: Why We Must Act Now

By Brian Cute, Interim CEO, Global Cyber Alliance

Sprawling criminal enterprises are churning out fraud on an industrial scale, sometimes using forced labor and human trafficking in “scam complexes,” where tens of thousands of people are running text-message scams, romance fraud, and fake investment pitches. They’ve already stolen billions of dollars worldwide. These complexes are enormous, well-organized, and expanding at a pace that should alarm every business and every Internet user.

Their use of artificial intelligence (AI) is at a relatively early stage, but they are integrating AI into their methods at an increasing rate. According to the United Nations Office on Drugs and Crime (UNODC), “non-artificial intelligence automation continues to be widely deployed in the form of bots, scripts, and malware kits” but “the integration of AI is now reshaping the threat landscape with speed and complexity.” Cybercriminals are leveraging AI to increase the efficiency, sophistication, and scale of their attacks.

Soon, these criminal enterprises will fully embrace AI. And not just today’s chatbots: they will harness “agentic AI,” systems capable of acting on their own, learning on the fly, and adapting strategies to become more effective. Imagine scam campaigns that run 24/7 without human oversight, instantly shifting tactics when blocked, and probing for vulnerabilities across millions of potential victims simultaneously.

What Tomorrow’s Scams Will Look Like

Scammers will use deepfake videos indistinguishable from reality, voice-cloned calls that mimic your boss or a family member, and chat systems that build long-term, personalized relationships with victims. They’ll draw on information gleaned from your own social media accounts and websites, or from breach data they’ve collected across the dark web. The messages won’t be riddled with the typos and broken English we now know to watch for; they will be polished, convincing, and relentless.

Scammers work very hard to stay one step ahead of cybersecurity professionals and many steps ahead of the average Internet user. Businesses already struggle to defend against phishing and fraud, combining technological defenses like email filtering and multifactor authentication (MFA) with proactive human measures like mandatory employee cybersecurity training, attack drills, and incident response plans. But the smaller the organization, the harder it is to find the time, money, and expertise to deal with cybersecurity.

Once scam complexes use AI at scale, the volume and sophistication of attacks will dwarf anything we’ve seen before.

We are not ready for this magnitude of threat.

We Must Act Now

There is a window, right now, to prepare. We must focus on two fronts.

First, we must equip people with cybersecurity knowledge. Many scams succeed not because of technical genius but because people don’t know, or don’t follow, the basic steps to protect themselves. Strong, unique passwords. Password managers. Multifactor authentication. Verifying requests before clicking links or sending money. These simple protections aren’t glamorous, but they are lifesaving. We must educate our staff, our families, our clients, and our students; make it easier for them to adopt these simple practices; and hold them accountable for doing their part. These small actions make every person less of a target and every business more resilient.

Second, we must make the Internet itself safer. That means shutting down malicious domain names registered solely for scams; even better if we can prevent those domain names from ever getting registered in the first place. It means shining a light on the massive amount of unwanted Internet traffic—botnets, probes, fake login attempts—that most of us never see but that fuels the economy of cybercrime. And it means reinforcing the technical foundation of the Internet itself: things like better routing security to prevent criminals from hijacking data as it moves across the globe.

This Is a Shared Responsibility

Defending against AI-driven scams cannot fall to governments or technology companies alone. Every business leader, every organization, and every individual has a role to play. Business professionals, in particular, must recognize that this is not an abstract problem for “the IT team.” It’s a leadership issue, a trust issue, and in many cases a survival issue.

If your employees can’t tell whether the voice on the other end of the line is real, or if your customers are tricked into wiring money to criminals because a fake website looks exactly like yours, the costs could be catastrophic.

A Call to Action

The AI-powered scam wave is coming. The time to act is now. We must invest in defenses before the threat matures, not after. We must educate ourselves, our employees, and our communities. We must support initiatives that make the very infrastructure of the Internet safer. Together we must leverage AI to develop more effective cyber protection tools for all users and deliver them at scale. And above all, we must recognize that this is a collective challenge with a collective solution.

If we move together with urgency and focus, we can blunt the coming crisis. If we don’t, the scam complexes of tomorrow will outpace us, leaving individuals, businesses, and entire economies exposed.

The clock is ticking. Let’s not wait until the scams of the future arrive.
