
The digital battlefield is changing at an unprecedented rate, driven by the same technology created to empower us: AI. Unfortunately, that same technology is now in the hands of cybercriminals as a destructive tool, fueling a sharp rise in advanced threats.
Digital supervillains have upgraded their tactics, wreaking havoc with realistic deepfake scams, constantly mutating AI-developed malware and targeted scam campaigns delivered through texts and calls. The era of simple phishing messages and easily recognizable viruses is fading, making room for an age in which attacks are dynamic, scalable and eerily human-like. AI, in other words, is a double-edged sword.
While AI enables attackers to automate and accelerate their nefarious work, it also offers the best chance of defense. Legacy security controls, premised on detecting known patterns and reacting to previous attacks, are increasingly outmatched by the speed and adaptability of AI-based attacks. As cybercriminals use AI to exploit human vulnerabilities at scale, the only viable, scalable shield is equally intelligent, AI-based tooling that can detect, analyze and react to sophisticated threats in real time. Ultimately, the best tactic is to fight AI with AI.
Why Traditional Cybersecurity Approaches Are Falling Behind
The nature of AI-based threats reveals inherent limitations in traditional cybersecurity controls that were not designed to deal with such adaptive and intelligent attackers.
Signature-based detection, a cornerstone of many legacy systems, is reactive by nature and struggles to keep pace with the rapid mutation and novelty of AI-generated malware strains. By the time a signature for a new threat has been identified and deployed, the malware has usually already shifted to a new form, leaving infrastructure exposed.
Human-centric models of authentication, though essential, are not invulnerable either. Deepfakes can get past voice or facial recognition systems, and more broadly, AI-facilitated social engineering can exploit human psychology and judgment, persuading individuals to bypass security gates or fall for advanced deceptions. A 2025 McAfee report states deepfake scams surged tenfold in the last year, and that figure is likely conservative, since many scam victims are too embarrassed to file a report.
Above all, the sheer speed, volume and flexibility of AI-enabled attacks overwhelm traditional defenses that rely heavily on human action and manual inspection. Security teams cannot manually inspect millions of potential threats or match the pace of attackers who can rapidly create and deploy new exploit variations, resulting in dramatic delays in detection and response.
The Evolving Landscape of AI-Enabled Threats
The integration of AI hasn’t just increased the number of attacks; it has transformed their nature and sophistication. Attackers have shifted to developing and deploying more advanced, more costly and harder-to-detect threats, such as:
- Deepfakes: AI-powered deepfake technology has made it possible to create highly realistic synthetic video and audio. It is being used for malicious purposes, including financial fraud in which voice clones impersonate executives to authorize transactions (“CEO fraud”) or family members in emergency scams. Deepfakes are also powerful tools of disinformation and reputational sabotage, challenging identity and authenticity validation in digital communication. In a recent survey of business leaders across the finance, technology and telecommunications sectors, half of all businesses reported experiencing fraud involving audio and video deepfakes, with businesses losing an average of $450,000 per breach.
- AI-Generated Malware: Traditional cybersecurity often relies on signature-based detection, which identifies known patterns in malicious code. AI, however, can generate highly polymorphic malware that constantly changes its form and behavior. By continually modifying its code, AI-generated malware can produce unlimited variants, making it impossible for static signature databases to keep pace (see the sketch after this list). This allows the malware to go undetected, linger in systems for extended periods and adapt its methods to bypass security software, creating an ongoing threat that evolves faster than traditional defenses can react.
- Automated Scam Campaigns: AI tools allow cybercriminals to send scam calls and texts quickly and cheaply, at enormous scale. Recent data from the Federal Trade Commission shows that consumers reported losing $470 million in 2024 to scams that started with a simple text message, more than five times the $86 million reported in 2020. Trained on large datasets, AI algorithms can generate highly personalized messages and scripts, even referencing specific details about the victim to gain credibility. Natural Language Processing (NLP) allows for persuasive, grammatically correct communications. Automated systems can disseminate these scams en masse, analyze responses in real time and refine their tactics accordingly.
- Phishing-as-a-Service (PhaaS): AI is accelerating the industrialization of cybercrime through PhaaS platforms, which lower the technical barrier for attackers with user-friendly interfaces and automated tools. AI can help cybercriminals craft persuasive email content, produce realistic spoofed websites and automate the targeting and delivery of phishing attacks. By democratizing advanced phishing capabilities and making them scalable, AI-powered PhaaS platforms allow a broader set of malicious actors to carry out highly successful attacks.
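To make the signature-matching limitation concrete, here is a minimal Python sketch of why exact-hash signatures fail against even trivial mutation. The payload bytes and blocklist are placeholders, and real scanners are more sophisticated, but the core weakness is the same:

```python
import hashlib

# Hypothetical hash blocklist, as a simple signature-based scanner might keep.
KNOWN_BAD_HASHES = set()

payload = b"...malicious payload bytes..."  # placeholder sample
KNOWN_BAD_HASHES.add(hashlib.sha256(payload).hexdigest())

def is_flagged(sample: bytes) -> bool:
    """Signature check: flags only exact, previously seen payloads."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

# A polymorphic engine needs to change only a single byte (junk padding,
# re-encryption, reordered instructions) to produce a brand-new "signature".
mutated = payload + b"\x00"

print(is_flagged(payload))  # True  -- the known variant is caught
print(is_flagged(mutated))  # False -- the trivially mutated one slips through
```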
The power of AI allows attackers to harm millions of unsuspecting victims at once. This combination of supercharged deception, evasion and scalability makes these next-generation threats larger, more financially costly and more damaging to reputation and trust than ever before.
AI as the Defensive Line
Fortunately, the same technology used by attackers can be harnessed to build robust, intelligent defenses capable of countering these advanced threats. While Truecaller’s expertise is largely focused on combating voice and text fraud, the principles and tools developed there apply more broadly. Whether the threat is financial fraud, identity theft or phishing, AI-driven solutions can help detect and prevent it across industries such as banking, healthcare and e-commerce.
By using AI to detect patterns of fraud, verify identities and automate responses, businesses and organizations can proactively defend against a wide range of threats, not just those targeting communications. AI’s adaptability means it can be tailored to combat fraud in almost any sector.
Deploying AI tools for defense offers the proactive, adaptive and scalable capabilities required to meet the current threat landscape. Effective measures include:
- AI-Based Threat Detection: Beyond signature matching, AI can scan massive streams of data from networks, endpoints and user activity to identify anomalous patterns in real time. Machine learning models can learn what “normal” looks like for a specific user or system and flag deviations that may indicate an attack, even an AI-designed one whose exact exploit has never been seen before (see the first sketch after this list).
- Deepfake Detection Software: New AI models are being developed and trained specifically to identify the subtle artifacts, discrepancies and statistical anomalies inherent in synthetic media but invisible to human perception. They can scan video and audio to assess authenticity, providing an essential layer of security against deepfake impersonation.
- NLP: AI-powered NLP is increasingly widespread; the market, valued at $24 billion in 2023, is projected to reach $158 billion by 2032, illustrating how indispensable it has become in social engineering detection. By examining language, tone, urgency and linguistic patterns in emails, messages and even voice call transcriptions, NLP models can identify the characteristic signs of phishing, scams or malicious intent, even in highly personalized and creative communications (see the second sketch after this list).
- Continuous Authentication: Rather than authenticating a user only at login, AI can enable continuous authentication systems. These monitor the user’s ongoing behavior, typing patterns, mouse activity, location and other contextual data to re-verify identity throughout a session. If behavior deviates significantly from the learned baseline, the system can request re-authentication or flag the session as possibly compromised, mitigating scenarios where initial credentials were stolen through phishing or deepfake attacks (see the third sketch after this list).
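As a first sketch, here is a minimal example of behavioral anomaly detection using scikit-learn’s IsolationForest. The per-session features (login hour, megabytes transferred, failed logins) and thresholds are invented for illustration, not a production pipeline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [login hour, MB transferred, failed logins].
# "Normal" for this user: daytime logins, modest transfers, rare failures.
normal = np.column_stack([
    rng.normal(10, 2, 500),   # login hour clusters around 10:00
    rng.normal(50, 15, 500),  # ~50 MB transferred per session
    rng.poisson(0.2, 500),    # occasional failed login
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. session moving 900 MB after 6 failed logins.
print(model.predict([[3, 900, 6]]))   # [-1] -> flagged as anomalous
print(model.predict([[11, 45, 0]]))   # [1]  -> consistent with the baseline
```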
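As a second sketch, a deliberately crude scam-text scorer. The hand-written cue lists stand in for the urgency and pressure signals that real NLP models learn statistically from labeled data:

```python
import re

# Hypothetical cue lists; production models learn these patterns from data
# rather than relying on hand-written keywords.
URGENCY_CUES = [r"\burgent\b", r"\bimmediately\b", r"\bwithin 24 hours\b",
                r"\baccount (?:is )?(?:locked|suspended)\b"]
PRESSURE_CUES = [r"\bverify your\b", r"\bclick (?:here|the link)\b",
                 r"\bgift card\b", r"\bwire transfer\b"]

def scam_score(text: str) -> float:
    """Crude 0..1 score: fraction of cue groups present in the message."""
    text = text.lower()
    hits = sum(any(re.search(p, text) for p in group)
               for group in (URGENCY_CUES, PRESSURE_CUES))
    return hits / 2

msg = "Your account is locked. Verify your details immediately via the link."
print(scam_score(msg))  # 1.0 -> both urgency and pressure cues present
```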
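And as a third sketch, a toy continuous-authentication check comparing a session’s typing cadence against a learned per-user baseline. The interval data and z-score threshold are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical baseline: this user's inter-keystroke intervals in ms,
# collected over many past sessions.
baseline = [112, 98, 105, 120, 101, 95, 110, 108, 99, 115]
mu, sigma = mean(baseline), stdev(baseline)

def session_suspicious(intervals: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a session whose mean typing cadence deviates strongly from baseline."""
    z = abs(mean(intervals) - mu) / sigma
    return z > z_threshold

print(session_suspicious([104, 99, 117, 108]))   # False -- matches the owner
print(session_suspicious([220, 260, 240, 250]))  # True  -- someone else typing?
```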
Fighting AI Is a Collaborative Effort
In Truecaller’s experience, AI is a crucial ally in the fight against phone and text fraud, with clear potential for application across industries worldwide. Carefully trained and deployed, AI can predict and monitor emerging cyber threats before they cause irreparable damage; doing this effectively, however, requires collaboration among government entities, businesses and consumers.
This collective effort is essential for enabling pivotal AI tools like behavior-based detection systems, which can analyze historical and real-time communication patterns to detect anomalies that can indicate fraud.
- Federal law enforcement can share vital threat intelligence on known malicious infrastructure.
- Businesses (such as telecommunications providers, email services and social media platforms) can provide anonymized pattern data from the high volume of digital interactions they oversee.
- Consumers can support the cause by reporting suspicious activities, adding crucial, up-to-the-minute data points on emerging tactics.
For example, suppose a phone call impersonates a government official. The AI system’s ability to flag it depends on having this shared intelligence: it can check whether the number matches known scam lists (government intel), analyze call origin patterns against typical behavior (business data) and identify linguistic inconsistencies learned from reported scam communications (business/consumer data), as in the sketch below. Without comprehensive input from all of these sources, the AI system lacks the insight needed to preemptively block such interactions and stop scams before they ever reach the user.
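A minimal sketch of how those three sources might combine into a single decision; the numbers, weights and thresholds here are all hypothetical:

```python
# Illustrative only: each signal maps to one contributor described above.
KNOWN_SCAM_NUMBERS = {"+15550100123"}       # government threat intel
TYPICAL_DAILY_CALLS = {"+15550100123": 40}  # business pattern data (calls/day)
REPORTED_SCAM_PHRASES = ["final warning", "arrest warrant", "pay a fine"]  # consumer reports

def call_risk(number: str, calls_today: int, transcript: str) -> float:
    """Combine the three shared-intelligence sources into a 0..1 risk score."""
    score = 0.0
    if number in KNOWN_SCAM_NUMBERS:        # number on a known scam list
        score += 0.5
    baseline = TYPICAL_DAILY_CALLS.get(number, calls_today)
    if calls_today > 10 * baseline:         # sudden spike vs. historical volume
        score += 0.25
    if any(p in transcript.lower() for p in REPORTED_SCAM_PHRASES):
        score += 0.25                       # language seen in reported scams
    return score

print(call_risk("+15550100123", 500,
                "This is your final warning about an arrest warrant."))  # 1.0 -> block
```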
Building a Resilient Future
It is no secret that fraudsters collaborate with one another; the public and private sectors must also work in tandem. Services that identify fraudulent or high-risk numbers, based on large databases and observed patterns of scam activity, can go a long way toward preventing or minimizing exposure to fraudsters. It is imperative to choose services that use AI and machine learning to detect scams, especially voice impersonations, as deepfakes rapidly gain traction and threaten consumers and organizations worldwide. By pairing AI with public intelligence gathering, reporting and large shared databases, society will be better equipped to fight malicious scams and maintain a resilient, secure front for digital communications.