
If 2024 was the year of peer-to-peer (P2P) and authorized push payment (APP) fraud, then 2025 is set to be the year of agentic, artificial intelligence (AI)-driven fraud. In addition to synthetic identities, automated attacks and vulnerability scanning, cybercriminals are using sophisticated deepfake audio and video to impersonate trusted parties and carry out social engineering attacks.
As fraudulent activity continues to tap into transformative technology, traditional risk intelligence that relies on risk scores and out-of-band authentication won't be enough to stop these attacks. Instead, the focus should be on adopting a more advanced cybersecurity approach that includes proactive and layered authentication methods to protect institutions and customers alike. This starts with a comprehensive understanding of how fraud attacks have evolved and why broader risk signals are more critical than ever in helping financial institutions (FIs) adapt.
Top Fraud Attack Methods in 2025
Fraud attacks in financial services are evolving faster than at any time in history. Deloitte's Center for Financial Services predicts that gen AI could enable fraud losses to reach $40 billion in the United States by 2027. AI has become the catalyst for the most prominent fraud attacks in digital banking due to its increasing sophistication. A few of the top fraud vectors through mid-2025 include:
- AI-driven deepfakes and impersonations: Cybercriminals are increasingly using AI to create realistic deepfake audio and video to impersonate executives, employees, or even loved ones in scams like business email compromise and social engineering attacks. These aim to trick individuals into authorizing fraudulent transactions or sharing sensitive information.
- Targeted social engineering: While traditional phishing remains prevalent, the trend is shifting towards more personalized and convincing spear-phishing attacks. Fraudsters research their targets to craft seemingly legitimate messages (emails, texts, phone calls) that exploit trust and psychological manipulation.
- Account takeover (ATO) fraud: With the proliferation of mobile wallets, P2P payment apps, and cryptocurrency platforms, and with legacy SMS one-time passcode (OTP) authentication still the dominant form of protection for online accounts, ATO fraud remains a top threat. Fraudsters gain unauthorized access to accounts, often through social engineering and credential-stuffing techniques, to drain funds or misuse the account.
- Real-time payment fraud: The rise of faster payment systems facilitates quicker fraud. Fraudsters exploit the speed of these transactions, using social engineering and other tactics scaled with the help of AI, to initiate unauthorized transfers before victims or institutions can react.
- Synthetic identity fraud: This involves creating fake identities, often combining real and fabricated information, to bypass verification processes and open fraudulent accounts or apply for credit.
The Rising Need to Analyze More Risk Signals in Real Time
Financial institutions are facing persistent, sophisticated and calculated attacks largely driven by the speed of AI and deepfake technology evolution. Mastercard’s Chief Services Officer said that by 2030, the company expects the pace of digitization to accelerate further, leading to increased risks related to fraud and cybersecurity. These new attacks require a more responsive, dynamic authentication process, including the ability to collect risk signals and behavioral data across multiple channels and the ability to analyze it in real time.
When combining user behavior data with trusted signals like device identity, location, transaction analysis and consortium data, an FI creates a robust system that uses active and silent authentication risk signals to combat AI-driven fraud. These risk signals paint the picture of user behavior dynamics, allowing FIs to utilize more dynamic risk authentication processes that flag activity that’s suspicious, while removing friction on activity that’s typical of the customer.
Authentication methods that utilize more robust risk signals also keep transactions seamless with smart risk-scoring technology and the collection of positive and negative behavioral signals from the user's past engagements. Institutions that leverage risk intelligence and adaptive authentication in their fraud detection process provide users with low-friction experiences and can establish identity on transactions that may previously have been auto-declined. This approach also protects users whose activity falls outside their "normal," taking steps to prevent fraud before it costs them.
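The weighted combination of risk signals described above can be sketched in a few lines. The signal names, weights, and thresholds below are illustrative assumptions, not any vendor's actual model: the point is that passive ("silent") signals are evaluated first, and friction is added only as the combined score rises.

```python
# Illustrative weights for common risk signals; real systems tune these
# with models trained on historical fraud data. All names are assumptions.
WEIGHTS = {
    "new_device": 0.35,       # device-identity signal
    "geo_velocity": 0.30,     # impossible-travel / location signal
    "typing_anomaly": 0.20,   # behavioral-biometrics signal
    "consortium_flag": 0.15,  # shared fraud-intelligence signal
}

def risk_score(signals: dict) -> float:
    """Combine boolean risk signals into a weighted score in [0, 1]."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name, False))

def friction_level(score: float) -> str:
    """Map the combined score to an authentication response."""
    if score < 0.25:
        return "silent"          # passive signals suffice; no user friction
    if score < 0.60:
        return "step-up"         # e.g., push approval or passkey challenge
    return "block-and-review"    # hold the transaction for investigation

# A typical customer on a known device sails through with no friction;
# a new device plus an impossible-travel flag triggers a hold.
print(friction_level(risk_score({})))                                   # silent
print(friction_level(risk_score({"new_device": True,
                                 "geo_velocity": True})))               # block-and-review
```

The key design choice here is that low scores produce no visible challenge at all, which is what lets an FI remove friction from typical customer activity while still escalating on anomalies.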
AI's increasing accessibility and capabilities enable fraudsters to launch more targeted, convincing and automated social engineering attacks at a speed that makes them difficult to keep up with. AI also supplies the intelligence behind attacks, allowing an attacker agent to adapt in real time based on defensive responses. When facing dynamic attacks, financial institutions need the ability to leverage data across channels and systems and to respond dynamically, in real time, with the appropriate authentication method.
Modern authentication methods must be context-aware: using signals in real time to identify the attack, then responding in real time with the appropriate authentication challenge to protect users from dynamic attack vectors.
To be ready for the next generation of attack vectors, FIs will need to adapt their authentication strategies to become dynamic and context-aware. Static, siloed authentication approaches will become easy targets for the next generation of cybercriminals.

