As cybersecurity professionals, we already know that humans are the weakest link. But with the rise in popularity of artificial intelligence (AI), attackers are now exploiting human trust in more targeted, scalable, and convincing ways than ever before.
The risks of social engineering, already present in an estimated 98% of cyberattacks, have grown dramatically in an era where deepfakes are becoming commonplace. In fact, SumSub reports that the number of deepfakes detected globally across all industries increased tenfold from 2022 to 2023.
This isn't just a warning to include in your next round of security awareness training. This is a call for CISOs, compliance leaders, and security teams to reframe how you talk about social engineering within your organization and equip your team members with the tools and knowledge they need to recognize and avoid AI-powered deception.
The Rise of Deepfakes
Social engineering has always thrived on one basic principle: trust. Whether it's a convincing phone call from a bad actor claiming to be tech support or a seemingly innocent LinkedIn message from a fake recruiter, the goal is to manipulate human behavior. What's different now is scale and realism.
Thanks to generative AI, attackers no longer need to spend hours researching to collect and analyze wide swaths of personal data. AI tools can gather information from public sources instantly and be used alongside deepfake technology to create scams that feel authentic.
In early 2024, UK-based engineering firm Arup lost $25 million to a sophisticated deepfake video conference scam. The attackers staged a realistic video call featuring a deepfake of a senior executive, complete with a cloned voice, to request an urgent wire transfer. In another incident, an employee at a UAE bank was duped into authorizing a $35 million transfer after receiving a deepfake voice call from someone they believed was a company director.
These attacks circumvent the common identifiers we teach about in traditional security awareness training. The grammar is perfect, and the voice sounds exactly like someone you know. Security teams must begin training employees to scrutinize and authenticate every request, especially when it's urgent or strays from standard operating procedures.
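One way to operationalize that scrutiny is a dual-channel rule: no sensitive action proceeds on the strength of a single communication channel. Here is a minimal sketch of what such a rule might look like in Python; the channel names, dollar threshold, and `may_execute` function are hypothetical placeholders, not a prescribed standard.

```python
# A minimal sketch of a dual-channel approval rule. The channel names and the
# dollar threshold are illustrative assumptions, not an industry standard.
SENSITIVE_THRESHOLD_USD = 10_000

# Channels an attacker running a deepfake video or voice call cannot easily
# control: a callback to a number from the company directory, an in-person
# check, or a confirmation logged through the ticketing system.
INDEPENDENT_CHANNELS = {"callback_known_number", "in_person", "ticketing_system"}

def may_execute(amount_usd: float, channels_verified: set[str]) -> bool:
    """Allow a sensitive transfer only after verification on at least two
    channels, one of which is independent of the original request."""
    if amount_usd < SENSITIVE_THRESHOLD_USD:
        return True
    has_independent = bool(channels_verified & INDEPENDENT_CHANNELS)
    return has_independent and len(channels_verified) >= 2

# A convincing video call alone should never clear the bar:
print(may_execute(25_000_000, {"video_call"}))                           # False
print(may_execute(25_000_000, {"video_call", "callback_known_number"}))  # True
```

The point is less the code than the policy it encodes: a video call and a voice call are not independent channels, because both can now be faked.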
The Evolution of Phishing
For years, phishing detection has focused on telltale signs like bad grammar, generic greetings, and odd formatting. But these commonly known red flags are quickly becoming obsolete.
With tools like WormGPT and FraudGPT, uncensored versions of popular large language models (LLMs), attackers can now generate emails that mimic your organization's tone and reference internal initiatives and processes. These emails sound personal and professional, and they can be tailored based on the recipient's or their employer's social media activity.
In one 2023 example, European law firms were targeted with emails containing links to fake case files. Because the emails matched the tone and structure of actual legal correspondence, senior partners clicked through, resulting in credential theft and the exposure of sensitive client data.
Phishing defenses must evolve beyond static indicators and focus more on behavioral context. Is this request unusual for this person? Does this email try to bypass normal processes? If so, it's worth extra scrutiny.
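To make "behavioral context" concrete, here is a rough sketch of how an email might be scored on those questions. The phrase lists, weights, threshold, and `Email` fields are all illustrative assumptions; a real system would learn these signals from historical mail flow rather than hardcode them.

```python
from dataclasses import dataclass

# Illustrative signals only; a production system would derive these from data.
URGENCY_PHRASES = ["urgent", "immediately", "before end of day", "right away"]
BYPASS_PHRASES = ["keep this between us", "skip the usual", "just this once"]

@dataclass
class Email:
    sender: str
    body: str
    requests_payment: bool         # e.g., set by an upstream classifier
    sender_is_first_contact: bool  # no prior thread with this address

def context_risk_score(email: Email) -> int:
    """Score an email on behavioral red flags rather than grammar or spelling."""
    score = 0
    body = email.body.lower()
    score += sum(2 for p in URGENCY_PHRASES if p in body)
    score += sum(3 for p in BYPASS_PHRASES if p in body)  # process bypass weighs more
    if email.requests_payment:
        score += 3
    if email.requests_payment and email.sender_is_first_contact:
        score += 4  # unusual: first-ever contact asking to move money
    return score

email = Email(
    sender="cfo@examp1e-corp.com",  # hypothetical lookalike address
    body="Urgent: wire the vendor before end of day. Keep this between us.",
    requests_payment=True,
    sender_is_first_contact=True,
)
if context_risk_score(email) >= 8:  # threshold is an assumption; tune to your mail flow
    print("Hold for manual verification through a second channel.")
```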
AI Chatbots and Support Spoofing
Another new risk for organizational leaders to be aware of is the rise of AI chatbot support spoofing. These malicious AI-powered chatbots can walk a victim through an entire scam in real time, providing seemingly helpful instructions while harvesting log-in credentials and MFA codes or installing malware. Victims often don't realize they've been compromised until well after the fact, because the chatbot interaction felt just like a real support session.
An example of this occurred in 2023, when attackers embedded a fake chatbot into a spoofed Microsoft 365 log-in page. The bot assured users it was verifying their identity while quietly harvesting sensitive information. In another case, a fake DHL chatbot scammed users out of credit card information under the guise of unpaid delivery fees.
It's important to emphasize to your team members that real support agents will never ask for sensitive credentials via chat. If urgency or threats of account deactivation are involved, disengage. Instead of clicking links in emails, navigate to the site yourself so you can be more confident that the log-in page is legitimate.
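For organizations that run or monitor their own support chat, that guidance can be reinforced with even a crude transcript filter on the agent side. The sketch below flags "agent" messages that solicit credentials or apply deactivation pressure; the pattern list is an illustrative assumption, not a complete ruleset.

```python
import re

# Phrases a legitimate support agent should never send; list is illustrative.
FORBIDDEN_AGENT_PATTERNS = [
    r"\bpassword\b",
    r"\bMFA code\b",
    r"\bone[- ]time (code|passcode)\b",
    r"\bverification code\b",
    r"account will be (deactivated|suspended) (today|immediately)",
]

def flag_agent_message(message: str) -> bool:
    """Return True if a 'support agent' message solicits credentials or
    applies deactivation pressure, both hallmarks of chatbot spoofing."""
    return any(re.search(p, message, re.IGNORECASE) for p in FORBIDDEN_AGENT_PATTERNS)

print(flag_agent_message("To verify your identity, please type your password here."))  # True
print(flag_agent_message("Your ticket has been escalated to tier 2."))                 # False
```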
AI-Powered Social Reconnaissance
The power of AI doesn't stop at impersonation. It also enables attackers to conduct elaborate reconnaissance, scraping data from LinkedIn, Instagram, and other public platforms to build detailed psychological profiles of their targets and craft phishing messages that reference your real interests and relationships, both personal and professional. This profiling doesn't just include where you work or who you're connected to. It also includes your hobbies, your tone of voice, your values, and even your writing style.
In 2025, this is how attackers cook up convincing lies. It's not mass emails; it's highly tailored messages that reference your actual colleagues, mimic your company's internal tone, and mention specific projects or deadlines, making even well-trained professionals vulnerable.
In one case from 2023, attackers impersonated CFOs using voice and writing style models trained on company press releases and public blogs. Their phishing emails to accounting team members were so convincing that several organizations processed fraudulent payments, believing the messages came from their actual leadership team.Ā
In a world where an attacker can learn everything about you in seconds, "don't overshare online" is no longer just a personal safety tip; it's a frontline defense tactic.
Defending Against AI-Driven Social Engineering
To counter these threats, security leaders must adopt layered defenses that are proactive rather than reactive. Some of the most effective strategies include:
- Require live verification: Never approve sensitive actions based solely on a single communication channel. Follow up through a separate, trusted method.
- Use tools to detect deepfakes: Solutions like Microsoft Video Authenticator can help identify manipulated media, but only when coupled with human skepticism.
- Look out for suspicious behavior: Encourage employees to assess whether a request fits the sender's usual behavior. Train them to trust their instincts when something feels "off."
- Use caution when speaking with AI chatbots: Train your team members to immediately exit any session that pressures them for credentials or MFA codes. IT support should never ask for passwords over chat.
- Check the URL before clicking: Especially in chatbot or phishing scenarios, hover over a link to confirm it leads where you expect. Whenever possible, navigate to the site yourself instead of clicking links (see the sketch after this list).
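Parts of that URL check can even be automated. The sketch below assumes a small allowlist of domains your organization actually uses and flags links whose domains merely resemble a trusted one, the classic lookalike trick behind the spoofed Microsoft 365 page described above.

```python
import difflib
from urllib.parse import urlparse

# Hypothetical allowlist; replace with the domains your organization uses.
TRUSTED_DOMAINS = {"microsoft.com", "dhl.com", "examplecorp.com"}

def link_verdict(url: str) -> str:
    """Classify a link as trusted, lookalike, or unknown by its domain."""
    host = urlparse(url).hostname or ""
    # Naive registered-domain extraction; real code should use a
    # public-suffix-aware library instead of taking the last two labels.
    domain = ".".join(host.split(".")[-2:])
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # High string similarity to a trusted domain is a lookalike signal.
        if difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.8:
            return f"lookalike of {trusted} -- do not click"
    return "unknown -- navigate to the site yourself instead"

print(link_verdict("https://login.micr0soft.com/session"))  # lookalike of microsoft.com
print(link_verdict("https://www.dhl.com/track"))            # trusted
```

No string check replaces judgment, but pairing a simple verdict like this with the hover habit gives employees a second, mechanical opinion before they click.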
We can no longer rely on the voice we hear or the name on the screen. Authentication needs to be multilayered. Every employee, from the IT help desk to the executive leadership team, needs to be trained not just to recognize suspicious emails, but to question every form of communication.
We also need to acknowledge that these attacks will get worse before they get better. Open-source generative AI models are evolving rapidly, and cybercriminals are beginning to chain multiple AI-driven tactics into sophisticated, multi-stage campaigns. This is the moment for security and compliance leaders to take the lead, invest in detection technologies, and foster a culture of healthy skepticism.
The Bottom Line
As the use of AI tools becomes more widespread among bad actors, organizations must prepare for an onslaught of scams that can bypass our existing defenses, not through zero-day attacks and malware, but through our very human desire to trust. As AI continues to lower the barrier for executing highly convincing attacks, our skepticism remains one of our most reliable tools.
We can't rely on grammar checks or spam filters to protect us. As cybersecurity leaders and consultants, we must change how we think, train, and protect against social engineering attacks.
Cybersecurity isn't just about firewalls or endpoint protection. It's about people. In the age of AI, training your team members about the risks posed by AI is your best line of defense.