
AI Can’t Fix Human Error, But It Can Redesign Cybersecurity Around It

By Ian Garrett, CEO & Co-founder at SendTurtle (powered by Phalanx AI)

Artificial intelligence is quickly becoming both the sharpest sword and strongest shield in cybersecurity. On one hand, attackers are using generative AI to craft convincing phishing campaigns, automate vulnerability discovery, and even develop malware variants faster than traditional defenses can keep up. On the other hand, defenders are embracing AI for log analysis, anomaly detection, and predictive threat modeling. The hype is enormous, and the temptation is clear: if attackers are using AI, then defenders should simply deploy more AI to counter them. 

But that approach is shortsighted. Amid the AI arms race, one reality hasn’t changed: humans remain the root cause of most breaches. Whether it is a misconfigured cloud environment, a weak or stolen password, or a moment of poor judgment in clicking a malicious link, people remain the entry point that adversaries exploit. AI can help address these challenges, but it cannot erase the human factor entirely. If next-generation defense strategies are to succeed, AI must be applied not just to automate detection, but to make security simpler, more resilient, and harder for attackers to turn against the people who use it.

Threat actors have always been quick to adopt emerging technology, and AI has become one of their most powerful assets. Generative models allow attackers to create highly convincing phishing campaigns at scale. Unlike traditional phishing attempts riddled with grammatical errors and awkward phrasing, AI-generated emails are context-aware, industry-specific, and nearly indistinguishable from legitimate communication. The Verizon 2025 Data Breach Investigations Report (DBIR) again confirmed phishing as the top cause of costly data breaches. Rising click-through rates on these more convincing lures have made phishing campaigns more effective than ever before.

AI is also powering the rise of deepfake-driven social engineering. Synthetic audio or video can convincingly impersonate executives, enabling attackers to trick employees into authorizing fraudulent financial transfers or granting system access. A voice clone that sounds like a CFO asking for “urgent help” can bypass the natural skepticism that might be triggered by a suspicious email. In 2024, a Hong Kong finance worker was tricked into transferring $25 million after a deepfake video call impersonated senior executives. The ability to mimic trusted individuals with such precision fundamentally shifts the balance in favor of attackers. 

Beyond social engineering, AI models are being used to automate vulnerability discovery. Machine learning can quickly identify misconfigured cloud storage, exposed APIs, or outdated software versions that are ripe for exploitation. Tasks that once required specialized expertise can now be performed at scale by less experienced threat actors. This lowers the barrier to entry for cybercrime, increasing both the volume and sophistication of attacks. 

On the defensive side, organizations have embraced AI with equal enthusiasm. Security operations centers rely on machine learning to parse billions of log entries, spotting anomalies that would be impossible for human analysts to detect in real time. AI systems help establish behavioral baselines for users and devices, flagging when someone suddenly downloads a massive amount of data at 2 a.m. or when a system begins communicating with an unusual server abroad. These tools are invaluable for incident triage, allowing teams to prioritize the alerts most likely to indicate a real threat.
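To make the idea of a behavioral baseline concrete, here is a minimal sketch that flags a data transfer far outside a user’s historical norm. The event shape, thresholds, and in-memory history are illustrative assumptions, not a description of any particular product; real platforms use far richer statistical and machine learning models.

```typescript
// Minimal sketch: flag a data transfer that deviates sharply from a user's
// historical baseline. Thresholds and data are illustrative only.

interface TransferEvent {
  userId: string;
  bytes: number;
  hourOfDay: number; // 0-23, local time of the event
}

// Hypothetical store of per-user historical transfer sizes (bytes).
const history: Record<string, number[]> = {
  "u-1042": [120e6, 95e6, 110e6, 130e6, 105e6],
};

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

function isAnomalous(event: TransferEvent): boolean {
  const past = history[event.userId] ?? [];
  if (past.length < 5) return false; // not enough data to baseline

  const z = (event.bytes - mean(past)) / (stdDev(past) || 1);
  const offHours = event.hourOfDay < 6 || event.hourOfDay > 22;

  // Flag large statistical outliers, and be stricter outside working hours.
  return z > (offHours ? 2 : 4);
}

// Example: a 2 a.m. transfer far above this user's normal volume.
console.log(isAnomalous({ userId: "u-1042", bytes: 2.5e9, hourOfDay: 2 })); // true
```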

Yet while these applications are critical, they risk locking defenders into a reactive cycle: an “AI vs. AI” arms race. For every new anomaly detection model deployed, attackers experiment with adversarial inputs designed to bypass it. As defenders become faster at detecting malware variants, attackers use AI to generate even more. This back-and-forth may keep security teams busy, but it doesn’t fundamentally shift the advantage away from attackers. 

The more sustainable approach is to use AI not only for detection and response, but also to build structural defenses that are harder for attackers to exploit in the first place. Instead of constantly trying to outmatch adversaries’ AI tools, defenders should focus on applying AI where it can remove human error, simplify workflows, and reduce the opportunities attackers can exploit. In other words, the real power of AI in cybersecurity is not in escalation, but in prevention. 

Passwords have been one of cybersecurity’s weakest points for decades. Despite endless awareness campaigns, people still reuse them, fall for phishing, or pick weak combinations that attackers can easily guess. AI only makes these problems worse. With machine learning, adversaries can predict common patterns, automate credential stuffing at scale, and generate fake login pages nearly indistinguishable from the real thing. What once took time and skill is now faster, cheaper, and more effective with AI in the mix. 

This is why defenders need to move beyond “smarter password management” and focus instead on eliminating passwords altogether. Passkeys, built on cryptographic authentication, tie credentials to devices, which makes them impossible to phish, guess, or reuse. Even if an attacker launches a flawless AI-powered phishing campaign, there is nothing to steal when organizations stop relying on traditional passwords. The breakthrough is that passkeys aren’t just more secure; they’re also easier for users, removing the burden of memorizing complex strings or maintaining password managers. This captures a principle that should guide all AI-era defense strategies: if AI makes exploitation cheap and scalable, the only real solution is to redesign the system so there’s nothing left to exploit.
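For readers curious what “cryptographic authentication tied to devices” looks like in practice, the browser-side sketch below registers a passkey with the standard WebAuthn API (navigator.credentials.create). The server endpoints and the base64-encoded challenge are assumptions for illustration; challenge generation and attestation verification happen server-side.

```typescript
// Browser-side sketch of passkey registration using the WebAuthn API.
// /webauthn/register/options and /webauthn/register/verify are hypothetical endpoints.

function bufToB64(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...new Uint8Array(buf)));
}

async function registerPasskey(username: string): Promise<void> {
  // 1. Fetch registration options (including a random, single-use challenge) from the server.
  const res = await fetch("/webauthn/register/options", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username }),
  });
  const options = await res.json();

  // 2. Ask the authenticator (platform biometrics or a security key) to create
  //    a key pair bound to this origin and this device.
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge: Uint8Array.from(atob(options.challenge), (c) => c.charCodeAt(0)),
      rp: { name: "Example Corp", id: window.location.hostname },
      user: {
        id: new TextEncoder().encode(username),
        name: username,
        displayName: username,
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { residentKey: "preferred", userVerification: "required" },
    },
  })) as PublicKeyCredential;

  // 3. Send the attestation back for the server to verify and store the public key.
  //    The private key never leaves the device, so a phishing page has no secret to capture.
  const response = credential.response as AuthenticatorAttestationResponse;
  await fetch("/webauthn/register/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      clientDataJSON: bufToB64(response.clientDataJSON),
      attestationObject: bufToB64(response.attestationObject),
    }),
  });
}
```

Sign-in is the mirror of this ceremony: the browser calls navigator.credentials.get with a fresh challenge and the device signs it locally, so there is still no shared secret to steal or replay.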

Misconfigurations are one of the most dangerous (and overlooked) risks in cybersecurity. Cloud services, firewalls, and identity systems all rely on complex settings, and a single mistake can expose sensitive data or grant excessive access. With systems more interconnected than ever, misconfigurations remain a leading cause of breaches. Attackers now use AI to spot these weaknesses at scale, sweeping the internet for exposed databases, insecure APIs, or permissive controls within hours of their introduction. The window of vulnerability is shrinking, and manual oversight can’t keep up. 

Defensive AI offers a way forward. By learning what a secure baseline looks like, AI can continuously monitor configurations, catch drift in real time, and even remediate risky settings automatically. Instead of overwhelming administrators with alerts, AI can surface the most critical issues and act before attackers exploit them. In this way, AI can transform one of cybersecurity’s biggest liabilities into a strength, reducing opportunities for exploitation and embedding resilience directly into the system. 
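As a rough illustration of catching configuration drift against a known-good baseline, here is a minimal sketch. The configuration keys and values are hypothetical; a real system would pull live settings from cloud provider APIs, rank findings by severity, and decide when to auto-remediate.

```typescript
// Minimal sketch of configuration drift detection: compare a live configuration
// snapshot against an approved baseline and report settings that have drifted.

type Config = Record<string, string | boolean | number>;

const baseline: Config = {
  "storage.public_access": false,
  "api.auth_required": true,
  "tls.min_version": "1.2",
};

interface Drift {
  key: string;
  expected: unknown;
  actual: unknown;
}

function detectDrift(current: Config, approved: Config): Drift[] {
  const drift: Drift[] = [];
  for (const [key, expected] of Object.entries(approved)) {
    if (current[key] !== expected) {
      drift.push({ key, expected, actual: current[key] });
    }
  }
  return drift;
}

// Example: a storage bucket quietly flipped to public access.
const live: Config = {
  "storage.public_access": true,
  "api.auth_required": true,
  "tls.min_version": "1.2",
};

for (const d of detectDrift(live, baseline)) {
  // In practice this is where an AI-driven system would rank severity and
  // either alert or auto-remediate; here we simply log the finding.
  console.warn(`Drift in ${d.key}: expected ${d.expected}, found ${d.actual}`);
}
```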

Phishing remains the leading entry point for breaches, and AI has made it far more dangerous. Generative models can craft personalized, flawless messages tailored to specific industries, making them harder to detect and easier to trust. Traditional AI filters that scan for suspicious language help, but they risk falling into an endless arms race as attackers refine their techniques to bypass them. 

A stronger approach is to use AI to analyze behavior around phishing attempts, not just the messages. If an employee receives an unusual login request, AI can cross-check whether that interaction fits their normal patterns, role, or device use. This kind of real-time context analysis asks: is this action normal for this person, right now? AI can also reshape the human experience of phishing defense. Instead of generic warnings, adaptive prompts can provide real-time coaching that alerts a user that a link is inconsistent with their usual vendor activity and offers to open it in a protected environment. This turns defense into collaboration, where AI guides people toward safer behavior instead of overwhelming them with noise.

Defenders can push further upstream by using AI to automatically detect and dismantle phishing infrastructure before it ever reaches employees. Just as attackers use automation to spin up fake domains and credential sites, defenders can leverage AI to disrupt those resources at scale.
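A rough sketch of the real-time context check described above might look like the following. The profile fields, risk weights, and threshold are invented for illustration; a production system would learn them from behavioral data rather than hard-code them.

```typescript
// Illustrative contextual risk check: score an action against what is
// normal for this user right now. Weights and thresholds are made up.

interface UserProfile {
  usualCountries: string[];
  usualDeviceIds: string[];
  typicalActiveHours: [number, number]; // e.g. [8, 18]
}

interface ActionContext {
  country: string;
  deviceId: string;
  hourOfDay: number;
  linkDomainSeenBefore: boolean; // has this user interacted with the domain before?
}

function contextRisk(profile: UserProfile, action: ActionContext): number {
  let risk = 0;
  if (!profile.usualCountries.includes(action.country)) risk += 2;
  if (!profile.usualDeviceIds.includes(action.deviceId)) risk += 2;
  const [start, end] = profile.typicalActiveHours;
  if (action.hourOfDay < start || action.hourOfDay > end) risk += 1;
  if (!action.linkDomainSeenBefore) risk += 1;
  return risk; // 0 = routine, higher = increasingly unusual
}

// One possible policy: coach the user or open the link in isolation when risk is high.
const risk = contextRisk(
  { usualCountries: ["US"], usualDeviceIds: ["laptop-7"], typicalActiveHours: [8, 18] },
  { country: "US", deviceId: "unknown-phone", hourOfDay: 23, linkDomainSeenBefore: false }
);
console.log(risk >= 3 ? "step-up: coach user / open in sandbox" : "allow");
```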

One of the biggest failures of cybersecurity over the past two decades is that many tools have made security more complex for the very people they were meant to protect. Employees are expected to juggle dozens of passwords, security teams drown in alert dashboards, and executives often face technical reports they can’t act on. Complexity is more than an inconvenience; it actively undermines security by creating friction that leads to mistakes, workarounds, or missed signals.

AI has the potential to redefine the defender’s advantage. Rather than adding yet another layer of dashboards and alerts, AI can simplify security in ways that make it more usable and less error-prone for both employees and security professionals. For example, invisible automation powered by AI can handle behind-the-scenes verification without requiring employees to complete repetitive authentication steps. Security becomes stronger because users are asked to do less, instead of begged to do more. 

For administrators, AI can act as a unifying layer across fragmented security tools. Instead of manually correlating thousands of alerts from different platforms, AI can synthesize signals into a single, context-rich narrative: here’s what happened, here’s why it matters, and here’s what to do about it. This moves security from reactive alert fatigue to proactive decision-making. The most important principle is that AI should be used to reduce the cognitive load on humans. Adaptive security policies, powered by AI, can automatically adjust based on user behavior and risk level. A traveling executive logging in from a new country may face stronger verification checks, while an employee working from their normal device in the office should encounter minimal friction. By tailoring defenses to context, AI makes security not only stronger but also more seamless. 
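The adaptive-policy idea can be sketched in a few lines: map the risk of a login context to the level of verification required. The tiers, fields, and thresholds below are illustrative assumptions; in practice the risk score would come from a behavioral model like the one sketched earlier.

```typescript
// Sketch of an adaptive access policy: required verification scales with the
// risk of the login context. Tiers and thresholds are illustrative only.

type Verification = "none" | "passkey" | "passkey_plus_manager_approval";

interface LoginContext {
  riskScore: number;      // e.g. output of a behavioral model
  newCountry: boolean;    // first login from this country
  managedDevice: boolean; // corporate-enrolled device
}

function requiredVerification(ctx: LoginContext): Verification {
  // Low risk on a managed device: keep friction minimal.
  if (ctx.managedDevice && ctx.riskScore < 2 && !ctx.newCountry) return "none";

  // Moderate risk or an unfamiliar location: require a passkey ceremony.
  if (ctx.riskScore < 5) return "passkey";

  // High risk: require a passkey plus an out-of-band approval.
  return "passkey_plus_manager_approval";
}

// Traveling executive, new country, elevated risk score.
console.log(requiredVerification({ riskScore: 4, newCountry: true, managedDevice: true }));
// -> "passkey"
```

The point is not the specific tiers but that friction scales with risk, which is what keeps strong verification from becoming yet another source of user fatigue.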

The future of cybersecurity will not be won by whoever has the fastest algorithms or the largest datasets. It will be won by the organizations that apply AI to build resilience: systems that assume humans will make mistakes and minimize the damage when they inevitably do. Attackers will always find new ways to weaponize AI, but if defenders focus on human-centered design, they can make those attacks far less effective.

Resilience begins with removing obvious targets. Eliminating passwords in favor of passkeys ensures that no matter how convincing an AI-powered phishing email might be, there is nothing to steal. Automating configuration checks reduces the likelihood that a single overlooked setting can bring down an entire network. AI-guided phishing defenses shift the battleground from inbox content to user behavior and attacker infrastructure, forcing adversaries to work harder for smaller gains. 

Equally important is the balance of responsibility between humans and machines. AI should augment people, not overwhelm them. Security analysts should not be buried in thousands of alerts generated by opaque models; instead, they should be empowered with clear, actionable insights distilled from complex data. Employees should not face endless prompts and confusing warnings, but should experience streamlined processes that guide them toward the safe choice without disrupting their work.

This is what cyber resilience in the AI era looks like: not perfection, but systems designed to bend without breaking, even when humans slip up or attackers innovate. By focusing on simplicity, automation, and collaboration, organizations can ensure that their defenses remain strong even as the threat landscape evolves.  
