
Artificial intelligence is a double agent in cybersecurity. On one hand, it augments defenders with speed, scale, and precision that no human team can match. On the other hand, it arms attackers with tools to launch scams, hacks, and frauds that are disturbingly convincing.
In my role, I see both sides every day. The promise of AI is real, but so is the peril. What makes this moment different is that both defenders and attackers are using the same tooling, foundation models, data, and infrastructure. This really is an arms race, with AI learning to outsmart AI.
AI as a digital shield
The best thing AI has given us is speed. Security used to be a waiting game: set up rules, wait for alerts, and hope the system caught something. That worked for the old world of obvious attacks, but it does not work for today’s subtle ones.
Now, AI learns what normal looks like across identity, network, endpoint, and application telemetry, and flags anything that deviates. A login from an unfamiliar device, a transaction pattern that is just slightly off, a chat message that feels unusual: these are the breadcrumbs humans overlook but machines catch in milliseconds.
That is the real shift. Cybersecurity has moved from reactive to proactive. The goal is to reduce dwell time, contain threats earlier, and prevent loss before funds move. Aim for rapid detection of high-severity events, keep false positives low, and review detection quality on a regular cadence. But that only holds if we keep teaching the models: AI that is not retrained regularly will eventually go blind.
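To make that concrete, here is a minimal sketch of the idea in Python, assuming login events have already been reduced to a handful of numeric features; the feature names, sample values, and settings are illustrative, not a production design.

```python
# Minimal sketch: unsupervised anomaly scoring over login telemetry.
# Feature names, sample values, and the contamination setting are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, is_new_device, km_from_last_login, failed_attempts_last_24h]
baseline_logins = np.array([
    [9, 0, 2.0, 0],
    [10, 0, 0.5, 0],
    [14, 0, 1.2, 1],
    [11, 0, 3.0, 0],
    [16, 0, 0.8, 0],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_logins)  # learn what "normal" looks like for this user or segment

new_login = np.array([[3, 1, 4200.0, 5]])  # 3 a.m., new device, far away, repeated failures
score = model.decision_function(new_login)[0]  # lower means more anomalous

if score < 0:
    print(f"Flag for review (anomaly score {score:.3f})")
```

The specific algorithm matters less than the habit around it: the model is fitted on recent normal behaviour and refreshed on a schedule, which is exactly the retraining discipline that keeps it from going blind.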
Phishing evolves, and so must we
Phishing used to be laughable. Badly written emails with typos and clumsy links. Those days are over. Generative AI lets attackers clone a colleague’s tone of voice, mimic formatting, and even carry out live chats that feel authentic.
Here is the blunt truth: spotting phishing with the naked eye at scale is no longer reliable. If your defence relies on people noticing a bad link, you have already lost. The new mindset must be: do not trust, verify. AI can scan language and metadata at scale, but culture has to do the rest.
That means layering defences: pair content scoring with sender authentication (DMARC, DKIM, SPF), sandboxing for suspicious links and attachments, and out-of-band verification for sensitive requests. Zero trust cannot just apply to networks. It must extend to interactions and processes. Verify payee changes, add extra authentication for unusual approvals, and keep an audit trail of who sent what and when.
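As a rough illustration of how those layers stack, here is a sketch that combines sender-authentication results with a content-risk score and pushes sensitive requests into out-of-band verification. The signal names and thresholds are assumptions, and the content score is presumed to come from an upstream classifier.

```python
# Sketch of layered email triage, assuming upstream systems already provide
# DMARC/DKIM/SPF results and a content-risk score; names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class EmailSignals:
    spf_pass: bool
    dkim_pass: bool
    dmarc_aligned: bool
    content_risk: float            # 0.0-1.0 from a content classifier (assumed upstream)
    requests_payment_change: bool  # sensitive request detected in the message body

def triage(sig: EmailSignals) -> str:
    """Return a routing decision; authentication failures and risky content stack up."""
    auth_failures = sum(not x for x in (sig.spf_pass, sig.dkim_pass, sig.dmarc_aligned))
    if sig.requests_payment_change:
        # Sensitive requests always get out-of-band verification, regardless of score.
        return "verify_out_of_band"
    if auth_failures >= 2 or sig.content_risk > 0.9:
        return "quarantine_and_sandbox"
    if auth_failures == 1 or sig.content_risk > 0.6:
        return "flag_for_review"
    return "deliver"

print(triage(EmailSignals(True, True, True, 0.2, False)))   # deliver
print(triage(EmailSignals(True, False, False, 0.7, True)))  # verify_out_of_band
```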
Fraud at a new scale
Financial platforms have long relied on fraud detection, but AI has taken it to another level. Systems now look at thousands of signals, from where a transaction originates to the rhythm of how a user types. That allows us to spot fraudulent behaviour before funds even leave an account.
But let us be honest. Attackers use the same technology. Synthetic identities, fabricated transaction histories, deepfake voices on support calls. What once took a ring of criminals now takes one skilled individual with the right AI tools. That is the reality we face.
Our job is to raise the attacker's cost: combine behavioural biometrics with device intelligence and velocity checks, tune thresholds by segment, and measure outcomes such as authorisation rates, manual-review burden, false-positive rate, and fraud loss as a share of volume. When AI and humans work in tandem, the advantage tilts back to the defenders.
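A velocity check is one of the simpler signals in that mix. The sketch below keeps a sliding window of recent transactions per account and compares the count against a per-segment limit; the window length and thresholds are assumptions.

```python
# Illustrative velocity check: count recent transactions per account and compare
# against a per-segment limit; the window and thresholds are assumptions.
from collections import defaultdict, deque
from time import time

WINDOW_SECONDS = 600                          # 10-minute sliding window (assumed)
THRESHOLDS = {"retail": 5, "business": 20}    # per-segment limits (assumed)

_recent = defaultdict(deque)  # account_id -> timestamps of recent transactions

def velocity_exceeded(account_id: str, segment: str, now: float | None = None) -> bool:
    now = now or time()
    q = _recent[account_id]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > THRESHOLDS.get(segment, 5)

# A breach of the limit does not block the payment on its own; it raises the risk
# score that the downstream model and, where needed, a human analyst act on.
```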
The AI arms race
This is why I describe today’s environment as an arms race. For every detection model we build, attackers build one designed to evade it. For every improvement in anomaly detection, there is an adversarial attack engineered to trick it.
Some go as far as crafting adversarial inputs, deliberately designed to mislead models and confuse detection systems. Others automate the hunt for vulnerabilities across thousands of systems at once, cutting down the time defenders have to respond. It is already common to see AI fighting AI in real time, with one side learning how to detect and the other learning how to deceive.
On the defender side, keep red‑teaming models for evasion and prompt injection, watch for drift, and be ready to roll back if performance drops.
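Drift monitoring can be as simple as comparing today's score distribution against the one from the period when the model was validated. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the alert threshold is an assumption each team would tune against its own history.

```python
# Sketch of score-drift monitoring, assuming model scores are logged per day.
# The alert threshold and the synthetic distributions are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 8, size=5_000)  # scores from the validation period
todays_scores = rng.beta(2, 5, size=5_000)     # today's scores, shifted upward

stat, p_value = ks_2samp(reference_scores, todays_scores)
if p_value < 0.01:
    # Distribution shift detected: trigger review, and roll back to the last
    # known-good model version if detection quality has actually degraded.
    print(f"Drift alert: KS statistic {stat:.3f}, p={p_value:.1e}")
```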
Balancing innovation and resilience
For fintech companies, this dynamic creates a real tension. The drive to innovate is strong. Customers want payments to be instant, verification to be seamless, and onboarding to be painless. But the very features that make life easier also widen the attack surface.
That is why resilience must be designed in from the start. AI can strengthen defences, but it should never be the only line. Multifactor authentication, zero-trust frameworks, adaptive authentication, API security, and observability remain vital. AI should enhance these measures, not replace them.
Security‑by‑design means threat‑modelling AI features, classifying model risk, enforcing least‑privilege access to training data, rate‑limiting sensitive actions, and keeping a human‑review path for high‑impact decisions.
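Rate-limiting sensitive actions, for instance, needs nothing exotic. A token bucket like the one sketched below caps how quickly high-impact operations can happen, and anything over the cap drops into the human-review path; the capacity and refill rate are illustrative.

```python
# Minimal token-bucket limiter for sensitive actions (e.g. bulk payouts);
# capacity and refill rate are illustrative, with a human-review queue as backstop.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_second=0.1)  # roughly 6 sensitive actions per minute
if not bucket.allow():
    print("Rate limit hit: route request to the human-review queue")
```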
Just like attackers, defenders cannot afford to stand still. Resilience means continuously testing defences, adapting processes as threats evolve, and keeping response plans ready and rehearsed.
Trust is human, and that’s why it’s targeted
Here is the irony. In a world of hyper-intelligent AI, the most unpredictable factor is still human behaviour. That is why hackers use AI to mimic trust, create urgency, or impersonate authority figures. It is not about breaking systems anymore; it is about persuading people to let their guard down.
Technology alone cannot solve this. Awareness and scepticism remain our most powerful tools. Training people, employees and customers alike, to slow down, question, and double-check is essential. Without it, even the best AI defences will eventually be fooled.
The road ahead
The double life of AI in cybersecurity is not something we can wish away. If anything, the divide will get sharper. Attackers will use AI to personalise their scams further, while defenders will push AI to detect the smallest anomalies in behaviour.
This cycle will not stop, but it does not have to be discouraging. By accepting AI’s dual nature and planning for both its risks and its rewards, organisations can prepare instead of react.
That plan includes clear governance (a model inventory with owners and data lineage, drift monitoring, incident runbooks, and red-team exercises), strong anomaly-detection capabilities across identity, network, and transactions, alignment to recognised frameworks (NIST AI RMF, ISO/IEC 42001), and compliance readiness for emerging rules like the EU AI Act.
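A model inventory does not have to be elaborate to be useful. The sketch below shows the kind of record each model might carry; the field names mirror the governance items above, and the example values are hypothetical.

```python
# Sketch of a model-inventory record; fields mirror the governance items in the
# text (owner, lineage, drift monitoring, runbooks, red-teaming). Values are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str
    risk_tier: str                   # e.g. "high" for models that block transactions
    training_data_lineage: list[str]
    drift_monitor: str               # job or dashboard that watches score drift
    incident_runbook: str            # where responders look first
    last_red_team: str               # date of the most recent adversarial exercise

inventory = [
    ModelRecord(
        name="txn-fraud-scorer-v7",
        owner="payments-risk-team",
        risk_tier="high",
        training_data_lineage=["warehouse.transactions_2024q4", "warehouse.chargebacks"],
        drift_monitor="daily-score-drift-check",
        incident_runbook="runbooks/fraud-model-rollback.md",
        last_red_team="2025-05",
    ),
]
```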
At its core, cybersecurity has always been about trust. Customers need to believe that their data and their money are safe. AI, with its two faces, offers a challenge but also an opportunity: to strengthen our defences and to understand our adversaries more deeply than ever before.
The future of cybersecurity will not be decided by AI alone. It will be decided by how wisely we choose to wield it.



