Cyber SecurityAI

AI is Already Under Attack. Are We Repeating the Same Mistakes We Made with Cloud?

By Leon Teale, Senior Penetration Tester, IT Governance Ltd

In a recent penetration test, I tricked a company’s AI assistant into handing over confidential client records in under 10 minutes. This wasn’t a start-up cutting corners, either – it was a global firm confident its system was safe. On paper it looked secure, but in practice it was wide open.

This reminded me of what happened a decade ago when businesses rushed into the Cloud, drawn by savings in both cost and speed, and convinced security could wait. Unfortunately, that rush led to years of breaches, fines and retrofits. In fact, according to Cybersecurity News, Cloud misconfigurations accounted for 80% of Cloud security failures in 2024.

With AI, we’re following the same trajectory but moving even faster.

AI is already part of the attack surface 

Some executives still treat AI risk like a future problem, but it’s not. Attackers are exploiting weaknesses today.

I’ve seen prompt injection attacks that bypass safeguards. I’ve seen shadow AI, where staff paste confidential files into free tools, oblivious to logging oceans of sensitive data. I’ve seen poisoned training data blind systems to fraud. I’ve seen AI misapplied like chatbots used for compliance checks, delivering false assurances and failed audits.

These aren’t hypotheticals, they’re happening now – and most of the security tools leaders rely on simply weren’t designed to catch these issues.

What frustrates me most is the false sense of safety. A model may pass a polished lab test and then fail spectacularly in the real world.

A badly phrased prompt, a mislabelled dataset or one careless employee is all it takes for everything to unravel. Plus AI adapts with new data, so a model that looked safe yesterday may misbehave tomorrow.

Organisations treating AI tools like ordinary software is precisely why they’re such a tempting target for criminals.

The cracks are already showing 

Let’s look at a few examples. In July 2025, Thames Valley Bank suffered a breach that exposed the financial details of over 75,000 customers. According to filings with the Information Commissioner’s Office and the Financial Conduct Authority, this wasn’t the work of a sophisticated hacker. It came down to poorly governed AI systems and weak third-party integrations – problems that were entirely avoidable with the right oversight.

In the US, the Securities and Exchange Commission is coming down hard on ‘AI washing’, where companies exaggerate their AI capabilities to impress investors. Recent enforcement action in 2024 and 2025 makes it clear that inflated claims aren’t being brushed aside anymore. They’re now a serious compliance risk.

Europe also isn’t hanging back either: the AI Act introduces strict rules around high-risk systems.

The risks are here, regulators are watching and the consequences are unfolding now.

The governance gap 

Governance is, as ever, struggling to keep up with adoption. In a survey we ran this year, almost half of organisations told us they are already using AI in operations without formal oversight. Another 39% reported having policies, though many admitted these remain at an early stage, often limited to simple guidance such as “don’t paste in sensitive data”.

What stood out is that the same respondents also placed AI risk management at the top of their compliance concerns. That gap between concern and capability highlights the pressure many organisations are under: they clearly recognise the risks but are still building the structures to manage them effectively.

We’ve seen this pattern before with Cloud. Adoption moved quickly, while security often came later, which resulted in years of expensive problems. With AI, the consequences run deeper. These systems now influence hiring, compliance, trust, and even core business strategy.

What must change 

We don’t need more calm warnings. What we need is action.

  • Test AI like an attacker would. Basic penetration testing is no longer enough. Red teaming and adversarial testing are what reveal the real vulnerabilities.
  • Make policies real. A PDF sitting on a shelf is useless. Frameworks like ISO/IEC 42001 and the EU AI Act are a good starting point, but they only matter if you adapt them to your systems and keep them updated.
  • Train people properly. Most failures don’t come from advanced threats, they come from small misunderstandings. Practical, scenario-based training does far more to prevent mistakes than box-ticking compliance ever will.
  • Monitor continuously. AI systems evolve with every new data point. If you’re not auditing outputs in real time, you’re already behind.

We shouldn’t need to learn the same lessons twice. The rush into cloud taught us what happens when power outpaces control. AI is deeper, more autonomous, and embedded in how decisions get made.

The choice is simple. We either keep repeating the same mistakes or we act now before things break beyond repair. I’m not suggesting we put AI on pause. Innovation should keep moving, but it needs to rest on a foundation strong enough to carry it.

If you get governance right now, you gain more than protection from fines or breaches. You unlock faster growth, smoother compliance and the kind of trust that customers, regulators and investors reward.

I’ve spent years breaking into systems that executives swore were safe. The technology rarely failed. It was misplaced assumptions that opened the door. The same pattern is already unfolding with AI, only at a faster pace. With the right testing, strong policies and real awareness, AI can be harnessed safely. But leaders who keep sprinting ahead without control are not innovating. They are gambling with data, money and trust.

History will not remember who adopted AI first. It will remember who managed it responsibly.

Author

Related Articles

Back to top button