Cyber Security

Protecting against evolving AI threats

By Jon Geater, CTO and Co-founder of DataTrails

When it comes to AI, traditional cybersecurity approaches are simply not enough. A major challenge is that, with AI, defence is not just about repelling external attacks. It is also about securing the very foundations on which digital interactions take place: the data itself.

Generative AI is already creating realistic deepfakes, automating fraud and manipulating media at scale, which means ensuring the provenance and immutability of data is now a fundamental requirement for cybersecurity.

AI-driven threats do not just exploit weaknesses in network security or system architecture. They can, and frequently do, take advantage of ambiguity, selective disclosure and the absence of a verifiable chain of custody for digital content. As information flows through connected systems, it is essential that it is safeguarded against tampering, misrepresentation and unauthorised alterations. This is not just about identifying manipulated data after the fact – it’s about ensuring data carries proof of authenticity from its creation.

Voluntary disclosure and the problem of selective silence

One of the most persistent challenges in digital security is voluntary disclosure. When systems allow entities to selectively provide information or omit inconvenient truths, they create a trust gap that bad actors can exploit. If organisations, individuals, or automated systems can simply refuse to attest to their data – or worse, tailor different versions of reality to different audiences – cybersecurity defences are built on shifting ground.

However, the issue is not only about the data that is attested; it is equally about the data that is missing. A supplier who submits records to an auditor but withholds the same information from a regulator is not just making an administrative decision. They are actively manipulating the narrative of truth by using silence (or, more technically, ‘equivocation’) as a strategic tool.

When an entity chooses not to attest, it must be assumed that the absence of data is itself meaningful. It is not a neutral state; it is a deliberate act that should carry consequences.

This problem is not new. In legal frameworks, it is well understood that silence can be a strategic tool, one that is carefully balanced by principles of responsibility and accountability.

The UK’s police caution, the equivalent of the US Miranda warning, states: “You do not have to say anything. But it may harm your defence if you do not mention when questioned something which you later rely on in court. Anything you do say may be given in evidence.”

This structure works because it recognises that individuals have the right to remain silent, but highlights that choosing not to disclose critical information carries consequences.

The same principle must apply in cybersecurity.

In a world where AI-generated deception is a growing threat, the absence of attestation must be treated as a risk factor, not a neutral state. And the late production of evidence should be viewed equally sceptically.

Rejecting unverified data and recognising silence as manipulation

The integrity of digital data cannot be left to voluntary compliance. In the same way that a legal system cannot allow a suspect to provide testimony selectively, connected digital ecosystems cannot allow organisations to provide digital attestation only when it benefits them.

If systems accept information without cryptographic proof of provenance, they create opportunities for AI-driven fraud, identity manipulation and automated disinformation campaigns.

The solution is to embed security at the data layer itself, ensuring information is verifiably attested, immutably recorded, and safe from tampering at the point of creation. This is not a theoretical concern; it is a necessary foundation for AI cybersecurity defences.
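As a concrete illustration, here is a minimal Python sketch, using the widely available cryptography library, of what attestation at the point of creation can look like: a hash of the content and its creation metadata are signed the moment the record is produced, so any later alteration is detectable. The field names and identifiers are hypothetical, chosen purely for illustration, and this is not a description of any particular product’s format.

```python
import datetime
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical creator key; in practice this would be bound to a verified identity.
creator_key = Ed25519PrivateKey.generate()

payload = b"Quarterly supplier audit report, version 1"

# Provenance record captured at the point of creation (illustrative field names).
record = {
    "sha256": hashlib.sha256(payload).hexdigest(),
    "created_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "creator": "supplier-7421",
}
record_bytes = json.dumps(record, sort_keys=True).encode()
signature = creator_key.sign(record_bytes)

# Any recipient holding the creator's public key can verify the record later;
# verify() raises InvalidSignature if the record was altered after signing.
creator_key.public_key().verify(signature, record_bytes)
```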

Without immutability, digital records become fluid narratives rather than reliable evidence. AI systems trained on tainted or selectively disclosed datasets will reinforce and amplify falsehoods rather than protect against them.

Smart contracts, legal agreements, and critical business operations depend on unbroken chains of trust, where data integrity is preserved across every transaction and interaction.
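A toy sketch of such a chain of trust follows, assuming nothing beyond Python’s standard library: each record is hashed together with the digest of everything before it, so altering any earlier entry changes every digest that follows, and the break is visible to anyone holding an earlier chain head. This illustrates the principle only; it is not any vendor’s actual mechanism.

```python
import hashlib

def chain_digests(records: list[bytes]) -> list[str]:
    """Return a running digest linking each record to all of its predecessors."""
    digests, prev = [], b""
    for rec in records:
        prev = hashlib.sha256(prev + rec).digest()
        digests.append(prev.hex())
    return digests

original = [b"order placed", b"payment received", b"goods shipped"]
tampered = [b"order placed", b"payment reversed", b"goods shipped"]

# Rewriting one historical record diverges every subsequent digest,
# so the tampering cannot go undetected by holders of the original chain.
assert chain_digests(original)[-1] != chain_digests(tampered)[-1]
```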

To accept incomplete attestation records is to leave room for manipulation. If AI is to be part of cybersecurity solutions rather than an enabler of new threats, it must operate on data that cannot be rewritten, manipulated or selectively disclosed without detection.

The principle of “rejecting unverified data” is not just about blocking information that lacks provenance. It is about ensuring the absence of attestation is itself flagged as a sign of potential fraud. This is not about limiting access or increasing surveillance; rather, it is about ensuring every digital object, from a legal contract to a research paper, carries secure, traceable proof of origin, and that everybody opts into a social contract of mutual digital accountability.

It means data should be inadmissible for AI-driven decision-making unless it is cryptographically attested, and any unexplained omission should trigger scrutiny. This shifts the burden away from users, auditors and cybersecurity teams, who are often forced to detect manipulation after the fact, and instead creates an environment where unverifiable data – and failing to attest – cannot be weaponised in the first place.
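In code terms, that gate might look something like the sketch below: data without a verifiable attestation is refused and flagged rather than silently accepted. The function names and the logging hook are hypothetical, and the sketch reuses the cryptography library’s Ed25519 primitives from the earlier example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def flag(reason: str, payload: bytes) -> None:
    # Hypothetical audit hook; in practice this would raise an alert or open a case.
    print(f"FLAGGED ({reason}): {payload[:40]!r}")

def admit(payload: bytes, signature: bytes | None, signer: Ed25519PublicKey | None) -> bool:
    """Admit data to downstream (e.g. AI-driven) processing only if its attestation verifies.

    A missing attestation is treated as a finding in its own right, not a neutral gap.
    """
    if signature is None or signer is None:
        flag("missing attestation", payload)
        return False
    try:
        signer.verify(signature, payload)
        return True
    except InvalidSignature:
        flag("attestation failed verification", payload)
        return False
```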

Consequences for non-disclosure: a shift in digital accountability

This approach fundamentally changes the security model. Rather than assuming information is trustworthy until proven otherwise, it enforces a default stance of verification before trust.

AI security frameworks must be built on provable data integrity and auditable provenance, or they will become part of the very problem they were designed to solve. The alternative – continuing to rely on voluntary disclosure and post-event fraud detection – will only allow adversaries to stay ahead of security systems by exploiting the weaknesses of unverified data.

The crucial shift is not just in verifying data, but in treating non-attestation as a critical cybersecurity risk. A system that allows entities to opt out of attesting their data without consequence enables deception by design. To prevent AI-driven fraud and misinformation, attestation must be embedded as a default requirement, and unexplained data gaps must be treated as potential manipulation rather than procedural oversight.

The future of AI in cybersecurity is not about simply detecting attacks. It will be about making data and content deception impossible to execute without detection. The problem is not whether bad actors will attempt to manipulate digital records; it is whether organisations will implement data integrity measures that prevent them from succeeding. This is the foundation for a secure digital ecosystem – one where AI can operate as a force for protection rather than an accelerant for fraud.

The solution is clear: establishing provenance, ensuring immutability and embedding data integrity into digital ecosystems from the outset. Without these principles, AI security efforts will always be reactive rather than preventative.

With them, trust is no longer a matter of belief but of proof.
