
Make AI Safe, Before You Make It Smart

By Kory Daniels, Chief Security & Trust Officer, LevelBlue

The public availability of GenAI in 2022 fundamentally changed the way individuals and businesses carry on with their pre-GenAI way of life. Setting aside, for the moment, opinions on hype versus immediate value, the disruptive force was unparalleled in history. The unprecedented rate of adoption led businesses to assess in haste whether investments and strategy needed to evolve.

In 2025, we have seen not only massive diversity in the application of AI capabilities across all industries, but also a rapid evolution of the technology provided by vendors, which continues at a pace the world is barely able to keep up with. At a time when research has indicated that 95% of AI projects fail, how can security, compliance and risk teams help shepherd their organisations' adoption of AI while minimising inadvertent risk exposures that could be exploited by criminals or trigger regulatory scrutiny and penalties?

Each new development in AI challenges cybersecurity readiness. However, the hype should not distract cybersecurity professionals from the bottom line: if AI isn't properly governed and built with security from the ground up, the odds of negative outcomes will far outweigh the ability of security teams to keep up.

The importance of 'secure-by-design' in cybersecurity AI

'Secure-by-design' is more than a marketing slogan. It is the principle that technology must be built with security baked in from the ground up, as opposed to being bolted on as a patch or an afterthought. In practice, this means technology must be designed to reasonably protect against malicious actors, safeguard sensitive data and defend the connected infrastructure that organisations rely on.

AI introduces new stakes. Given that AI systems make decisions at scale and access massive datasets, any flaw or misconfiguration within the code can have far-reaching consequences. Organisations now face nearly 2,000 attacks per week, with the average breach costing $4.88 million. As AI becomes increasingly central to operations, a single vulnerability could lead to significant disruptions in business operations.

Too often, companies fail to apply rigorous oversight to how AI systems are built, trained and deployed. AI also doesn't operate in a vacuum. Most organisations rely on third-party vendors and external services for their AI solutions, which means secure-by-design needs to extend across an organisation's entire supply chain. Every tool or platform introduced without strong safeguards increases an organisation's attack surface; as digital ecosystems expand, so do the opportunities for threat actors to exploit new vulnerabilities.

But organisations that make AI secure from the very beginning go beyond just protecting critical processes and information; they create systems they can trust to innovate safely. In addition, regulators and industry standards are starting to demand this approach, making secure-by-design AI both a strategic and operational imperative.

Overall, AI promises smarter defences, but if it's not made secure by design, it risks becoming a bigger liability than the problems it's supposed to solve.

Using AI to strengthen cyber defences without compromising data privacy

Traditional defences often miss what looks like "noise." AI-powered systems built on secure-by-design principles can turn that noise into insight. Deep learning and Natural Language Processing (NLP) can correlate seemingly unrelated events, such as unusual login attempts and abnormal network traffic, to identify complex attack patterns.

A common misconception, however, is that using AI for cybersecurity requires sharing large amounts of sensitive regulatory and compliance data. This shouldn't be the case. Modern AI-powered Security Information and Event Management (SIEM) systems are designed to keep that data secure while they analyse enormous data volumes in real time, using machine learning algorithms that establish baselines of "normal" behaviour and flag anomalies with exceptional precision.
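To make the baselining idea concrete, the sketch below shows a minimal rolling per-entity baseline that flags observations far outside recent history. It is an illustration of the statistical principle only, with made-up entity names and thresholds; production SIEM platforms use far richer models than a simple z-score.

```python
from collections import deque
from statistics import mean, stdev

class BehaviourBaseline:
    """Rolling per-entity baseline: flags values far outside recent history.

    Illustrative sketch only -- real SIEM anomaly models are far richer.
    """

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.window = window        # how many past observations to keep
        self.threshold = threshold  # z-score above which we flag an anomaly
        self.history: dict[str, deque] = {}

    def observe(self, entity: str, value: float) -> bool:
        """Record a new observation; return True if it looks anomalous."""
        hist = self.history.setdefault(entity, deque(maxlen=self.window))
        anomalous = False
        if len(hist) >= 5:  # need some history before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        hist.append(value)
        return anomalous

baseline = BehaviourBaseline()
for day in range(30):
    # "alice" normally logs in 10-12 times a day
    baseline.observe("alice", 10 + (day % 3))
print(baseline.observe("alice", 200))  # sudden spike -> True
```

The key privacy point is that nothing in this loop requires raw sensitive data to leave the environment: the baseline is computed and kept locally, per entity.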

Extended Detection and Response (XDR) platforms further illustrate this shift toward AI-driven cybersecurity that is secure by design. By aggregating data from networks, cloud environments, endpoints and identity systems into a unified view, these platforms enable advanced behavioural analytics that continuously monitor user and entity activity. This modelling helps define normal behaviour across the digital ecosystem, allowing security teams to detect anomalies early, without compromising data privacy.

To ensure AI is both effective and compliant, organisations should also apply a few practical deployment principles. Prioritise building tools that automate internal processes rather than directly analysing customer data. Wherever possible, process data locally rather than sending it to external cloud services for analysis, to reduce exposure risks.
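One way to put local processing and data minimisation into practice, sketched here with illustrative field names rather than any specific product's schema, is to strip and pseudonymise event data on-premises before anything is sent out for analysis:

```python
import hashlib

# Fields the downstream analysis actually needs (purpose limitation);
# everything else is dropped (data minimisation). Names are illustrative.
ALLOWED_FIELDS = {"event_type", "timestamp", "src_ip", "user"}
PSEUDONYMISE = {"src_ip", "user"}  # identifiers replaced by stable hashes

def minimise_event(event: dict, salt: str = "rotate-me-regularly") -> dict:
    """Strip and pseudonymise a raw event locally, before external analysis."""
    out = {}
    for field, value in event.items():
        if field not in ALLOWED_FIELDS:
            continue  # drop fields the analysis does not need
        if field in PSEUDONYMISE:
            # Salted hash: stable enough to correlate related events,
            # but not directly reversible to the raw identifier.
            value = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        out[field] = value
    return out

raw = {
    "event_type": "login_failure",
    "timestamp": "2025-06-01T09:14:02Z",
    "src_ip": "203.0.113.7",
    "user": "j.smith@example.com",
    "session_cookie": "abc123",  # never leaves the environment
}
print(minimise_event(raw)["event_type"])  # login_failure
```

Because the salted hashes are stable, correlation across events still works downstream, while the raw identifiers and any unneeded fields stay inside the organisation's own boundary.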

By embedding responsible AI practices and aligning them with GDPR requirements such as data minimisation, purpose limitation and accountability, these platforms can operate in a compliant and ethical manner. They should also take emerging standards like the EU AI Act into account. Together, these measures enable real-time threat response without compromising user trust or data integrity.

Finally, AI deployment should be underpinned by clear contractual safeguards. That means data processing agreements that define how information is handled and retained, vendor warranties that guarantee customer data won't be repurposed for training, and well-defined breach notification terms. Without these protections, even the most sophisticated AI risks becoming a compliance headache.

When implemented responsibly, XDR can support GDPR compliance and reinforce trust in AI-powered defences.

The bottom line

AI in cybersecurity is no longer optional; threat actors have already embraced AI. They're running automated phishing campaigns, developing adaptive malware designed to outsmart traditional defences, and deploying real-time evasion techniques.

Defenders need to catch up while also ensuring compliance and maintaining digital trust. Traditional signature-based detection misses advanced threats that behavioural AI catches with 98% accuracy. The question isn't whether cybersecurity teams should adopt AI; it's how AI can be adopted effectively and securely before attackers gain a permanent advantage. The answer lies in deploying AI in ways that strengthen defences without introducing new risks.

The winning approach is straightforward: build secure-by-design tools, not data pipelines. Use AI to generate scripts, create dashboards and automate configurations, while keeping sensitive data local rather than processing it elsewhere. Organisations that master this tool-building approach will gain AI's defensive advantages without the compliance headaches, regulatory penalties, or customer trust issues that come with external data sharing or exposed attack surfaces.
