AI

What IT leaders should know about building AI-powered security systems from scratch

By Kendra Cooley, Senior Director of Information Security and IT

Greenfield security in 2025ย doesnโ€™tย start with a SIEM or a shiny dashboard. It starts with a decision: where can AI safely take the wheel, and where must people keep both hands on it? AI is rewiring the way we architect programs, but trustย doesnโ€™tย come from a model checkpoint. It comes from design choices,ย controlsย and culture.ย 

Lesson 1: What โ€œbuilding from scratchโ€ means in the AI eraย 

A modern security program is a product, not a project. You ship capabilities in sprints, measure outcomes, and iterate. With AI in the stack, that product mindset accelerates: Detection engineering, enrichment, triage, takedown, and executive reporting can be automated endโ€‘toโ€‘end with agentic workflows. Done well, this moves teams from chasing artifacts to dismantling campaigns, and linking domains, accounts, phone numbers, and other indicators into a realโ€‘time map of attacker infrastructure.ย 

Just as important, threat activity no longer stays in one lane. Impersonation and fraudย hopย from paid ads to social media to SMS to throwaway domains and the dark web, so โ€œsingleโ€‘surfaceโ€ tools produce fragmented views and slow response. A multiโ€‘channel architecture โ€” one that correlates signals across domains, social, ads, telco, and more โ€” closes those gaps and shortens dwell time.ย 

Reality check: the human element is still the primary catalyst in real breaches.ย Thatโ€™sย not aย knock onย AI;ย itโ€™sย a reminder that attackers use people as much as they use malware. Theย 2025 Verizon DBIRย again finds the โ€œhuman elementโ€ย remainsย a major factor in breaches, often via social engineering and credential abuse, underscoring the need for controls that span users, data, and infrastructure.ย 

Build principles for day oneย 

  • Treat correlation as a firstโ€‘class feature: design around a graph or campaign view, not single alerts. ย 
  • Automate where the playbook is repeatable (enrichment, clustering, takedowns); gate automation with human validation where decisions are ambiguous.
  • Instrument for outcomes (reduced dwell time, fewer repeat abuses), not vanity metrics.

Lesson 2: Can you trust it? Policies and controls for AI reliabilityย 

AI speeds you up, but speed without guardrails is just risk at scale. Trustworthy AI in security programs is an operations problem, read: Governance, evaluation, and change control, and not aย vibesย problem.ย 

Put these controls in place:ย 

  1. Adopt a risk framework that fits security work.ย Useย NISTโ€™s AI Risk Management Frameworkย (Govern, Map, Measure, Manage) and its Generative AI Profile to define roles, risks, and actions across the lifecycle. Align your acceptance criteria (e.g., falseโ€‘positive tolerance, latency budgets) to those functions.ย 
  2. Secure by design and default.ย Bake in logging, abuseโ€‘resistance, and safeโ€‘fail behaviors before you ever connect a model to production workflows.ย CISAโ€™s Secure by Designย guidance provides concrete patterns and accountability practices.ย 
  3. Defend against LLMโ€‘specific threats.ย Prompt injection, data leakage, insecure output handling, supplyโ€‘chain risk in model or tool dependencies, these are table stakes. Build tests and mitigations that map to theย OWASP Top 10 for LLMย applications.ย 
  4. Create an AI bill of materials (AIโ€‘BOM).ย Track model versions, prompts, tools, training sources,ย evalย suites, and known limitations. Treat model upgrades like any other production change: peer review, rollback plans, and staged rollout.
    ย 
  5. Humanโ€‘inโ€‘theโ€‘loop where it matters.ย Use automation to doย the heavyย lifting, butย require human approval for highโ€‘impact actions (bulk takedowns, customer communications, legal escalations).
    ย 
  6. Regulatory awareness.ย If youย operateย in or serve the EU, understand theย EU AI Actโ€™sย phased obligations,ย i.e.riskย management, logging, incident reporting, and transparency,ย especially for highโ€‘risk systems and models. Build the paperwork as you build the pipeline.ย 

Lesson 3: The human side is still the hard sideย 

Great AIย wonโ€™tย save a broken process. Building from scratch means integrating security into how the entire business works.ย 

  • Crossโ€‘functional operating model. Give SOC, Brand, Fraud, and Threat Intel shared visibility and jointย KPIsย so multichannel threatsย donโ€™tย fall between silos.
  • Simulation and rehearsal. Run realistic, multiโ€‘channel socialโ€‘engineering exercises (email, SMS, chat apps, paid ads) that mirror live attacker infrastructure. This hardens processes and surfaces gaps in minutes, not quarters.
  • Culture beats configuration. Train people to challenge odd requests,ย validateย outโ€‘ofโ€‘band, and report quickly. Pair awareness with product changes (e.g., safer defaults, stepโ€‘up verification) so humansย arenโ€™tย the last line of defense.

What good looks likeย 

A โ€œgoodโ€ AIโ€‘powered security system blends ruthless automation with principled restraint:ย 

  • Campaignโ€‘centric visibility that connects signals across channels into a single threat map.
  • Agentic workflows that automate correlation, prioritization, and disruptionโ€”backed by human validation for highโ€‘risk actions.
  • Continuous evaluation tied to business outcomes: faster takedowns, fewer repeat incidents, lower analyst toil.
  • Governance and complianceย baked in from sprint one viaย NIST AI RMF, OWASP LLM controls, and (where applicable) EU AI Act requirements.ย 
  • A security culture that treats AI as a force multiplier, not a substitute for judgment.

AIย doesnโ€™tย replace the craft of security engineering; it changes its unit of work. When teamsย design forย campaigns instead of alerts, and balance automation with trust, organizations get systems that are faster,ย saferย and far more resilient than what we could build five years ago.ย 

Author

Related Articles

Back to top button