What IT leaders should know about building AI-powered security systems from scratch

By Kendra Cooley, Senior Director of Information Security and IT

Greenfield security in 2025 doesn’t start with a SIEM or a shiny dashboard. It starts with a decision: where can AI safely take the wheel, and where must people keep both hands on it? AI is rewiring the way we architect programs, but trust doesn’t come from a model checkpoint. It comes from design choices, controls, and culture.

Lesson 1: What “building from scratch” means in the AI era 

A modern security program is a product, not a project. You ship capabilities in sprints, measure outcomes, and iterate. With AI in the stack, that product mindset accelerates: detection engineering, enrichment, triage, takedown, and executive reporting can be automated end‑to‑end with agentic workflows. Done well, this moves teams from chasing artifacts to dismantling campaigns, linking domains, accounts, phone numbers, and other indicators into a real‑time map of attacker infrastructure.

Just as important, threat activity no longer stays in one lane. Impersonation and fraud hop from paid ads to social media to SMS to throwaway domains and the dark web, so “single‑surface” tools produce fragmented views and slow response. A multi‑channel architecture — one that correlates signals across domains, social, ads, telco, and more — closes those gaps and shortens dwell time. 
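
To make the campaign‑centric, multi‑channel idea concrete, here is a minimal sketch (not any particular product’s pipeline) of cross‑channel correlation: sightings that share a pivot value, such as a registrant email or a landing domain, collapse into one campaign instead of staying as isolated alerts. The data model, field names, and indicators are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative sightings from different channels; all values are made-up examples.
SIGHTINGS = [
    {"channel": "ads",    "indicator": "support-acme-login.com",  "registrant_email": "ops@mailbox.example"},
    {"channel": "social", "indicator": "@acme_help_desk",         "registrant_email": "ops@mailbox.example"},
    {"channel": "sms",    "indicator": "+1-555-0100",             "landing_domain": "support-acme-login.com"},
    {"channel": "domain", "indicator": "acme-verify-account.net", "registrant_email": "ops@mailbox.example"},
]

def cluster_campaigns(sightings):
    """Union-find over shared pivot values (emails, landing domains, other indicators)
    so related sightings collapse into one campaign instead of separate alerts."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for s in sightings:
        for key, value in s.items():
            if key not in ("channel", "indicator"):
                union(s["indicator"], value)  # a shared value links the indicators

    campaigns = defaultdict(set)
    for s in sightings:
        campaigns[find(s["indicator"])].add(f'{s["channel"]}:{s["indicator"]}')
    return list(campaigns.values())

for campaign in cluster_campaigns(SIGHTINGS):
    print(sorted(campaign))  # one campaign spanning ads, social, SMS, and domains
```

In production the graph would live in a persistent store and be queried continuously, but the principle is the same: correlation is the feature, not an afterthought.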

Reality check: the human element is still the primary catalyst in real breaches. That’s not a knock on AI; it’s a reminder that attackers use people as much as they use malware. The 2025 Verizon DBIR again finds the “human element” remains a major factor in breaches, often via social engineering and credential abuse, underscoring the need for controls that span users, data, and infrastructure. 

Build principles for day one 

  • Treat correlation as a first‑class feature: design around a graph or campaign view, not single alerts.  
  • Automate where the playbook is repeatable (enrichment, clustering, takedowns); gate automation with human validation where decisions are ambiguous.
  • Instrument for outcomes (reduced dwell time, fewer repeat abuses), not vanity metrics.
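
For the last principle, here is a minimal sketch of outcome instrumentation, assuming each campaign record carries a detection timestamp, a disruption timestamp, and an attacker fingerprint; the field names and numbers are made up for illustration.

```python
from collections import Counter
from datetime import datetime

# Hypothetical campaign records; in practice these come from your case-management system.
campaigns = [
    {"id": "cmp-001", "detected": datetime(2025, 3, 1, 9, 0),  "disrupted": datetime(2025, 3, 1, 15, 30), "actor_fingerprint": "kit-alpha"},
    {"id": "cmp-002", "detected": datetime(2025, 3, 4, 11, 0), "disrupted": datetime(2025, 3, 5, 10, 0),  "actor_fingerprint": "kit-alpha"},
    {"id": "cmp-003", "detected": datetime(2025, 3, 7, 8, 0),  "disrupted": datetime(2025, 3, 7, 12, 0),  "actor_fingerprint": "kit-beta"},
]

# Outcome metric 1: time from detection to disruption (a dwell-time proxy), in hours.
dwell_hours = [(c["disrupted"] - c["detected"]).total_seconds() / 3600 for c in campaigns]
avg_dwell = sum(dwell_hours) / len(dwell_hours)

# Outcome metric 2: repeat-abuse rate -- share of campaigns tied to tooling or
# infrastructure (here, a kit fingerprint) that has already been seen and disrupted.
fingerprint_counts = Counter(c["actor_fingerprint"] for c in campaigns)
repeat_rate = (len(campaigns) - len(fingerprint_counts)) / len(campaigns)

print(f"Average detection-to-disruption time: {avg_dwell:.1f} hours")
print(f"Repeat-abuse rate: {repeat_rate:.0%}")
```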

Lesson 2: Can you trust it? Policies and controls for AI reliability 

AI speeds you up, but speed without guardrails is just risk at scale. Trustworthy AI in security programs is an operations problem (governance, evaluation, and change control), not a vibes problem.

Put these controls in place: 

  1. Adopt a risk framework that fits security work. Use NIST’s AI Risk Management Framework (Govern, Map, Measure, Manage) and its Generative AI Profile to define roles, risks, and actions across the lifecycle. Align your acceptance criteria (e.g., false‑positive tolerance, latency budgets) to those functions; a sketch of such an acceptance gate follows this list.
  2. Secure by design and default. Bake in logging, abuse‑resistance, and safe‑fail behaviors before you ever connect a model to production workflows. CISA’s Secure by Design guidance provides concrete patterns and accountability practices.
  3. Defend against LLM‑specific threats. Prompt injection, data leakage, insecure output handling, and supply‑chain risk in model or tool dependencies are table stakes. Build tests and mitigations that map to the OWASP Top 10 for LLM applications.
  4. Create an AI bill of materials (AI‑BOM). Track model versions, prompts, tools, training sources, eval suites, and known limitations. Treat model upgrades like any other production change: peer review, rollback plans, and staged rollout.
  5. Human‑in‑the‑loop where it matters. Use automation to do the heavy lifting, but require human approval for high‑impact actions (bulk takedowns, customer communications, legal escalations); see the approval‑gate sketch after this list.
  6. Regulatory awareness. If you operate in or serve the EU, understand the EU AI Act’s phased obligations (risk management, logging, incident reporting, and transparency), especially for high‑risk systems and models. Build the paperwork as you build the pipeline.
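
To ground item 1 (and the versioned release record from item 4), here is a minimal sketch of an acceptance gate aligned to the Measure function of the NIST AI RMF: a candidate release is promoted to staged rollout only if its eval results meet the declared false‑positive, recall, and latency criteria. All names, fields, and thresholds are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class TriageModelRelease:
    """AI-BOM-style record for one release of an AI triage component (illustrative)."""
    model: str
    model_version: str
    prompt_version: str
    eval_suite: str
    known_limitations: str

# Acceptance criteria, aligned to the "Measure" function of the NIST AI RMF.
# The specific thresholds are made-up examples.
ACCEPTANCE = {
    "max_false_positive_rate": 0.05,       # no more than 5% of benign items escalated
    "min_recall_on_known_campaigns": 0.95,
    "max_p95_latency_seconds": 30.0,
}

def passes_acceptance(eval_results: dict) -> bool:
    """Return True only if a candidate release meets every acceptance criterion."""
    return (
        eval_results["false_positive_rate"] <= ACCEPTANCE["max_false_positive_rate"]
        and eval_results["recall_on_known_campaigns"] >= ACCEPTANCE["min_recall_on_known_campaigns"]
        and eval_results["p95_latency_seconds"] <= ACCEPTANCE["max_p95_latency_seconds"]
    )

release = TriageModelRelease(
    model="triage-llm",
    model_version="2025.06",
    prompt_version="triage-prompt-v7",
    eval_suite="phishing-triage-eval-v3",
    known_limitations="weak on non-English lures",
)
candidate_evals = {"false_positive_rate": 0.03, "recall_on_known_campaigns": 0.97, "p95_latency_seconds": 12.4}

if passes_acceptance(candidate_evals):
    print(f"Promote {release.model} {release.model_version} (prompt {release.prompt_version}) to staged rollout")
else:
    print("Hold release: acceptance criteria not met; keep the current version and review eval failures")
```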
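
And for item 5 (with a nod to insecure output handling from item 3), a minimal sketch of an approval gate: model‑proposed actions are checked against an allowlist, low‑impact actions auto‑execute, and high‑impact actions queue for an analyst. The action names and the impact policy are hypothetical.

```python
# Actions the automation is allowed to propose at all; anything else coming back
# from the model is rejected outright (a basic guard against insecure output handling).
ALLOWED_ACTIONS = {"enrich_indicator", "cluster_campaign", "request_takedown", "bulk_takedown", "notify_customers"}

# High-impact actions always require a human decision before execution.
HIGH_IMPACT_ACTIONS = {"bulk_takedown", "notify_customers"}

approval_queue = []

def handle_proposed_action(action: str, target: str) -> str:
    """Route a model-proposed action: reject it, auto-execute it, or queue it for approval."""
    if action not in ALLOWED_ACTIONS:
        return f"rejected: '{action}' is not an allowed action"
    if action in HIGH_IMPACT_ACTIONS:
        approval_queue.append((action, target))
        return f"queued for analyst approval: {action} on {target}"
    return f"auto-executed: {action} on {target}"

print(handle_proposed_action("enrich_indicator", "support-acme-login.com"))
print(handle_proposed_action("bulk_takedown", "campaign cmp-001"))
print(handle_proposed_action("transfer_funds", "any"))  # injected or unexpected output is refused
```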

Lesson 3: The human side is still the hard side 

Great AI won’t save a broken process. Building from scratch means integrating security into how the entire business works. 

  • Cross‑functional operating model. Give SOC, Brand, Fraud, and Threat Intel shared visibility and joint KPIs so multichannel threats don’t fall between silos.
  • Simulation and rehearsal. Run realistic, multi‑channel social‑engineering exercises (email, SMS, chat apps, paid ads) that mirror live attacker infrastructure. This hardens processes and surfaces gaps in minutes, not quarters.
  • Culture beats configuration. Train people to challenge odd requests, validate out‑of‑band, and report quickly. Pair awareness with product changes (e.g., safer defaults, step‑up verification) so humans aren’t the last line of defense.

What good looks like 

A “good” AI‑powered security system blends ruthless automation with principled restraint: 

  • Campaign‑centric visibility that connects signals across channels into a single threat map.
  • Agentic workflows that automate correlation, prioritization, and disruption—backed by human validation for high‑risk actions.
  • Continuous evaluation tied to business outcomes: faster takedowns, fewer repeat incidents, lower analyst toil.
  • Governance and compliance baked in from sprint one via NIST AI RMF, OWASP LLM controls, and (where applicable) EU AI Act requirements. 
  • A security culture that treats AI as a force multiplier, not a substitute for judgment.

AI doesn’t replace the craft of security engineering; it changes its unit of work. When teams design for campaigns instead of alerts, and balance automation with trust, organizations get systems that are faster, safer and far more resilient than what we could build five years ago. 
