
As organizations race to deploy AI systems at scale, the challenge is no longer just technical capability, but how to build intelligence that is secure, governed, and resilient from the start. Kaushik Jangiti is a cybersecurity practitioner and AI security researcher with more than a decade of experience spanning enterprise and product security across complex AI, data, and cloud environments. Having led DevSecOps programs, advanced threat operations, and cloud security initiatives from startups to large enterprises, his work focuses on embedding security directly into how AI systems are designed, built, and operated. In this interview, he explains how cross-functional alignment between engineering, data science, and security teams forms the foundation of secure-by-design AI, why culture and workflow integration matter as much as tooling, and what leadership qualities are essential as AI innovation outpaces traditional risk models.
How do you bring engineering teams, data science groups, and security leaders into alignment when defining the foundation of a secure-by-design AI system?
Alignment must start at the beginning of the project, not after a model is trained or an agent is deployed. When a business requirements document (BRD) or functional requirements document (FRD) is created, I introduce a High-Security Requirements Document based on an initial risk assessment covering security engineering, privacy, compliance, and third-party risk. This step is central to my process: without early guardrails, organizations take on downstream risk. Teams appreciate the clarity it provides: engineering understands its boundaries, data science knows its data governance obligations, and security sets expectations before design begins. This reduces ambiguity and creates a shared baseline for secure-by-design principles.
My approach emphasizes end-to-end security, recognizing that AI systems require ongoing governance across identity, data, and models. In my experience with AI-data cloud environments, the most successful organizations embed controls into existing team workflows. This includes real-time feedback in IDEs, CI/CD pipelines, IaC templates, and model development platforms. For AI, it’s critical to enforce data lineage, model provenance, dataset governance, drift detection, and model-risk guardrails before experimentation.
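To make the pipeline side of this concrete, below is a minimal sketch of such a pre-deployment gate in Python. The manifest fields and checks are illustrative assumptions for this example, not the schema of any particular platform.

```python
# Illustrative sketch of a CI/CD gate that checks model provenance and dataset
# governance metadata before a model is promoted. Field names are assumptions
# made for this example, not a specific platform's schema.
import json
import sys

REQUIRED_MODEL_FIELDS = {"model_id", "training_dataset_ids", "approved_by", "risk_tier"}
REQUIRED_DATASET_FIELDS = {"dataset_id", "classification", "lineage_source", "usage_approval"}


def validate_model_manifest(manifest: dict) -> list[str]:
    """Return a list of governance violations; an empty list means the gate passes."""
    violations = []

    missing = REQUIRED_MODEL_FIELDS - manifest.keys()
    if missing:
        violations.append(f"model manifest missing fields: {sorted(missing)}")

    for dataset in manifest.get("datasets", []):
        missing = REQUIRED_DATASET_FIELDS - dataset.keys()
        if missing:
            violations.append(f"dataset {dataset.get('dataset_id', '?')} missing: {sorted(missing)}")
        elif dataset["classification"] in {"restricted", "pii"} and not dataset["usage_approval"]:
            violations.append(f"dataset {dataset['dataset_id']} lacks usage approval for its classification")

    return violations


if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        manifest = json.load(f)

    problems = validate_model_manifest(manifest)
    if problems:
        print("Provenance gate failed:")
        for p in problems:
            print(f"  - {p}")
        sys.exit(1)  # fail the pipeline step so the model cannot be promoted
    print("Provenance gate passed.")
```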
However, tooling alone doesn’t build a secure organization; culture is essential. I invest in a Security Champions program where embedded champions act as force multipliers, translating security goals into engineering actions and sharing engineering insights with the security team. This two-way feedback fosters cultural cohesion beyond what a central security team can achieve.
Finally, every control must demonstrate its value through ROI, effort, and risk mitigation. The goal isn’t to implement everything immediately, but to focus on what matters most, in an order that maximizes security benefits while minimizing disruption. This approach ensures our secure-by-design foundation is practical, scalable, and aligned with the fast pace of AI innovation.
When technical priorities conflict with security requirements, how do you guide teams toward decisions that support long-term resilience?
Conflicts between technical and security priorities often reflect a lack of shared understanding rather than fundamental incompatibility. In these situations, I direct the discussion toward risk and its tangible business impact, rather than focusing solely on compliance or abstract policies. I pose targeted questions regarding potential exposure, remediation costs, and the consequences of failure. This method reframes the conversation, enabling a clear understanding of the risks involved. Most engineering and data science leaders make informed decisions when risks are articulated in terms of customer trust, operational stability, regulatory exposure, or the accumulation of technical debt.
Still, I don’t think security should always come first. Sometimes the business needs to move quickly, and security’s role is to manage risk without slowing things down. This could mean using compensating controls, accepting some risk with explicit approval, or spreading security requirements across several releases rather than doing everything at once. What matters most is being open and responsible. If we delay a control, we document it, assign someone to own it, and set a deadline to resolve it. I also rely on Security Champions to help in these situations. They explain needs on both sides and often find creative ways to meet everyone’s goals. Building long-term resilience isn’t about winning every debate. It’s about building trust, demonstrating that security supports steady progress, and ensuring trade-offs are clear, intentional, and owned.
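As a small illustration of what "document it, assign an owner, and set a deadline" can look like in practice, here is a hypothetical sketch of a structured risk-acceptance record; the field names are assumptions made for this example, not a standard.

```python
# Hypothetical sketch of a risk-acceptance record for a deferred security control.
# The fields mirror the practice described above: document the gap, assign an
# owner, and set a deadline for remediation.
from dataclasses import dataclass
from datetime import date


@dataclass
class RiskException:
    control: str                     # the control being deferred, e.g. "prompt-injection filtering"
    justification: str               # business reason for deferral
    compensating_controls: list[str]
    owner: str                       # accountable individual, not a team alias
    approved_by: str                 # explicit risk acceptance by someone with authority
    remediation_deadline: date

    def is_overdue(self, today: date | None = None) -> bool:
        return (today or date.today()) > self.remediation_deadline


# Example usage: an exception that should surface in a weekly review once overdue.
exc = RiskException(
    control="fine-grained dataset access policies",
    justification="launch-critical feature ships next sprint",
    compensating_controls=["read-only service account", "audit logging enabled"],
    owner="jane.doe",
    approved_by="ciso-delegate",
    remediation_deadline=date(2025, 6, 30),
)
print(exc.is_overdue())
```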
What frameworks or practices help you translate security principles into workflows that developers and data scientists can adopt without friction?
The key is to meet teams where they already work and embed security directly into their existing workflows. For developers, that means embedding security tools such as SAST and SCA early in the lifecycle, through IDE plugins, CLI tooling, pre-commit hooks for policy enforcement, and PR checks, so they get real-time, actionable feedback rather than security reports weeks later. For data scientists, we integrate data classification checks into notebooks, enforce dataset governance at ingestion, and embed model provenance and drift detection into ML pipelines. Paired with lightweight, Pareto-driven threat modeling and baseline IaC and pipeline policies, the secure path becomes the path of least resistance, enabling teams to adopt security without friction.
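As an illustration of the notebook-side checks mentioned above, the following is a minimal sketch of a dataset-ingestion guard. The classification tags and workspace policy table are assumptions for the example, not a real governance catalog's API.

```python
# Illustrative sketch of a dataset-ingestion guard a data scientist could call
# from a notebook. The tag names and policy table are assumptions for this
# example, not an actual governance catalog's API.
import pandas as pd

# Minimal policy table: which classifications a given workspace may load.
ALLOWED_CLASSIFICATIONS = {
    "experimentation": {"public", "internal"},
    "production-training": {"public", "internal", "confidential"},
}


def load_governed_dataset(path: str, classification: str, workspace: str) -> pd.DataFrame:
    """Refuse to load data whose classification is not permitted in this workspace."""
    allowed = ALLOWED_CLASSIFICATIONS.get(workspace, set())
    if classification not in allowed:
        raise PermissionError(
            f"Dataset classified '{classification}' cannot be used in workspace "
            f"'{workspace}'. Request approval or use a permitted dataset."
        )
    df = pd.read_csv(path)
    # In a real pipeline this is also where lineage metadata would be recorded.
    return df


# Usage: fails fast in the notebook instead of surfacing as an audit finding later.
# df = load_governed_dataset("customers.csv", classification="confidential",
#                            workspace="experimentation")
```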
For agentic AI systems, I extend this to a Layered Security Model that applies defense-in-depth across the entire architecture, covering model integrity, data and memory governance, orchestration oversight, agent identity, sandboxing, system-level zero trust, and an overarching governance layer aligned with frameworks such as NIST AI RMF and MITRE ATLAS. This method not only protects applications but also ensures that AI agents can make decisions independently and use tools effectively. When real-time safeguards, cultural alignment via Security Champions, and layered architecture are combined, security becomes seamless, scalable, and naturally adopted by developers and data scientists alike.
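One small, hypothetical illustration of the orchestration-oversight and agent-identity layers is shown below: every tool call is checked against an allow-list tied to the agent's identity and logged before it executes. The class and tool names are assumptions, not a specific framework's API.

```python
# Hypothetical sketch of an orchestration-layer guardrail for an agentic system:
# every tool call is checked against the agent's allow-list and logged before it
# executes. Names and structure are illustrative, not a specific framework's API.
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")


class ToolGuard:
    def __init__(self, agent_id: str, allowed_tools: set[str]):
        self.agent_id = agent_id
        self.allowed_tools = allowed_tools

    def invoke(self, tool_name: str, tool_fn: Callable[..., Any], **kwargs: Any) -> Any:
        if tool_name not in self.allowed_tools:
            log.warning("blocked: agent=%s tool=%s args=%s", self.agent_id, tool_name, kwargs)
            raise PermissionError(f"Agent '{self.agent_id}' is not authorized to call '{tool_name}'")
        log.info("allowed: agent=%s tool=%s", self.agent_id, tool_name)
        return tool_fn(**kwargs)


# Example: a read-only research agent may search documents but not send email.
def search_docs(query: str) -> list[str]:
    return [f"stub result for '{query}'"]


guard = ToolGuard(agent_id="research-agent-01", allowed_tools={"search_docs"})
print(guard.invoke("search_docs", search_docs, query="model provenance"))
# guard.invoke("send_email", search_docs, query="...")  # would raise PermissionError
```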
How do you ensure that model performance goals do not overshadow responsible data governance and risk controls?
This tension is real, and it starts with how success is defined. If the only metrics leadership tracks are model accuracy or inference speed, governance will always feel like a tax. That’s exactly why I push for the High-Security Requirements Document I mentioned earlier: when governance and risk controls are embedded into project milestones from day one, they become delivery criteria, not optional enhancements. You haven’t shipped a production-ready model if you can’t explain where the training data came from or demonstrate compliance with data usage policies.
Operationally, the trick is making governance controls invisible to experimentation. Automated data lineage tracking, classification tagging at ingestion, and policy-as-code for dataset access let data scientists move fast while maintaining auditability. I also work with leadership to define risk appetite explicitly—what trade-offs are acceptable, which are not. When performance goals conflict with governance, the decision should be intentional and documented, never accidental. The organizations that get this right treat governance as a competitive advantage: they can demonstrate trustworthiness to regulators, customers, and partners in ways their less disciplined competitors simply cannot.
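As a sketch of what "automated data lineage tracking" can look like when it stays invisible to experimentation, the example below wraps a training function in a decorator that records which datasets produced which artifact. All names here are illustrative assumptions, not a specific lineage product.

```python
# Illustrative sketch of automated lineage tracking: a decorator that records
# which datasets fed which model artifact, without the data scientist doing
# anything beyond using it. All names are assumptions for the example.
import functools
import hashlib
import json
from datetime import datetime, timezone

LINEAGE_LOG = "lineage_log.jsonl"


def track_lineage(fn):
    """Record input dataset ids and a hash of the produced artifact."""
    @functools.wraps(fn)
    def wrapper(dataset_ids: list[str], *args, **kwargs):
        artifact = fn(dataset_ids, *args, **kwargs)
        record = {
            "produced_by": fn.__name__,
            "dataset_ids": dataset_ids,
            "artifact_hash": hashlib.sha256(repr(artifact).encode()).hexdigest(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        with open(LINEAGE_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return artifact
    return wrapper


@track_lineage
def train_model(dataset_ids: list[str]) -> dict:
    # Placeholder for real training; returns a toy "model" artifact.
    return {"weights": [0.1, 0.2], "trained_on": dataset_ids}


model = train_model(["ds-churn-2024-q4", "ds-support-tickets"])
```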
In your experience, what is the most overlooked collaboration gap between engineering, data science, and security teams, and how do you address it?
A significant gap is the lack of shared vocabulary around risk. Engineering prioritizes uptime, latency, and technical debt. Data science emphasizes model performance and the speed of experimentation. Security focuses on risks, threats, vulnerabilities, and compliance. As a result, teams often miscommunicate because they pursue different goals and use different terminology. Projects sometimes stall not due to disagreement, but because no one translates technical concerns, such as “this model lacks input validation,” into security implications like “this creates an injection vector that could expose customer data.”
The practices I have described help bridge this gap. The High-Security Requirements Document connects technical decisions to business risk in terms that all stakeholders understand. Security Champions serve as translators between security and engineering. I also invest in cross-functional threat modeling sessions, where data scientists, engineers, and security teams review use cases and attack scenarios together. This builds empathy and shared context that lasts beyond the meeting. Ultimately, the gap is relational, not technical. Closing it requires intentional opportunities for teams to learn each other’s priorities and develop a common language.
How do you evaluate the maturity of an organization’s readiness to build secure-by-design AI, and where do you typically focus first?
I evaluate maturity across four dimensions: governance clarity, control coverage, cultural adoption, and continuous feedback. Governance clarity asks whether the organization has defined policies for data usage, model risk, and third-party AI, and whether those policies are operationalized or just sitting in a document somewhere. Control coverage assesses whether security is embedded throughout the AI lifecycle: data ingestion, training, deployment, inference, and monitoring. Cultural adoption measures whether security is owned solely by a central team or is distributed across engineering and data science. Continuous feedback assesses whether mechanisms exist to detect drift, anomalies, and emerging risks post-deployment. Most organizations score unevenly: strong in one dimension, weak in others.
Where I focus first depends on what the assessment reveals, but I typically prioritize foundational data governance and baseline pipeline controls. If an organization can’t answer basic questions about where its training data came from, who approved its use, and what access controls are in place, then advanced tooling won’t help. I establish data lineage, classification, and access policies first, then layer in model provenance and runtime monitoring. Quick wins matter early: dependency checks, secret scanning, and IAM guardrails build credibility and demonstrate that security enables rather than obstructs. Maturity isn’t built overnight; it’s built through a sequence of investments that deliver visible, compounding value.
Can you share an example where cross-functional alignment directly prevented a significant security or integrity risk in an AI project?
One small example came from an agentic AI application that used a RAG (Retrieval-Augmented Generation) pipeline to fetch data from a source system. The expectation was that the RAG layer would enforce real-time access controls: if a user’s permissions changed at the source, for example being downgraded from read/write to read-only on dataset XYZ or losing access to dataset ABC entirely, the agent should immediately reflect those updates. During a cross-functional review with engineering, data platform, identity, and security teams, we uncovered a critical issue: access-control propagation from the source system to the RAG layer was delayed by several hours. During that delay, the agent continued to retrieve data using stale permissions, effectively granting access to a user who no longer had it in the system of record.
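One way to close that kind of gap, sketched below under assumed names, is to re-check entitlements against the system of record at retrieval time rather than trusting permissions that were copied into the RAG index earlier.

```python
# Hypothetical sketch of the mitigation pattern implied above: re-check the
# user's entitlements against the source system at retrieval time, rather than
# trusting permissions that were propagated into the RAG index hours earlier.
# The authorization client and document fields are assumptions for this example.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    dataset_id: str
    text: str


class SourceAuthzClient:
    """Stand-in for a live call to the system of record's authorization API."""
    def __init__(self, grants: dict[str, set[str]]):
        self._grants = grants  # user -> datasets currently readable

    def can_read(self, user_id: str, dataset_id: str) -> bool:
        return dataset_id in self._grants.get(user_id, set())


def retrieve_for_user(user_id: str, candidates: list[Document],
                      authz: SourceAuthzClient) -> list[Document]:
    """Filter retrieved chunks using real-time permissions, not indexed ACLs."""
    return [d for d in candidates if authz.can_read(user_id, d.dataset_id)]


# Example: the user was just downgraded and no longer reads dataset "ABC".
authz = SourceAuthzClient(grants={"user-42": {"XYZ"}})
candidates = [Document("1", "XYZ", "ok to show"), Document("2", "ABC", "must not leak")]
print([d.doc_id for d in retrieve_for_user("user-42", candidates, authz)])  # ['1']
```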
Looking at the rapid growth of AI, what leadership qualities will matter most for professionals who want to guide both innovation and security in equal measure?
Successful leaders recognize two realities: AI innovation must progress rapidly, but unchecked speed increases risk. This demands intellectual humility, as the threat landscape is evolving and past best practices may soon be outdated. Leaders must also communicate effectively across disciplines, engaging data scientists on model architecture, engineers on pipeline design, and executives on business risk. Those who remain isolated in one area will struggle to foster the cross-functional trust required for secure-by-design AI.
Principled pragmatism is essential. Leaders who always say “no” are bypassed, while those who approve everything risk avoidable failures. Effective leaders assess risk, propose alternatives, and guide teams toward innovative, defensible decisions. They invest in culture-building initiatives such as Security Champions, promote a shared vocabulary, and foster environments in which engineering and data science view security as an enabler. As AI continues to accelerate, those who shape its future will be leaders who balance ambition with accountability. This is the standard I set for myself.


