
How Leadership, Guardrails, and Culture Turn AI Into an Everyday Accelerator with Sabareesh Kappagantu

As artificial intelligence becomes embedded in everyday software development, the real challenge for engineering leaders is no longer access to powerful models, but how those models are introduced, governed, and trusted inside real teams and real codebases. The difference between AI that creates lasting leverage and AI that adds noise often comes down to leadership, workflow design, and cultural norms rather than technical capability alone.

In this interview, we spoke to Sabareesh Kappagantu, a software engineering manager whose career spans AI-driven platforms at IBM Watson, Acoustic, and AVEVA. With deep experience across frontend systems, backend infrastructure, and high-performance data platforms, he brings a grounded perspective on what it actually takes to integrate AI into engineering workflows without compromising quality, judgment, or accountability. Rather than treating AI as a standalone innovation, Kappagantu has focused on weaving it quietly into the fabric of how teams design, review, and reason about software.

Drawing on years of hands-on leadership and day-to-day collaboration with engineers, he discusses why AI adoption succeeds only when it reduces cognitive load instead of adding process, how lightweight guardrails and shared context prevent misuse, and why architectural thinking matters more than speed. He also explores how engineering managers must evolve as AI becomes permanent infrastructure, shifting from coordinators of execution to stewards of shared understanding, intent, and trust across increasingly complex systems.

Your leadership spans AI-driven platforms at IBM Watson, Acoustic, and AVEVA. How has that experience shaped the way you think about AI not as a standalone tool, but as an everyday accelerator embedded directly into engineering workflows?

Over my career across IBM Watson, Acoustic, and AVEVA, I've seen firsthand that AI delivers the most impact when it's treated less as a standalone capability and more as quiet infrastructure embedded in everyday engineering work. When I was at IBM Watson and Acoustic, generative AI wasn't a thing. We were building systems by hand, systems that captured web user behavior data and fed it into models to predict behavior patterns across multiple parameters. The AI and engineering teams were largely segregated, with limited face time and exposure to each other's goals and to what we were collectively trying to accomplish. This led to misalignment, frequent back-and-forth, and missed value.

The problem wasn't inaccurate models or algorithms, but friction and a lack of exposure to how AI outputs could be used to build better data pipelines. At the time, we were manually recognizing code patterns, over-optimizing code, or reducing duplication. Because we had many engineers working together, we relied on agreed-upon coding patterns, guidelines, test nomenclature, and shared libraries.

Now, with AI in the picture, each team and individual interacts with AI in their own way, bringing different mental models for context and receiving varied responses, whether for code generation or other tasks. This has made it imperative to define AI governance and to encode the "unsaid rules," such as checking shared libraries for existing helper functions instead of duplicating code, and following shared naming conventions and coding patterns.

All of this has shaped my current leadership style, which focuses on weaving AI into existing workflows to reduce cognitive load for individuals and teams, accelerate feedback loops, and preserve human judgment. The goal is to make AI feel like a natural extension of how teams build software.

You remain deeply hands-on while managing large-scale teams. How does working alongside engineers influence the way you introduce AI-assisted development, code generation, and architectural exploration into day-to-day practice?

Being hands-on alongside engineers has been central to how I introduce AI-assisted development into day-to-day practice. Outside of my core role, I've built and shipped production-ready mobile applications as personal projects, handling everything from design and development to deployment and DevOps in close collaboration with an AI model. That experience grounded my understanding of where AI meaningfully accelerates work and where human judgment remains essential. It also reinforced that AI is most useful when it operates as a partner in thinking, not as a replacement for engineering intuition.

On my team, I use AI primarily as a lens rather than an author. I rely on it to help me understand complex pull requests, summarize large design documents, and capture nuanced architectural discussions that often happen verbally and get lost over time. This allows me to engage more thoughtfully with engineers, ask better questions, and focus conversations on trade-offs rather than mechanics. Because I still read code, review PRs closely, and write code periodically myself, I'm able to introduce AI in ways that feel grounded in the reality of the codebase rather than abstract productivity mandates.

That proximity also helps me navigate resistance to AI adoption. Some engineers are understandably cautious due to hallucinations, lack of domain context, and unreliable outputs. I don't push past those concerns. Instead, I encourage honest feedback about where AI helps and where it breaks down. At the same time, I'm deliberate about guardrails, particularly for junior engineers or new team members. Over-reliance on AI early on can short-circuit the development of critical thinking around system design and how complex systems fit together. My goal is to normalize AI as a support tool that accelerates understanding and exploration, while preserving the deep reasoning and ownership that strong engineering teams depend on.

Many organizations struggle to move AI from experimentation into real productivity gains. What systems and guardrails have you found most effective in helping teams adopt AI responsibly without slowing delivery or compromising quality?

AI is often introduced as a parallel activity rather than embedded into the systems that govern everyday engineering work. Adoption tends to stall when AI usage is informal or inconsistent, making its impact difficult to measure and trust. I've found the most effective approach is to integrate AI into existing engineering rituals, such as pull request reviews, design discussions, and documentation, so it augments how teams already build software instead of adding new process overhead.

Guardrails are critical, but they need to be lightweight and close to the code. In addition to keeping humans firmly in the loop for decision-making and accountability, I've seen strong results from codifying expectations directly into repositories through rules files and code-level conventions that both engineers and AI tools can reference. These include architectural constraints, preferred libraries, naming conventions, and testing requirements. AI-generated output is held to the same standards as human-written code, and usage is differentiated by experience level: AI serves as an explanatory and exploratory tool for newer engineers and as an accelerator for senior engineers.
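As a concrete illustration of guardrails that live close to the code, here is a minimal sketch of a pre-commit-style check that holds every contribution, AI-generated or human-written, to the same repository conventions. The test naming pattern, the discouraged import, and the http_client helper it points to are hypothetical stand-ins, not the conventions of any specific team.

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit guardrail: hold AI-generated and human-written
code to the same repository conventions. All patterns and names here are
illustrative assumptions, not any specific team's rules."""

import pathlib
import re
import sys

# Assumed conventions, for illustration only.
TEST_NAME_PATTERN = re.compile(r"^test_[a-z0-9_]+\.py$")  # shared test nomenclature
DISCOURAGED_IMPORTS = {
    "requests": "use the shared http_client helper instead",  # hypothetical shared library
}


def check_file(path: pathlib.Path) -> list[str]:
    """Return a list of convention violations found in one Python file."""
    problems = []
    # Test files under a tests/ directory must follow the agreed naming rule.
    if "tests" in path.parts and path.name != "conftest.py" and not TEST_NAME_PATTERN.match(path.name):
        problems.append(f"{path}: test file does not follow the test_<feature>.py naming rule")
    text = path.read_text(encoding="utf-8", errors="ignore")
    # Flag direct imports of libraries the team has wrapped in shared helpers.
    for module, hint in DISCOURAGED_IMPORTS.items():
        if re.search(rf"^\s*(import|from)\s+{module}\b", text, re.MULTILINE):
            problems.append(f"{path}: imports '{module}' directly; {hint}")
    return problems


def main() -> int:
    failures = []
    for path in pathlib.Path(".").rglob("*.py"):
        failures.extend(check_file(path))
    for message in failures:
        print(message)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```

A team might wire a script like this into CI or a pre-commit hook and point AI tools at the same rules file that documents these conventions, so humans and models work from one source of truth.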

Continuous feedback loops help teams refine these guardrails over time, allowing AI to drive productivity without compromising quality.

With your background spanning frontend, backend, and high-performance data systems, how do you help teams use AI to make better architectural decisions rather than simply faster ones?

I encourage teams to use AI as a tool for architectural reasoning rather than execution speed. Instead of generating solutions, AI is most effective when it helps make constraints, trade-offs, and system behaviors explicit early in the design process. I often apply it to analyze data flow paths, reason about state boundaries, or explore performance and scaling implications across frontend, backend, and storage layers. This reframes discussions of architecture around impact and intent, rather than preference or familiarity.

Just as importantly, I'm deliberate about where AI stops. Architectural decisions carry long-term operational and maintenance consequences that require domain knowledge and accountability, so AI is used to broaden perspective, not to decide. By externalizing complexity, summarizing options, highlighting risks, and making assumptions visible, AI helps teams slow down at the right moments while still progressing with confidence. The outcome is architecture that is easier to evolve and operate, not because it was built faster, but because it was reasoned about more carefully.

Cultural adoption is often harder than technical integration. What leadership behaviors and team norms have you seen make the biggest difference when engineers begin to rely on AI as a strategic partner?

The biggest shift I've seen is when leaders model how to work with AI rather than positioning it as something to adopt or comply with. I regularly share with my team both my successes and failures using AI: where it helped clarify a design or surface trade-offs, and where it misled me due to missing context or flawed assumptions. Being explicit about both outcomes reinforces that critical thinking still matters and that questioning AI output is expected, not discouraged. This transparency lowers defensiveness and creates psychological safety for engineers to experiment.

Team norms reinforce that behavior. On my team, AI output is treated as something to be interrogated rather than accepted. Engineers are encouraged to explain why they accepted, modified, or rejected an AI suggestion, keeping ownership firmly with the human. We also make time to discuss external signals, news, research, and real-world examples of AI adoption or failure, so the team stays informed and grounded rather than operating on hype or fear. These conversations have made engineers more comfortable sharing their own experiences and uncertainties.

Cultural adoption improves when teams are explicit about where AI helps and where it doesn't. Creating space to surface failure cases or near-misses without blame prevents quiet misuse and builds shared learning. When reliance on AI is paired with openness, critique, and reflection, it becomes a strategic partner that strengthens engineering culture rather than diluting it.

Looking ahead, how do you see the role of engineering managers evolving as AI becomes a permanent part of how high-performing software teams design, build, and deliver complex systems?

As AI becomes a permanent part of how software is built, the role of engineering managers shifts from coordinating execution to amplifying collective judgment. One of the most tangible changes I see is in how information flows through teams. AI allows managers to summarize complex design discussions, capture decisions that would otherwise live only in meetings or chat threads, and preserve context over time. This changes the manager's role from being a relay of information to being a curator of shared understanding, ensuring that intent, trade-offs, and rationale remain accessible as systems and teams scale.

For high-performing developers, AI becomes less about speed and more about leverage. Engineering managers increasingly help senior engineers use AI to offload cognitive overhead, such as summarizing significant code changes, synthesizing feedback across reviews, or recalling historical decisions, so their attention stays focused on the most complex problems. At the same time, managers must ensure these gains don't fragment the system. Conway's Law still applies: if teams reason locally without shared context, architectures drift. AI can either accelerate that drift or help counteract it by making cross-team dependencies, design intent, and system boundaries more visible.

Finally, engineering managers become responsible for shaping the boundaries within which AI operates. Rather than prescribing usage, this means establishing guardrails that preserve accountability and quality: clarifying which decisions require human ownership, where AI-generated output must be reviewed more deeply, and how context is shared so AI is used responsibly.

In this model, the manager's impact comes from designing an environment where AI strengthens reasoning, communication, and consistency, allowing teams to operate at a higher level without losing rigor or cohesion.

Author

  • Tom Allen

    Founder of The AI Journal. I like to write about AI and emerging technologies to inform people how they are changing our world for the better.
