Radhakrishnan Krishna Kripa didn’t plan to become a DevOps engineer. She started as a web developer, focused on front-end code and database queries. Then came a production deployment that failed at 2 AM, not because the code was broken, but because a configuration file didn’t match across environments. No version control. No automated testing. Just manual fixes and late-night troubleshooting.
That failure changed her career trajectory. She realized that the most difficult problems in software weren’t always in the code itself; they lived in how teams built, deployed, and maintained systems at scale.
Today, Kripa leads DevOps infrastructure in her role, where she architects CI/CD pipelines for real-time simulation platforms used across healthcare, automotive, and aerospace. She’s migrated legacy codebases with decades of history, containerized complex C++ environments that require GPU testing, and built package management systems for hundreds of interdependent libraries. She also manages direct reports, mentors engineers breaking into the field, and speaks at conferences about both technical practices and the culture shifts DevOps requires.
Her book, Beginning DevOps: A Guide to Containers, Kubernetes & More, addresses a common misconception she encounters when mentoring newcomers: that DevOps is primarily about tools. Mastering Docker or Kubernetes is important, but understanding when and why to automate, and how to work across development, testing, and operations teams, is more important.
We spoke with Kripa about the technical challenges of building DevOps infrastructure from scratch and how AI fits into workflows dominated by containers and cloud systems. We also discussed what changes when you transition from an individual contributor to a manager and why mentoring people from non-traditional backgrounds has become central to her work. She also shared where the field is heading and what someone entering DevOps should focus on right now.
Building DevOps infrastructure from scratch for a real-time simulation platform used across healthcare, automotive, and aerospace involves massive scale. Walk us through that process and where the biggest technical challenges emerged.
In an organization that develops simulation-related products, a few DevOps challenges involving resources, infrastructure, and planning are as follows:
- Unlike many organizations, we have large C++, C, and C# codebases, which can include legacy systems, mixed OS targets like Windows and Linux, specialized environments with different compiler versions (different GCC and MSVC versions), and physical infrastructure for testing, even for GPU-involved processes.
- Containerization can be complicated: build environments are relatively easy to containerize, but the testing side gets more complex with GPU-involved tests, which are easier to run on physical machines with displays. Dev containers are a great way to reproduce environments across personal machines and CI/CD pipelines.
- Another big challenge has been dependency and version management. Our environment and codebase include hundreds of interdependent libraries, so moving from zip packaging to standardized systems like Conan/NuGet required significant automation, governance, and CI/CD pipeline management to ensure reproducibility and traceability across branches and releases.
- As we migrated pipelines to Azure DevOps (some used Azure DevOps from the start, but some did not), we had to optimize builds and caching strategies while adding compliance checks to the pipeline flow without slowing developers down: digital signing using internal tools, vulnerability scanning, SBOM generation, and policy enforcement for every code change via an enforced pull request (PR) policy. We had to keep the pipelines optimized and fast enough that developers could quickly verify their code changes.
- Now, as AI evolves into real-world applications, we face a new set of challenges: we are researching how AI can help with observability and security while making sure data privacy and the reproducibility of issues remain protected.
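The dependency-management point above hinges on reproducibility: with hundreds of interdependent libraries, every package reachable from a product must resolve to exactly one pinned version, which is what Conan/NuGet lockfiles enforce. Below is a minimal sketch of that kind of check; all package names, versions, and the graph itself are hypothetical, not taken from the actual codebase.

```python
def collect_transitive(graph, roots):
    """Walk the dependency graph and return every package reachable from roots."""
    seen = set()
    stack = list(roots)
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        stack.extend(graph.get(pkg, []))
    return seen

def check_lockfile(graph, roots, lockfile):
    """Return reachable packages that have no pinned version in the lockfile."""
    return sorted(collect_transitive(graph, roots) - set(lockfile))

# Hypothetical graph of interdependent libraries (name -> direct dependencies).
graph = {
    "solver": ["meshing", "linalg"],
    "meshing": ["linalg"],
    "linalg": [],
}
lockfile = {"solver": "2.1.0", "meshing": "1.4.2"}  # "linalg" is missing its pin

print(check_lockfile(graph, ["solver"], lockfile))  # -> ['linalg']
```

A real lockfile validator would also verify that each pinned version satisfies the declared version ranges, but the core idea is the same: fail the pipeline if any transitive dependency is unpinned.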
The transition from web development to DevOps happened during a startup internship. What specific moment or problem made that shift clear?
My journey in tech began back in 2014, when I moved from India to the USA to pursue my Master's in Information Technology at UNC Charlotte. My focus was purely on core development, mainly web technologies, data structures, and software engineering. So naturally, my first job after graduation was as a developer.
But everything changed when I joined a startup for my first full-time position after graduation. While my title said Java Developer, the reality was far broader. I was not just writing code; I was deploying web applications to servers, troubleshooting environments, and sitting in on customer calls to triage issues. That was my first real exposure to agile methodology, AWS, Jenkins, CI/CD pipelines, virtual machines, and cloud infrastructure.
I still remember one night at my next job: a production deployment failed, not because of a code issue, but due to a configuration mismatch. We had no version control and no automated testing, and every fix meant late-night troubleshooting.
That was the moment it hit me: The biggest challenges were not in writing software, but they were in how we built, deployed, and maintained it.
That experience sparked my curiosity. I wanted to understand how to make software delivery predictable and repeatable. That curiosity led me into automation, infrastructure as code, and DevOps principles, which were emerging practices at that time.
A few months later I took on a consulting role as my next full-time opportunity, which gave me a chance to go even deeper: building CI/CD pipelines, optimizing environments, and learning from a variety of teams and experts. Each project taught me that DevOps was not just about tools; it was about collaboration, feedback loops with developers and testers, and ownership of deployments.
Today, as a Lead DevOps Engineer at Ansys Inc., I get to combine all those experiences, from coding to consulting, into building large-scale automation frameworks and CI/CD pipelines and mentoring teams to think with a DevOps mindset.
Migrating legacy codebases from TFVC to Git while preserving complete version histories is notoriously difficult. What makes these migrations fail, and how did the approach need to differ?
By the time we did the migration, TFVC had been deprecated by Azure DevOps and was no longer supported, so we were a little late. We still thought the history migration would be easy, but we realized it was not as easy as it sounds. It took a big effort from the IT team to help move the history, preserve as much of it as possible, create a backup in TFVC, and lock the old repo for future reference.
The best approach would have been to migrate at the right time. There were no failures in the migration, only improvements: we optimized the pipelines to be more templatized, first in vNext (classic pipelines) and then in YAML, and made sure as many changes as possible were centralized via version control. The main goal was to move away from zip binaries to Conan/NuGet package references in both managed and unmanaged code.
The book “Beginning DevOps” targets people trying to break into the field. Based on what comes up in mentoring conversations, what do most beginners get wrong about what the job actually requires?
One of the biggest misconceptions beginners have about DevOps is that it is just about tools.
They think that if they master Docker, Kubernetes, or Jenkins, they're "doing DevOps." But in reality, those are just instruments; DevOps is really about mindset and collaboration. That does not mean we don't need to learn these tools. The first step in my book is the same, a simple way to learn all the tools, with some chapters focusing on how mindset and collaboration come into play as well.
In my mentoring sessions, I often find that DevOps engineers need to understand systems, problem-solving, and how to work across teams, especially with development, testing, operations, and even security teams.
Another common misconception is that DevOps is a one-person-army technical role. It is actually the opposite. I understand it as a team sport where we learn by getting feedback and making the developer's deployment flow successful. The best DevOps engineers I have worked with are great communicators, not just great coders.
And finally, automation is exciting, but the real question comes from asking, "Why are we automating this?" or "Do we even need to automate this?" Understanding the process before you automate it is what separates good engineers from great ones.
When I wrote Beginning DevOps, my goal was to give readers a checklist to start with and also help them understand how communication and collaboration matter. It is never too late to start your career in the DevOps world.
Containerization, Kubernetes, and cloud systems are daily tools in modern DevOps. Where does AI fit into these workflows, and what parts of the job will stay fundamentally hands-on?
AI is an add-on that handles many steps in day-to-day DevOps tasks, allowing a DevOps team to focus on the higher-level issues it has to tackle.
AI can help automate many tasks, such as monitoring and restarting nightly jobs that failed for random reasons, using agentic AI to maintain builds, sending emails flagging concerning logs, or automating and enforcing security policies.
For example, AI can be added to a Kubernetes workflow. In the traditional approach, the cluster keeps scaling resources whenever any jobs need them; instead, we can enable AI to monitor usage and decide whether unwanted pods can be pruned, so rather than simply doubling up on scaling, it scales in an optimized way.
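The pruning decision just described can be sketched as plain logic that sits behind any metrics source (metrics-server, Prometheus, and so on). The thresholds and pod names below are hypothetical assumptions, not values from a real cluster.

```python
IDLE_CPU_THRESHOLD = 0.05   # cores; below this a pod counts as idle (assumed)
IDLE_MINUTES = 30           # how long a pod must stay idle before pruning (assumed)

def prune_candidates(pods):
    """pods: list of dicts with 'name', 'cpu' (cores), 'idle_minutes'.
    Returns the names of pods that have been idle long enough to prune."""
    return [
        p["name"]
        for p in pods
        if p["cpu"] < IDLE_CPU_THRESHOLD and p["idle_minutes"] >= IDLE_MINUTES
    ]

# Hypothetical usage snapshot gathered by a monitoring agent.
pods = [
    {"name": "sim-worker-1", "cpu": 0.90, "idle_minutes": 0},
    {"name": "sim-worker-2", "cpu": 0.01, "idle_minutes": 45},
    {"name": "sim-worker-3", "cpu": 0.02, "idle_minutes": 10},
]
print(prune_candidates(pods))  # -> ['sim-worker-2']
```

The AI layer's value is in choosing and tuning the thresholds from observed usage patterns rather than hardcoding them, but the gate it acts through is this simple.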
AI, as mentioned above, can help monitor jobs and send a report by the time you log in each morning. The report includes details on why a job failed, how many times it was restarted to achieve a successful run, and, if the run still failed, the final metrics from the retries.
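The retry-and-report behavior described above can be sketched as a bounded retry wrapper that collects the metrics a morning report would summarize. The flaky job here is a stand-in, not a real nightly task.

```python
def run_with_report(job, max_retries=3):
    """Run job up to max_retries times, recording attempts and errors."""
    report = {"attempts": 0, "succeeded": False, "errors": []}
    for _ in range(max_retries):
        report["attempts"] += 1
        try:
            job()
            report["succeeded"] = True
            break
        except Exception as exc:  # a real system would filter which errors to retry
            report["errors"].append(str(exc))
    return report

# Stand-in for a nightly job that fails once for a transient reason, then succeeds.
state = {"calls": 0}
def flaky_job():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient infrastructure error")

report = run_with_report(flaky_job)
print(report["attempts"], report["succeeded"])  # -> 2 True
```

In practice the report dict would be rendered into the morning email, and an AI layer could annotate each error string with a likely root cause.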
Managing direct reports changes the day-to-day work significantly. How has that transition played out, and what about leadership was most surprising?
Moving into a leadership role completely changed my day-to-day tasks and responsibilities. As an individual contributor, I could measure my own progress in code, pipelines, or fixes. As a manager, success shifted to how well my team grows, delivers, and collaborates with other teams, just as I once did as an individual contributor. I also believe in being a technical manager, which allows me to help my team members grow both technically and personally.
At first, it was tough to step back from being the person who solved every problem hands-on. I slowly had to learn to delegate tasks, prioritize deadlines, and distribute work among my team members to create an environment where others could experiment, fail safely, and learn. The biggest shift was realizing that my value now comes from enabling impact, not producing it directly.
What surprised me most was how much leadership is about listening, staying up to date with technological shifts, and managing deadlines through collaboration as a team. One hour you are debugging a pipeline, and the next hour you might be helping a direct report navigate a task they are stuck on. Management is all about empathy, trust, collaboration, and alignment.
Coming from a family of teachers, mentorship felt natural to me, but formal leadership taught me that every conversation can shape culture. I realized it is not just about delivering products or completing the work on time but also building great engineers and a team who will carry those products forward.
Mentoring, especially for women and people from non-traditional backgrounds, seems central to the work. What specific obstacles do these groups face when entering DevOps, and what advice actually helps?
Mentoring women and people from non-traditional backgrounds is something deeply personal to me. I have seen that the biggest obstacles are not always technical; they are about mindset.
Many newcomers, especially women or career switchers, struggle with a feeling that they do not belong because they don’t have a computer science background or because they came from another field that is not at all software-based. There can also be a lack of visible role models in technical leadership, which makes it harder to imagine yourself in those roles.
When I was looking for a job in DevOps, I noticed that most DevOps roles still require years of tooling experience and coding, which can be intimidating. Plus, the software industry keeps evolving. The entry points are not always clear or the same, which can cause unnecessary barriers for talented people who could thrive if given the proper support.
It helps if we can pair new hires with mentors, celebrate learning over perfection, and create environments where people can ask "basic" questions without fear. My advice to anyone breaking in is that you don't have to know everything to start. DevOps is about curiosity, collaboration, and resilience. Focus on building communication skills, learn scripting, the tools, and what is trending these days, and the rest will follow. And my message to leaders is to be the person who opens a door for someone who wants to try automation or a field that is not their expertise yet. I am always grateful for my mentors and managers who believed in me, from my family of teachers to my leaders at Ansys, who encouraged questions and growth.
If I could say one last thing, it would be this: make DevOps truly inclusive, where every background is seen as an asset, not an exception.
Speaking about breaking into DevOps with confidence goes beyond just technical knowledge. What does that confidence actually consist of?
Breaking into DevOps with confidence is not about knowing every tool; it is about embracing a new mindset.
In traditional development, success often ended with "It works on my machine." But DevOps flips that completely. It is no longer about individual code; it is about outcomes. We own the entire pipeline, from build to deployment, and we make sure it works everywhere.
For me, confidence in DevOps came from realizing that this field thrives on collaboration, iteration, and curiosity.
- Automation is a responsibility; one broken script can block ten engineers.
- Progress comes from iteration. Experiment, get feedback and improve continuously.
- Collaboration turns good engineers into great ones.
- You do not need to know everything to start; begin where you are.
Breaking into DevOps confidently means shifting from chasing perfection to iterating toward a better state. It is about owning the process, learning as you go, and building systems and collaborating with people so that everything runs smoothly together.
The DevOps landscape keeps shifting. Where should someone entering the field focus their learning right now, and what changes are coming that will matter most?
DevOps is evolving from a focus on automation and tooling to a focus on experience, security and sustainability.
The next significant shift is happening around Developer Experience (DevEx) and DevSecOps. Companies are realizing that productivity is not just about pipelines; it is about what the developer's day-to-day environment looks like. Platform engineering, internal tooling, and dev containers are great examples: they help teams standardize environments, onboard faster, and eliminate the classic "it works on my machine" problem.
At the same time, DevSecOps is no longer optional. As delivery cycles speed up, we need to make sure security enters every stage, from code to cloud, using practices like policy as code, automated scanning, and secret management.
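One small example of the policy-as-code idea above is a pre-merge gate that scans changed files for obvious hardcoded secrets. The patterns below are a hypothetical, deliberately simplified subset of what real scanners check, and the file names are made up.

```python
import re

# Assumed, simplified patterns; real scanners use far richer rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

def find_violations(file_contents):
    """file_contents: dict of filename -> text; returns offending filenames."""
    return sorted(
        name
        for name, text in file_contents.items()
        if any(p.search(text) for p in SECRET_PATTERNS)
    )

# Hypothetical set of files changed in a pull request.
changed = {
    "app/config.py": 'password = "hunter2"',
    "app/main.py": "print('hello')",
}
print(find_violations(changed))  # -> ['app/config.py']
```

Wired into a PR policy, a non-empty result would fail the check and block the merge, which is the "security enters every stage" point in practice.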
As DevOps continues to evolve, the real skill isn't mastering every new tool, as mentioned before; it is understanding how all the pieces fit together. The future belongs to those in DevOps who can see the big picture, connecting CI/CD, cloud, security, and developer experience into one cohesive system.
Learn to think in systems, not silos. Understand automation pipelines end-to-end and stay curious and up to date about how AI-driven practices are reshaping DevOps from predictive monitoring to intelligent workflow automation.