AI in Runtime Across the Cloud, Where the Battle for Trust Is Won Inside the System

By Amiram Shachar, CEO, Upwind Security

Artificial intelligence (AI) is eating the world. Every company is shifting from being just a software company to becoming an AI-first company, and anyone who wants to move faster can’t do that without being AI-obsessed. But, like anything else, AI is a skill, and the more you use it, the better you get at it. 

Beyond data and algorithms, AI relies on the environment in which it runs. Once deployed, it does not operate in the abstract but in the constant motion of cloud production systems, where models are called upon to make decisions, take actions, and shape outcomes at high speed.

For most enterprises, that environment is the cloud, where security is just as critical as model design. Most new applications are now developed with AI models at their core. In the past, we fixed software through code changes; we are now entering an era in which building or repairing software means retraining AI models to create more resilient, longer-lasting systems.

The industry has rightly focused on the quality of training data, the strength of governance frameworks, and the role of regulation. These are essential safeguards, yet they capture only part of the picture once AI systems are live in production.

What is runtime? What does it mean for AI? 

Runtime in the cloud is the environment and state where code is executed, along with all the libraries, interpreters, and dependencies it relies on. In the context of cloud security, runtime security means monitoring and protecting workloads while they are executing in the cloud, for example, by detecting malicious processes, privilege escalations, and unexpected network flows. 
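To make that concrete, the sketch below shows, in Python, roughly what one such runtime check might look like. It is illustrative only, not a description of any particular product: the allow-list of binary paths, the set of expected root processes, and the alert routine are hypothetical stand-ins for the far richer signals a production platform would use, and it assumes the open-source psutil library for process inspection.

import psutil

# Hypothetical baselines: where workload binaries are expected to live, and
# which processes are expected to run as root on this host.
ALLOWED_DIRS = ("/usr/bin", "/usr/local/bin", "/app")
EXPECTED_ROOT = {"systemd", "sshd", "containerd"}

def alert(message):
    # Placeholder for a real alerting pipeline (SIEM, pager, ticket).
    print(f"[runtime alert] {message}")

def scan_processes():
    # Walk the live process table and flag deviations from the baseline.
    for proc in psutil.process_iter(attrs=["pid", "name", "exe", "username"]):
        info = proc.info
        exe = info.get("exe") or ""
        if exe and not exe.startswith(ALLOWED_DIRS):
            alert(f"unexpected binary path: {exe} (pid {info['pid']})")
        if info.get("username") == "root" and info.get("name") not in EXPECTED_ROOT:
            alert(f"unexpected root process: {info.get('name')} (pid {info['pid']})")

if __name__ == "__main__":
    scan_processes()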

For AI applications, runtime refers to the phase where a model is running and making predictions in the real world, as opposed to the training phase. It occurs after deployment, when the trained model is used in production. 

Consider an AI system such as a self-driving car. The model receives live inputs, such as camera feeds, sensor data, or text, and produces outputs, including steering decisions, classifications, or recommendations. In this instance, training happens in the lab or during simulation, learning from large datasets, while runtime happens on the road, in the car, making split-second decisions in real traffic. 
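Expressed in code, the split looks roughly like the sketch below. It is a toy illustration rather than a real pipeline: the features, labels, and scikit-learn model are hypothetical, but the shape is the point: training runs once over historical data, while runtime is a loop that scores live inputs one at a time.

from sklearn.linear_model import LogisticRegression

# --- Training phase: offline, in the lab, over a historical dataset (toy values) ---
X_train = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.6], [0.9, 0.1]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# --- Runtime phase: online, in production, one live input at a time ---
def decide(live_features):
    # Score a single live observation and turn it into an action.
    prediction = model.predict([live_features])[0]
    return "brake" if prediction == 1 else "continue"

print(decide([0.85, 0.15]))  # a fresh sensor reading arriving at runtime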

For AI applications, it is the runtime environment that truly matters. Without strong cloud security at runtime, there can be no enduring trust. And it is here, in real time, that security is tested and trust must be earned. 

The overlooked gap in today’s AI debate 

Much of the discussion around AI trust focuses on inputs: whether training data is accurate and free from bias, whether models are explainable, and whether design choices are transparent. While important, these concerns do not reflect the lived reality of models in deployment. Once operational in the cloud, AI systems behave differently from how they did in testing.

This dynamism creates a blind spot. Governance frameworks, surface scans, and periodic audits, although valuable, cannot keep pace with systems that change in real time. The result is a gap between what is certified in controlled conditions and what actually happens in cloud production, where workloads shift, configurations drift, and new inputs stream in continuously. Within this gap, security lapses can occur and trust can falter. A model that looked safe during validation may drift, malfunction, or even be manipulated once exposed to the complexity of cloud environments. 
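That gap can at least be made measurable. The sketch below is one simple, hypothetical way to do it with NumPy: compare the distribution of what a model sees in production against the validation baseline it was certified on, and raise a flag when the two diverge beyond a rule-of-thumb threshold. The data and the threshold are illustrative, not prescriptive.

import numpy as np

def population_stability_index(baseline, production, bins=10):
    # Rough PSI: larger values mean the production distribution has drifted further.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) and division by zero
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical data: what the model was validated on versus what it now sees live.
validation_inputs = np.random.normal(0.0, 1.0, 5000)
production_inputs = np.random.normal(0.4, 1.2, 5000)

psi = population_stability_index(validation_inputs, production_inputs)
if psi > 0.2:  # 0.2 is a common rule of thumb, not a universal standard
    print(f"input drift detected (PSI={psi:.2f}) - review before trust erodes")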

Why perimeter defences fall short 

For years, security has been designed around the perimeter, with monitoring at the edges, scheduled scans for anomalies, and alerts raised when suspicious patterns are detected. That approach may work for static systems, but it struggles to address AI running inside dynamic cloud environments. 

Consider an online recommendation system quietly manipulated with false inputs. From the outside, everything appears stable: the system runs smoothly, dashboards display green indicators, and outputs appear plausible. Yet within the model and the workloads that host it, subtle shifts in behaviour are underway. A retail platform might begin surfacing counterfeit products, or a streaming service could start promoting misleading content. 

The system still appears to be performing as intended, but in reality, it is shaping outcomes in ways that compromise both security and trust. 

The sheer velocity of AI compounds the challenge. These systems operate at machine speed, generating risks that unfold far more quickly than manual review or after-the-fact reporting can address. To keep pace, organisations need an approach that starts inside the system and extends outward, embedding visibility and control at the very point where decisions are made. 

Inside-out security for live AI in the cloud 

To safeguard trust, security must move closer to the core of AI decision-making and infrastructure. This means shifting from an outside-in model of scanning to an inside-out model of visibility, where behaviour is continuously observed, contextualised, and adjusted as necessary. By embedding oversight within live cloud environments, organisations can not only verify whether a model was previously validated but also determine whether it is behaving securely and responsibly in the present moment. 
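What might that look like in practice? The sketch below is a deliberately simplified, hypothetical illustration: the model call is wrapped so that every decision is recorded together with its input context, and a fallback is triggered in real time when behaviour leaves an expected confidence envelope. The function names, the toy stand-in model, and the threshold are all invented for illustration.

import time

def fallback(features):
    # Hypothetical safe default, e.g. defer to a human reviewer or a conservative rule.
    return -1

def observe_and_serve(predict_proba, features, min_confidence=0.6):
    # Run one inference, record it with its input context, and intervene on deviation.
    probabilities = predict_proba(features)
    confidence = max(probabilities)
    decision = probabilities.index(confidence)

    # Contextualised record: what the model did, on which input, with what certainty.
    record = {
        "timestamp": time.time(),
        "input": features,
        "decision": decision,
        "confidence": confidence,
    }
    print("audit:", record)  # stand-in for a real telemetry pipeline

    # Real-time intervention: fall back before a low-confidence decision causes harm.
    if confidence < min_confidence:
        return fallback(features)
    return decision

def toy_predict_proba(features):
    # Hypothetical stand-in for a deployed model's probability output.
    score = min(max(sum(features) / len(features), 0.0), 1.0)
    return [1.0 - score, score]

print(observe_and_serve(toy_predict_proba, [0.9, 0.8]))  # confident: decision served
print(observe_and_serve(toy_predict_proba, [0.5, 0.6]))  # shaky: fallback triggered

The specifics will differ in any real deployment, but the principle holds: the point of observation and control sits inside the inference path, not at the perimeter.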

This approach brings several advantages. It links outputs to inputs and environmental triggers, providing the context required to explain not just what a model did but why. It enables real-time response, allowing interventions before small deviations escalate and cause harm. 

Importantly, runtime security in the cloud strengthens governance frameworks and training-time safeguards. Just as financial audits are reinforced by real-time transaction monitoring, AI governance is reinforced by continuous oversight in production. Together, these measures create a more complete and resilient picture of assurance. 

The evolving role of the CISO 

AI is transforming industries, and security leaders, specifically Chief Information Security Officers (CISOs), now sit at the centre of the debate over AI trust. They are required to defend cloud infrastructure, meet compliance requirements, and serve as navigators of live risk, guiding their organisations through an environment where AI models shape everything from customer interactions to financial markets and critical infrastructure.

To succeed, CISOs must develop fluency in runtime AI and cloud workloads, learning how to observe and interpret systems that are continuously adapting, while translating this complexity into clear, actionable insights for boards and regulators. They must ensure that security safeguards do not slow innovation but rather enable it, because AI is not only a potential source of risk but also one of the greatest engines of growth. In this way, CISOs are evolving from gatekeepers into guides, steering AI and cloud innovation safely forward. 

From alarm to assurance 

Too often, the debate about AI security drifts into alarmist territory. A runtime-first, cloud-aware approach reframes the conversation in more constructive terms by focusing on what is happening inside systems right now. It equips leaders with the clarity to understand risks, the context to explain them, and the tools to address them before they escalate. It also offers reassurance to customers, employees, and regulators that AI systems in the cloud are not opaque black boxes, but secure, observable, and controllable entities. 

Trust is not a one-time certification. It is earned continuously, moment by moment, decision by decision. By embedding security at runtime in the cloud, organisations can demonstrate that their AI is not only well-trained but well-defended and well-behaved. 

Towards a more resilient AI future 

The next phase of AI adoption will be determined by who can operate AI securely and with trust at scale. That foundation will not be built exclusively in labs or through regulation, but in runtime, in the daily reality of cloud systems that learn and adapt. 

Organisations that adopt an inside-out security approach will be better positioned to innovate with confidence while managing risk responsibly, and will empower their CISOs to act as protectors of infrastructure and navigators of trust. By doing so, they will create a more resilient future in which AI enhances the bond between technology and society. 
