
The speed at which generative AI (GenAI) is changing enterprise operations is breathtaking. A few years ago, GenAI was, at best, a promising chatbot. Today, companies are rolling out agentic systems across their environments that are capable of reasoning, decision-making, and acting autonomously. Gartner predicts that by 2028, agentic AI will handle 15% of daily workplace decisions, marking a transition toward hybrid teams of humans and autonomous agents.
When it comes to IT departments, autonomous agents are already being deployed to proactively detect and resolve issues, optimise processes, and prevent disruptions, saving both time and money for overstretched IT teams. However, although the productivity benefits and cost savings are immense, GenAI can be a double-edged sword if implemented incorrectly. Because these tools make autonomous decisions, businesses need incredibly strong governance and accountability processes in place to avoid costly errors or unexpected disruptions.
More work for IT teams
Unlike traditional automation, which follows predefined rules and workflows, agentic AI can reason out the best solution to a prompt by drawing on additional context. However, this also means these systems are non-deterministic: entering the same prompt twice in a row can return very different results.
As agentic decision-making increases, businesses face a challenge in maintaining transparency and accountability. In short, while GenAI enables organisations to be far more agile and productive, that can only happen if there are robust frameworks to monitor, guide, and interpret AI actions. The responsibility for this inevitably falls on IT teams, who are now being told to deploy agentic systems that increase business velocity while simultaneously maintaining transparency and mitigating the potential risks.
A broken support model
Worse, traditional IT support models simply aren’t able to operate at the speed these changes demand. Historically, IT support has relied on ticket-based, reactive processes: a problem arises, a request is logged, and IT investigates. This process doesn’t work with agentic tools because they act continuously and often invisibly, intervening before a ticket is ever raised. As a result, if an AI system has made an error, by the time a ticket is logged and IT is made aware of the issue, the damage may already have been done.
For example, let’s suppose an AI agent decides that it needs to access some sensitive data in order to best respond to a prompt, or draws on the wrong dataset entirely. Without clear logs or monitoring, IT may not know that this has happened, or how these changes are impacting end users, SLA performance, or key business metrics. Without insight into how AI actions affect workflows, productivity, and employee experience, errors or poorly timed interventions can erode trust.
Many organisations are discovering that traditional monitoring tools, designed to track human-led processes, are insufficient for managing digital colleagues. CIOs and IT leaders must now focus on understanding adoption, usage, productivity, and risk, while ensuring autonomous actions are accountable and aligned with organisational objectives.
Turning autonomy into advantage
There is no magic bullet for the governance, risk, and compliance (GRC) challenges related to GenAI. However, an essential part of any solution is a robust Digital Employee Experience (DEX) strategy. DEX differs from other performance-tracking tools because it gives IT teams control over where and when GenAI interactions take place. For instance, by integrating policy and data controls directly into apps or workflows, IT teams can set up automated interventions that happen in real time, rather than being alerted to a problem and having to deal with the fallout later.
Moreover, by reducing cognitive and process friction (e.g. through in-app alerts and guidance), businesses can increase adoption without introducing heavy-handed policies or blanket rules that over-block or under-protect, meaning a greater return on investment for the AI tools themselves.
By adopting DEX principles, AI agents can be treated like digital colleagues: aligned to specific roles, evaluated against outcomes, and optimised based on their effect on efficiency and productivity. Integrating DEX processes ensures interventions are contextual, allowing AI to act proactively, resolve issues in the flow of work, and learn continuously from real-world feedback.
This approach transforms IT from a reactive support function to an orchestrator of a dynamic, collaborative digital workplace. Rather than simply layering automation on top of existing processes, organisations guided by DEX principles can prevent friction, personalise experiences, and ensure that autonomous systems enhance instead of disrupt work.
Redefining IT’s role in an agentic future
Agentic AI is about augmenting human capability, not replacing it. By placing DEX at the centre of AI strategy, organisations gain actionable insight into human–AI interactions, ensuring autonomous systems act transparently, responsibly, and in ways that improve work.
IT teams evolve from reactive problem-solvers into architects of a human-centred digital workplace. By designing workflows where employees and AI agents collaborate seamlessly, organisations can create a workforce that is more resilient, efficient, and empowered. Technology becomes a force multiplier, amplifying human skills and capacity.
The success of autonomous work will ultimately be measured not by the number of AI systems deployed, but by how effectively they support the people who rely on them. Organisations that integrate agentic AI with robust DEX insights will convert innovation into sustained operational and human impact, something conventional auditing or monitoring frameworks alone cannot achieve.
