
Most conversations about AI in the enterprise start with the technology. Jisu Dasgupta starts with the data.
As a Senior Director at a $6 billion global enterprise and a veteran of Accenture, IBM, and HCLTech, Dasgupta has spent over a decade doing the unglamorous work that separates AI initiatives that deliver from those that collapse under their own ambition. His resume includes multi-million-dollar transformation programs across Fortune 500 organizations, a career built on converting fragmented IT environments into unified, intelligent platforms, and a front-row seat to nearly every mistake large enterprises make when they rush AI to market before the foundations are ready.
His view is direct: organizations are overinvesting in tools and underinvesting in the data, governance, and cultural groundwork those tools depend on. Until that changes, he argues, AI will do nothing more than automate the noise.
In this interview, Dasgupta covers the distinction between automation and augmentation, what it actually takes to make predictive incident resolution work, why bias remains the most overlooked risk in enterprise AI deployments, and where the end-user experience is headed over the next five years.
You often describe AI in the end-user space as a shift from automation to augmentation. What does that distinction mean in practical enterprise terms, and how should leaders rethink their AI strategies accordingly?
Automation is focused on removing humans from the loop, whether that means triaging a ticket or fulfilling a request autonomously. It is efficient, and it serves a clear purpose. Augmentation, however, operates on a different level entirely. It brings the human back to the center, giving people capabilities they never had before.
In IT service management and operations, that shift looks like this: rather than waiting for a user to submit a ticket before anything gets resolved, you give users an intelligent experience that understands their context, anticipates their needs, and guides them to a resolution. The ticket submission happens in the background. That is what it means to augment and put the human in control, while operating at a much higher level of effectiveness.
For leaders, this needs to become a guiding mindset. It means designing AI strategy around amplifying the user experience, not just driving efficiency. It means measuring success by KPIs such as user confidence and adoption, not just cost reduction. Organizations that earn their employees’ trust also win in adoption, and that trust is one of the most underrated competitive advantages a company can have.
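To ground the "ticket in the background" idea, here is a minimal Python sketch of an augmentation flow of the kind Dasgupta describes. It is an illustration only, not a ServiceNow integration or his implementation: the context fields, the toy detector, and the ticket call are hypothetical stand-ins for whatever endpoint telemetry and ITSM APIs an organization already runs.

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class UserContext:
    # Hypothetical signals the assistant already has before any ticket exists.
    user_id: str
    role: str                                   # "developer", "business", ...
    recent_errors: list = field(default_factory=list)

def detect_issue(ctx: UserContext):
    # Toy detector: surface the most recent error signal, if any.
    return ctx.recent_errors[-1] if ctx.recent_errors else None

def create_ticket_in_background(issue: str, user_id: str) -> str:
    # Stand-in for an ITSM API call; the user never fills in a form.
    ticket_id = f"INC{uuid.uuid4().hex[:8].upper()}"
    print(f"[background] {ticket_id} opened for {user_id}: {issue}")
    return ticket_id

def assist(ctx: UserContext, knowledge_base: dict) -> str:
    """Augmentation flow: anticipate the issue, guide the user, log the ticket silently."""
    issue = detect_issue(ctx)
    if issue is None:
        return "No issues detected."
    guidance = knowledge_base.get(issue, "Escalating this to a human engineer.")
    ticket_id = create_ticket_in_background(issue, ctx.user_id)
    return f"{guidance} (tracked as {ticket_id})"

if __name__ == "__main__":
    kb = {"vpn_timeout": "Your VPN profile is outdated; a new one has been pushed. Reconnect to apply it."}
    print(assist(UserContext("u123", "developer", ["vpn_timeout"]), kb))
```

The shape is the point: the user gets guidance first, the record-keeping happens without them, and the same flow scales from a toy dictionary lookup to retrieval over a governed knowledge base.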
From your experience leading ITSM and ServiceNow transformations, where are enterprises overinvesting in AI-driven automation, and where are they underprepared?
The honest reality is that most organizations are not ready. But the fear of missing out and the need to answer to shareholders create a race to implement before foundations are in place. Often, AI adoption is on an organization's roadmap simply because the board expects it to be part of the strategy.
What this approach consistently misses is the groundwork: an up-to-date knowledge base, content governance, and clean data.
Having led multiple ITSM and ServiceNow transformations, I have seen the same pattern repeat itself. AI virtual agents and agentic solutions get built on top of information that is either non-existent or poorly structured. The result is not improvement; it is the same wrong, inadequate responses, now delivered faster and at scale.
Investment tends to flow toward shiny new platforms and tools that can demonstrate quick wins, and to some degree that logic holds. But the underspending happens where it matters most: the data, the organizational knowledge, and the consolidation of information that these platforms actually run on. Those foundations are treated as a parallel workstream rather than a prerequisite. Until organizations change that, AI will do nothing more than automate the noise.
Predictive incident resolution is becoming a major focus area. What architectural and data foundations must be in place before predictive models can actually deliver meaningful operational value?
Predictive incident resolution looks impressive on a pitch deck, and enterprises are being sold on the outcome without nearly enough scrutiny of the foundations it requires.
Before any predictive model delivers real value, three things need to be in place.
First, clean, consistent, and historically accurate data, by which I mean quality, not just volume. If the last five years of data contain uncorrected categorizations, broken configuration item (CI) relationships, and vague resolution notes, you are training a model on noise. The output will reflect exactly that.
Second, a mature and operational configuration management database (CMDB). If it is stale, inaccurate, or incomplete, the model is not predicting anything; it is simply guessing.
Third, integrated observability. Monitoring alerts and logs need to speak the same language as your ITSM platform. Without that alignment, you are working with disconnected signals.
Underpinning all of this is data governance discipline: clear ownership, agreed definitions, and quality thresholds that are actually enforced. If the historical record is messy, incomplete, or siloed, you do not get predictive resolution. You get automated misdiagnosis.
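As a rough illustration of the kind of quality gate this implies, the sketch below scores a historical incident export before any training run. The column names and thresholds are assumptions for the example, not a standard; what matters, as Dasgupta argues, is that the thresholds are explicit and actually enforced.

```python
import pandas as pd

# Hypothetical column names for a historical incident export; adjust to your ITSM schema.
REQUIRED = ["category", "configuration_item", "resolution_notes"]

def quality_report(incidents: pd.DataFrame) -> dict:
    """Score the gaps that quietly poison a predictive model."""
    missing = [c for c in REQUIRED if c not in incidents.columns]
    if missing:
        raise ValueError(f"Export is missing expected columns: {missing}")
    notes_len = incidents["resolution_notes"].fillna("").str.len()
    return {
        "rows": len(incidents),
        "missing_category_pct": incidents["category"].isna().mean() * 100,
        "missing_ci_pct": incidents["configuration_item"].isna().mean() * 100,
        # Notes under ~20 characters are usually "fixed" or "done" and carry no training signal.
        "thin_resolution_notes_pct": (notes_len < 20).mean() * 100,
    }

def passes_gate(report: dict, max_missing_pct: float = 5.0, max_thin_notes_pct: float = 15.0) -> bool:
    """Enforce the agreed thresholds before a training run, not after."""
    return (
        report["missing_category_pct"] <= max_missing_pct
        and report["missing_ci_pct"] <= max_missing_pct
        and report["thin_resolution_notes_pct"] <= max_thin_notes_pct
    )

# Example: rep = quality_report(pd.read_csv("incident_export.csv")); print(rep, passes_gate(rep))
```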
Intelligent virtual agents are widely adopted, but user satisfaction varies. What separates AI assistants that truly enhance the end-user experience from those that simply deflect tickets?
Virtual agents, when built correctly, do a good job and their responses serve the purpose. But user satisfaction is about something more than accuracy. It is about the experience, and specifically, personalization.
A well-designed virtual agent does not give every user the same response. It offers different styles of communication, all pointing to the same answer, shaped around who is asking. That is what true personalization looks like. The response a developer receives should be more technical than the one a business user gets, because their experience, language, and approach to the problem are different. When an agent starts operating that way, it stops feeling like a bot or a tool and starts feeling like a natural colleague. That shift drives trust, and trust drives adoption.
The real measure of a virtual agent is not response accuracy in isolation. It is intent resolution: did the user get what they needed, how quickly, how accurately, and what was the path that got them there? That is the metric that actually reflects whether the experience is working.
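Intent resolution is also measurable. The sketch below computes it over virtual-agent session logs; the fields are hypothetical and would map onto whatever the platform actually records about each conversation.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    # Hypothetical fields from a virtual-agent session log.
    intent: str
    resolved: bool              # the user's need was actually met, not just "a response was delivered"
    turns: int                  # how many exchanges it took
    escalated: bool             # handed off to a human agent
    seconds_to_resolution: float

def intent_resolution_metrics(sessions: list[Conversation]) -> dict:
    """Did users get what they needed, how quickly, and by what path?"""
    if not sessions:
        return {}
    resolved = [s for s in sessions if s.resolved and not s.escalated]
    return {
        "intent_resolution_rate": len(resolved) / len(sessions),
        "avg_turns_when_resolved": sum(s.turns for s in resolved) / max(len(resolved), 1),
        "avg_seconds_when_resolved": sum(s.seconds_to_resolution for s in resolved) / max(len(resolved), 1),
        "escalation_rate": sum(s.escalated for s in sessions) / len(sessions),
    }
```

Tracked alongside persona-level satisfaction, a metric like this reflects the experience rather than raw deflection counts.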
When deploying AI at scale across fragmented IT environments, what governance challenges emerge, particularly around data quality, bias, and model transparency?
Fragmented IT environments are not new, and in many large enterprises, they are simply the norm: multiple redundant applications and platforms, siloed data, and inconsistent processes that have not been updated in years. When you deploy AI into that landscape, you do not fix the problems. You amplify them.
Data quality is the biggest hurdle, and fragmentation makes it worse. Inconsistent processes, different standards, and different terminology across systems: all of it becomes training material. An AI model learning from that kind of data will output noise and, in many cases, hallucinate.
Bias is the most overlooked risk. If historical data carries skewed resolution notes or inconsistent ticket details, the model will not self-correct. It will learn the skew and reproduce it at scale.
Model transparency, to me, is the equivalent of leadership accountability. In a fragmented environment, the hardest question is not how the model works. It is who owns it when the outcome is wrong. That question rarely has a clean answer, and it should, before deployment, not after.
Governance, in short, comes down to three things: a single source of truth, clear data ownership, and model decisions that are explainable to the people most affected by them.
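One way to surface the historical skew described above before a model learns it is a simple per-group profile of the ticket record. The sketch below assumes illustrative pandas column names (assignment_group, reopened, resolution_notes); real fields differ by platform, but the idea is to make the skew visible and reviewable by the data owners.

```python
import pandas as pd

def skew_report(tickets: pd.DataFrame, group_col: str = "assignment_group") -> pd.DataFrame:
    """Profile historical tickets by group to expose skew a model would silently learn."""
    notes_len = tickets["resolution_notes"].fillna("").str.len()
    grouped = tickets.assign(thin_notes=notes_len < 20).groupby(group_col)
    return pd.DataFrame({
        "tickets": grouped.size(),                        # volume per group
        "reopen_rate": grouped["reopened"].mean(),        # proxy for poor first-time resolution
        "thin_notes_rate": grouped["thin_notes"].mean(),  # near-empty resolution notes
    }).sort_values("thin_notes_rate", ascending=False)

# Example: print(skew_report(pd.read_csv("historical_tickets.csv")))
```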
You have partnered closely with C-suite leaders in large enterprises. How do you balance innovation speed with risk management and change management when introducing AI-driven automation?
My approach is grounded: clear in vision, disciplined in delivery, and packaged into a concise story.
I always start with a pilot, a controlled environment that lets me test the waters, surface risk early, and validate the approach before scaling. From there, I build an MVP that generates value quickly and, equally important, earns stakeholder trust. Getting stakeholders to accept the change journey early is not a nice-to-have. It requires keeping them part of the process from the beginning, not bringing them in once decisions have already been made.
I measure outcomes, not just outputs. And I do not preach change management. I prefer a living conversation, one that stays open and evolves with the initiative.
If employee and stakeholder satisfaction is treated as a core KPI rather than an afterthought, the initiative succeeds. That holds true for any AI or automation program, without exception.
In your work across Fortune 500 organizations, what cultural barriers most often slow down AI adoption in IT operations, and how can leaders address them?
Cultural barriers are, at their core, a human problem. The fear of displacement, the lack of trust in outputs, and a natural resistance to change are not obstacles you can communicate your way through, even when your message is right.
What I have learned the hard way is that you have to craft a way through it. I go directly to the teams and leaders involved and ask them a simple question: what do you think I should do to make this work? That is not a feedback exercise. It puts them in charge of the approach itself.
When people move from resistance to ownership, something shifts. Ownership becomes advocacy, and those same people become your change champions. That is the only sustainable path through cultural resistance I have found.
Looking ahead, how do you see AI evolving in the end-user space over the next five years, particularly in terms of proactive service delivery and human-machine collaboration?
AI is evolving exponentially, and nowhere is that acceleration more visible than in the end-user space. You can see it in the market reactions around companies like ServiceNow and Salesforce, the stock movements, the acquisitions, the urgency. I wake up every day and something meaningful has shifted.
In five years, I do not think the conversation will be about proactive or predictive anymore. It will be prescriptive: systems that continuously scan the environment, draw on historical data to anticipate what will be needed, and design a recovery path before the problem ever surfaces. You will not log in to find a list of patches to install. The system will have handled it. It will have read your emails, understood your priorities, and surfaced exactly what you need to focus on that day.
That is not a distant vision. The trajectory we are on points directly there.
The organizations that will be ahead are the ones building for that future now, not waiting for the technology to mature. As Peter Drucker put it, “The best way to predict the future is to create it.”


