
I’ve had conversations with over 80 CIOs in the last six months about their biggest challenges (and hopes) as they look ahead to 2026. These were not CIOs from other tech companies. These were the ones managing established, often regulated, always complex businesses from banking to insurance to manufacturing. They are the leaders who will actually decide what the future of software looks like as they integrate generative AI and agentic systems into real, mission-critical enterprise apps and workflows.
I learned some unexpected things along the way that challenge much of the prevailing wisdom from AI prognosticators. Spoiler: the “winners” in the AI race may not be tech vendors, but their customers – these same organizations I’ve been talking to, who are suddenly empowered to build most of the applications and agents they need themselves, without the overhead and compromises that come with buying commercialized software. Here are ten surprising, and perhaps controversial, predictions based on these conversations that point to where our industry may truly be headed.
-
AI will increase complexity before it reduces it
The potential for AI to accelerate and scale software development is huge. We are already seeing how vibe coding is turning what used to take weeks into minutes and even seconds. What most people aren’t seeing today is the potential for AI to help with the harder stages of the enterprise software development lifecycle. They are overindexing on the build phase, but creating bottlenecks downstream in quality control, security, maintenance and updates.
2026 will be the year IT teams turn their focus to containing and auditing ungoverned AI-generated apps and agents. Those who use AI to systematically govern their full portfolio will be the first to realize the true potential of AI-driven development.
-
Most AI agents will fail in production
Demos of autonomous AI agents are spectacular. Unfortunately, these demos crumble when they meet an enterprise production environment, and the gap will only widen as deployments scale. Why do they fail? Because real-world environments involve:
- Constantly changing APIs
- Incomplete or messy data
- Conflicting business rules
- Complex identity and permissioning models
- Non-deterministic behavior leading to unpredictable outcomes
Most autonomous agents will need tight orchestration layers and human-in-the-loop controls. In other words, they’ll need new platforms. Autonomy only works in fantasy. It’s orchestration that wins in reality.
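What an orchestration layer with human-in-the-loop controls might look like can be sketched in a few lines. This is a minimal, hypothetical illustration, not a reference to any real platform: the risk tiers, action names, and approval flow are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentAction:
    name: str
    risk: str                 # assumed risk tiers: "low" or "high"
    run: Callable[[], str]    # the agent's proposed action, deferred

@dataclass
class Orchestrator:
    """Minimal human-in-the-loop gate: low-risk actions execute
    automatically; high-risk actions queue for human approval."""
    audit_log: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def submit(self, action: AgentAction) -> str:
        if action.risk == "high":
            self.pending.append(action)          # hold for a human
            self.audit_log.append((action.name, "queued"))
            return "queued"
        result = action.run()                    # safe to auto-execute
        self.audit_log.append((action.name, "executed"))
        return result

    def approve(self, name: str) -> str:
        action = next(a for a in self.pending if a.name == name)
        self.pending.remove(action)
        self.audit_log.append((name, "approved+executed"))
        return action.run()
```

The point of the sketch is the shape, not the code: every action flows through a chokepoint that records it, and autonomy is a per-action privilege rather than a default.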
-
The enterprise winners will be platforms, not models
The days when every company was racing to build their own LLM have passed. More practical and much less expensive solutions, such as small language models (SLMs) and vertical models, are emerging.
While a handful of big LLMs will dominate the mass market for consumer AI, enterprise leaders will be able to choose among more specialized options and even develop agents that connect to more than one language model for different scenarios.
Owning the model matters less than owning the lifecycle. Platforms that enable secure, governed, multi-model agent orchestration across complex enterprises will control the value chain.
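Connecting agents to more than one model can be as simple as a routing table keyed by scenario. The registry below is purely illustrative; the task types and model names are invented for the sketch.

```python
# Hypothetical registry mapping enterprise scenarios to models.
# All names are illustrative, not real products.
MODEL_REGISTRY = {
    "contract_review": "legal-slm-v2",       # vertical small language model
    "customer_chat":   "general-llm-large",  # broad consumer-grade LLM
    "code_migration":  "code-slm-v1",        # code-specialized SLM
}

def route(task_type: str, default: str = "general-llm-large") -> str:
    """Pick a model per scenario instead of hard-wiring one LLM."""
    return MODEL_REGISTRY.get(task_type, default)
```

Because the agent depends on the routing layer rather than any single model, swapping or adding models is a configuration change, which is exactly why the platform owning this layer, not the model behind it, captures the value.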
-
AI will shift value from feature delivery to system integrity
The risks of unmanaged AI running rampant without enterprise-grade guardrails are too great to ignore:
- Hallucinations
- Policy violations
- Data leaks
- Model drift
- Incorrect workflow generation
The ability to ensure correctness at scale becomes more important than the ability to generate software. The new premium will be integrity. The market will reward platforms that can ensure AI-driven systems behave as intended, every single time. The new mantra is trust > velocity.
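"Ensuring AI-driven systems behave as intended" concretely means running generated output through policy checks before it ships. Here is a deliberately tiny sketch of that pattern; the two checks are toy examples of a gate that a real platform would implement with far deeper scanning.

```python
import re

# Illustrative policy checks; a production system would layer many more
# (schema validation, permission checks, drift monitors, etc.).
POLICY_CHECKS = [
    ("no_secrets",
     lambda text: not re.search(r"(api[_-]?key|password)\s*[:=]", text, re.I)),
    ("no_email_pii",
     lambda text: not re.search(r"[\w.]+@[\w.]+\.\w+", text)),
]

def verify(output: str) -> list[str]:
    """Return the names of violated policies; an empty list means pass."""
    return [name for name, ok in POLICY_CHECKS if not ok(output)]
```

The design choice worth noting is that verification is a separate, deterministic step: the model may be non-deterministic, but the gate in front of it is not, and that is where trust beats velocity.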
-
Shadow AI will become a bigger problem than shadow IT ever was
The fact that non-technical users can generate production code and workflows with LLMs is far more dangerous than unauthorized SaaS adoption. Unapproved apps used to be a nuisance, but unapproved models and agents are an existential risk.
Without any oversight at all, a business user with an unvetted LLM can generate production-level code, create autonomous workflows, or connect to sensitive enterprise data. This risk is insidious, viral, and incalculable.
-
CIOs will spend more on control and governance, not less
AI promises to deflate IT budgets, even after accounting for inference costs. The reality will be re-inflation, as budgets expand to cover:
- New security layers: New runtime defenses and guardrails to protect against prompt injection, data leakage, and rogue agent actions.
- New model oversight: Continuous evaluation and monitoring for performance degradation, model drift, and emergent bias.
- New compliance obligations: Adherence to emerging frameworks like the NIST AI Risk Management Framework and preparing for a new class of AI systems audits.
- New skills: A desperate scramble for talent in AI engineering and governance to manage these complex new systems.
-
Code becomes cheap → Architecture becomes expensive
For decades, the strategic challenge was writing the code to realize a given architecture. Now that AI can generate functional code, code itself loses its strategic value. Architecture, integration, data modeling, and lifecycle governance become the new strategic moat. The stack collapses upward.
Value and expense will concentrate in these layers:
- System architecture
- Data modeling
- Integration strategy
- Lifecycle governance
As AI commoditizes the lower levels of the stack, solutions and talent that design, integrate, and govern these complex, AI driven systems become the most valuable resources.
-
Agents will be used to test new business models
Agentic AI will increase pressure on business leaders to innovate before losing market share or getting disrupted by a competitor. The focus will shift from driving efficiency to reimagining business models around agentic automation and scale. Experimenting with new business models will no longer be a career make-or-break risk, as agents will make it possible to quickly execute and scale ideas that work and pull back on the ones that don’t. The winners will be the leaders who embrace agility and set bold ambitions for how agentic AI can transform their core business.
-
Regulated industries will build compliance into AI implementations ahead of government mandates
Global finance, healthcare, manufacturing and other regulated industries aren’t going to wait for a patchwork of government policies to reap the speed and scale of agentic AI. It’s in their interest to build agents that act in accordance with today’s regulations, and design them to be easily updated for evolving laws.
Regulated companies will voluntarily adopt model traceability and lineage, mandatory responsible AI audits, architectural compliance checks, and role-based access restrictions. Building agents with these guardrails embedded will let regulated companies entrust agents with higher-stakes decisions that affect people and infrastructure, while protecting against high-profile disasters and loss of public trust.
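Two of those guardrails, role-based access restrictions and traceability, combine naturally: every permission decision becomes a lineage record. The sketch below is hypothetical; the agent roles and action names are invented for illustration.

```python
# Illustrative role-based gate for agent actions.
# Roles, actions, and the in-memory trail are all assumptions for the sketch.
PERMISSIONS = {
    "claims_agent":   {"read_policy", "draft_letter"},
    "payments_agent": {"read_policy", "initiate_transfer"},
}

AUDIT_TRAIL = []  # lineage record: (agent, action, allowed)

def authorize(agent: str, action: str) -> bool:
    """Allow an action only if the agent's role grants it,
    and record every decision so behavior stays traceable."""
    allowed = action in PERMISSIONS.get(agent, set())
    AUDIT_TRAIL.append((agent, action, allowed))
    return allowed
```

Because denials are logged alongside approvals, an auditor can reconstruct not just what agents did, but what they attempted, which is the kind of evidence a responsible AI audit would ask for.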
-
Enterprise developers will be more valuable
With AI, general coding can be automated, but systemic complexity cannot. The developers who can master that complexity will become more valuable. Expect top developers to be 5X more productive. This directly counters the common and misguided fear that AI will replace them.
As top-tier talent transitions from writing boilerplate code to conducting a symphony of AI agents, they will be harder to find, increasingly leveraged, and dramatically more valuable.



