
Your AI Will Fail If You Don't Fix This First: Why Foundations Matter More Than Models

By Karthik Reddy Kachana

If you’re deploying autonomous AI agents in healthcare, financial services, or any regulated industry, you’ll eventually hit the same wall we face daily: the infrastructure deadlock. The promise is seamless intelligence. The reality, for anyone operating at scale, is fragmentation, compliance gaps, and systems never designed for what you’re asking them to do.  

Most AI projects in regulated environments don't fail because of bad models. They fail because we're building adaptive, probabilistic systems and trying to run them on infrastructure built for deterministic, static transactions. It's like building a race car and putting horse-cart wheels on it.

Having spent over a decade architecting enterprise platforms spanning more than 100 applications and over 45,000 users, from sales teams, marketers, and customer service agents to order management, field service, dispatch operations, and regulatory stakeholders across three continents, I've seen firsthand where AI initiatives succeed and where they quietly fail.

The Fragmentation Problem 

Here’s where most projects go wrong. They start with the model. Which LLM? Which vector store? Which prompt strategy? These questions matter, but they’re not the first questions. 

The first question is where your data lives. 

In most enterprises, the answer is everywhere. Customer records in Salesforce. Transaction history in a custom database. Compliance logs in a data lake that no one remembers how to query. Each has its own format, its own access controls, and its own definition of complete. They don’t talk to each other. They weren’t designed to. 

I once reviewed an AI sales assistant project that promised to surface next-best actions for field reps. The model was fine. The engineering was solid. But three weeks into development, the team discovered that the data feeding the model came from six different systems with six different customer IDs. The same customer appeared six different ways. Downstream AI systems, whether recommendation engines, copilots, or agent frameworks, could not reliably reason over the data because identity resolution had never been solved upstream. The project failed not because the models were flawed, but because the foundational data architecture could not support accurate inference or governed action.

A January study found that 80% of international health buyers now treat data residency as a pass-fail criterion. If your data leaves the country, you don’t get the contract. Most enterprises can’t even tell you which countries their data passes through. 

The fix isn’t better models. It’s a data unification layer that sits upstream of every AI initiative and feeds a governed orchestration layer. Before any model is invoked, enterprises need clarity on where data lives, who owns it, how identities are reconciled, and which systems are authorized to act. In practice, models generate insights and intent, while platforms and workflows enforce policy, trigger actions, and write back to systems of record. 

You cannot train AI on data you cannot find. You cannot trust AI on data you cannot trace. The fragmentation problem isn’t technical debt. It’s architectural failure. 

The Sovereignty Wall  

Data localization laws are spreading faster than enterprises can map them. China’s PIPL. Russia’s 152-FZ. Europe’s emerging frameworks under the EU AI Act. State laws you’ve never heard of until compliance sends an email with a deadline that passed last quarter.  

A few years ago, we faced this exact problem across two major markets simultaneously. One country required all patient data to remain within its borders. Another demanded that any data leaving its jurisdiction be permanently deleted after 30 days. Our global platform, designed in an age before these laws existed, was suddenly noncompliant in two of our largest growth markets. 

The standard response most enterprises take is to split systems. A separate instance for China. A separate instance for Europe. A separate instance for everywhere else. We watched a peer company try this. Eighteen months later, they had twelve separate Salesforce orgs, no unified view of their global operations, and a data reconciliation process that required a team of fifteen people working overtime every quarter.

This satisfies regulators on paper. It creates an operational nightmare: three platforms, three data models, and three reporting tools. Global visibility vanishes. When something goes wrong, you're stitching together insights from systems never designed to share. And when a patient in one country needs support from a specialist in another, you discover that your systems can't talk to each other.

There is a superior architectural alternative. I pioneered a framework that separates global visibility from localized storage: raw data remains within national borders while aggregated, anonymized insights flow to global teams, letting the organization achieve compliance without sacrificing visibility. The approach uses field-level data redaction and in-country vaults to keep sensitive information local while operational data continues to flow.
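
A minimal sketch of the redaction step, assuming a simple allowlist policy; the field names and the policy itself are hypothetical, and a production design would add tokenization and aggregation thresholds before anything crosses a border.

def redact_for_global(record, allowed_fields):
    """Return only the fields cleared to leave the country; the rest stays in the local vault."""
    return {k: v for k, v in record.items() if k in allowed_fields}

local_record = {
    "patient_id": "P-2291",     # sensitive: never leaves the in-country vault
    "diagnosis_code": "E11.9",  # allowed in aggregate reporting
    "country": "DE",
    "visit_month": "2025-01",
}

ALLOWED_GLOBAL_FIELDS = {"diagnosis_code", "country", "visit_month"}  # hypothetical policy
print(redact_for_global(local_record, ALLOWED_GLOBAL_FIELDS))
# {'diagnosis_code': 'E11.9', 'country': 'DE', 'visit_month': '2025-01'}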

It’s not a technical limitation. It’s an architecture choice. And it’s the only way to scale across regulatory regimes without losing the ability to see what’s happening. 

The Compliance Blind Spot 

A January NIH study landed like a warning shot: 66% of US physicians are actively using AI tools, yet only 23% of health systems have the necessary agreements in place with their AI vendors.

Two-thirds of physicians are using AI. More than three-quarters of health systems lack the paperwork.

Consider a hypothetical hospital system that deploys an AI-powered clinical documentation tool to reduce physician burnout. The system records patient visits, generates summaries, and updates electronic health records. Adoption is rapid. Documentation time drops. Clinician satisfaction improves.  

Months later, during a routine legal review, the organization realizes that the AI vendor does not have a Business Associate Agreement in place. Every interaction processed by the tool now represents a compliance exposure. The hospital must notify affected patients, assess legal liability, implement remediation plans, and pause further AI deployments while governance gaps are addressed.  

The lesson isn’t that teams acted recklessly, it’s that without structured procurement, legal review, and onboarding controls, even well-intentioned AI adoption can create material regulatory, financial, and reputational risk 

Build compliance into the architecture from day one. Audit trails that follow every transaction. Access controls that travel with the data. Integrity checks that run automatically. Documentation can be backfilled. Trust cannot. 

The fix is straightforward but rarely done. Before any AI tool touches production, verify the agreements. If a vendor can’t sign a BAA, they don’t get your data. If your procurement process doesn’t check for this, fix procurement first. The technology is secondary. 
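
As an illustration, the procurement gate can be a hard check rather than a checklist item, something like the following sketch. The vendor fields and required agreement names are hypothetical.

REQUIRED_AGREEMENTS = {"BAA", "DPA"}  # hypothetical minimum: HIPAA BAA plus a data processing agreement

def can_touch_production(vendor):
    """Hard gate: a vendor missing required agreements never receives production data."""
    missing = REQUIRED_AGREEMENTS - set(vendor.get("signed_agreements", []))
    if missing:
        raise PermissionError(f"{vendor['name']} blocked: missing {sorted(missing)}")
    return True

vendor = {"name": "ScribeAI", "signed_agreements": ["DPA"]}  # hypothetical vendor record
can_touch_production(vendor)  # raises PermissionError: ScribeAI blocked: missing ['BAA']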

The Connectivity Assumption 

Most enterprises build for perfect connectivity. They assume Wi-Fi. Assume cellular. Assume data flows when and where it's needed.

Healthcare doesn’t work that way. 

Field reps work in hospital basements where signals don’t reach. Community health workers do home visits in rural areas with spotty coverage. Immunization drives happen in parking lots and school gymnasiums. Patients don’t disappear when connectivity does, but their data might. 

I watched a field service team struggle with this for years. They supported diagnostic equipment in hospitals across the Midwest. Every time they walked into a basement imaging suite, their mobile app would lose connection. They’d complete the service work, drive back to the office, and spend hours manually entering data. Equipment sat idle while they did paperwork. Patients waited longer for results. 

We partnered with platform vendors to build an offline-first architecture. Tens of thousands of records syncing locally. Full functionality without any connection. The app worked in basements, in rural clinics, anywhere. When connectivity returned, data synced automatically. The team stopped doing data entry and started doing more service calls.  
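
The core pattern is easy to sketch: every write lands in a local queue first, and a sync pass drains the queue only when connectivity exists. This is a toy in-memory version with hypothetical names; real offline-first stacks add durable on-device storage, encryption at rest, and conflict resolution.

import queue

class OfflineFirstStore:
    """Writes always succeed locally; sync drains to the server when a connection returns."""
    def __init__(self):
        self.pending = queue.Queue()  # stand-in for durable on-device storage

    def save(self, record):
        self.pending.put(record)      # never blocks on the network

    def sync(self, upload, is_online):
        while is_online() and not self.pending.empty():
            record = self.pending.get()
            try:
                upload(record)              # push to the system of record
            except ConnectionError:
                self.pending.put(record)    # keep the record and retry on the next pass
                break

store = OfflineFirstStore()
store.save({"asset": "MRI-7", "status": "serviced"})  # works in the basement
store.sync(upload=print, is_online=lambda: True)      # drains once connectivity returns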

The UK just launched a multimillion-pound project to deploy offline clinical record systems for military medics in low-connectivity environments. The private sector is years behind this curve.  

Here’s a test. Take your most critical mobile workflow. Go to the basement stairwell. Turn off wifi and cellular. Does your app still work?  

If not, you’re not ready for where your users actually are. 

The Governance Gap 

IBM’s January research found that 56% of healthcare leaders are evaluating agentic AI. Only 18% have deployed at scale. The barriers aren’t technical. They’re data privacy. Accuracy concerns. Skills shortages. Unclear return on investment. Nearly half of leaders admit they don’t fully understand these systems. 

This isn’t a skills problem. It’s a governance problem. 

I saw this play out at a company that launched twelve AI pilots in a single year. Each team chose their own model, their own evaluation criteria, their own success metrics. Six months in, no one could compare results across pilots. No one knew which investments were working. Leadership lost confidence. The entire program stalled. 

Organizations launch pilots without frameworks, without guardrails, without any way to know whether they’re working or failing. They treat AI like any other software project, and it’s killing their chances of success. 

In regulated environments, AI should never bypass enterprise control planes. Models can assist with reasoning and recommendations, but authorization, execution, auditing, and rollback must remain firmly within governed platforms. Treating AI as an autonomous actor rather than a controlled component is how organizations lose trust with regulators, users, and themselves.
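
One way to read that division of labor in code: the model only produces an intent, and a governed executor decides whether it runs, logging the decision either way. The policy, action names, and audit format here are hypothetical.

class Policy:
    """Hypothetical allowlist: which AI-originated actions the platform may execute."""
    ALLOWED = {("update_record", "crm"), ("draft_summary", "ehr")}

    def allows(self, action, target):
        return (action, target) in self.ALLOWED

def governed_execute(intent, policy, audit_log):
    """The model proposes; the platform authorizes, executes, and records."""
    decision = "executed" if policy.allows(intent["action"], intent["target"]) else "denied"
    audit_log.append({**intent, "outcome": decision})  # every intent is audited, allowed or not
    if decision == "denied":
        return decision
    return f"ran {intent['action']} on {intent['target']}"  # stand-in for the governed write-back

log = []
intent = {"action": "delete_record", "target": "ehr"}  # hypothetical model output
print(governed_execute(intent, Policy(), log))  # "denied" -- and the denial is on the audit trail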

We built an AI intake process that every proposal must pass before anyone writes a line of code. The questions are simple but mandatory, and a sketch of the gate as code follows the list:

  • What data does this need? Where does that data live today? 
  • Who owns that data? Have they signed off on its use? 
  • What regulatory requirements apply? HIPAA? GDPR? PIPL? 
  • How will we measure success? What does “good enough” look like? 
  • How will we know when it’s failing? What’s the rollback plan? 
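
A minimal version of that gate, using hypothetical field names; the point is that a proposal either answers every question or never reaches a repository.

REQUIRED_ANSWERS = [
    "data_sources", "data_owners_signed_off", "regulations",
    "success_metric", "failure_signal", "rollback_plan",
]

def intake_gate(proposal):
    """Reject any AI proposal that leaves a mandatory question unanswered."""
    missing = [q for q in REQUIRED_ANSWERS if not proposal.get(q)]
    if missing:
        raise ValueError(f"Proposal blocked; unanswered: {missing}")
    return "approved to pilot"

proposal = {"data_sources": ["salesforce"], "regulations": ["HIPAA"]}  # hypothetical proposal
intake_gate(proposal)  # raises ValueError listing the four unanswered questions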

Teams complain at first. It feels like bureaucracy. But six months in, when leadership asks which pilots are working, you have answers. When a regulator asks how you govern AI, you have documentation. When a project fails, you fail fast and move on. 

The intake process matters more than the model selection. If you can’t answer those questions, you’re not ready to pilot. 

The Path Forward 

All of these gaps are the same gap. Organizations treat AI as a model problem when it’s an infrastructure problem. They invest in algorithms while their foundations crumble. 

The fragmentation problem requires a data unification layer. The sovereignty problem requires separating global visibility from local storage. The compliance problem requires building agreements and audits into procurement. The connectivity problem requires offline-first architecture. The governance problem requires an intake process that forces hard questions before anyone writes code. 

Regulations are accelerating. State laws, national mandates, and international frameworks are layering requirements faster than most organizations can map them. The window to act is closing. 

Seventy-eight percent of health systems are engaged in AI projects, yet only half feel ready. The gap between ambition and execution has never been wider.

The organizations that win the next decade won’t be those with the smartest AI models. They’ll be those who built the foundations first. Governance before pilots. Sovereignty before regulators demanded it. Offline capability because patients don’t disappear when connectivity does. 

Your AI will fail if you don’t fix this first. The only question is how much you’ll lose while learning. 
