
Why AI Initiatives Fail More Often Than They Should

By Danny Sandwell, Senior Solutions Manager at Quest Software

Organizations are moving fast on AI. From automating operations to generating real-time insights, leaders are under pressure to launch high-impact initiatives quickly. But many projects stall or fail—sometimes before they even reach production.

According to McKinsey, just 22% of companies using AI have seen it scale across business units, and Gartner predicts that by 2026, 60% of organizations will have abandoned half their AI projects because they lack the data foundation to support them.

There’s no single cause. But a common—and fixable—one is poor data structure. AI projects depend on data that’s well organized, well understood, consistent, and well maintained. Without it, models don’t learn correctly, predictions go sideways, and teams spend more time fixing errors than generating value.

This is where data modelling makes the difference.

What Is Data Modelling—and What Are the Misconceptions That Undermine It?

Data modelling defines how data is organized, related, and governed. It creates a shared structure that helps both humans and machines make sense of complex information.
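To make that concrete, even a few lines of schema code can capture all three concerns—organization, relationships, and governance. Here is a minimal sketch in Python; the entities, fields, and rules are hypothetical illustrations, not drawn from any specific product:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Customer:
    """A person or company that places orders (the business definition lives with the structure)."""
    customer_id: int  # primary key
    name: str
    region: str       # governed value: must match the shared region glossary

@dataclass
class Order:
    """A single purchase event."""
    order_id: int     # primary key
    customer_id: int  # relationship: foreign key -> Customer.customer_id
    total: float
    placed_on: date

# The relationship is explicit: every Order points at exactly one Customer,
# so both humans and downstream AI pipelines can join the two reliably.
order = Order(order_id=1, customer_id=42, total=99.50, placed_on=date(2024, 1, 15))
```

The point isn’t the syntax—it’s that definitions, keys, and rules are written down in one reviewable place rather than living in spreadsheets or individual heads.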

However, modelling often gets deprioritized. It’s not that organizations skip it entirely—it’s that it happens informally, inconsistently, or in isolation. Definitions are buried in spreadsheets. Business rules live in individual heads. Documentation is incomplete or outdated.

Why?

  • “We’ll clean it later.”
    Teams underestimate how messy data becomes mid-project, or how hard it is to reverse bad structure.
  • “AI will figure it out.”
    Even the best models struggle when inputs are mislabelled, duplicated, or undefined.
  • “We don’t have time.”
    Data modelling is seen as a delay—when in reality, it shortens delivery by reducing rework and confusion, while providing a common and clear understanding of what success looks like.

These misconceptions persist because modelling is still perceived as too technical, cumbersome, or academic. But modern tools make it more visual, collaborative, and iterative—especially for AI-focused teams.

What Happens When the Foundation Is Weak?

The costs of loose or inconsistent data modelling aren’t hypothetical. They show up in real, measurable ways:

  • Poor data quality.
    According to Gartner, bad data costs organizations an average of $12.9 million per year. AI amplifies these issues by making predictions at scale—wrong ones.
  • Delayed launches.
    Teams spend weeks untangling source conflicts, fixing schema mismatches, and re-aligning logic because they lacked a clear structure at the start.
  • Lower adoption and trust.
    Business users don’t trust AI insights when results contradict their expectations or can’t be explained.
  • Regulatory exposure.
    With frameworks like the EU AI Act and stricter data privacy laws, knowing where data comes from, and how it’s used, is no longer optional.

These failures often get chalked up to model design or lack of AI expertise. But in many cases, the real issue is upstream: unclear definitions, inconsistent metadata, and weak data lineage.

Strong Modelling Doesn’t Slow You Down—It Speeds You Up

Good data modelling isn’t about bureaucracy. It’s about reducing risk and increasing clarity.

Here’s what it looks like in practice:

  • Clear business definitions.
    A shared glossary helps align data owners, developers, and analysts around the same terms.
  • Well-documented relationships.
    When the system knows how data entities connect, AI can more easily detect patterns and context.
  • Visual tools, not just code.
    Modern modelling platforms let teams collaborate across departments and roles without needing to deep-dive into schemas or scripts.
  • A natural collaboration point for technical and non-technical stakeholders.
    Data modelling creates a shared language that bridges IT and business. It gives business leaders visibility into how data is structured, while giving technical teams clarity on business intent.
  • Reusable logic.
    When models are built cleanly, they can be extended across projects, saving time on future use cases.
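A shared glossary, the first item above, needn’t be elaborate. One lightweight way to start is a versioned structure that both business and technical teams can review; the terms, owners, and consumers below are hypothetical examples:

```python
# A shared glossary as data: each term records its agreed definition,
# a business owner, and which downstream assets depend on it.
glossary = {
    "active_customer": {
        "definition": "A customer with at least one order in the last 90 days.",
        "owner": "Sales Operations",
        "used_by": ["churn_model", "quarterly_report"],
    },
    "order_total": {
        "definition": "Sum of line-item prices after discounts, before tax.",
        "owner": "Finance",
        "used_by": ["revenue_forecast"],
    },
}

def lookup(term: str) -> str:
    """Return the agreed definition, so every team resolves a term the same way."""
    return glossary[term]["definition"]
```

Because the glossary is machine-readable, the same definitions that align people can also feed documentation, validation checks, and model feature descriptions.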

For AI teams, this means less time cleaning data, more time tuning models, and faster paths to real outcomes.

What Is the Modelling Advantage That Most Teams Are Missing Out On?

There’s another angle most teams overlook: explainability. As AI becomes more embedded in business decisions, organizations need to show not just what the model did—but why.

Data modelling supports this by:

  • Clarifying the lineage of inputs
  • Providing transparency into business rules
  • Making assumptions and definitions visible and reviewable
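All three of these can be captured in a lightweight lineage record: where a field came from, what transformed it, and which business rule applied. A sketch in Python, with illustrative (not real) field and source names:

```python
# A lineage record for one model input. Everything a reviewer needs is
# written down: source, transformation steps, the governing business rule.
lineage = {
    "field": "monthly_revenue",
    "source": "billing_db.invoices",
    "transformations": [
        "filter: status == 'paid'",
        "aggregate: sum(amount) grouped by month",
    ],
    "business_rule": "Refunds are subtracted in the month they are issued.",
    "last_reviewed": "2024-06-01",
}

def explain(record: dict) -> str:
    """Produce the human-readable trail a business user or auditor could review."""
    steps = " -> ".join(record["transformations"])
    return f"{record['field']} from {record['source']}: {steps}"
```

In practice, a data catalog or modelling platform would maintain records like this automatically—the value is that the trail exists and can be shown on demand.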

This is especially critical as regulators push for greater AI accountability. In this environment, a well-modelled dataset isn’t just helpful—it’s a compliance safeguard.

Don’t Let a Messy Structure Undermine a Smart Strategy

AI can’t fix bad data. It makes bad data drive bad insights, faster.

If your organization is struggling to scale AI, pause and ask: Do we know where our data comes from, how it’s structured, and what it means? If not, data modelling is the fastest way to get there.

You don’t need a new platform or massive overhaul. Start with one initiative. Rebuild the model. Align the definitions. Document the relationships. The rest of the AI pipeline will get easier from there.

The smartest organizations aren’t the ones with the most models. They’re the ones with the strongest foundations.
