Many enterprises have bought into the buzz around AI, implementing these technologies with sky-high expectations; but most aren’t seeing any returns. According to research, around 95% of generative AI projects have failed to deliver value for their organisation.
The reality of AI is becoming increasingly clear: rubbish data breeds worse results, and quickly. And bad outputs have consequences; McKinsey reports that nearly a quarter of organisations have experienced negative impacts from generative AI inaccuracies.
The simple truth is that AI systems do not fix broken data; like tape over a leak, they only cover the problem while it worsens. They amplify bad data, and they do so at machine speed, often with little to no human oversight to catch it. For functions where decision-making centres on data, such as marketing, this risks turning AI from a silver bullet into an albatross around the neck.
Before AI came the data – and that’s where the problem lies
Despite common perception, most marketing runs on data: teams depend on it to allocate budgets, target audiences, optimise campaigns and report results. Yet many still do so with unreliable data. Research shows that a significant share of marketing data contains gaps and inconsistencies.
When marketers apply AI to inaccurate or incomplete data, all they are doing is automating errors. Without accurate insights, campaign performance suffers and teams lose trust in analytics. The end result is a slowing down of operations, with teams having to manually sense-check AI outputs because they no longer trust the numbers underneath them.
Common issues include missing fields, duplicated records, mismatched metrics, and outdated values. These problems occur across media, CRM systems, ecommerce tools, and analytics platforms.
With CMOs estimating that nearly half (45%) of the data their teams use is inaccurate, incomplete or flawed, adding AI to the problem only multiplies these mistakes. It’s not the technology that’s failing; it’s performing exactly as designed. The failures are in the inputs.
More doesn’t mean better – AI won’t help if the fundamentals aren’t in place
If an initial implementation fails, the temptation is to simply layer on more AI tools. Each new tool claims to solve a specific problem, but in practice, this approach often makes things harder.
Disconnected AI tools create new silos. Each system ingests data in its own format and applies its own assumptions, so context is lost the moment analysis moves from one tool to another, outputs fall out of sync, and Marketing Mix Modelling (MMM) becomes harder. Reporting also becomes more difficult, requiring manual reconciliation.
This fragmentation undermines one of AI’s main benefits; it should reduce complexity, not increase it. Marketers have to spend time managing tools rather than tapping into insights that inform strategy. And smaller teams with few resources struggle the most, lacking the staff or expertise to maintain custom pipelines and models.
Cleaning, unifying and democratising data
Fixing these issues starts with establishing good data hygiene practices. Rather than chasing shiny new AI tools, organisations need to ruthlessly clean, unify, and democratise their data.
Cleaning data requires accuracy and consistency to be enforced before analysis begins. Metrics must be standardised across platforms, values validated at the point of ingestion, and anomalies flagged in real time. And while automation is a key part of these processes, ownership matters too. Teams must have clear rules for what good data looks like and take accountability when it falls short.
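To make the idea concrete, here is a minimal sketch of validation at the point of ingestion. The field names, metric aliases and anomaly rules are illustrative assumptions, not a real platform's schema; a production pipeline would enforce far richer rules.

```python
# Illustrative sketch: standardise metric names, then validate a record
# before it enters analysis. Field names and rules are assumptions.

REQUIRED_FIELDS = {"campaign_id", "date", "spend", "impressions"}

# Map each platform's metric label to one canonical name.
METRIC_ALIASES = {"cost": "spend", "media_cost": "spend", "imps": "impressions"}

def standardise(record: dict) -> dict:
    """Rename platform-specific metric labels to canonical ones."""
    return {METRIC_ALIASES.get(k, k): v for k, v in record.items()}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("spend", 0) < 0:
        issues.append("negative spend")
    if record.get("impressions", 0) == 0 and record.get("spend", 0) > 0:
        issues.append("spend recorded with zero impressions")
    return issues

raw = {"campaign_id": "c-101", "date": "2025-06-01", "cost": 120.0, "imps": 0}
clean = standardise(raw)
print(validate(clean))  # flags the zero-impression anomaly
```

The point of flagging at ingestion rather than at reporting time is that a bad record is caught once, with clear ownership, instead of silently skewing every downstream model that consumes it.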
Unifying data involves breaking down silos at the source; media data, sales data, and customer data must align with shared definitions. Time zones, currencies, and attribution windows must match up. Connection is not about centralising everything into one place, but ensuring data works together across disparate systems.
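As a sketch of what "shared definitions" means in practice, the snippet below normalises records from two hypothetical platforms onto one UTC calendar date and one reporting currency. The exchange rates and record shapes are assumptions for illustration only.

```python
# Illustrative sketch: align records from different platforms onto shared
# definitions (UTC dates, a single reporting currency). Rates are assumed.
from datetime import datetime, timezone

FX_TO_GBP = {"USD": 0.79, "EUR": 0.85, "GBP": 1.0}  # assumed static rates

def unify(record: dict) -> dict:
    """Convert a platform record to a UTC date and GBP spend."""
    ts = datetime.fromisoformat(record["timestamp"])  # may carry a UTC offset
    utc_date = ts.astimezone(timezone.utc).date().isoformat()
    spend_gbp = round(record["spend"] * FX_TO_GBP[record["currency"]], 2)
    return {"date": utc_date, "spend_gbp": spend_gbp, "source": record["source"]}

rows = [
    {"source": "media", "timestamp": "2025-06-01T23:30:00-05:00",
     "spend": 100.0, "currency": "USD"},
    {"source": "ecommerce", "timestamp": "2025-06-02T06:15:00+02:00",
     "spend": 80.0, "currency": "EUR"},
]
for r in rows:
    print(unify(r))
```

Note that the two records, which look like different days in their local time zones, land on the same UTC date once unified; this is exactly the kind of mismatch that quietly corrupts cross-platform reporting when left unaligned.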
Democratising data means making insights accessible and usable by everyone who needs them. Many marketing teams have clean, connected data but still struggle to act on it. Dashboards and models that require specialist data science skills concentrate the workload on analysts, whereas user-friendly interfaces make advanced insights accessible to the whole team.
The role of agentic AI and MCPs
Effective AI must support self-service insight. Marketers should be able to explore performance, test scenarios, and understand drivers without advanced technical skills. Agentic AI systems can help here, but only if they sit on trusted data. When data is of high quality, AI can surface patterns and explain changes, giving recommendations that marketers can follow with confidence.
This is where MCPs (Model Context Protocol) come in. MCPs provide the structure that allows AI systems to retain context and apply logic across multiple steps of analysis. Instead of treating each question as a one-off request, an MCP preserves the full context of an analysis, including metrics, filters, timeframes, and assumptions, as these questions evolve. This enables the AI to deliver consistent, comparable answers as the analysis progresses.
MCPs are also a governance layer; you can think of them as APIs for AI tools. They allow specialised agents to access shared, governed data and exchange context securely. This makes it possible to automate complex workflows, such as measurement or forecasting, without forcing each tool to rebuild its own data logic or assumptions.
When the underlying data is accurate, the MCP ensures that insights remain relevant, auditable, and aligned to business definitions. Analysis becomes both fast and dependable, producing recommendations that teams can trust.
Given the right conditions, agentic AI systems could speed up business processes by between 30% and 50%, according to a recent report. However, it’s achieving those conditions that will dictate success or failure; Gartner projects that more than 40% of agentic AI projects will end up being cancelled by the end of 2027.
Clean data is the foundation stone of successful AI implementation
Successful AI implementations require discipline. Teams must prioritise data audits and standardising definitions. Adding AI tools before this ecosystem is solid is not much different to trying to build a house on shaky foundations: ineffectual and set up for collapse. And while this housekeeping work may not be glamorous, it is essential. Getting the basics right is the only way to get tangible returns out of AI investments. The lesson is clear: technology alone does not create value. Only clean, connected, and democratised data makes AI worth deploying.