
The recently published MIT NANDA report, “The GenAI Divide: State of AI in Business 2025,” [https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf] presents a stark finding: despite billions in investment, 95% of organizations are failing to achieve a measurable impact from Generative AI. While the report correctly identifies various challenges related to GenAI, its narrow focus and dramatic conclusions risk creating a misleading narrative of widespread AI failure. The report’s “GenAI-or-nothing” perspective underemphasizes targeted, function-specific solutions and fails to acknowledge established AI strategy principles, which leads to an overly pessimistic and incomplete picture of the state of AI in business. Several related perspectives worth reading come from Michael Nuñez’s analysis and Cassie Kozyrkov’s leadership framing, both of which caution against misreading the ‘95%’ figure [VentureBeat, decision.substack.com].
The ‘GenAI Divide’ is a useful diagnostic, but because its primary success lens is structural disruption and sustained P&L lift, it risks under-weighting targeted, process-level gains. The report itself notes early, measurable savings from back-office automation, signaling incremental value that accumulates even without enterprise-wide restructuring. A fuller picture emerges when we also consider the cumulative impact of these smaller-scale wins, which the report concedes are often more successful than large, top-down initiatives.
For example:
- The MIT report’s 95% “failure” rate does not account for the thousands of organizations that are wisely not attempting to boil the ocean. Instead, they are successfully deploying smaller-scale AI solutions to solve specific problems, building internal expertise, and earning the credibility to tackle larger challenges later. These efforts, which do not result in immediate “structural disruption,” are incorrectly categorized as failures within the report’s framework.
- The report uses “GenAI” almost synonymously with “AI,” thereby ignoring the vast and successful landscape of traditional machine learning (ML). Established ML techniques are consistently used to solve critical business problems and deliver tangible ROI, as demonstrated in our own work on dynamic price optimization. That solution predicted a 9-point margin increase for a fashion retailer by optimizing markdown strategy, a clear success that the MIT report’s methodology would likely miss [Adopting a dynamic AI price optimization model to encourage retail customer engagement]; a minimal sketch of the underlying markdown-optimization idea follows this list. By focusing only on the struggles of cutting-edge, custom GenAI applications, the report creates a significant blind spot: it fails to acknowledge the ongoing, incremental, and highly profitable work being done with conventional ML in areas like demand forecasting, fraud detection, and customer segmentation.
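To make the markdown-optimization idea concrete, here is a minimal, hypothetical sketch of the general pattern: fit a conventional demand model to historical markdown data, then choose the markdown depth that maximizes expected margin. All data, names, and numbers below are synthetic and illustrative; this is not the actual model behind the retail case cited above.

```python
# Illustrative sketch only: synthetic data, hypothetical SKU, simplified economics.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic history for one SKU: markdown depth (0-60%) vs. weekly units sold.
markdown = rng.uniform(0.0, 0.6, size=500)
units = 120 * np.exp(1.8 * markdown) + rng.normal(0, 10, size=500)

# Step 1: fit a simple demand model (units sold as a function of markdown depth).
demand_model = GradientBoostingRegressor(random_state=0)
demand_model.fit(markdown.reshape(-1, 1), units)

# Step 2: evaluate candidate markdowns and pick the one with the best expected margin.
full_price, unit_cost = 40.0, 22.0          # hypothetical price and cost
candidates = np.linspace(0.0, 0.6, 13).reshape(-1, 1)
expected_units = demand_model.predict(candidates)
expected_margin = (full_price * (1 - candidates.ravel()) - unit_cost) * expected_units

best = int(np.argmax(expected_margin))
print(f"best markdown: {candidates[best, 0]:.0%}, expected margin: {expected_margin[best]:,.0f}")
```

Real deployments layer in seasonality, inventory constraints, and cross-SKU effects, but the core loop is the same: a conventional supervised model plus a straightforward optimization step, no generative AI required.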
A few other observations:
- Underestimation of Technical and Data Challenges: The report’s conclusion that the ‘learning gap’ is a more significant barrier than technical or data challenges is provocative, but it rests on the experiences of only the 52 organizations surveyed. For many large enterprises, particularly those in heavily regulated industries, legacy IT and data silos remain the most immediate and costly obstacles to AI adoption. The report’s focus on the ‘learning gap’ is a valuable contribution, but it should not overshadow the foundational importance of data modernization.
- Neglect of Security, Compliance, and Legal Barriers: The report discusses the “Shadow AI” economy as a source of innovation and frames it as a culture clash. For any CIO or CISO, however, governance is a non-negotiable risk-management imperative: if a tool cannot be used securely, the ‘learning gap’ is irrelevant. In any regulated industry (finance, healthcare, government), allowing employees to use personal AI tools with sensitive corporate data is a catastrophic security and compliance failure waiting to happen. Model risk governance (e.g., Fed SR 11-7) and the EU AI Act’s transparency and record-keeping obligations, for example, make unapproved shadow AI a governance non-starter in finance and healthcare [Federal Reserve, ISACA].
- Oversimplification of the “Build vs. Buy” Decision: The report presents a clean dichotomy: “buy” is twice as successful as “build.” The reality is far messier. My experience suggests that the most successful large-scale deployments are hybrids: organizations “buy” a foundational platform (e.g., from a major cloud provider or a vendor) and then “build” custom applications and models on top of it. The report’s binary framing, while useful for highlighting patterns, misses this crucial and most common implementation pattern and does not fully capture how AI is actually deployed in large enterprises.
- The “Black Box” Problem is Ignored: The report champions “learning-capable” systems but does not address the explainability and trust issues that come with them. A system that “learns and adapts” is also one whose decision-making process can become opaque over time. In many contexts (such as credit scoring, clinical diagnostics, or quality control), a predictable, static system is preferable to an adaptive “black box” that cannot be easily audited or explained. The report overlooks this fundamental trade-off between adaptability and transparency, an omission that is particularly striking given the report’s own finding that “A vendor we trust” is the single most important criterion for executives selecting GenAI tools.
In closing, the MIT NANDA report makes several important contributions to our understanding of AI adoption in business, particularly its identification of the ‘learning gap’ as a key barrier and its documentation of the ‘shadow AI economy.’ However, the report’s emphasis on ‘structural disruption’ as a primary success metric may inadvertently obscure the significant value being created through more targeted, incremental AI implementations. A more complete picture emerges when we consider both the report’s findings and the broader landscape of AI adoption, which includes the steady accumulation of function-specific wins that collectively drive business transformation.
The reality is that the state of AI in business is far healthier and more nuanced than the “95% failure” narrative suggests. The true “divide” is between organizations that follow a disciplined, value-focused AI strategy and those chasing grand, disruptive visions without a clear plan. Successful organizations are not necessarily building massive learning agents. Instead, they are methodically applying a range of AI tools, both traditional ML and GenAI, to solve specific, high-value problems.
Ultimately, success is not defined by enterprise-wide disruption, but by the accumulation of targeted, function-specific wins that collectively drive the business forward.


