
Why Your AI Project Is Failing (And How to Fix It)

By Oliver King-Smith, Founder and CEO, smartR AI

The numbers are brutal. In August 2025, MIT published a study revealing that 95% of AI projects at large US corporations were failing to deliver meaningful value. But here’s what should really keep you awake at night: that figure is actually worse than it sounds. The MIT researchers included a small subset of projects that had performed exceptionally well in their analysis. When you remove those bright spots, the failure rate becomes even more devastating.

There’s one number, however, that offers genuine hope. When companies brought in outside help, particularly smaller vendors with specialized expertise, their success rate jumped to 67%. That’s the difference between one success out of twenty and roughly two successes out of three. It’s the difference between despair and redemption.

So why are projects failing? And more importantly, why are companies ignoring the solution that actually works?

The Seduction of Control

A few years ago, right after ChatGPT launched and captured the world’s imagination, I pitched a document retrieval system (what’s known as RAG, or Retrieval-Augmented Generation) to a large organization where we’d previously delivered a successful AI project. They had the budget. They had the motivation. But they made a decision that I’ve seen countless times since: they decided to build it internally.

Their reasoning was logical, or so they thought. “This is too core to our business,” they told us. “We need to own this technology.”

I’m confident they cobbled together something that kind of worked, probably using open-source tools like LangChain and whatever frameworks were trending on GitHub at the time. Did they get the full benefit of AI? I seriously doubt it. More likely, they got something that limped along, consuming resources and attention while delivering mediocre results.
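For readers unfamiliar with the pattern, a RAG system boils down to two steps: retrieve the documents most relevant to a question, then hand them to a language model alongside that question. The sketch below is a deliberately minimal, framework-free illustration of that shape; the bag-of-words `embed` function is a toy stand-in for a real embedding model, and the documents and stopword list are hypothetical.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "is", "a", "an", "of", "to", "on", "our", "are", "what"}

def embed(text):
    """Toy embedding: bag-of-words term counts with stopwords removed.
    A production system would use a trained embedding model instead."""
    return Counter(t for t in re.findall(r"[a-z0-9]+", text.lower())
                   if t not in STOPWORDS)

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """The 'retrieval' step: rank documents against the query, keep the top k."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents, k=2):
    """The 'augmented generation' step: splice the retrieved context into
    the prompt that would then be sent to the language model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refund amounts go back to the original payment method.",
]
print(build_prompt("What is the refund policy?", docs))
```

Even in this toy form, the hard parts show through: tokenization, similarity scoring, and prompt assembly each have failure modes, and real deployments add chunking, vector indexes, and evaluation on top. That gap between the sketch and production is exactly where inexperienced teams get stuck.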

The tragedy wasn’t unique to them. I’ve watched this pattern repeat across industries: mature companies with impressive track records in their core business, suddenly convinced they could become leaders in AI. What inspired this confidence? Largely, it was ChatGPT’s deceptive simplicity. The chatbot made AI seem almost trivial: just write a prompt, get an answer. How hard could it be?

Very hard, as it turns out.

The Hidden Complexity

Here’s what companies discover too late: ChatGPT made artificial intelligence seem seductively simple, but in practice, things get complicated fast. And complexity requires experience: real, hard-won experience that most organizations simply don’t possess.

Building effective AI systems requires a solid foundation in software engineering fundamentals. You need to understand testing. You need to know how to benchmark what you’re actually building. You need to recognize when something is broken, and here’s where AI is uniquely deceptive.

In traditional software development, the program has the decency to crash when something goes wrong. It either works or it doesn’t. Engineers might sometimes bury their heads in the sand if something works intermittently, but eventually, reality forces accountability. The program fails in obvious ways.

AI is worse. I’ve watched teams build models riddled with fundamental bugs (broken approaches, flawed architectures, garbage data) and they don’t even realize it. The system sort of works. It produces outputs. It doesn’t crash. So they keep going, making one more tweak, certain that everything will be okay. “Surely the next iteration will fix this,” they tell themselves.

It won’t.

The problem is this: with traditional software, it’s usually clear whether your code is wrong. With AI, you often can’t tell whether the model itself is broken or whether your implementation is faulty. You can’t tell whether your data is poisoned or your training approach is fundamentally flawed. That clarity, that hard-won understanding of what’s working and what’s failing, is what experience provides. And you can’t buy experience quickly. You can pay for outside experts, or you can pay your team to learn on your dime. Either way, you’re paying. The only choice is how long you want to pay and how much organizational pain you’re willing to endure during the learning process.

Bad Tools in Inexperienced Hands

Technology vendors aren’t helping; if anything, they’re making things worse.

Microsoft’s Copilot initiative is a perfect case study in misdirected ambition. The branding itself is so confused that most people don’t actually know what Copilot is or what it does. Part of it involves allowing organizations to build their own AI workflows. Microsoft initially tried to charge $108,000 per year for this capability. Then they cut the price to $360 per year, a 300-fold reduction. Now they give it away for free.

I tried building something with it once, and I told my engineering team: if you ever produced something this poorly designed, I would fire the lot of you.

Think about that pricing trajectory. Microsoft doesn’t drastically slash prices on products, and then eventually give them away, if people actually want to use them. That pricing tells you everything you need to know. The tool is bad. It was clearly built by software engineers who understood traditional software development but had little comprehension of how AI actually works. Despite being terrible, it’s now in the hands of IT departments across America (most of them with zero training in AI) who are attempting to use it to build AI workflows for their organizations.

But here’s what should really concern you: even if you somehow get it to work in Copilot, it won’t stay free. Microsoft will hike the price to at least $100,000 per year at some point. You can count on it. There’s an old joke in the software industry that explains Microsoft’s business model perfectly: they call their customers “users” because they give away free tools to get them hooked, and once you’re dependent on the platform, they squeeze you. You’ve already seen this exact pattern with Copilot’s pricing. Don’t expect it to be the last time.

Is it surprising that projects built with inadequate tools and insufficient expertise are failing? Of course not. It would be surprising if they succeeded.

The Perfect Storm

Now we can see the complete picture of why 95% of AI projects fail.

First, companies and their internal AI champions wanted to own everything. They were seduced by visions of becoming the next Elon Musk, the next visionary who would build the magic system that changed everything. This desire for control blinded them to a simple truth: they didn’t have the expertise to do it well.

Second, they reached for tools that were never designed for the work. They used platforms like Microsoft’s Copilot that were built by people who didn’t understand AI or what AI requires. They used generic software frameworks repurposed for machine learning. They used whatever was cheapest or most readily available, not what was actually appropriate.

Third, and most critically, they lacked the skill and experience to engineer good AI systems. Building production AI is different from running a successful traditional business, no matter how technologically sophisticated that business might be. The principles are different. The debugging process is different. The entire mental model of what constitutes success is different.

The result? Less than 5% of projects delivered results that justified the investment.

It’s a tragedy not just for those companies, but for AI itself. AI has genuine power and potential. But when it’s deployed poorly, when projects fail repeatedly, when billions are wasted chasing unrealistic goals with inadequate tools and inexperienced teams, AI gets a reputation for being hype rather than substance.

The Unrealistic Expectations Problem

Beyond execution failures, another reason projects collapse is simpler and more preventable: companies have wildly unrealistic expectations about what AI can actually do.

Too many leaders have listened to the hype. They’ve heard Sam Altman at OpenAI, Elon Musk, and countless other technology evangelists talk about AI as if it’s on the verge of replacing human thinking entirely. They’ve absorbed talk of AGI (Artificial General Intelligence) and the brilliant capabilities of large language models, while largely ignoring the elephant in the room: hallucinations.

AI systems confidently produce false information. They make up facts. They sound perfectly plausible while being completely wrong. But when you’re approaching AI through the lens of hype, these limitations seem like minor speed bumps. They’re not. They’re fundamental constraints that need to shape how you deploy AI technology.

This is where the concept of “Assistive Intelligence” becomes crucial. When you frame AI as a tool that helps you achieve more, rather than one that replaces what you do, you start thinking about AI fundamentally differently. You ask better questions about what AI is actually good at.

AI excels at processing vast quantities of information quickly. It’s exceptional at sifting through thousands of documents and finding patterns that would take humans weeks to locate. It’s good at asking hundreds of questions in the time it would take a human to ask one. It is genuinely powerful at narrowly defined, specific tasks where the data is clean and the objectives are clear.

But AI is terrible at the things that amateurs often expect from it: replacing human judgment, making consequential decisions independently, or serving as a substitute for human expertise. Almost every project that tries to use AI as a replacement for human workers ends in disaster. The technology simply isn’t there. We keep overselling it, and we keep being disappointed when reality doesn’t match the marketing.

The path to success is precisely the opposite: identify very narrow, specific tasks where AI can augment human capability. Spend the time to build and train your systems with that specific task in mind. Treat AI as one more check in the system, like the “Swiss cheese” approach to safety, where no single check is sufficient to catch every problem, but multiple overlapping checks catch most of them. Use AI as an extra set of eyes. Don’t use it as a replacement for the eyes you already have.
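The Swiss cheese idea can be sketched in a few lines of code. Every check name and threshold below is hypothetical; the point is the structure: several independent, individually imperfect layers, any one of which can block a bad output, with a human still at the end of the line.

```python
# Each "slice" is an independent, imperfect check on a model's output.
# Any single slice has holes; stacked together they catch most failures.

def nonempty(answer, sources):
    """Reject empty or runaway outputs (2000 is an arbitrary cap)."""
    return 0 < len(answer) < 2000

def grounded(answer, sources):
    """Crude hallucination guard: the answer must share some wording
    with the source material it claims to be based on."""
    words = set(answer.lower().split())
    return any(words & set(s.lower().split()) for s in sources)

def no_overclaiming(answer, sources):
    """Flag confident absolutes that a narrow AI system rarely earns."""
    return not any(p in answer.lower() for p in ("guaranteed", "always", "never"))

CHECKS = [nonempty, grounded, no_overclaiming]

def passes_all_layers(answer, sources):
    """Swiss-cheese gate: every layer must pass before the output ever
    reaches a human reviewer, who remains the final check."""
    return all(check(answer, sources) for check in CHECKS)

sources = ["Returns are accepted within 30 days of purchase."]
print(passes_all_layers("Returns are accepted within 30 days.", sources))  # True
print(passes_all_layers("Refunds are always guaranteed.", sources))        # False
```

The individual checks here are toys, but the design choice is the real lesson: you never ask one check, AI included, to be the whole safety system. The AI is one slice among several, not a replacement for the rest.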

Who Bears Responsibility

The blame for inflated expectations doesn’t rest entirely on the companies that tried to build AI systems in-house. The technology industry has pushed those expectations relentlessly.

Sam Altman has repeatedly overstated what AI can do. The constant drumbeat of AGI talk has created an expectation that AI systems are far more capable than they actually are. Elon Musk has promised autonomous vehicles and artificial general intelligence with a confidence that the underlying technology doesn’t yet support. And Microsoft has released half-baked tools that give the impression that AI development is simpler and more approachable than it actually is.

When leaders and technology vendors oversell capabilities, they create a market expecting miracles. When the miracles don’t materialize on schedule, disappointment is inevitable. Budgets expand. Timelines slip. Patience erodes. Projects collapse under the weight of unmet expectations that were never realistic to begin with.

At smartR AI, we had to learn what these models could and couldn’t do when large language models burst onto the scene with ChatGPT. But we had an advantage: we’d already been working with GPT-2 and other language models for some time. We had built experience with their limitations and quirks. We understood hallucinations because we’d encountered them. We knew about token limits and context windows and the various ways that language models could confidently produce gibberish because we’d already seen it happen dozens of times.

In-house teams building AI for the first time? They’re learning all of this from scratch. Results take longer. Budgets expand. Disappointment builds. And by the time they’ve learned what they need to know, the project is already over budget, behind schedule, and at risk of cancellation.

The Path to Redemption

Here’s the good news: it’s not too late. That MIT study showing a 67% success rate when organizations bring in outside expertise isn’t just a statistic; it’s a roadmap.

But there’s a specific kind of outside expertise that works. The study found that smaller vendors with specialized expertise were significantly more effective than the giant consulting houses like McKinsey or Accenture. Why? Because those large firms are learning on your nickel, and they’re expensive learners. Their advantage is their scale and their brand, not necessarily their depth of expertise in machine learning. Smaller, independent teams with years of hands-on AI experience? They’ve already paid the price of learning. They know what works and what doesn’t. They recognize failure modes you haven’t even thought of yet. They can move fast because they’re not rediscovering things from first principles.

Bringing in expertise from outside isn’t a failure. It’s an acknowledgment of reality. In the engineering and software development communities, there’s a powerful cultural bias against asking for help. Admitting you need assistance can feel like admitting you’re not smart enough or experienced enough. That bias is costing you billions in failed projects.

The simple “hack” of asking for help gets you to success 67% of the time. That doesn’t guarantee every project will succeed, but it’s immeasurably better than failing 19 out of 20 times. The math is overwhelmingly compelling.

Your Next Move

If you have an in-house AI project that seems destined to become one of those 19 failures, you still have a window to change course. It’s not too late. You need to swallow your pride and ask for help.

Start by bringing in a small team of genuine AI experts: people who’ve been building production machine learning systems for years, not consultants who’ve been reading about AI in the news. Ask them to audit your project. Have them assess your approach, your tools, your data, your architecture. Most importantly, ask them whether you’re solving the right problem in the right way.

Then, make a decision: do you try to build this capability in-house with expert guidance, or do you partner more deeply with outside specialists? Both paths can work. What doesn’t work is pretending you can build production AI systems without expertise, using tools that weren’t designed for the job, with expectations that are disconnected from reality.

The executives and engineering managers reading this have built successful products and businesses. You know how to deliver results. But AI is different enough that your existing playbook isn’t sufficient. Recognizing that difference isn’t weakness. It’s the mark of a leader who understands the boundary between confidence and arrogance.

The question isn’t whether you can afford to bring in help. The question is whether you can afford not to.

Author’s Bio:

Oliver holds a PhD in Mathematics from UC Berkeley and an executive MBA from Stanford, and is an innovator with expertise in Data Visualization, Statistics, Machine Vision, Robotics, and AI. As a serial entrepreneur, he has founded three companies and contributed to two successful exits. At his latest company, smartR AI, Oliver King-Smith spearheads innovative patent applications harnessing AI for societal impact, including advancements in health tracking, support for vulnerable populations, and resource optimization. Throughout his career, Oliver has been dedicated to developing cutting-edge technology to address challenges, and today smartR AI is committed to providing safe AI programs for manufacturing, MHI and SCM, and resource optimization, within your own secure and private ecosystems.

LinkedIn profile: https://www.linkedin.com/in/oliverkingsmith/
Email: [email protected]
