
Why I’m Staying Vendor Agnostic During the AI Gold Rush

We’re in the gold rush of AI development right now, with OpenAI, Anthropic, and Google driving constant changes in cost and model quality as they jostle for market leadership. Some tech companies rushed to sign multi-year contracts so they could start shipping AI applications quickly. Others are taking a more holistic approach to AI innovation — one that doesn’t depend on a single vendor’s success.  

In this article, Planful CTO Sanjay Vyas explains why he pushed for AI vendor-agnostic application architecture, how open-source models fit into the picture, and how to avoid getting locked in with a vendor that can’t scale. 

I still remember the feeling of showing up that first day on the job and realizing I was already behind. We’d been acquired by a company that was locked into a two- or three-year commitment with Rackspace. By then, AWS was breaking away from the rest of the cloud market, cutting compute prices almost every quarter. I had to watch our competitors seize better, cheaper, faster infrastructure as I waited for our contract to expire.  

That experience is why, in my current role, I’ve decided to design our AI stack to be vendor-agnostic from day one. In a “gold rush” market like this, it will take far more than two or three years to see who really wins on execution. In the last six months alone, OpenAI has fallen from clear favorite to one of several contenders. Google has regained ground and Anthropic is shaking things up again. Open-source models are improving fast.  

Building our architecture around a single AI provider creates the same trap I saw in cloud — and even earlier in big data for companies that went deep on Hortonworks or MapR. I don’t want my team, or our customers, reliving this lock-in story when the AI leaderboard reshuffles again.  

History shows that signing one big AI deal is the fastest way to fall behind 

The common sentiment I hear from fellow CTOs about the AI market goes something like this: “Vendors were pushing us hard to commit and offered deals that seemed unbeatable, so we took our pick and now we’re going to lean in.”  

Great move on the vendor’s part. They know you’re racing to deploy something to meet board or market expectations. And they benefit from lock-in. Not you.  

OpenAI, Google, and Anthropic all work hard to secure enterprise-wide agreements because they want to learn from your usage patterns and stabilize revenue. Your goal, as a buyer, is very different. You want to stay free enough to follow the weekly LLM leaderboard changes without doing surgery on your stack each time. If you commit to one vendor while the market is still settling, you limit your ability to tap best-of-breed technology as it appears.  

History suggests the patterns we’ve seen with cloud services and big data will continue. I know companies that went “all in” on org-wide OpenAI deals, handing licenses to everyone from developers to sales. It sounded like a market-leading move in 2025. Now those same leaders, hearing the buzz about Claude Code and Claude CoWork, are regretting their vendor picks and trying to figure out how to get out of their contracts. 

Future-proof your product by keeping your model layer modular  

I think about AI as a stack with four layers: data centers at the bottom, then GPUs, then base models, and finally the applications users actually touch. Most of us aren’t in the data-center or GPU business, even if we rent a lot of both. Those layers will keep shifting as new hardware appears and hyperscalers jockey for position.   

Where we do have a choice is how tightly we couple our applications to a base model or vendor. My company decided to build its AI features on vendor-agnostic architecture so that we can swap in a model when its quality, cost, or security improves without changing the end user’s experience.  

In practice, this means:  

  • We avoid wiring business logic into one provider’s proprietary features if we can reach the same outcome in another way. 
  • We treat LLMs as components in a supply chain rather than a once-in-a-decade platform decision.  

If a new Anthropic release suddenly performs better on a key workload, or an open-source model reaches parity for a use case we care about, we can change our routing behind the scenes instead of re-platforming.  
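The “clean seam” described above can be sketched as a thin, vendor-neutral interface with routing that lives in configuration rather than in business logic. This is a minimal illustration, not our production code; the provider adapters are stubs, and the task names and model identifiers are made up for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelRoute:
    provider: str
    model: str
    call: Callable[[str], str]  # provider-specific adapter behind one signature

# Stub adapters standing in for real vendor SDK calls.
def _openai_adapter(prompt: str) -> str:
    return f"[openai] {prompt}"

def _anthropic_adapter(prompt: str) -> str:
    return f"[anthropic] {prompt}"

def _local_adapter(prompt: str) -> str:
    return f"[self-hosted] {prompt}"

# Routing is config, not code: swapping a model for a task is a
# one-line change here instead of a re-platforming project.
ROUTES: Dict[str, ModelRoute] = {
    "summarize": ModelRoute("anthropic", "claude-x", _anthropic_adapter),
    "draft": ModelRoute("openai", "gpt-x", _openai_adapter),
    "classify": ModelRoute("self-hosted", "open-model", _local_adapter),
}

def complete(task: str, prompt: str) -> str:
    """Application code calls this and never names a vendor."""
    route = ROUTES[task]
    return route.call(prompt)
```

Because every feature goes through `complete()`, moving a workload from a hosted API to an open-source model only touches the routing table, never the end user’s experience.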

Why open-source belongs in your architecture plan 

Speaking of open-source models, part of maintaining a vendor-agnostic stance is making space to evaluate them on your own GPUs and within your own environments.  

Once you start making a large volume of LLM calls, usage-based pricing can add up quickly. Running the right open-source model yourself can change that curve, especially for predictable, repeatable tasks. It also lets you keep sensitive workloads in a closed system, where you decide exactly what leaves your environment and what never does. 
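The economics here are simple break-even arithmetic: usage-based API pricing scales with volume, while self-hosting is closer to a fixed monthly bill. A rough sketch, with purely illustrative numbers (not anyone’s actual prices):

```python
# Back-of-the-envelope comparison of per-token API pricing vs. a fixed
# self-hosting cost. All figures below are illustrative assumptions.

def monthly_api_cost(tokens_per_month: float, usd_per_million_tokens: float) -> float:
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

def breakeven_tokens(fixed_monthly_usd: float, usd_per_million_tokens: float) -> float:
    """Token volume at which a fixed self-hosting bill matches API spend."""
    return fixed_monthly_usd / usd_per_million_tokens * 1_000_000

# Example: $8,000/month for GPUs plus ops vs. $4 per million API tokens
# puts break-even at 2 billion tokens per month.
tokens = breakeven_tokens(8_000, 4.0)
```

Past that volume, every additional predictable, repeatable call is cheaper in-house; below it, the API is the better deal, which is exactly why you want the freedom to route per workload.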

We’ve seen a similar pattern before with Linux and open-source databases. Early on, it made perfect sense for many companies to pay for commercial software across the board. Over time, as open-source options matured, the economics flipped for a lot of use cases. Companies kept paying for differentiated value but shifted big chunks of their stack to open-source components.  

I expect something comparable in AI. As open-source models close the gap on quality, heavy LLM users with the right scale and skills will have strong reasons to bring parts of that layer in-house.   

Should every CTO start buying GPUs and standing up clusters tomorrow? No, but the option matters: if your application is welded to a single proprietary API, you’ll watch open-source models get better and cheaper each year without being able to use them meaningfully.   

But if you’ve built clean seams between your app and your models, you can experiment, compare, and gradually route the right workloads to the right place — commercial or open-source, hosted or self-run — whenever the economics make sense, without a full rebuild. 

Checklist: 5 ways to pressure-test an AI vendor before you sign  

Our commitment to vendor-agnostic design naturally shapes how we approach buying conversations. As a fintech CTO, I evaluate AI vendors based on their ease of migration and the guardrails they’ve built for finance workflows. In finance especially, an LLM-generated error can have disastrous consequences. Here’s the short checklist I use to make sure an AI vendor won’t keep us stuck behind the rest of the market. 

1. Can they swap models without a rewrite? 

I ask how their application would change if a different LLM became the better option in terms of quality, cost, or security next year. I listen for answers that describe models as easily interchangeable. I also listen for any path to bring in open-source or self-hosted models as those improve.

2. Do they separate probabilistic from deterministic work? 

LLMs are probabilistic by design. In finance, I want probabilistic models handling language (such as understanding a user’s intent) while a deterministic engine does the math. If the LLM itself is responsible for producing numbers people trust, that’s a structural risk.
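That split can be sketched in a few lines: the probabilistic layer only turns language into a structured request, and a deterministic engine produces every number. The `fake_llm_parse` function below is a stand-in for a real, schema-constrained LLM call, and the field names are assumptions for illustration.

```python
# Probabilistic layer: language in, structured intent out.
def fake_llm_parse(question: str) -> dict:
    # A real system would call an LLM here with a constrained output schema.
    return {"metric": "variance", "actual": 118_000.0, "budget": 100_000.0}

# Deterministic layer: the same inputs always give the same answer,
# so the numbers people trust never come from the model itself.
def compute_variance(actual: float, budget: float) -> dict:
    delta = actual - budget
    return {"delta": delta, "pct": delta / budget * 100}

def answer(question: str) -> dict:
    intent = fake_llm_parse(question)  # probabilistic: interpret the ask
    return compute_variance(intent["actual"], intent["budget"])  # deterministic: do the math
```

If the model misreads the question, the user gets a wrong report, which is recoverable; it never gets a chance to invent a wrong figure inside a right-looking report.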

3. How honest are they about hallucinations? 

I’m looking for vendors that accept hallucination as inherent and explain how they constrain tasks, route critical steps through deterministic systems, and expose uncertainty when needed. A promise that it “doesn’t hallucinate” tells me they don’t understand their own tools.

4. Can they show value in real workflows? 

If they can’t map AI to concrete jobs like running and interpreting standard reports, building board materials faster, or answering ad-hoc leadership questions inside the system in an explainable way, I discount the rest. 

5. What’s the security posture for sensitive data? 

My default assumption in finance is that customers don’t want confidential data sent to external models. I look for designs where sensitive data stays in the app’s environment, calls to third-party LLMs are limited and deliberate, and there’s a credible path to move more in-house — potentially with open-source models — if the requirements change. 
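One concrete shape this design can take is a deliberate egress gate: any payload bound for an external model passes a redaction step first, while in-house models see the full context. A minimal sketch, assuming hypothetical field names for the sensitive data:

```python
# Deliberate egress gate: sensitive fields never leave our environment.
# The field names below are illustrative assumptions, not a real schema.

SENSITIVE_FIELDS = {"account_number", "ssn", "revenue_detail"}

def redact_for_external(payload: dict) -> dict:
    """Strip fields we never send to a third-party model."""
    return {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}

def prepare_llm_call(payload: dict, external: bool) -> dict:
    """External calls get the redacted view; in-house models see everything."""
    return redact_for_external(payload) if external else payload
```

The point is that sending data outside is an explicit, audited decision at one choke point, and moving a workload fully in-house later means flipping one flag rather than hunting down scattered API calls.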

Don’t ignore the early signs of lock-in 

The AI gold rush has all the same markers of the big data and cloud waves we’ve lived through: long lists of suppliers, a few emerging leaders, heavy incentives to sign multi-year, organization-wide deals, and a lot of uncertainty about who will still be on top five or ten years from now. Vendor-agnostic architecture is how I de-risk some of that uncertainty.  

There’s a simple way to tell when you’ve drifted into the wrong side of that trade. If your engineers are excited about the latest and greatest Claude Code or Opus 4.6 but you can’t switch because of a commitment to another vendor, you’re looking at early-stage symptoms of lock-in. If you’re hearing rumblings of “why can’t we try that?” take that as a reminder that the safest AI bet is on an architecture that gives you room to change your stack.  

 
