
Discussions around any new product, partnership, or initiative in tech these days always seem to start with AI. Over the past two years, the rush to embrace AI has carried the industry through a familiar cycle of reactions: excitement, hype, and disappointment. As we look further into 2025, it’s important to take stock of how far we’ve come and what’s next with this fast-paced technology.
The big catalyst for this most recent “AI boom” occurred in 2022 with ChatGPT. It wasn’t necessarily a new technology, but it was a revolution in the sense that it transformed the way people could interact with AI. The basic capabilities of Large Language Models (LLMs) were developed years earlier, but in order to be consumer-friendly, the tech needed a user experience overhaul. ChatGPT succeeded in democratizing the technology with a simple chat interface that delivered almost magical results within seconds. It provided immediate value that captured the attention of the wider public before AI found significant traction in corporate spaces. Enterprises were left playing catch-up.
Generative AI is still growing and evolving and will continue to play an important role, but the conversation is pivoting from integration and use cases to how AI can most effectively be leveraged to add value and transform processes. We are already seeing the marketing shift – “GenAI” is old news and “agentic” is the buzzword of the day. Without a disciplined approach, the hype cycle will keep spinning long before ROI is realized.
Lessons Learned
A common challenge in 2024, and before, was starting with AI and then figuring out where and how to apply it (i.e., “if you have a hammer, everything starts to look like a nail”). While AI provides a target-rich environment, this approach often leads to assigning AI to use cases that are impractical, unnecessary, or too costly. In a recent survey of senior IT leaders, 32% of respondents said they had not yet seen significant ROI from AI investments or efficiency improvements post-implementation.
To achieve ROI and make an impact, tech leaders should first identify problems and pain points within the organization and then determine what AI applications, tools, or services can make those tasks faster, better, and simpler. This problem-first approach ensures alignment between the technology and the organizational goals, just like with any technology investment.
Most failed AI projects today fail because of data quality issues, infrastructure limitations, poor goal definition, or excessive costs – not necessarily because of the AI or technology itself. Common pitfalls include inadequate expectation setting and pursuing the wrong problems, or simply defining a problem poorly. There is also a tendency to go after complex challenges first. Sometimes it is better to build skills, corporate confidence, and a habit of realizing ROI by pursuing easier targets initially.
Additionally, organizations must grapple with external governance, privacy, and security demands, as well as the internal infrastructure and workflows needed to support AI intelligently. Many of these concerns stem from traditional business or IT challenges rather than AI-specific ones. As such, software and IT practices will likely see a renewed focus as AI adoption continues.
Technical Considerations
From a technical standpoint, the success of AI projects hinges on robust data pipelines, scalable infrastructure, and effective model deployment strategies.
1. Data Pipelines: Clean, labeled, and diverse datasets are fundamental for training AI systems. Inconsistent or biased data often results in poor model performance. Organizations must invest in automated data preprocessing and feature engineering pipelines to ensure high-quality inputs.
2. Scalable Infrastructure: AI applications can demand significant computational resources, particularly during training. Enterprises must adopt cloud-based or hybrid infrastructure to accommodate the scalability needs of AI systems.
3. Model Monitoring: Continuous monitoring and feedback loops are essential to ensure AI models remain accurate and unbiased over time.
4. Integration with Existing Systems: AI’s impact increases when seamlessly integrated into existing software and business workflows. APIs, microservices architectures, and middleware are critical for connecting AI functionalities with legacy systems.
5. Explainability and Ethics: As AI becomes more pervasive, tools that facilitate explainability and transparency are critical. They not only aid debugging but also support regulatory compliance and build trust with stakeholders.
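To make the first consideration concrete, a data pipeline’s quality gate can be as simple as dropping incomplete records and standardizing numeric features before training. The sketch below is illustrative only – the field names and the z-score normalization choice are assumptions, not a prescription for any particular stack:

```python
from statistics import mean, stdev

def preprocess(records, required_fields, numeric_field):
    """Minimal data-quality gate: drop incomplete records, then
    z-score normalize one numeric field. Field names are illustrative."""
    # Drop rows missing any required field (a simple completeness check).
    clean = [r for r in records if all(r.get(f) is not None for f in required_fields)]
    values = [r[numeric_field] for r in clean]
    mu, sigma = mean(values), stdev(values)
    # Standardize so downstream models see comparable scales.
    for r in clean:
        r[numeric_field] = (r[numeric_field] - mu) / sigma if sigma else 0.0
    return clean
```

In production, steps like these typically live in an automated pipeline (orchestrated and versioned) rather than ad hoc scripts, which is what makes input quality repeatable rather than accidental.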
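The monitoring consideration can likewise start small: compare a live feature’s mean against its training baseline and alert when the shift is statistically implausible. This is a deliberately simple z-test sketch, with the threshold value as an illustrative assumption; real deployments often use richer drift metrics:

```python
from statistics import mean, stdev

def drift_alert(baseline, live, threshold=3.0):
    """Flag drift when the live mean moves more than `threshold`
    standard errors from the training baseline (simple z-test;
    the default threshold is an illustrative assumption)."""
    mu, sigma = mean(baseline), stdev(baseline)
    se = sigma / (len(live) ** 0.5)  # standard error of the live sample mean
    z = abs(mean(live) - mu) / se
    return z > threshold
```

Wired into a feedback loop, a check like this is what turns “deploy and hope” into continuous assurance that a model still sees the data it was trained on.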
What Comes Next
It’s very easy to move from one technological hype cycle to another and be overwhelmed or blinded either by the hype or by the disappointment of failed initiatives. It is important to realize you do not have all the answers, and things are changing extremely rapidly. Build quickly and flexibly, and choose projects that are conceptually simple (regardless of technical barriers). Failing fast is OK. The most important thing is to learn the skills to grow, build momentum on successes, and incorporate lessons from where things didn’t work.
Start with business goals that are measurable and demonstrable – or alternatively, inarguably useful and super simple. Building the habits corporate-wide that allow you to experiment and explore, to initiate projects with defined success criteria, and to communicate success and failure objectively and transparently will pay massive dividends as the AI ecosystem continues to rapidly evolve.
Agentic AI has the potential to become a predictive, task-oriented system that understands needs and acts as a partner rather than a tool. This evolution also opens the door for organizational AI systems to interact and integrate with other technologies, creating a seamless ecosystem of interconnected AI entities. Having said that, I would still counsel simplicity and starting with the business problems and business goals that you face instead of focusing so much on “how do we leverage this new tech.” One of the surest ways to achieve underwhelming ROI is to overly focus on tools and rush to something impractical or complicated.
In the second half of 2025 and beyond, AI will also become increasingly commoditized, especially the myriad of LLMs. This makes it even more critical for professionals interacting with AI to understand its foundational capabilities, operational principles, and best practices for leveraging it alongside other technologies. Vendors, aiming to avoid releasing undifferentiated products, must focus on addressing specific customer pain points with tailored solutions.
As tech executives enter the next phase of AI deployments, they will take stock of lessons learned, recalibrate their approach, and address AI with a clearer picture of how it fits into their business. This reset will be essential if organizations hope to keep pace with the movement toward agentic (or any) AI and unlock the full potential of this transformative technology.
It is timely to remember that ChatGPT’s leap forward was a user experience leap, not fundamentally a leap in “AI tech.” While these new technological advancements and capabilities hold great promise, there is absolutely no substitute for staying close to your customers and having an intimate understanding of their pain points, motivations, and opportunities. Hugging Face CEO Clément Delangue recently expressed this perfectly, noting that models will increasingly be commoditized (and at some point, how does one even differentiate?), but “carefully focusing on user experience, you can create something that your customers will be very excited about.”
In this exciting time of AI, it is worth reminding ourselves of that eternal truth.