
The release of ChatGPT back in November 2022 marked a turning point in how the world perceives AI. Before ChatGPT, AI was a technology reserved for academics, research labs and tech geeks. Now, everyone can use it.
This is incredible for accessibility, but it has also given rise to a strange new phenomenon. Suddenly, everyone is an “AI expert”, and yet many of these experts have never actually built an AI model, have no idea how these systems really work, and know very little about data science.
I’ve worked in engineering and data science for over a decade now, and in that time have seen AI move through multiple hype cycles. Today the barrier to entry is so low that the line between a power user and a practitioner has blurred. While this enables rapid prototyping, it creates a dangerous expertise gap that threatens the stability of enterprise systems.
Low barrier to entry?
The allure of Generative AI lies in its natural language interface. Because we can talk to these models, there is a prevailing myth that we understand them. This has led to a flood of “prompt engineers” who believe that a clever series of instructions is the same as a technical implementation. Being a power user of an LLM does not translate to running a production-grade AI program.
In reality, prompt engineering is just the tip of the iceberg in the modern tech stack. According to research by Gartner, most AI projects fail because they lack the underlying data architecture. A clever prompt will not fix a broken data pipeline or a lack of governance. Companies cannot hire for the magic of the output while ignoring how the behind-the-scenes tech works.
Writing prompts is only 10% of the battle
This focus on the output is why many organisations are hiring for the wrong 10% of the work. True AI implementation is roughly 10% prompting and 90% “boring” engineering. That includes the heavy lifting of data cleaning, MLOps and infrastructure management, and extends to building vector databases, managing latency and monitoring cost-efficiency.
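The latency and cost monitoring mentioned above is the kind of unglamorous engineering that prompts alone never cover. As a minimal sketch, a wrapper around a model call can record how long each request takes and roughly what it costs. The prices, the characters-per-token ratio and the `fake_llm` stand-in below are all illustrative assumptions, not any real provider's figures or API.

```python
import time
from functools import wraps

# Hypothetical per-1K-token prices for illustration only;
# real pricing varies by provider and model.
PRICE_PER_1K_INPUT = 0.001
PRICE_PER_1K_OUTPUT = 0.002

def track_call(fn):
    """Wrap an LLM call to record latency and a rough cost estimate."""
    @wraps(fn)
    def wrapper(prompt, **kwargs):
        start = time.perf_counter()
        response_text = fn(prompt, **kwargs)
        latency = time.perf_counter() - start
        # Crude token estimate: ~4 characters per token for English text.
        in_tokens = len(prompt) / 4
        out_tokens = len(response_text) / 4
        cost = (in_tokens * PRICE_PER_1K_INPUT
                + out_tokens * PRICE_PER_1K_OUTPUT) / 1000
        print(f"latency={latency:.2f}s est_cost=${cost:.6f}")
        return response_text
    return wrapper

@track_call
def fake_llm(prompt):
    # Stand-in for a real model call.
    return "stubbed model response"

fake_llm("Summarise the quarterly figures.")
```

In production this telemetry would go to a metrics system rather than stdout, but even this much gives you the per-request numbers that a cost budget or latency SLO has to be built on.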
If a company focuses only on the “easy” 10%, it will often find itself stuck without having reached production. You may have the prompt figured out, but you will not have the pipeline. This lack of basic infrastructure leads directly to the next issue: a total absence of data science fundamentals.
Because the entry point is so simple, the “new” experts are frequently skipping the skills that prevent technical disaster. There’s a lack of understanding regarding bias, variance, and data distribution shifts. Without these basics, practitioners cannot identify when a model is hallucinating or when its performance is degrading.
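To make the distribution-shift point concrete, here is a deliberately simple sketch of drift detection: flag an alert when the mean of live feature values drifts too many standard errors away from the training mean. This is an illustrative toy check, not a production method; real monitoring uses richer tests such as Kolmogorov–Smirnov or the population stability index.

```python
import statistics

def mean_shift_alert(train_values, live_values, threshold=3.0):
    """Return True when the live mean drifts more than `threshold`
    standard errors from the training mean (a toy drift check)."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    std_err = sigma / (len(live_values) ** 0.5)
    z = abs(statistics.mean(live_values) - mu) / std_err
    return z > threshold

# A feature that was ~10.0 at training time...
train = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
stable = [10.0, 10.2, 9.9, 10.1]   # live data still looks like training
shifted = [13.0, 12.8, 13.2, 12.9] # live data has drifted upward

print(mean_shift_alert(train, stable))   # False: no alert
print(mean_shift_alert(train, shifted))  # True: distribution shift
```

Without even this level of statistical literacy, a team has no way of noticing that the world its model was trained on has quietly changed underneath it.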
When you don’t understand the math behind the magic, you cannot troubleshoot effectively. For instance, many new users struggle to explain why a model produces a specific result. This lack of explainability is a massive red flag for highly regulated industries. If you can’t audit the logic, you shouldn’t be using the output for critical decisions. The same blind spots also create significant security risks.
Users who don’t understand the underlying architecture often overlook vulnerabilities like prompt injection or data leakage. They may inadvertently feed proprietary company data into public models without proper masking. This creates a surface area for attacks that traditional IT departments aren’t always prepared to handle.
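The masking step mentioned above can be sketched in a few lines: scrub obvious identifiers from text before it ever leaves the company boundary. The regex patterns below are illustrative assumptions covering only emails and US-style phone numbers; real masking needs far broader PII coverage (names, account numbers, addresses) and human review.

```python
import re

# Hypothetical minimal patterns for a pre-send redaction pass.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text):
    """Replace obvious PII with placeholder tokens before the text
    is sent to an external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@acme.com or 555-123-4567 about the merger."))
# Contact [EMAIL] or [PHONE] about the merger.
```

Even a thin layer like this forces the right question: what is allowed to leave the building, and who checked? Teams that skip it are effectively pasting proprietary data into someone else's training pipeline.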
This rush to implement “quick-fix” AI solutions leaves behind a mess of unmaintainable code. Without robust MLOps practices, these systems become fragile and prone to failure. Tools are being built on top of other tools, with no one understanding the core dependencies. These systems are expensive for organisations to maintain, and even more expensive to fix when they collapse.
A return to proper engineering
The industry needs to bring its focus back to principled engineering. We must stop hiring for the ability to write a creative instruction and start hiring for the ability to build resilient systems. This means valuing the “data geeks” who understand how to structure a database or optimise a query. The goal is a workforce that understands both the creative potential of GenAI and the math that powers it.
Education also needs to catch up with the pace of the boom. We need training programs that emphasise statistical literacy alongside tool-specific skills. It is not enough to know how to use the latest version of a model. Core users must understand the principles of machine learning that remain constant even as specific tools change.
The Generative AI boom is a gift to productivity, but only if handled with care and deep technical respect. We cannot afford to let the “easy” nature of these tools lull us into a false sense of competence. The most successful companies will be those that pair the speed of GenAI with the discipline of traditional engineering. They will hire the people who aren’t afraid of the “boring” 90% of the work.
Companies need to look closer when they encounter “AI experts” during the hiring process. You don’t need a prompt wizard; you need a problem solver who actually understands LLMs and can deliver value on complex challenges. I’m not saying that non-technical people shouldn’t use Generative AI. But when it comes to building and implementation, companies need to refocus on data science basics and move away from the hype of easy-to-use tools, so that their systems are built to last and bring value to the organisation.


