
Extend, Don’t Replace: What We Keep Getting Wrong About AI

By Igor Kulatov

Critical thinking about AI-generated output should be part of the curriculum in every school. Until it is, we need to keep reminding ourselves — and everyone around us — that these systems are tools for extending human capability, not substitutes for human judgment. 

I have learned to check every output from my AI conversations, because the chat sometimes behaves like a student who has heard something about the subject but performs poorly at the exam. Output quality improves with every model update, but these systems are still not conscious creatures. Even when I split agents across task types, say Gemini for retrieval and summarisation and ChatGPT for pattern detection across technical comparisons, I find that the latest versions produce more generalised answers. The audience wants answers faster, not more accurate, but trading accuracy for speed works only when you are choosing a movie to watch in the evening. If the suggestion is off, the worst you lose is an evening. The stakes change significantly when the tools are pointed at serious decisions.

What extension actually means 

The useful frame for AI in a workflow is extension. These systems extend what a human can accomplish in a given period of time: summarising, detecting patterns, flagging inconsistencies, and generating first drafts that a human then adjusts. Judgment about what to do with that output, and the decisions that follow, stay human. Once the judgment loop loses its human, you have crossed from extension into something that tends to disappoint.

Most AI deployment failures are not technical failures. They are failures of scope: organisations assign the model a role that was never part of its design. The technology itself often works exactly as built; the problems start when it is pointed at the wrong class of problem.

Analysis, yes; synthesis, no 

Current models are built on what people have already created. We are entering a period in which more and more content will be generated from the pre-existing output of other models, but at the core it is still humans doing the work. The models take existing knowledge, recombine it, surface patterns, and produce outputs that resemble reasoning, and resemble is the tricky part. They mimic understanding without ever having it. Genuinely new understanding, synthesis from nothing, is outside what they do. Ask a model to step off its training rails and it will sound confident while producing something wrong, again like the student at the exam who invents new theories to avoid failing in front of the professor.

These models appeared before human cognition was understood even at a basic level. Building something that genuinely thinks, that reasons from novel situations rather than pattern-completing, remains an open scientific question. More compute does not close that gap, because hardware is not the bottleneck; we simply do not yet have the scientific discoveries that would tell us how to build it.

The analysis-synthesis distinction is easiest to see in examples from different professions. A therapist senses what a person is not saying, detecting fragility before it becomes a crisis, reading the mood in a voice or the choice of emojis from a regular client, and knowing when it is time to act. A teacher builds a picture of a specific student over months, accounting for their gaps, their motivation, and whether a hard exam just happened or an important sports game was lost. An experienced driver handles an unfamiliar road in heavy rain mostly below the level of conscious attention, processing peripheral vision, road texture, and the behaviour of surrounding cars simultaneously. None of that is just pattern completion.

Where it genuinely works 

None of this is an argument against using AI tools (although that debate will escalate at some point, especially around legal regulation). Used within the right scope, they deliver real value. They speed up tasks that used to take hours or days: summarising large volumes of information, finding patterns, handling well-defined, repetitive tasks at speed, and flagging inconsistencies. For all of that, the technology works as designed, and the return on deploying it is solid.

But does this machine replace an experienced content creator? Consider someone billing by the hour, where one hour of work carries thirty years of accumulated judgment: drafts thrown out, instincts built through failure, knowing which idea to cut before it becomes a problem. A language model produces output in seconds, and it has thrown nothing out and developed no instincts. The hour that goes into production after thirty years of experience is categorically different, and treating the two as interchangeable is how organisations end up with results they did not expect, and not in a good way.

The instrument, not the decision-maker 

At a fintech company, I built a process to handle high-frequency trades across multiple markets simultaneously. Automation handled execution at speeds no human could match. The risk parameters, the architectural decisions about what the system was permitted to do, the oversight of edge cases: those stayed human. The automation made human judgment more powerful by giving it more to work with, not by removing it from the process.

That model is worth carrying into AI deployment. If you can identify which parts of a workflow benefit from speed and scale, you can hand those to the model. Where judgment, accountability, and context have accumulated over time, a person must stay in the loop. 
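The split described above can be sketched as a simple routing rule. This is a minimal illustration, not a real framework: the task names, the `requires_judgment` flag, and the review queue are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    # Accountability and accumulated context mean a human must decide.
    requires_judgment: bool

@dataclass
class Workflow:
    automated: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def route(self, task: Task) -> str:
        if task.requires_judgment:
            # Judgment stays human: park the task for a person to decide.
            self.review_queue.append(task.name)
            return "queued for human review"
        # Speed-and-scale work: hand it to the model.
        self.automated.append(task.name)
        return "handed to the model"

wf = Workflow()
wf.route(Task("summarise a 200-page report", requires_judgment=False))
wf.route(Task("approve trading risk limits", requires_judgment=True))
```

The point of the sketch is that the boundary is an explicit design decision, made per task, rather than something left for the model to negotiate on its own.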

The tools are genuinely useful; no one is arguing with that modern commandment. What they cannot do is understand you, and that is not a limitation that will be engineered away anytime soon.

Bio: 

Igor Kulatov is a systems architect and technical leader who builds high-performance infrastructure for financial markets and IoT. He co-developed NAGA’s trading platform and contributed to an IoT system later integrated into AWS, combining low-latency execution with resilient, scalable design.
