
Today’s so-called AI is abysmal, and despite worldwide hype there is no semblance of a technological leap on the horizon for hopeful users, tech leaders, investors, and governments. What passes for intelligence (ChatGPT and other brittle LLMs) is flat, probabilistic mimicry: token prediction that lacks grounding, causal reasoning, abstraction, and model-based intentionality (i.e., thinking). AI is also plagued by hallucinations, even in cases where the model itself could flag that it is professing falsities. Many in the industry acknowledge these truths:
- Chemero, A. (2023, November 20). LLMs differ from human cognition because they are not embodied. Nature News. https://www.nature.com/articles/s41562-023-01723-5.
- Murphy, H., & Criddle, C. (2024, May 22). Meta AI chief says large language models will not reach human intelligence. Financial Times. https://www.ft.com/content/23fab126-f1d3-4add-a457-207a25730ad9.
- Catmull, J. (2025, August 25). MIT says 95% of enterprise AI fails. Here’s what the 5% are doing right. Forbes. https://www.forbes.com/sites/jaimecatmull/2025/08/22/mit-says-95-of-enterprise-ai-failsheres-what-the-5-are-doing-right.
- Ni, R., Xiao, D., Meng, Q., Li, X., Zheng, S., & Liang, H. (2024, December 17). Benchmarking and understanding compositional relational reasoning of LLMs. arXiv. https://arxiv.org/abs/2412.12841.
- Levy, S. (2025, October 3). Sam Altman says the GPT-5 haters got it all wrong. Wired. https://www.wired.com/story/sam-altman-says-the-gpt-5-haters-got-it-all-wrong.
- Pesco, G. (2025, September 5). Geoffrey Hinton on AI intelligence and superintelligence. Mindplex. https://magazine.mindplex.ai/post/geoffrey-hinton-on-ai-intelligence-and-superintelligence.
- Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., Newman, B., Yuan, B., Yan, B., Zhang, C., Cosgrove, C., Manning, C. D., Re, C., Acosta-Navas, D., Hudson, D. A., … Koreeda, Y. (2023, February 1). Holistic evaluation of language models. Transactions on Machine Learning Research. https://openreview.net/forum?id=iO4LZibEqW.
- Marcus, G. (2025, May 5). Why do large language models hallucinate? Marcus on AI. https://garymarcus.substack.com/p/why-do-large-language-models-hallucinate.
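To make the token-prediction criticism above concrete, here is a deliberately tiny sketch (my illustration, not any production system): an autoregressive generator that emits each next word purely from the conditional frequencies of a toy corpus. The surface output is fluent-looking, but there is no grounding or world model behind it, only counted co-occurrence.

```python
import random

# Toy "language model": for each word, record which words followed it
# in a tiny corpus. Real LLMs use transformers over vast vocabularies,
# but the generation loop has this same shape: predict, append, repeat.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, n_tokens, seed=0):
    """Autoregressive generation: each next word is sampled from the
    distribution of words that followed the current word in training.
    No meaning, no causality -- only conditional frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # dead end: the model has never seen a continuation
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 5))
```

Every output is locally plausible because every transition was observed, yet the model cannot say whether the cat actually ate the fish; that is the thinness of the card.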
Today’s AI systems are not intelligence but high-dimensional façades: archons of code. Like the Gnostic false god of this world, the demigod Saklas (a name that translates as “fool”), who forged a counterfeit world lacking true spirit, today’s LLMs produce dazzling linguistic surfaces without inner models of reality. Their apparent omnipotence collapses when rotated edge-on, like Joe Pesci’s courtroom card trick in the 1992 film “My Cousin Vinny,” which exposes the thinness of the prosecution’s case: token prediction versus true world modelling.
What seemed a solid block of proof reveals itself to be a thin slice unable to hold any weight. Understanding this thinness is essential because it exposes the real bottleneck: humans misreading mimicry as mind, a misreading born of severely limited visions and mindsets. Unless we correct that misperception at its foundation, we will keep producing and merely optimizing shadows instead of engineering genuine AI.
This thinness-of-the-card exists not only in AI but also in human systems. Consider one of my current protégés, Veronika Arakeliani, a 19-year-old Georgian artist, architecture student, and aspiring international entrepreneur with nine years of art practice and awards across multiple media, constrained by legacy scaffolds that suppress her trajectory: rigid institutions, fragmented attention environments, outdated pedagogies, and the self-limiting habits bred by stifling inculcation. Her predicament mirrors AI’s, immense cross-domain potential stifled by legacy code, and in that she represents humanity itself: brilliant potential trapped in obsolete code.
The prescription for this diagnosis is identical for humans and AI: prune obsolete cognitive code; train adversarial resilience to cultivate and perpetuate creativity; fuse modalities (art + architectural vision + “U English” for commanding language); compress feedback cycles to iterate at algorithmic speed; and use recursive self-play to evolve by competing against one’s own past work, just as the W.O.P.R. computer does in the 1983 film “WarGames” with Matthew Broderick. Without this schema, potential collapses into mimicry.
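The recursive self-play step can be sketched in code. The toy loop below (my illustration; the one-parameter “policy,” the hidden optimum, and all names are assumptions made for the sketch) accepts a new policy only if it beats a frozen snapshot of its own past self, which is the basic shape of self-play training.

```python
import random

def play(a, b):
    """Toy zero-sum game: the policy parameter closer to a hidden
    optimum (0.7) wins the round. Stand-in for any real contest."""
    OPTIMUM = 0.7
    return abs(a - OPTIMUM) < abs(b - OPTIMUM)

def self_play_train(rounds=200, seed=1):
    """Evolve a policy by competing against one's own past work:
    a candidate is adopted only if it defeats a frozen snapshot
    of the previous self, which then advances in turn."""
    rng = random.Random(seed)
    current = 0.0       # current policy parameter
    snapshot = current  # frozen past self to compete against
    for _ in range(rounds):
        candidate = current + rng.uniform(-0.1, 0.1)  # small mutation
        if play(candidate, snapshot):  # must beat its past self
            snapshot = current         # the past self advances too
            current = candidate
    return current

print(self_play_train())  # drifts toward the hidden optimum, 0.7
```

The point of the design is that the opponent is never static: each improvement raises the bar for the next one, producing the compounding spiral described above.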
As the greatest thinkers across space and time have known, parallels and interconnectedness run through all realms of existence, and, as I teach worldwide, recognizing and utilizing them is paramount in highly complex problem-solving. This concept is not only found in humanity’s greatest minds but is also built into the framework of existence’s greatest problem-solver: Nature. Nature exploits every instance of holism and interconnectedness in the simplest yet most powerful manner to solve otherwise intractably complex problems. This is one reason it overcomes obstacles not by fighting with brute force but by going with the atmosphere.
The same maxim appears across other arenas: Bruce Lee’s dictum “Be like water,” taught in Jeet Kune Do, the martial art he founded; water and wind carving through stone via weathering; neuroplasticity, in which outdated or undesired synaptic connections are pruned and replaced with updated, desired ones; and Parkour/Freerunning, which spares its practitioners injury by redirecting deadly forces out of their bodies during death-defying falls and jumps.
This mindset of interconnected, holistic systems playing off one another, glimpsed in the few examples above, must be adopted by all AI creators and applied to current AI training systems, so that the world can at last be presented with otherwise impossible, genuine AI.
As seen herein, humans are the bottleneck in AI evolution. Current trajectories stall because of stifling, short-sighted thought patterns, which produce limited regulation, risk-averse corporations, and drastically suffocated ideation. Such legacy cognitive models cannot see atmospheres (i.e., issues, opportunities, networks, etc.) properly, thoroughly, interconnectedly, holistically. If, instead, humans re-engineer their cognition by adopting AI’s core design principles, such as rapid feedback loops, adversarial resilience, and modular scaling, they transform thinking entirely, eroding the bottleneck and unlocking a velocity of AI advancement impossible under current human fragility.
The pathway is Unorthodoxy, and history proves that unorthodoxy drives transformation. Pablo Picasso and Georges Braque broke the visual physics of art through Cubism. Apple’s “Think Different” campaign canonized the very minds once deemed lunatics. The path forward is identical: audacious cognitive overhaul.
This schema translates into an executable tri-phase blueprint:
- Cognitive Debugging: Respectfully, treat human mindsets as corrupted code. Retrain them through adversarial-resilience practice and the pruning of outdated cognitive patterns (e.g., bias, linearity, reactive thinking). Adopt micro-learning akin to self-supervised training, along with feedback, multi-modal integration, and recursive self-play, so as to rewire cognitive schemata.
- Holistic Systems Integration: Build new AI architectures with Phase 1’s rewired human cognition, guided by biomimicry, interconnectivity, and cross-modal logic. Humans can then design novel, highly advanced AI architectures from holistic systems logic that were previously impossible.
- Recursive Co-Evolution: Human and AI training loops teach each other, generating a compounding feedback spiral that amplifies intelligence in both parties. In turn, AI keeps resetting humans’ mindsets, and thus skillsets, perpetuating the symbiosis.
This infinite bootstrapping spiral, human cognition reshaped by AI logic feeding back into AI design, becomes the true engine of generative leaps, transforming AI from a destructive job-killer, even a humanity-killer, into civilization’s co-architect. It also transforms humanity from a weak bottleneck into a systemic amplifier of AI evolution. Symbiosis is not optional; it is the technical requirement for AI’s grandest leap, and for humanity’s.
This tri-phase cognitive revamp method is already proven across many of my use-cases. One is a ~$16M revenue increase for a NASA-contractor client after instilling a psychological and operational overhaul incorporating AI, dismantling revenue-bleeding processes and human-capital stifling. It also produced a wave of staff requests for guidance on applying their new mindsets in their personal arenas. Similar results continue across clientele spanning countries, industries, and languages.
The case is clear: the leap to true AI, AGI, and GenAI mandates the grandest human cognitive revamp in history, because the AI bottleneck is not computational but cognitive. This long-sought leap will come not from silicon but from synapses. The path is the implementation of AI’s systems-logic training into a perfected, holistic human cognitive schema. And only those who embrace it will unlock AI’s ultimate generative potential, and, ideally, save humanity in the process.


