
A senior executive glances over the financial district skyline. The quarterly risk report, generated in under a minute by an agentic AI, appears on her screen: “No material threats detected.” She is one click away from forwarding it to the board. She hesitates. Something feels off, though nothing tangible supports the feeling. Her throat tightens. She looks at the document again and asks the system to show its training cut-off. Then she sees it: the discrepancy is subtle but fatal. She cancels her dinner date and works late into the night to override the AI. A major failure has been averted. The algorithm was right by design, but wrong by consequence.
This is not about intuition as spirituality. It is embodied critical thinking. As AI takes over cognitive labor across every industry, the combination of experiencing and thinking is one of the crucial human capabilities for preserving control over AI.
The cost of overreliance on AI
Overreliance on large language models (LLMs), such as ChatGPT, reduces critical thinking and decision-making by encouraging unverified acceptance of AI outputs.1 Studies show that dependence on AI explanations increases agreement with incorrect results, especially during difficult tasks.2 A large survey revealed a strong negative correlation between AI use and critical thinking, with younger users most affected due to cognitive offloading.3 Moreover, excessive AI use promotes cognitive shortcuts that impair analytical reasoning and decision-making.4 Experimental evidence further shows that ChatGPT-assisted writing reduces brain connectivity, memory retention, and task ownership.5
AI should not be viewed as the source of fault; the underlying issue is a deficiency in human oversight and engagement. The brain, like any organ, follows a use-it-or-lose-it logic.6 Delegating pattern recognition, anomaly detection, and ethical weighting to algorithms, whatever the efficiency gains, leaves human minds without this mental exercise. Humans, though still concerned with outcomes, increasingly defer observation and judgment to AI, reducing themselves to passive approvers of algorithmic decisions rather than active evaluators.
The atrophy of judgment: Three examples
In healthcare, AI serves as both a vital asset and a potential risk. Hospitals increasingly deploy AI for imaging, diagnostics, and clinical decision support, among other applications. Yet a decline in certain clinical skills is beginning to surface. A recent study found that continuous exposure to AI-assisted polyp detection during colonoscopy significantly reduced the adenoma detection rate in subsequent non-AI-assisted procedures, suggesting a deskilling effect on endoscopists’ performance.7 Other research suggests that, unless physicians deliberately intervene, the growing delegation of decision-making to AI in healthcare might hollow out clinical expertise.8
In the consumer health-tech market, independent research and litigation have raised questions about the accuracy of wrist-based sensors used by Apple and Fitbit for heart-rate and blood-oxygen monitoring. In December 2022, a proposed class-action lawsuit alleged that the Apple Watch’s blood-oxygen sensor was racially biased, producing less accurate readings for people with darker skin tones. The complaint claimed Apple failed to disclose this limitation. The case was dismissed with prejudice in August 2023.9 Fitbit has faced similar scrutiny. A 2016 class-action lawsuit accused Fitbit of overstating the accuracy of its heart-rate monitors, particularly during vigorous exercise.10
The insurance industry has also not been immune to the risks of AI-driven decision-making. Increasingly, algorithms are taking the lead in critical functions such as underwriting, claims processing, and risk assessment.
One particularly troubling case involves an AI system that determined when to end coverage for extended care among elderly patients. The algorithm reportedly produced a high rate of errors, leading to patients being discharged from care facilities prematurely, even when physicians deemed continued care medically necessary. The resulting lawsuit, still underway, highlights a key accountability question: the insurer shifts responsibility to healthcare providers, arguing that they retain final authority over patient discharge decisions. In response, the American Medical Association has underscored that insurers must ensure human review of patient records before denying coverage, warning that algorithmic shortcuts should never override clinical judgment.11
While AI offers advantages in speed and consistency over human decision-making, its ultimate purpose must remain the enhancement of human well-being. This example illustrates that algorithmic decisions can have significant downstream effects across sectors. Premature termination of coverage, for instance, can result in patient relapses, subsequent hospital readmissions, and higher overall healthcare costs. Such outcomes raise important questions about the balance between operational efficiency and long-term societal impact, highlighting the need for careful oversight and human judgment in AI-driven processes.
AI systems have also failed in the energy sector. During the Texas winter storm in February 2021, electricity prices surged to the legal cap of US$9,000 per megawatt-hour and remained there for several days. AI-driven pricing and demand-response systems, including automated smart thermostats and demand-optimization algorithms, were unable to adapt, leading to significant financial losses and unnecessary power shutdowns. Griddy Energy, a retail electricity provider, eventually filed for bankruptcy and agreed to forgive $29 million in customer debt. The company’s automated pricing and billing system continued charging customers at cap-level prices because its algorithms were not designed to suspend operations during grid emergencies.12
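The missing safeguard is easy to state in code. Below is a minimal sketch, assuming a single wholesale price feed and an emergency flag from the grid operator; the names and structure are illustrative assumptions, not Griddy’s actual system. It shows the kind of guard clause that would have suspended automated billing and forced a human back into the loop:

```python
from dataclasses import dataclass

PRICE_CAP = 9_000.0  # regulatory price cap, US$ per megawatt-hour

@dataclass
class GridStatus:
    wholesale_price: float    # current spot price, US$/MWh (assumed feed)
    emergency_declared: bool  # whether the grid operator has declared an emergency

def compute_customer_rate(status: GridStatus) -> float | None:
    """Return the rate to bill customers, or None to suspend automated billing.

    The guard clause is the point: during a declared grid emergency, the
    algorithm steps aside and escalates to human operators instead of passing
    cap-level wholesale prices straight through to customers.
    """
    if status.emergency_declared or status.wholesale_price >= PRICE_CAP:
        return None  # suspend automated billing; escalate to a human operator
    return status.wholesale_price  # normal pass-through pricing

# A system without this guard kept billing at the cap for days in February 2021.
storm = GridStatus(wholesale_price=9_000.0, emergency_declared=True)
assert compute_customer_rate(storm) is None  # human review required
```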
Rethinking human thinking: Experience as the foundation of sound decision-making
The way we think about AI is not just a technical issue. It is about how people and organizations make decisions and stay responsible in a world that is changing fast. When employees engage consciously and critically with AI, they make better choices, prevent harm, and save costs.
Critical thinking is more than a set of teachable skills. It is tied to the human individual, who gains meaning through sensing, reasoning, and reflecting. Too much trust in AI, or too little, leads to suboptimal human performance.
The goal is not to compete with AI, but to question and interpret its output. Just as we do not inspect a car’s engine until it makes strange noises, we often ignore AI until it gives a wrong answer. The real challenge is to stay alert before that happens.
Since the era of Descartes (“I think, therefore I am”), human identity has been closely tied to cognitive ability. Today, as AI assumes many cognitive functions, there is a risk that our reasoning skills may diminish. This underscores the importance for organizations to prioritize the human dimensions of intelligence: the body, the senses, and lived experience. Our bodies often respond to situations before we consciously process them.13
Knowledge resides not only in the mind but also in our experiences and the memories they create. We first encounter events, then interpret them, and finally determine how to act.
Evaluating AI outputs effectively requires human experience. Experience enables employees to discern accuracy, raise critical questions, and exercise creativity. It guides decisions, helping us distinguish between what aligns with organizational goals and what does not.
Experience manifests in two forms. Short-term experience involves direct engagement with real-world situations. Long-term experience, often referred to as wisdom, develops over time through repeated exposure and reflection.
Employees with deeper familiarity with practical conditions are better positioned to leverage AI to enhance outcomes. Intuition, the internal signal that something is amiss, emerges from this accumulated experience and insight.
Leveraging human judgment across key industry sectors: the three cases outlined earlier
Intuition in medicine is a validated component of clinical decision-making, not merely anecdotal. Research shows that general practitioners’ intuitive judgments can independently predict serious illness, even in the absence of objective clinical signs.14 These rapid assessments are informed by professional experience and tacit knowledge, often triggered by subtle patient cues. Acting as an early warning mechanism, intuitive insights prompt timely investigation or referral, complementing analytical reasoning. Integrating intuition into clinical practice enhances diagnostic accuracy and supports patient safety.
Consumers also carry a degree of responsibility. Consider a user preparing for a morning run who straps on a smartwatch and notices an elevated heart rate. Rather than relying solely on the device, the user observes that the readings appear inconsistent during sprints. By reflecting on personal sensations, such as a racing heart or shortness of breath, the user questions the accuracy of the data.
On the developer side, this feedback informs algorithm improvements and encourages transparent communication about device limitations. Through this combination of self-awareness and critical evaluation, both user and developer reduce the risk of misleading health information, contributing to better health outcomes and minimizing potential legal liability.
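Even a modest plausibility check can turn silent sensor error into transparent uncertainty. The sketch below is hypothetical, not any vendor’s firmware; the one-sample-per-second assumption and the threshold are illustrative only. It flags heart-rate jumps the body cannot plausibly produce, so the app can tell the user a reading may be an artifact:

```python
def flag_implausible_readings(bpm_series: list[int],
                              max_delta_per_sec: int = 10) -> list[int]:
    """Return indices of heart-rate samples that jump implausibly fast.

    Assumes one sample per second; max_delta_per_sec is an illustrative
    threshold, not a clinically validated one.
    """
    flagged = []
    for i in range(1, len(bpm_series)):
        if abs(bpm_series[i] - bpm_series[i - 1]) > max_delta_per_sec:
            flagged.append(i)
    return flagged

# Example: a sprint during which the optical sensor briefly loses skin contact.
readings = [142, 145, 148, 96, 150, 153]    # the 96 is a likely artifact
print(flag_implausible_readings(readings))  # [3, 4]
```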
A critically engaged insurance company employee can serve as a crucial safeguard against flawed AI-driven decisions. When reviewing claims for extended care among elderly patients, the employee pays close attention both to the data presented by the algorithm and to the human context behind each case. By trusting their professional intuition and reflecting on patient needs, they may notice patterns or anomalies the AI overlooks. By combining human judgment with AI insights, the employee ensures that coverage decisions are balanced, fair, and medically appropriate. In this model, accountability remains with the human decision-maker, not the system, preventing errors that could otherwise lead to premature discharges or costly litigation.
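In workflow terms, that accountability can be encoded as a routing rule that makes the human reviewer, not the algorithm, the final authority on adverse decisions. The following sketch is a hypothetical illustration; the class, field names, and threshold are assumptions, not any insurer’s actual system:

```python
from dataclasses import dataclass

@dataclass
class CoverageRecommendation:
    patient_id: str
    action: str        # e.g., "continue" or "terminate"
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def route_decision(rec: CoverageRecommendation,
                   physician_objects: bool,
                   review_threshold: float = 0.9) -> str:
    """Decide whether an AI coverage recommendation may proceed automatically.

    Every termination of care, every physician objection, and every
    low-confidence output is escalated; the AI alone never ends coverage.
    """
    if rec.action == "terminate":
        return "human_review"  # adverse decisions always get human eyes
    if physician_objects:
        return "human_review"  # clinical judgment overrides the algorithm
    if rec.confidence < review_threshold:
        return "human_review"  # the model itself is unsure
    return "auto_approve"      # routine continuations may proceed

# Example: the AI wants to end coverage, but the treating physician disagrees.
rec = CoverageRecommendation("patient-042", "terminate", 0.97)
assert route_decision(rec, physician_objects=True) == "human_review"
```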
In the energy sector, a mindful operator, attuned to the broader system and to the real-world impact on customers, might have noticed anomalies that the AI could not interpret. Sensing unease as temperatures plummeted and homes cooled to unsafe levels, the operator could have activated a pre-planned emergency override.
These cases demonstrate the value of embedding critical human judgment, informed by embodied awareness, into AI-driven processes. By doing so, organizations can prevent errors, protect people, and uphold integrity, while also ensuring financial resilience across industries.
Humans and AI: A new relationship
The growth of AI is not just a technological change. It is an existential shift in how we work and lead. As autonomous AI systems enter business operations, the difference between success and failure will not be computing power. It will be human presence: the ability to pause, question, and reason. Professionals who can tolerate uncertainty, notice when something feels off, and trace that feeling back to a weak assumption will become premium assets in any organization.
The future is not humans versus AI; it is humans working with machines. In terms of raw cognitive processing power, we are already outpaced. As AI systems become increasingly agentic, they are no longer passive tools. They require human creativity and novel input to function effectively and accurately.15 The objective is not to resist AI, but to engage with it intelligently. This requires understanding both how AI operates and how humans think, feel, and make decisions. Across any business, clarity of purpose and alignment with strategic goals must remain firmly in human hands.
In a world that values efficiency and certainty, the most strategic path to better insight and more responsible decisions is the moment when the human steps back into the process: “Reculer pour mieux sauter”, retreating in order to advance.


