AI & Technology

The Strategic Power of Less: How Behavioral Science Can Unlock AI Transformation

By Devon Brunner

In my work advising CEOs and executives on AI transformation, one study keeps coming back to me. It begins, unexpectedly, with a three-year-old and a Lego bridge.

Engineer Leidy Klotz was building with his son Ezra when they hit a problem: the support towers were different heights, so the bridge wouldn’t span. Klotz instinctively reached for another block to add to the shorter tower. When he turned back, Ezra had already solved the problem by removing a block from the taller one.

Intrigued, Klotz partnered with behavioral scientist Gabrielle Adams. Their question: do adults systematically overlook subtraction as a way to make improvements, and what does that mean for how leaders architect change?

The answer, published in Nature, was unequivocal. Across eight experiments involving more than 1,500 participants, people reliably defaulted to adding when solving problems. In some trials, only 2% considered removing something – even when addition explicitly introduced extra cost. Addition isn’t inherently problematic; often it’s required.

However, because subtraction rarely surfaces as a first option, leaders routinely overlook solutions that could be simpler, faster, and far more effective. This isn’t just a tactical oversight; it’s a leadership mindset challenge. In an era of rapid AI transformation, failing to counteract the “addition default” risks overengineering systems and missing the strategic power of less.

These findings resonate deeply in organizations navigating AI-driven transformation. Leaders today aren’t just deciding what to build. They’re deciding what to stop doing, what to remove, and what to simplify. In many cases, that’s where the greatest leverage lies. Subtraction isn’t about doing less for its own sake; it’s about clearing space for better performance, clearer accountability, and more resilient systems.

The Wedding Is Over – Now Comes the Marriage

In the past few years, organizations didn’t just adopt AI; they raced into it. Leadership pressure was real: board expectations, competitive dynamics, vendor momentum. Announcements were made, contracts signed, ambitious timelines launched. The nuptials were elaborate – and, given the pressures leaders faced, an understandable response.

But weddings aren’t marriages. The playbook for traditional technology rollouts is insufficient for AI, which interacts differently with identity, expertise, and autonomy.

According to Stanford’s 2025 AI Index Report, 78% of organizations surveyed reported using AI in 2024 (up from 55% in 2023), and corporate investment in AI that year reached US$252.3 billion. Yet even among organizations reporting financial impact, most estimated the benefits at low levels – typically under 10% in cost savings and under 5% in revenue gains.

Technology alone isn’t the issue. What’s challenging is integrating AI into the lived reality of work – a frontier where professional identity, autonomy, and accountability all shift. To understand the investment-to-impact disconnect, organizations need better assessments and adaptive strategies. They need deep behavioral-science expertise, more nuanced leadership approaches, and stronger communication environments that empower and enable workers rather than overwhelm them.

Change fails when people feel like it is happening to them, not with them.

So now comes the marriage, and marriages require different work. To do that work effectively, leaders need to understand the behavioral dynamics that shape how people actually experience AI-driven change.

A Behavioral Science Case for Rethinking AI Transformation

When behavioral scientists study how people respond to AI, consistent patterns emerge – not as “soft” issues to manage, but as predictable dynamics that determine whether adoption succeeds. Understanding them requires following the psychological journey employees actually experience. AI transformation is, in many ways, identity transformation, not just workflow change.

The Identity Question

The journey typically begins with identity. For knowledge workers, professional expertise isn’t just a skill set – it’s a source of meaning, status, and self-worth. AI systems can signal that hard-won skills are losing value, triggering what researchers call AI identity threat: the fear that one’s professional role is under siege. This isn’t resistance to change per se, but protection of self. Leaders who position AI as augmenting rather than replacing expertise help mitigate this threat and sustain engagement.

The Autonomy Question

If the first reaction is identity-based, the second concerns control. Self-determination theory underscores autonomy as a core psychological need – not a preference, but a powerful predictor of motivation. When AI tools are mandated without meaningful input, employees often push back even when the tools would benefit them. The resistance isn’t to the technology; it’s to the loss of agency.

Giving employees early autonomy over AI decisions – including the ability to override system recommendations – increases both motivation and learning. Involving teams early in decisions about workflows, use cases, and guardrails transforms passive compliance into active ownership.

The Trust Question

With identity and autonomy addressed, employees face a subtler challenge: calibrating appropriate trust. People tend toward extremes – either over-trusting AI systems and overlooking errors, or under-trusting them and forgoing efficiency gains. Neither serves the organization well.

Effective adoption requires helping teams develop nuanced judgment: understanding when AI is reliable, when human judgment should override it, and how to navigate the boundary between the two. This calibration doesn’t happen automatically. It must be designed.

The Accountability Question

Trust calibration leads directly to questions of responsibility. AI-supported decisions introduce uncertainty about who owns outcomes. Practitioners report significant ambiguity about who is accountable when AI systems contribute to decisions – ambiguity that creates friction and impedes adoption.

Without clear decision rights, employees may hesitate to rely on AI recommendations or avoid using the tools altogether. Establishing explicit governance around AI-assisted decisions isn’t bureaucratic overhead; it’s the infrastructure that makes confident adoption possible.

The Social Proof Question

Even when individual concerns are addressed, adoption remains a social phenomenon. Behavioral research consistently finds that social norms and peer behavior shape uptake more powerfully than directives from above. Employees watch what respected colleagues do, not just what leadership says. Visible early wins, credible peer champions, and safe spaces for experimentation accelerate adoption more effectively than formal mandates.

The Safety Question

Underlying all of these dynamics is a fundamental question that employees rarely voice directly: Is it safe to struggle with this? To admit I don’t understand? To say this isn’t working? Unspoken questions – Who knows what? Will AI replace me? What do I need to learn to stay relevant? – can undermine even well-designed transformations.

AI adoption can erode employees’ sense of psychological safety, which in turn harms mental health. Ethical leadership – characterized by fairness, integrity, and genuine concern for employees – significantly buffers this effect. When people don’t feel safe voicing concerns or asking questions, the human costs mount.

These challenges aren’t obstacles to “engineer around.” They are features of how people navigate change. AI transformation succeeds when leaders recognize that behavioral dynamics are as consequential as the technology itself.

These behavioral patterns also help explain why organizations so often reach for the wrong solutions when AI adoption stalls.

The Addition Default – And What It Costs Us

Here’s where the behavioral science becomes directly practical.

With AI transformation, organizations instinctively lean toward addition: adding onto legacy systems, adding more training, adding more communications, adding more tools to support the tools. Sometimes that’s exactly right. But because addition is our cognitive default, we risk narrowing our solution space before fully exploring it.

AI adoption today resembles the dynamic Klotz and Adams uncovered: we default to addition – the big launch, the new tools, the new commitments. But sustainable change, like a sustainable marriage, depends on subtraction: removing friction, pruning commitments, clarifying roles, reducing ambiguity and noise. If weddings are additive, marriages are subtractive – and the same applies to organizations.

The addition default obscures essential questions: Are we architecting on top of workflows never designed for AI? Are we layering tools instead of coordinating them? What might we streamline or reorganize to achieve better outcomes?

One life sciences organization I worked with had deployed AI for technical documentation. In practice, the review burden exceeded the original authorship effort – and the risks of misrepresentation in a clinical context outweighed any efficiency gains. The right answer wasn’t better AI. It was no AI.

The research offers a striking lesson. When experimenters added a simple cue – “removing pieces is free” – the likelihood of discovering simpler solutions increased dramatically. In organizational terms, leaders may need only to legitimize subtraction as an option.

Before we add another AI initiative, what could we remove, reorganize, or simplify instead?

Practical Questions for Executives

Leaders navigating AI transformation might find these diagnostic questions useful:

Identity & Autonomy: How does AI intersect with identity, autonomy, and professional meaning for people in our organization? Where might unaddressed identity threats be inhibiting engagement?

Organizational Fit: Do our incentives, metrics, and workflows support the behaviors we’re asking for? Are there structural contradictions that make adoption harder than we acknowledge?

Adoption Reality: Where is AI adoption genuinely occurring versus performative compliance? What does behavioral usage data reveal about how people are working with these tools – not deployment metrics, but actual workflow integration?

Addition Default: When we encounter obstacles, is our instinct to add? What might we reorganize, simplify, or coordinate instead? Where does AI actually create value in our workflows? Where does it introduce new cognitive or operational costs we haven’t accounted for?

These questions shift attention from “What else should we deploy?” to “How do we create conditions where what we’ve deployed can succeed?”

AI Stewardship

Organizations moved quickly into AI. The pressures were real, and the decisions made sense. The question now is whether we are willing to do the different work that comes next – building sustainable adoption by aligning technical, business, and behavioral dimensions.

Stewardship requires more than capability training. It requires strategic governance, behavioral design, and systems that evolve as people gain confidence, skills, and clarity.

Stewardship means architecting systems that learn, not just systems that run.

AI’s real value emerges when organizations align technology with identity, autonomy, and meaning. The organizations that thrive won’t be those that add the most. They’ll be those that architect most intentionally – those that ask what to subtract and how to orchestrate, not just what to add.

We planned the wedding. Now it’s time to invest in the marriage.

© 2025 Brunner Ventures, LLC. All rights reserved.

About the Author

Devon Brunner advises boards, CEOs, and senior leaders on AI transformation – helping leaders architect change that accounts for how people actually experience it. Devon brings a grounded, relational approach to the work.

Follow Devon Brunner on LinkedIn
