
The Strategic Power of Less: How Behavioral Science Can Unlock AI Transformation

By Devon Brunner

In my work advising CEOs and executives on AI transformation, one study keeps coming back to me. It begins, unexpectedly, with a three-year-old and a Lego bridge.  

Engineer Leidy Klotz was building with his son Ezra when they hit a problem: the support towers were different heights, so the bridge wouldn’t span. Klotz instinctively reached for another block to add to the shorter tower. When he turned back, Ezra had already solved the problem—by removing a block from the taller one.  

Intrigued, Klotz partnered with behavioral scientist Gabrielle Adams. Their question: do adults systematically overlook subtraction as a way to make improvements—and what does that mean for how leaders architect change?  

The answer, published in Nature, was unequivocal. Across eight experiments involving more than 1,500 participants, people reliably defaulted to adding when solving problems. In some trials, only 2% considered removing something—even when addition explicitly introduced extra cost. Addition isn’t inherently problematic—often it’s required.  

However, because subtraction rarely surfaces as a first option, leaders routinely overlook solutions that could be simpler, faster, and far more effective. This isn’t just a tactical oversight; it’s a leadership mindset challenge. In an era of rapid AI transformation, failing to counteract the “addition default” risks overengineering systems and missing the strategic power of less.  

These findings resonate deeply in organizations navigating AI-driven transformation. Leaders today aren’t just deciding what to build. They’re deciding what to stop doing, what to remove, and what to simplify. In many cases, that’s where the greatest leverage lies. Subtraction isn’t about doing less for its own sake; it’s about clearing space for better performance, clearer accountability, and more resilient systems.  

The Wedding Is Over – Now Comes the Marriage 

In the past few years, organizations didn’t just adopt AI—they raced into it. Leadership pressure was real: board expectations, competitive dynamics, vendor momentum. Announcements were made, contracts signed, ambitious timelines launched. The nuptials were elaborate, and, given the pressures leaders faced, understandably so.

But weddings aren’t marriages. The playbook for traditional technology rollouts is insufficient for AI, which interacts differently with identity, expertise, and autonomy.  

According to Stanford’s 2025 AI Index Report, 78% of organizations surveyed reported using AI in 2024 (up from 55% in 2023), and corporate investment in AI that year reached US$252.3 billion. Yet, even among organizations reporting financial impact, most estimated the benefits at low levels—typically under 10% in cost savings and under 5% in revenue gains.

Technology alone isn’t the issue. What’s challenging is integrating AI into the lived reality of work—a frontier where professional identity, autonomy, and accountability all shift. To understand the investment-to-impact disconnect, organizations need better assessments and adaptive strategies. They need deep behavioral-science expertise, more nuanced leadership approaches, and stronger communication environments that empower and enable workers rather than overwhelm them.  

Change fails when people feel like it is happening to them, not with them.

So, now comes the marriage, and marriages require different work. To do that work effectively, leaders need to understand the behavioral dynamics that shape how people actually experience AI-driven change.

A Behavioral Science Case for Rethinking AI Transformation 

When behavioral scientists study how people respond to AI, consistent patterns emerge—not as “soft” issues to manage, but as predictable dynamics that determine whether adoption succeeds. Understanding them requires following the psychological journey employees actually experience. AI transformation is, in many ways, identity transformation, not just workflow change.  

The Identity Question  

The journey typically begins with identity. For knowledge workers, professional expertise isn’t just a skill set—it’s a source of meaning, status, and self-worth. AI systems can signal that hard-won skills are losing value, triggering what researchers call AI identity threat: the fear that one’s professional role is under siege. This isn’t resistance to change per se, but protection of self. Leaders who position AI as augmenting rather than replacing expertise help mitigate this threat and sustain engagement. 

The Autonomy Question  

If the first reaction is identity-based, the second concerns control. Self-determination theory underscores autonomy as a core psychological need—not a preference, but a powerful predictor of motivation. When AI tools are mandated without meaningful input, employees often push back even when the tools would benefit them. The resistance isn’t to the technology; it’s to the loss of agency.

Giving employees early autonomy over AI decisions—including the ability to override system recommendations—increases both motivation and learning. Involving teams early in decisions about workflows, use cases, and guardrails transforms passive compliance into active ownership.  

The Trust Question  

With identity and autonomy addressed, employees face a subtler challenge: calibrating appropriate trust. People tend toward extremes—either over-trusting AI systems and overlooking errors, or under-trusting them and forgoing efficiency gains. Neither serves the organization well.

Effective adoption requires helping teams develop nuanced judgment: understanding when AI is reliable, when human judgment should override it, and how to navigate the boundary between the two. This calibration doesn’t happen automatically. It must be designed.  

The Accountability Question  

Trust calibration leads directly to questions of responsibility. AI-supported decisions introduce uncertainty about who owns outcomes. Practitioners report significant ambiguity about who is accountable when AI systems contribute to decisions—ambiguity that creates friction and impedes adoption.

Without clear decision rights, employees may hesitate to rely on AI recommendations or avoid using the tools altogether. Establishing explicit governance around AI-assisted decisions isn’t bureaucratic overhead; it’s the infrastructure that makes confident adoption possible.

The Social Proof Question  

Even when individual concerns are addressed, adoption remains a social phenomenon. Behavioral research consistently finds that social norms and peer behavior shape uptake more powerfully than directives from above. Employees watch what respected colleagues do, not just what leadership says. Visible early wins, credible peer champions, and safe spaces for experimentation accelerate adoption more effectively than formal mandates.  

The Safety Question  

Underlying all of these dynamics is a fundamental question that employees rarely voice directly: Is it safe to struggle with this? To admit I don’t understand? To say this isn’t working? Unspoken questions—Who knows what? Will AI replace me? What do I need to learn to stay relevant?—can undermine even well-designed transformations.

AI adoption can erode employees’ sense of psychological safety, which in turn harms mental health. Ethical leadership—characterized by fairness, integrity, and genuine concern for employees—significantly buffers this effect. When people don’t feel safe voicing concerns or asking questions, the human costs mount.  

These challenges aren’t obstacles to “engineer around.” They are features of how people navigate change. AI transformation succeeds when leaders recognize that behavioral dynamics are as consequential as the technology itself.  

These behavioral patterns also help explain why organizations so often reach for the wrong solutions when AI adoption stalls.

The Addition Default — And What It Costs Us  

Here’s where the behavioral science becomes directly practical.  

With AI transformation, organizations instinctively lean toward addition: adding onto legacy systems, adding more training, adding more communications, adding more tools to support the tools. Sometimes that’s exactly right. But because addition is our cognitive default, we risk narrowing our solution space before fully exploring it.  

AI adoption today resembles the dynamic Klotz and Adams uncovered: we default to addition—the big launch, the new tools, the new commitments. But sustainable change, like a sustainable marriage, requires subtraction: removing friction, pruning commitments, clarifying roles, cutting through ambiguity and noise. If weddings are additive, marriages depend on what gets taken away. The same applies to organizations.

The addition default obscures essential questions: Are we architecting on top of workflows never designed for AI? Are we layering tools instead of coordinating them? What might we streamline or reorganize to achieve better outcomes?  

One life sciences organization I worked with had deployed AI for technical documentation. In practice, the review burden exceeded the original authorship effort—and the risks of misrepresentation in a clinical context outweighed any efficiency gains. The right answer wasn’t better AI. It was no AI.

The research offers a striking lesson. When experimenters added a simple cue—“removing pieces is free”—the likelihood of discovering simpler solutions increased dramatically. In organizational terms, leaders may need only to legitimize subtraction as an option.

Before we add another AI initiative, what could we remove, reorganize, or simplify instead?

Practical Questions for Executives  

Leaders navigating AI transformation might find these diagnostic questions useful:  

Identity & Autonomy: How does AI intersect with identity, autonomy, and professional meaning for people in our organization? Where might unaddressed identity threats be inhibiting engagement?  

Organizational Fit: Do our incentives, metrics, and workflows support the behaviors we’re asking for? Are there structural contradictions that make adoption harder than we acknowledge?  

Adoption Reality: Where is AI adoption genuinely occurring versus performative compliance? What does behavioral usage data reveal about how people are working with these tools—not deployment metrics, but actual workflow integration? 

Addition Default: When we encounter obstacles, is our instinct to add? What might we reorganize, simplify, or coordinate instead? Where does AI actually create value in our workflows? Where does it introduce new cognitive or operational costs we haven’t accounted for?  

These questions shift attention from “What else should we deploy?” to “How do we create conditions where what we’ve deployed can succeed?”  

AI Stewardship 

Organizations moved quickly into AI. The pressures were real, and the decisions made sense. The question now is whether we are willing to do the different work that comes next—building sustainable adoption by aligning technical, business, and behavioral dimensions.  

Stewardship requires more than capability training. It requires strategic governance, behavioral design, and systems that evolve as people gain confidence, skills, and clarity.  

Stewardship means architecting systems that learn, not just systems that run.

AI’s real value emerges when organizations align technology with identity, autonomy, and meaning. The organizations that thrive won’t be those that add the most. They’ll be those that architect most intentionally—those that ask what to subtract and how to orchestrate, not just what to add.

We planned the wedding. Now it’s time to invest in the marriage. 

© 2025 Brunner Ventures, LLC. All rights reserved.

About the Author

Devon Brunner advises boards, CEOs, and senior leaders on AI transformation—helping leaders architect change that accounts for how people actually experience it. Devon brings a grounded, relational approach to the work.

Follow Devon Brunner on LinkedIn 
