
Most AI transformation advice tells leaders what to do: build the data infrastructure, invest in tools, hire the talent, develop the use cases. The playbooks are detailed and, in isolation, largely correct. So why are only 5 percent of organizations generating substantial value from AI, while 60 percent report minimal or no returns despite real investment?
That is the question Patrick Furu, Director of Top Management Solutions at Aalto EE, and I kept returning to as we worked with leadership teams across industries over the past two years. The answer, we find, is that most organizations are solving the wrong problem. The binding constraint is not technology. It is culture. And the missing ingredient in culture is not enthusiasm for AI, but the psychological safety to experiment with it honestly and to learn from what those experiments reveal.
The gap between activity and learning
There is a paradox at the heart of most AI programs today. Individual employees are experimenting constantly. Microsoft’s Work Trend Index found that 78 percent of AI users bring their own tools to work, outside of any official program. The curiosity is there, the willingness is there, and in many organizations the budget is there too.
What is rarely there is the organizational infrastructure to turn that individual experimentation into shared learning. When an AI pilot produces unexpected results or fails to deliver what was promised, the instinct in most organizations is to move quietly past it. Nobody writes up what went wrong. Nobody surfaces the learning for the next team. The organization accumulates activity without accumulating capability, and investment keeps flowing toward the next initiative rather than into a deeper understanding of why the last one fell short.
This is not a technology failure. It is a culture failure rooted in something straightforward: in organizations without genuine psychological safety, people optimize for not being blamed rather than for learning as fast as possible. They hide experiments rather than surface them. MIT Technology Review research found that 83 percent of business leaders believe psychological safety directly impacts the success of AI initiatives, yet only 39 percent rate their organization’s psychological safety as high. That gap does not stay constant. It compounds, widening the distance between organizations that are genuinely learning and those that are merely spending.
This is why culture deserves serious leadership attention, not as a soft complement to the technical work, but as the prerequisite without which the technical investment is largely wasted.
The five traits that distinguish the leaders
The main factor separating organizations capable of building self-reinforcing learning cycles from those that are not yet ready is character rather than capability. Boston Consulting Group research makes this concrete: 70 percent of the strategic effort required for AI success must go into people, processes, and culture, 20 percent into technology and data, and only 10 percent into algorithms. Before processes can change, leaders must focus on changing mindsets.
Five character traits consistently distinguish organizations achieving transformative AI value.
Transformative ambition means the organization intends to become different, not just do different things. Leaders set growth and innovation objectives, frame AI as a catalyst for transformation rather than a tool for optimization, and signal through their own decisions that incremental improvement is not the goal.
Growth mindset culture normalizes learning, sharing insights, and addressing difficulties without apportioning blame. Early AI initiatives fail frequently; organizations with a growth mindset treat those failures as data to harvest rather than disgraces to conceal.
Learning velocity as identity reframes how the organization defines itself. Traditional companies anchor their identity in what they have built; leading AI adopters center it in how fast they can learn. This forward-looking orientation enables continuous iteration rather than the periodic transformation cycles that older models depended on.
Identity flexibility is perhaps the most demanding trait. It requires an organization to distinguish between what is essential (why the firm exists and what it stands for) and what is historical (how it has operated and what made it successful in the past). The former can remain constant, preserving the heart of the organization, while the latter must be open to change. The organizations struggling most with AI are often not the weakest; they are frequently among the most experienced and capable, which is precisely the problem. Their past success has built structures, cultures, and instincts that now resist the kind of discontinuous change AI demands.
Leadership as gardening reflects an approach to management that nurtures bottom-up change, as opposed to leadership as carpentry, where change is planned and constructed from the top down. This allows new ways of working to emerge and take root naturally throughout the organization, rather than being installed and then quietly abandoned when the next priority arrives.
The question behind the question
Embodying these five traits is challenging precisely because it requires leadership to address a more fundamental question than most transformation programs ever ask. Most AI transformation conversations begin with “how.” How do we implement this at scale? How do we govern it responsibly? How do we build the capability across the organization? These are important questions, but they carry a hidden assumption: that the organization asking them already has the trust, the psychological safety, and the identity flexibility to act on the answers.
In our experience working with leadership teams, most organizations do not, and no amount of technical sophistication will substitute for that foundation. The common mistake is beginning the AI transformation with data and technology, and expecting trust, learning, and value to emerge afterwards. In reality, the order must be reversed.
The question that precedes everything else is simpler and harder: who must we become? It does not have a framework answer, and it cannot be addressed through a program or a roadmap alone. But it is the question that separates organizations genuinely building compounding AI capability from those investing heavily while falling further behind. Asking it honestly, and building accordingly, is the work that actually matters.