
People-first AI: why the future of work depends on who’s really in charge

By Jason DaPonte, Managing Director, Magnetic

When political and business leaders gathered for this year’s World Economic Forum (WEF), there was little surprise when AI dominated the agenda. The promise was familiar: unprecedented gains in efficiency, productivity and growth. AI, we’re told, is ready to strap rocket boosters to the global economy. 

But beneath the optimism sat a quieter, more uncomfortable question. If automation is creating margin at speed, what happens to the people whose work and learning are being compressed or removed altogether? 

According to S&P Global, the UK’s services sector is now experiencing its longest period of job losses in 16 years. Much of this is driven by rapid AI adoption and a reluctance to rehire for roles technology can now perform. Unsurprisingly, many of these are junior positions, the very jobs that form the first rungs of the leadership ladder. 

For today’s leaders, this moment demands reflection, not just enthusiasm for new tools. They should ask how to develop a workforce whose early career steps will look completely different from those the current generation of leaders took. 

AI transformation may be inevitable, but we have the opportunity to design a transformation in which people and AI generate more value together, rather than one in which people become collateral damage as companies race to the bottom on efficiency. 

From tools and hope to intent and design 

Despite the scale of change underway, many organisations still approach AI in a surprisingly shallow manner. Strategy often starts with a tool and ends with the hope that productivity rises, risk stays low and people somehow adapt. But hope isn’t a strategy. A more effective approach begins by defining the purpose and problem, then clarifying the role of people – who sets intent, who carries risk and where judgement sits – before turning to the technology.  

When done well, AI frees people to focus on the things only they can do. That opens the door to services and innovations that were previously impossible, allowing organisations to combine efficiency with creativity and turn productivity into growth on a whole new scale. 

From this perspective, three models of AI integration are emerging, each with very different implications for accountability, learning and leadership. 

Model one: the Augmented Human 

In this first model, humans remain the decision-makers and AI acts as a copilot. Here, AI accelerates craft without stealing responsibility: research assistants surface relevant signals, legal tools flag risk and coding engines remove friction so people can focus on deeper problem-solving. A study of radiologists interpreting chest X-rays found that AI assistance improved accuracy and calibration but results varied by task and individual, reinforcing the need for human judgement to stay firmly in the loop. 

This model matters because judgement is learned. When AI compresses decision-making too far, it risks stripping away the experiences people need to become effective leaders later on. 

Model two: the Symbiotic Partnership 

The second model divides work by strengths. AI takes on data-heavy, repetitive and real-time tasks, while humans focus on strategy, creativity, ethics and timing. Consider Unilever’s ice cream supply chain, spanning factory production lines and millions of freezer cabinets worldwide. AI-driven, weather-aligned forecasting has improved accuracy and reduced waste, while human planners concentrate on strategic allocation and market decisions. 

This model creates space for people to own work rather than simply do it. This shift was highlighted by the WEF, which noted that workers using AI as a strategic partner report faster learning, greater engagement and increased willingness to experiment. 

Model three: the AI-managed workflow 

The most challenging model involves an inversion of control. In AI-managed workflows, complex processes run end to end under machine control. Telefónica, for example, has achieved highly advanced autonomous network operations in some use cases, with engineers monitoring performance and intervening only when anomalies appear. 

Here, leaders become auditors rather than operators, governing outcomes, exceptions and values rather than tasks. 

This model offers enormous efficiency but also carries the greatest risk. When something goes wrong, accountability cannot belong to an algorithm. It must remain human. 

Across all three, the most credible implementations share two traits: clear human accountability and validation of AI performance within real workflows. 

This is where the conversation shifts from managing tasks to governing values. Leaders must decide when human deliberation – pausing for quality, ethics or equity – should override the machine’s relentless efficiency. 

And they must do so while considering a broader definition of value. If AI gains come at the cost of long-term capability, wellbeing, sustainability or fairness, they aren’t true advantages. 

Towards a NewHuman approach 

What emerges from this is the need for a redesigned operating model, one that treats AI strategy as inseparable from people strategy. 

That means rethinking workflows, governance and progression simultaneously. It means creating modular learning pathways that help employees outpace changing job descriptions and designing roles where creativity, experimentation and failure still have room to exist, especially for those early in their careers. 

Most of all, it means embracing a “NewHuman” approach: one that recognises that humans and AI together create more value than either could alone but only if leadership, judgement and accountability remain firmly human. 

AI Armageddon isn’t here yet. There is still time for organisations to be deliberate and shape AI adoption in ways that don’t just optimise the present but actively build the leaders of the future. 

Because companies will still be led by humans. And success will go to those who design AI to expand their people, not just their output. 
