AI Business Strategy

Scaling Your Business Without Sacrificing People

By Aaron Gibson, Founder & CEO, Hurree

The future of work is asking the wrong question 

Every future-of-work conference panel still seems to fixate on the same line: “How will AI replace teams?” A better question is “How can AI enhance teams?” If you start from a replacement mindset, you inevitably design for replacement. 

Using AI as a cover for layoffs is the fastest way to hollow out your culture, your brand and future company leaders. You might impress investors for a quarter but quietly lose the trust of the people who actually keep the place running. Most of the C-Suite aren’t choosing between AI and no AI. They’re choosing between moving too slowly and moving too recklessly. 

Where work is really breaking 

Look at how people actually spend their week, and the problem becomes obvious. Research from Eagle Hill Consulting found that 68 percent of employees say they regularly spend time on low-value, inefficient work. That is not a rounding error; that is most of your organisation quietly stuck in busywork. 

Another survey of knowledge workers suggests more than 40 percent spend at least a quarter of their week on manual, repetitive tasks like email, data collection, and data entry. One analysis puts it as high as 69 workdays a year lost to work that could be automated or redesigned. On top of that, a Gartner study reported that nearly half of digital workers struggle to find the information they need to do their jobs. People are tired, but a lot of that tiredness comes from fighting systems, not solving problems.  

Stop treating AI as a replacement engine 

If you assume AI exists mainly as a shortcut, or worse, as a way to remove people, you will optimise for that outcome. Someone will inevitably build a slide showing “FTE savings” and call it progress. The more honest question is, “What percentage of this role is genuinely human, and what percentage is administration that no one signed up for?” 

In some functions, the admin share is huge. Studies of workplace productivity suggest up to 40 percent of a typical employee’s time goes on non‑strategic work, such as compiling reports and updating internal documents. You end up paying highly trained people to behave like glorified data compilation systems. 

Treat AI as an augmentation layer that supports people, instead of a machine you point at headcount. The Berkeley BRIE report on generative AI and work argues that the most durable gains will come from augmentation rather than full automation. Leaders have more control over that balance than they sometimes admit. 

When you start from augmentation, you ask better questions inside your company. You go after the parts of the job that are repetitive or bottlenecked by bad tools. You keep the parts that require judgment, relationships, and context, which is where humans earn their keep. 

AI should behave like an agent, not another app 

Most teams do not need one more analytics tool that nobody logs into. They need a way to ask a clear question in normal language and get a clear answer across all the messy systems they already use. At home, people can search for a film and get a result in seconds. At work, the same person is suddenly doing a three-day dig through spreadsheets, exports, and email threads; that gap is where frustration lives. 

“Agentic AI” is a slightly ugly term for something simple; instead of just predicting the next word in a chat window, these systems can plan and execute multi-step tasks on your behalf, within guardrails you set. Think of an AI agent that can pull data from your finance system, CRM, and marketing tools, reconcile it, and hand you a decision-ready view that makes sense to a non-specialist. Agentic AI only works if it can reliably reach the systems where truth lives, and most stacks break right there. The companies getting value aren’t the ones with the fanciest model, but the ones that can define the problem clearly enough to ask a question that matters.  

In practice, that might look like: “Show me this quarter’s churn by customer segment, and highlight anything that looks like an anomaly I should understand before the board meeting.” The system does the grunt work, not your most patient team member. Sometimes the output is a detailed breakdown; sometimes it is a two-line summary someone can paste straight into a deck. When AI behaves like this, it starts to feel less like a flashy toy and more like a colleague who happens to love cleaning data. 
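To make the churn example concrete, here is a minimal sketch of the reconciliation step an agent would perform behind the scenes. The data, the segment names, and the anomaly rule (a segment whose churn rate sits more than one standard deviation above the average) are all hypothetical stand-ins, not a description of any particular product:

```python
from statistics import mean, pstdev

# Hypothetical stand-ins for the two systems the agent would query.
crm = [  # customer -> segment, from the CRM
    {"id": 1, "segment": "smb"}, {"id": 2, "segment": "smb"},
    {"id": 3, "segment": "mid"}, {"id": 4, "segment": "mid"},
    {"id": 5, "segment": "ent"}, {"id": 6, "segment": "ent"},
]
churned = {1, 3, 4}  # customer ids that churned this quarter, from billing


def churn_report(crm_rows, churned_ids):
    """Reconcile CRM and billing into churn rate per segment, flag outliers."""
    totals, lost = {}, {}
    for row in crm_rows:
        seg = row["segment"]
        totals[seg] = totals.get(seg, 0) + 1
        if row["id"] in churned_ids:
            lost[seg] = lost.get(seg, 0) + 1
    rates = {seg: lost.get(seg, 0) / n for seg, n in totals.items()}
    # Flag any segment more than one standard deviation above the mean rate.
    baseline, spread = mean(rates.values()), pstdev(rates.values())
    anomalies = [s for s, r in rates.items() if r > baseline + spread]
    return rates, anomalies
```

The point is not the twenty lines of Python; it is that the agent does this joining and flagging for every question, so the human only sees the decision-ready view.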

Trust is something you design 

There is a lot of vague talk about “trust in AI”, as if it is a feeling that appears by magic. In reality, trust is the result of hundreds of design, policy, and communication choices. People are right to be wary of any system that affects their work if they cannot see how it works or challenge its outputs. If you let AI do the thinking and the deciding, you don’t get efficiency, you get complacency. 

If your team has no idea where the data comes from, who owns it, or how an answer was generated, they are not being negative by hesitating; they are being rational. Asking employees to stake their reputation on numbers inside a black box is not a trust problem; it is a design problem. Fixing it is not glamorous, but it is necessary.   

You need to map your data sources, decide which systems are authoritative, and give people a way to inspect how an answer was produced. You need clear rules about who can see what, and when a human review is mandatory. Trust grows when people can see the wiring and feel they still own the outcome. It disappears the moment AI feels like something is being done to them, rather than with them.  
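One way to make those requirements tangible is to insist that every AI-generated answer carries its own provenance. The sketch below is illustrative only; the field names and the review rule are assumptions, not a real system:

```python
from dataclasses import dataclass


@dataclass
class Answer:
    """An AI-generated figure that travels with its own audit trail."""
    value: float
    sources: list      # which systems the number was pulled from
    owner: str         # who is accountable for the underlying data
    needs_review: bool # whether a human must sign off before use


def revenue_answer(amount, sources, owner, review_threshold=1_000_000):
    # Hypothetical policy: any figure above the threshold requires human review.
    return Answer(value=amount, sources=sources, owner=owner,
                  needs_review=amount > review_threshold)
```

When the answer object itself says where the number came from, who owns that data, and whether a human must check it, “inspect how an answer was produced” stops being a slogan and becomes a field someone can click on.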

Four standards for human-first AI 

If you want to scale without throwing your people under the bus, you need some non‑negotiables for how AI is used inside the business. These are the four I value: 

  1. Adopt an augmentation-first standard
    Measure AI by time returned to teams and the quality of decisions it supports, not headcount you can cut. If your main success metric is a redundancy chart, you are optimising for the wrong thing. The question should be, “Does this let our people do more of the work only they can do?” 
  2. Insist on transparency
    You should always know which systems an AI tool is pulling from and who is responsible for that data. If a number looks strange, there must be an audit trail you can follow. Ownership and observability are not nice extras; they are prerequisites for using AI in any serious way.
  3. Design for broad accessibility
    If only specialists can talk to the system, you are creating a new bottleneck. Well-designed AI tools should feel conversational and forgiving, so a CFO, a head of operations, and a team lead can all ask for what they need without a manual. Every company has super users, casual users, and sceptics. Your rollout has to work for all three, or it doesn’t scale. 
  4. Lead with humanity
    Culture will decide whether any of this actually works. If your AI rollout increases fear and burnout, you will quietly lose your best people and keep the ones who feel trapped. Research on high-trust workplaces shows better performance and lower burnout when people feel respected and supported. AI should support a culture like that, not undermine it. 

Using AI to fight burnout, not fuel it 

Burnout is often framed as a personal problem that can be solved with better sleep routines and mindfulness apps. Those things may help at the margins, but they do not fix the core issue, which is that many people spend their days doing work that drains energy instead of building it. AI will not magically heal this, but it can remove large chunks of the friction that make good jobs feel bad. The goal isn’t another dashboard; it’s one source of truth, in plain language, with a next best action you can defend in the boardroom.  

The most obvious place to start is tedious work that sits on top of complex data. Pulling reports, reconciling numbers across systems, preparing recurring decks, and answering the same questions for different stakeholders are exactly the kind of tasks that machines are good at and humans find exhausting. If you use AI to handle the first 80 percent of that workload, you can keep humans focused on reviewing, challenging, and deciding. 

There is growing evidence that when AI tools are used to simplify complex information while keeping humans in the loop, organisations see productivity gains without the same level of backlash or risk. The Berkeley BRIE working paper on generative AI argues that augmentation‑centric deployments tend to outperform blunt automation. In other words, the AI that sharpens human judgment beats AI deployed as a blunt cost‑cutting tool. 

For leaders, this is not a purely moral choice, although ethics matter. It is a performance choice. Teams that are less burnt out and better informed will almost always outthink tired teams that feel like they are competing with every new tool you bring in.  

The teams that will win the next era of work 

I do not think the winners of the next decade will be the companies that brag about how many roles they automated away. The companies that really benefit from this wave of AI will be the ones whose people feel clearer, safer, more informed, and more capable because of how the technology is used around them. Those are the teams that attract talent, keep it, and compound it. 

The future of work will not be defined by AI acting alone. It will be defined by the teams that pair human judgment with AI that makes complex areas like data instant, clear, and actionable. Our job is not to choose between people and technology; it is to design systems that give people back their time, focus, and pride in their work. If we do that, we can scale our companies without sacrificing the very people who make them worth scaling in the first place. 
