
The CIO’s HR challenge: Leading the agentic workforce

By Mike Anderson, Chief Digital and Information Officer, Netskope

Imagine your team just doubled in size overnight, but half of your new hires are digital. They learn in seconds, never sleep, and can make decisions without asking. They also have the potential to make costly mistakes if left unmanaged.  

This is the reality of agentic AI. For Chief Information Officers (CIOs), the role is no longer just about managing technology. It is about leading a new kind of workforce made up of both human and digital colleagues. The skills that once defined great IT leadership, such as technical vision, strategic planning, and operational excellence, must now be paired with the people skills of an HR leader.  

From prompt-responder to employee 

Generative AI waits for instructions. Agentic AI does not. It acts. It identifies tasks, makes decisions, and executes without being told. 

In other words, it behaves more like an employee than a tool. Agentic AI does not just respond; it joins your workflows. It needs role clarity, performance goals, and supervision.

Today, most agentic tools remain task-specific, automating narrow, repeatable processes with defined guardrails. As they evolve, CIOs must think less like system administrators and more like hiring managers, ensuring each “new hire” is set up for success. 

Managing risk in your AI workforce 

Every HR leader knows the risks of giving a new employee too much responsibility too soon. In the agentic workforce, the stakes are higher because digital colleagues can operate at speed and scale. 

With generative AI, the primary worry was data leakage, such as users pasting sensitive corporate information into prompts. That concern is real. Recent research found that the average amount of data uploaded to generative AI apps by enterprises rose from 7.7 GB to 8.2 GB quarter over quarter. 

With agentic AI, the bigger risk is uncontrolled system access. These tools often require deep integration into enterprise systems, but excessive access creates new attack surfaces. 

According to research based on a sample of aggregated, anonymized enterprise traffic data: 

  • 39% of organizations now use GitHub Copilot 
  • 5.5% have users running agents built on AI agent frameworks on-premises 
  • 66% have users making API calls to api.openai.com 
  • 13% have users making API calls to api.anthropic.com

Here the CIO wears the security leader hat. You must apply least privilege principles, granting each AI agent only the access needed to perform its role and expanding that access only when trust is established. This is a core zero trust practice and critical when introducing autonomous systems. The Identity Defined Security Alliance (IDSA) 2023 Trends Report found that excessive permissions already account for nearly a third of security incidents. 
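To make least privilege concrete, here is a minimal sketch in Python of an explicit allow-list per agent. Everything in it is illustrative rather than tied to any particular product: anything not granted is denied by default, and write access is added only once trust is established.

```python
# A minimal sketch of least-privilege scoping for an AI agent.
# All names here are illustrative, not a specific product's API.

from dataclasses import dataclass, field

@dataclass
class AgentAccessPolicy:
    agent_id: str
    allowed_systems: set[str] = field(default_factory=set)   # systems the agent may call
    allowed_actions: set[str] = field(default_factory=set)   # verbs the agent may perform

    def permits(self, system: str, action: str) -> bool:
        """Allow only what is explicitly granted; everything else is denied."""
        return system in self.allowed_systems and action in self.allowed_actions

# Start narrow: a support-triage agent can read tickets, nothing else.
policy = AgentAccessPolicy(
    agent_id="support-triage-01",
    allowed_systems={"ticketing"},
    allowed_actions={"read"},
)

assert policy.permits("ticketing", "read")
assert not policy.permits("crm", "read")        # no access to other systems by default
assert not policy.permits("ticketing", "write") # expand only once trust is established
```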

Once the guardrails are in place, you put on your HR leader hat — defining roles, setting expectations, and ensuring each digital colleague is productive without introducing new risks. 

The CIO as the head of HR for AI 

Thinking of AI as a new hire changes the conversation. You would never bring in a human employee, skip the onboarding process, and expect flawless performance. Yet many organizations deploy AI tools in exactly that way, often encouraged by vendor promises of instant productivity.  

An effective onboarding process for the agentic workforce should include: 

  1. Role definition and scope – what this AI is meant to do and not do 
  2. Access permissions – systems and data required for the role 
  3. Performance KPIs – speed, accuracy, cost savings, and quality measures 
  4. Review and retraining schedule – to catch drift and adjust behavior 

This structure not only reduces risk; it also builds trust across the human workforce by showing that AI colleagues are held to the same standards. 
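One way to keep that checklist actionable is to capture it as a structured record for each agent. The example below is hypothetical; the field names, thresholds, and the invoice-matching role are assumptions made for illustration.

```python
# A hypothetical onboarding record for one AI agent, mirroring the four steps above.
# Field names and values are illustrative assumptions, not a standard schema.

onboarding_record = {
    "agent_id": "invoice-matching-01",
    "role_scope": {
        "does": ["match supplier invoices to purchase orders"],
        "does_not": ["approve payments", "change supplier master data"],
    },
    "access_permissions": {
        "erp": ["read:invoices", "read:purchase_orders", "write:match_status"],
    },
    "performance_kpis": {
        "match_accuracy_min": 0.98,        # share of matches confirmed correct
        "median_latency_seconds_max": 30,
        "cost_per_invoice_max_usd": 0.05,
    },
    "review_schedule": {
        "kpi_review": "weekly",
        "retraining_check": "quarterly",
    },
}
```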

Scale your AI workforce like a human team 

No new hire takes over a business function on day one. Agentic AI should be deployed in phases, starting small and expanding as reliability is proven. This approach builds trust without slowing innovation.  
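As one illustration of expanding only as reliability is proven, a simple gate can decide when an agent has earned a wider scope. The names and numbers below are assumptions, not benchmarks.

```python
# A hypothetical gate for phased expansion: widen an agent's scope only after
# it clears a reliability bar over a trial period. Names and numbers are illustrative.

def ready_to_expand(trial_runs: int, error_rate: float,
                    min_runs: int = 500, max_error_rate: float = 0.01) -> bool:
    """The agent earns broader scope only with enough volume and a low error rate."""
    return trial_runs >= min_runs and error_rate <= max_error_rate

if ready_to_expand(trial_runs=620, error_rate=0.004):
    # e.g. add one new permission or workflow at a time, then observe again
    print("Expand scope by one increment and keep monitoring")
else:
    print("Keep the current scope and keep coaching")
```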

As the AI workforce grows, match each tool to the right role. Just as you hire for different skills, you may need specialized AI agents for customer service, analytics, or software development. Assign them accordingly, and do not overload one agent with responsibilities beyond its design. 

Scaling this way ensures a stable, adaptable workforce where human and digital colleagues complement each other rather than compete. 

Performance reviews for digital colleagues  

The best people leaders measure and coach performance continuously. The same applies to agentic AI. CIOs should regularly evaluate whether each AI is meeting expectations, delivering quality results, and working efficiently. 

Unlike humans, AI can drift from its intended role in days or weeks. And like humans, it can make poor decisions, only faster. Frequent reviews, guardrails, and retraining are essential to keep your AI workforce aligned with business goals. 
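If you already log each agent's outcomes, a lightweight periodic review can compare recent results against the KPIs set at onboarding and flag drift early. The metrics and thresholds below are invented for illustration.

```python
# A minimal periodic review: compare recent agent metrics to onboarding thresholds.
# Metric names and thresholds are invented for illustration.

def review_agent(recent_metrics: dict, kpis: dict) -> list[str]:
    """Return a list of findings; an empty list means the agent is within expectations."""
    findings = []
    if recent_metrics["accuracy"] < kpis["match_accuracy_min"]:
        findings.append("accuracy below target: schedule retraining and tighten guardrails")
    if recent_metrics["median_latency_seconds"] > kpis["median_latency_seconds_max"]:
        findings.append("latency above target: review workload or scope")
    return findings

findings = review_agent(
    recent_metrics={"accuracy": 0.96, "median_latency_seconds": 24},
    kpis={"match_accuracy_min": 0.98, "median_latency_seconds_max": 30},
)
for finding in findings:
    print(finding)  # escalate to a human owner; accountability stays with people
```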

In regulated industries, ultimate accountability must stay with humans. AI can advise, but it cannot take responsibility for outcomes. 

Leading the workforce of the future 

Agentic AI is not just another software category. It is the beginning of a blended workforce where human and digital colleagues work side by side.  

CIOs who embrace both hats — technology leader and HR leader — will do more than keep systems safe. They will shape an organization where trust, accountability, and performance apply equally to every team member, human or digital. By doing so, they will unlock entirely new possibilities for how their organizations think, decide, and grow. 
