If you’ve used GenAI recently to draft a memo, test a business idea, or hand off a tedious task to an autonomous agent, you’ve probably felt – like I have – a mixture of awe and anxiety. That reaction is more than warranted. GenAI models like GPT-4o and the emerging ecosystem of autonomous agents aren’t just impressive productivity tools; they represent a shift in what it takes for humans to compete in the modern workplace. We’ve all heard plenty about GenAI replacing jobs, but I have come to believe that’s the wrong lens. What GenAI does is escalate competition. It narrows the gap between concept and output, removes friction from experimentation, and undermines the idea that tenure or title alone is enough to claim domain expertise. As the bar for average gets automated, human value continues to be redefined and pushed upward. The real pressure isn’t coming from the machines; it is coming from other humans who know how to use them well.
And that’s where the nature of expertise begins to shift.
AI Isn’t Replacing Us—But It IS Forcing Us to Evolve, And That’s Uncomfortable
In a paper released earlier this month, “Future of Work with AI Agents” by Shao et al. (2025), researchers introduced the Human Agency Scale (HAS), a spectrum capturing how much humans want to stay involved in agent-driven tasks (from fully hands-off to fully hands-on). They argue that the adoption of AI agents needs to be co-designed with workers’ desires, not just driven by what AI can technically do.
This desire for humans to be part of the design mirrors what a group of Harvard Business School researchers have called the jagged frontier of AI. In that study, the researchers showed that GenAI systems are often extraordinarily good at certain tasks but fail unpredictably at others. The problem is that it’s not always obvious where the frontier lies, and the jagged edge is ever shifting. This creates a new kind of asymmetry in work: those who know how to navigate and change with these boundaries thrive. Those who don’t risk irrelevance. Expertise now lies in negotiating those boundaries, and the experts of the future are the designers within their domains who can create hybrid human-AI work models.
Expertise in an AI World Is About Questions, Not Answers
In a GenAI-centric world, the definition of expertise is evolving quickly. Expertise is no longer defined just by how much you know; it’s defined by how well you can navigate, interrogate, and direct intelligent systems. Experts more than ever need to know which questions to ask, what signals to trust, and how to align machine outputs with nuanced, complex human objectives that span individuals, organizations, and external stakeholders along the entire AI value chain.
In practical terms, that looks like a manager who uses GenAI not only to co-create strategy, but who knows how to stress-test it from multiple angles. Or a product designer who feeds the model a thousand variations and uses their judgment and understanding of business context, not the model’s confidence score, to choose the breakthrough look. In this environment, using GenAI and simply accepting its output is no longer a differentiator; it’s barely average. Knowing how to interrogate it, manipulate it, and collaborate with it in nuanced ways that align with organizational, business, and personal objectives becomes the superpower: the why and how versus the what.
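To make the stress-testing idea concrete, here is a minimal sketch of what interrogating a draft from multiple angles can look like in code. It assumes the OpenAI Python client (openai>=1.0); the model name, the critic personas, and the helper function itself are illustrative, not a prescribed method.

```python
# A minimal sketch of "stress-testing from multiple angles," assuming the
# OpenAI Python client. The model name, personas, and helper are
# illustrative assumptions, not a prescribed workflow.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITIC_ANGLES = [
    "a skeptical CFO focused on cost and downside risk",
    "a competitor hunting for the plan's weakest assumption",
    "a regulator checking for compliance and ethical gaps",
]

def stress_test(strategy_draft: str) -> list[str]:
    """Ask the model to attack the same draft from several personas."""
    critiques = []
    for angle in CRITIC_ANGLES:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; substitute your model
            messages=[
                {"role": "system",
                 "content": (f"You are {angle}. Critique the strategy below. "
                             "List concrete failure modes, not praise.")},
                {"role": "user", "content": strategy_draft},
            ],
        )
        critiques.append(response.choices[0].message.content)
    return critiques
```

The point of the pattern isn’t the code; it’s that the human chooses the angles of attack and weighs the resulting critiques, rather than accepting the model’s first answer.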
This is where curiosity comes in, not as a buzzword, but as a competitive advantage.
Curiosity Is the New Performance Edge
In study after study, the biggest performance boost from GenAI isn’t coming from the top-tier experts; it’s coming from those willing to experiment. In the Harvard study, BCG consultants using AI saw roughly a 40% improvement in performance, even those who lacked prior expertise or strong baseline performance. In experiments with law and consulting professionals, GenAI consistently improved outcomes, as long as users didn’t interfere too much. One interpretation is that the best results came when people knew when to lean in and when to get out of the way.
Curiosity isn’t just about poking around ChatGPT for shortcuts. Curiosity is developing a mental model of how these systems work, where they fail, and how to push them to the edge of their capability as you push your own. It’s the willingness to treat GenAI as a thought partner and to move beyond efficiency into exploration. Most importantly, curious minds don’t just complete tasks faster and better; they find new and bigger tasks worth doing.
This is what GenAI enables: not just automation, but imagination at scale. In this way, it demands more from humans, much like other revolutionary technologies have, and I argue that’s a good thing.
The Trust Problem (And Why Humans Still Matter)
But as these systems take on more dynamic and high-stakes roles, from triaging customer complaints to analyzing sensitive financial or medical data, the consequences of poor alignment grow harder to detect. We’ve already seen how confidently an LLM can generate false claims, fabricate citations, or issue instructions that don’t align with policy or ethics, causing embarrassment or, even worse, irreparable brand damage. In their recent paper, “CRMArena-Pro: Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions” by Huang et al. (2025), researchers at Salesforce AI Research show that today’s leading models still often fail at multi-step reasoning tasks, misunderstand instructions, and require careful prompting to avoid critical errors. And because the errors are often wrapped in fluent, persuasive language, they’re dangerously easy to overlook.
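Because the failures hide inside fluent prose, even a crude mechanical gate in front of a human reviewer helps. Below is a minimal sketch of that idea; the [source: ...] citation format, the allowlist of verified documents, and the review rule are all assumptions for illustration, not anything the Salesforce paper prescribes.

```python
# A minimal sketch of a mechanical gate before human review. The citation
# format and the allowlist of verified documents are illustrative
# assumptions, not a method from the paper.
import re

VERIFIED_SOURCES = {"2024 Annual Report", "Customer Policy v3"}

def needs_human_review(model_output: str) -> bool:
    """Flag output whose cited sources can't be matched against a known
    document set; fluent, persuasive text gets no free pass."""
    cited = re.findall(r"\[source:\s*([^\]]+)\]", model_output)
    if not cited:
        return True  # no citations at all: a human should take a look
    return any(src.strip() not in VERIFIED_SOURCES for src in cited)

draft = "Refunds over $500 need VP approval [source: Customer Policy v3]."
print(needs_human_review(draft))  # False: the cited source checks out
```

A check like this catches only the cheapest class of fabrication; its real value is routing everything it can’t verify to a person with context.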
There are many ways to argue for or against the paper’s findings, but one thing remains consistent regardless of which side you choose: the more capable these systems become, the more crucial human oversight gets, not just to find and correct the errors, but to decide what outcomes we want in the first place. That raises the bar for expertise to a whole new level. No agent, no matter how sophisticated, understands context like a human does, but only if humans are paying attention and developing their own minds alongside their GenAI counterparts.
That’s why the companies that win with GenAI won’t just be the ones with the biggest models. They’ll be the ones who can attract, train, and retain the most curious and context-savvy humans directing them.
Smarter Systems Require Smarter People
If this moment feels like a turning point, it is. GenAI is not flattening the workplace; it’s making it steeper. It’s turning “average” into a commodity and rapidly elevating the kinds of intelligence that are harder to automate: judgment, intuition, synthesis, and imagination.
It rewards people who can imagine bigger, move faster, and reframe problems. It challenges complacency and, yes, it favors the curious, because curiosity is what fuels the leap from using the system to redefining what’s possible with it.
We’ve seen this dynamic many times before. The printing press didn’t just print books; it made literacy essential to advancing human understanding. The internet didn’t just give us information; it made connectivity and filtering a requirement for expanding our knowledge base. GenAI, with its accelerating reasoning capabilities, is now doing the same.
Preparing Teams for an AI-Driven Future
So how do you prepare a team for this kind of shift? You don’t start with mandates; you start with energy. Build AI enthusiasm inside the organization. Give people room to experiment without fear of doing it wrong. Educate them, yes, but also push them to do the hard work of going beyond productivity and efficiency gains into new exploration. Encourage teams to think beyond the prompt and use AI not just to do the same things better and faster, but to do fundamentally bigger and better things.
Create space for distributed innovation so your employees can play with GenAI, but be honest about the tradeoffs between speed and reliability. Publish an internal AI policy that people can actually understand and get behind. Redefine what expertise means for every function and rewrite your competency models accordingly, then rewrite them often. These are all things that can be done today. I know it is possible because we have done them at Credibly.
And above all, accept that it won’t be clean-cut. Change never is. You’ll move at different speeds across different parts of the company, some experimenting, some lagging, some resisting. That is normal. That’s what transformation looks like. I haven’t met a single company with a fully cohesive AI strategy. And that’s okay. In fact, it might even be necessary. Just don’t let the messiness be an excuse to sit still. When something like GenAI comes along that forces us all to be better, to level up, my view is simple: accept the challenge and have fun.