
Most businesses are asking the wrong question about AI.
They’re asking, “Which AI tool should we use?” when they should be asking, “Can our people actually think with AI?”
I run an innovation team at a marketing agency. We’ve spent the last two years building AI into everything we do: measurement, content, strategy, and automation. We have no shortage of tools, 18 different products to be precise. What follows is what I’ve learned. The tools aren’t always the bottleneck; sometimes the skills are.
The tennis racket problem
A colleague put it perfectly recently: “AI is a tool. Think of it as if you’ve got a smart assistant sat there. But it’s saying, I’m going to give you the best tennis racket, now go and play in the Grand Slam.” That metaphor stuck with me because it captures something the AI hype cycle keeps missing. We’ve convinced ourselves that AI democratises everything. That anyone can now do anything. That the barrier to entry has collapsed. And there’s truth in that, but it’s incomplete.
The barrier to access has collapsed, but the barrier to effectiveness hasn’t. Give someone GPT-4, and they can generate text. Give them the best tennis racket, and they can hit a ball. But the gap between hitting a ball and playing at Wimbledon is still vast. Most organisations are stuck in that gap, wondering why their AI investments aren’t transforming anything.
Three skills that are often missing
When I look at where our teams struggle, and at the same patterns repeating across other businesses, three specific competencies keep showing up as gaps.
1. Problem decomposition
Not everyone knows how to break down complex work into chunks that AI can help with. This sounds simple, but it isn’t. Most people approach AI with whole tasks such as “Write me a marketing strategy”, “Analyse this data”, or “Create a campaign.” AI will then produce something, but it’s usually mediocre, because the person hasn’t done the harder work of understanding which specific parts of the task AI is good at and which parts need human judgment. The skill isn’t using AI; it’s knowing what to give it. Someone who is brilliant at their job but can’t decompose problems will get worse results from AI than someone more junior who understands how to break work into the right pieces.
2. Output assessment
How do you know if what AI gives you is good? This is where intuition becomes essential, and it’s also where the “AI replaces expertise” narrative falls apart.
You need domain knowledge to evaluate AI output. You need enough experience to feel when something’s off, even if you can’t immediately articulate why. You need the pattern recognition that comes from years of doing the actual work. AI doesn’t replace that intuition; it requires it. The best AI users I’ve observed aren’t the most technical; they’re the ones who’ve built up enough expertise in their field to quickly assess whether AI output is useful, directionally correct, or completely off base. They know what good looks like, so they can recognise it when they see it, or notice when it’s missing.
3. Articulation
Can you clearly express what you actually want? This is the unglamorous core of the whole thing. Some people struggle to articulate their requirements to other humans, let alone to AI. We’ve all sat in meetings where someone spends 20 minutes explaining what they need and you’re still not sure what they want. AI makes that problem worse. The skill isn’t “prompt engineering” in the technical sense; it’s the much older skill of clear thinking and clear communication. If you can’t articulate what you want, specifically and precisely, with the right context and constraints, you won’t get useful output from AI or from anyone else.
The uncomfortable implication
Here’s what this means for how businesses should think about AI investment:
Stop leading with tools: Most organisations have tool fatigue already. Another platform, another integration, another training session on which buttons to click. It’s not working.
Start with the human work: Before asking “what AI should we use?”, ask “can our people break down problems, assess output, and articulate requirements?” If they can’t do those things well without AI, they won’t do them well with AI either.
Invest in the skills, not just the access: This doesn’t mean AI prompt engineering courses; it means developing clearer thinking, better problem decomposition, and sharper articulation. These are old skills, applied to new tools.
Accept that expertise still matters: The people who’ll use AI best are the ones who already know their domain deeply. AI amplifies competence; it doesn’t create it.
Connected intelligence isn’t about connected systems
I’ve spent a lot of time thinking about how different marketing channels and data sources connect and how you build intelligence across systems rather than in silos. But I’ve come to think the more important connection isn’t between systems, it’s between human judgment and AI capability. The integration layer that matters most is the one between the person and the tool. Get that wrong, and it doesn’t matter how sophisticated your AI stack is. Get it right, and even basic tools become powerful.