Future of AI

AI Literacy as a Workforce Skill

By Russell Ward, CTO, Leapfrog Technology

I’ve been watching job postings evolve over the past year, and a phrase that’s suddenly appearing everywhere is “AI Literacy Required”. It shows up in listings for Marketing Managers, HR Coordinators, Business Analysts and Project Managers, through to Software Engineers and Software Testers; even roles that have nothing to do with technology now list AI literacy as an essential skill.

When I’ve spoken to recruiters in my circles, I’ve asked what they think that phrase actually means, and the answers tend to be along the lines of “they need to know ChatGPT” and “some of the other AI tools popular at the moment”.

This disconnect between AI Literacy as a hiring requirement and AI Literacy as a practical capability reveals something important about where we are right now. Companies know that they need people who can work with AI, but most haven’t figured out what that means beyond basic tool familiarity. 

The speed at which “AI Literacy” has become a standard requirement is remarkable, but it masks a more fundamental issue. What I’m seeing in the work that crosses my desk tells a different story from the one job descriptions suggest we all need.

When efficiency masks dependency 

Increasingly, I’m seeing outputs that are clearly AI-generated or AI-curated: proposals that read like they’ve been through the likes of ChatGPT, emails with slightly too-perfect grammar, and reports that summarise information impressively but somehow lack insight. All of it suggests AI-assisted tooling at play.

There is genuinely nothing wrong with that, and in some cases it’s helpful: it overcomes language barriers and produces collateral faster than ever before. But it raises a fundamental question:

“Are we building AI Literacy or AI Dependency?” 

Speaking with peers in my industry, I’ve heard stories of impressive marketing strategies being produced with AI tools, but when asked to walk through their thought process, the marketers struggled to explain the reasoning behind the recommendations. That alone showed that these individuals had learned to work through AI rather than with it.

This pattern appears across disciplines and seniority levels, including closer to home in my own sector: people who can generate code that works but can’t debug it manually, relying instead on AI to do it for them.

Or Business Analysts who can create sophisticated reports with AI but struggle to interpret raw data independently, and content creators who produce polished material with AI but haven’t developed their own voice or editorial judgement.

Evolution or erosion 

Are we witnessing the natural evolution of work or the deterioration of fundamental professional skills? 

On the evolution side, perhaps we’re moving up the value chain: commoditising routine writing, analysis and communication allows us to focus on higher-level strategy, creativity and judgement. The efficiency gains from AI have been undeniable, especially for globally distributed teams working across different languages and cultures.

On the deterioration side, there’s something irreplaceable about the thinking that happens when you wrestle with problems personally. When you write from scratch, you develop your own voice and learn to structure complex thoughts. When you analyse data manually, you develop intuition about patterns and relationships. When you debug code step by step, you build mental models of how systems actually work. 

Will we regret relying on AI later, especially as these tools become commonplace across all types of work?  

I always liken this to buildings with exquisite architecture: the ability we once had to design with character and individuality has been lost to industrialisation. I believe we’re seeing a similar shift across a number of industries today with AI.

What job descriptions get wrong 

Most job descriptions that mention AI Literacy are really asking for tool proficiency. They want people who can use ChatGPT to write emails, Claude to summarise documents and write code, or other AI tools to generate different types of content. This is digital literacy: you learn the features these tools provide, master the techniques and become more productive.

What I believe companies actually need when it comes to AI literacy is something a little more sophisticated: people who can think critically about when and how to use AI tools, who can evaluate AI outputs intelligently, and who can maintain their professional judgement even when working with systems that can sound remarkably convincing.

The gap between what job descriptions ask for and what the roles actually require reveals a deeper challenge in how we think about AI integration in the workplace. Many are optimising for efficiency without considering what capabilities we might be losing in the process. 

Two types of AI users 

Progressive companies are now building fundamental checks and questions into their hiring processes, not to reject AI use, but to understand whether candidates can think through problems independently when needed.

The results have been revealing. Some people use AI as a powerful amplifier of their existing capabilities, meaning they can work with or without it; they know when to trust AI outputs and when to be more sceptical. They maintain their own professional voice even when using AI assistance.

Others have learned to work through AI rather than with it, and they can produce some impressive outputs when the AI is available to them, but struggle when asked to think through problems or the rationale behind them independently.  

The difference isn’t about intelligence or capability; it’s about whether they developed the foundational skills before or alongside AI tooling, and whether they can maintain independent judgement whilst working with these tools.

Based on observations across diverse organisations, effective AI literacy isn’t about the tools; it’s about the thinking that makes tool use valuable. 

The most effective professionals have learned core skills in their field before they started relying on AI tools. They can write, analyse, code or create independently, which gives them the domain knowledge to evaluate AI outputs critically and the confidence to override suggestions that don’t make sense. 

They understand when AI adds value versus when it introduces unnecessary complexity or risk, which requires a deep understanding of their work context, not just the AI tools themselves. 

They assess AI outputs intelligently, think strategically about which tasks to handle personally and which to delegate to AI, and maintain a distinctive voice, with approaches and perspectives that make their AI usage more valuable, not less.

Better questions to ask 

Rather than simply listing “AI literacy required”, companies might consider being more specific in job descriptions about their actual needs. For example:

  • Can you evaluate the quality and relevance of AI-generated content in your field?   
  • Can you identify when AI tools are appropriate versus when human judgment is essential? 
  • Can you maintain a professional voice and approach whilst using AI tools? 
  • Can you work with and without AI tools? 

These are the types of questions that get at the real capabilities that matter, rather than tool familiarity alone.

The challenge for companies is that this creates a more complex evaluation framework than simply checking whether someone can use a tool. We need people who can work effectively with AI, and that is increasingly non-negotiable, but we also need people who can think independently and maintain the professional judgement that makes AI usage valuable.

The opportunity for us as leaders 

For leaders, this represents both a challenge and an opportunity. We’re in a unique moment where we can shape how AI is used and integrated within our organisations and industries.

The goal is most certainly not to slow down AI adoption or to romanticise pre-AI times; it’s to ensure that we use AI to build on our capabilities rather than replace them entirely, recognising what it adds against what we stand to lose.

This requires creating environments where both AI-assisted outputs and independent problem-solving techniques are valued. It means fostering psychological safety for practising skills without AI assistance, at least initially, and it means modelling thoughtful AI use rather than wholesale adoption or rejection. 

So what is AI Literacy? It’s not just understanding tools; it’s knowing how and when to use them in the right way.

We should all look to hire people who can think critically, solve problems creatively and maintain professional judgement whilst working with increasingly sophisticated AI tools.  

This is fundamentally different from digital literacy, where you learnt how to operate tools that do exactly what you tell them. AI literacy is learning how to work with tools that generate responses based on probabilistic models, tools that can be remarkably helpful and remarkably wrong, sometimes in the same conversation.

The difference matters enormously for how we think about hiring, training and development in an AI-enhanced workplace. 

The rush to add AI literacy to job requirements reflects genuine recognition that work is changing rapidly. Forward-thinking organisations are moving beyond the checkbox requirements, building capabilities and fundamental skills within their teams rather than just tool proficiency.  

The most valuable employees in the immediate future won’t be those who use AI most efficiently, but those who use it most intelligently.

That’s the workforce skill I believe is worth building in the AI era. 
