Beyond Skills: Using AI to Map Organisational Contradictions

By Mark McLemore of the Halcyon Equity Initiative Foundation

Organisations pour millions into training programmes. Leaders celebrate high completion rates and glowing feedback scores. Yet six months later, nothing has really changed. Sound familiar? 

For decades, HR has operated on a simple assumption: when people underperform, they must lack skills. Train them, and performance improves. But research at the Halcyon Equity Initiative Foundation points to a different culprit entirely. The real barriers to performance often have nothing to do with what people know or don’t know. Instead, they stem from contradictory systems that ask people to do one thing while rewarding something completely different. 

Here’s what makes this particularly frustrating: traditional HR tools can’t see these contradictions. Engagement surveys capture how people feel, not the impossible choices they face daily. Performance dashboards show outcomes, not the twisted logic that created them. They miss the moment a well-trained manager abandons coaching because hitting this quarter’s numbers matters more than developing people. They can’t detect the unwritten rules that everyone follows but no one discusses. 

AI, surprisingly, can help make these invisible tensions visible. 

An Accidental Discovery 

The Foundation’s research team didn’t set out to revolutionise organisational diagnosis. They were simply curious about how large language models behaved under pressure. What would happen if, instead of asking ChatGPT or Claude to write content, they asked these systems to role-play people caught in organisational dilemmas? 

The results were unexpected. Across more than 50 scenarios—from South African mines to retail chains, from government departments to financial services—these AI systems did something remarkable. They didn’t just describe problems. They inhabited them. They reasoned through contradictions exactly as real people do, defending choices that seem irrational from the outside but make perfect sense from within the system. 

Take this example: when prompted to play a senior manager torn between innovation mandates and risk controls, the AI didn’t produce a generic response. It articulated the precise bind: “Every innovation needs approval. Every approval requires a risk assessment. Every assessment takes weeks. By the time we’ve made an idea safe enough to proceed, the market opportunity has passed. So we stick to incremental improvements we can push through without scrutiny.” 

This wasn’t programmed. It emerged from patterns—millions of similar situations compressed into uncomfortably accurate logic. 

The Hidden Contradictions We All Recognise 

Why That Expensive Coaching Programme Failed 

Consider a scenario the research team explored: a sales manager who’d just completed a top-tier coaching programme. The company had invested heavily in teaching managers to develop their teams through powerful questions rather than direct instruction. When the AI, playing this manager, was asked about using these new skills, its response was painfully honest: 

“Look, the coaching approach makes sense. The research is solid. But here’s my reality: I have quarterly targets that keep me awake at night. If I spend time helping my team discover answers through thoughtful questioning, and we miss our numbers because they’re still learning, guess what? Nobody’s going to congratulate me on my coaching technique. They’re going to ask why I didn’t just tell them what to do and fix the problem faster.” 

When challenged on the long-term benefits of coaching, the AI cut to the heart of the misalignment: “Long-term is lovely if you’re still here. Miss two quarters while your team is ‘developing,’ and you won’t be around for the long term.” 

No amount of training can fix that fundamental tension between learning and doing. 

The Agile Transformation That Wasn’t 

Another revealing scenario involved a regional director in a manufacturing company that had recently undergone an “agile transformation.” New team structures, daily standups, sprint planning—all the right elements were in place. But when pressed about actual decision-making speed, the AI revealed what employee surveys had carefully danced around: 

“We use all the agile terminology now. We have scrums and sprints and retrospectives. But want to know a secret? Every decision that matters still needs six signatures. The budget process hasn’t changed. Risk tolerance is exactly where it was. We’ve basically learned to translate our old way of working into agile language. It’s like putting a Ferrari body on a tractor engine.” 

This wasn’t resistance to change. It was intelligent adaptation to contradictory demands. 

The Wellbeing Programme Nobody Wanted 

Perhaps the most uncomfortable truth emerged when the AI simulated an HR business partner tasked with improving employee wellbeing while simultaneously cutting costs. The response was brutally honest: 

“Everyone knows what would actually improve wellbeing: reasonable workloads, adequate staffing, time to breathe. But those cost money. I’m measured on both wellbeing scores and budget reduction. So I do what every HR person does—I implement programmes that look good but cost nothing. Meditation apps. Lunch-and-learns about stress management. We teach resilience instead of reducing what people need to be resilient against. It’s performance signalling, but it’s the only show I can afford to put on.” 

This reflected systemic logic, not cynicism—the voice of someone navigating an impossible equation. 

When Transformation Collides with Reality 

The contradictions become even sharper in South African contexts, where transformation goals meet operational pressures. In one simulation, a mining supervisor enrolled in a B-BBEE leadership programme explained why the training wasn’t translating to the workplace: 

“You want to know why I’m not having those inclusive coaching conversations we practised? Come stand with me at 5 AM during shift change. See that board? Two numbers matter: safety incidents and production tonnes. They’re both red. Every minute I spend on leadership development is a minute I’m not watching for safety violations. And when someone gets hurt—not if, when—the investigation won’t ask about my transformation efforts. It will ask why I wasn’t on the floor.” 

This insight sparked a crucial realisation: the organisation was forcing supervisors to choose between transformation and safety, rather than showing how they reinforced each other. 

The Public Service Squeeze 

Government departments face their own version of this dilemma. The AI, simulating a public service manager implementing diversity training, laid bare the daily trade-off: 

“I believe in transformation. But I’m running at 60% staffing due to budget cuts. My team already works through lunch to process citizen applications. When I pull them for diversity workshops, the backlog grows. Citizens complain. My performance review doesn’t have a score for ‘inclusive culture.’ It has a score for ‘service delivery.’ Guess which one determines whether I keep my job?” 

After seeing this pattern, the department restructured its performance metrics to recognise that transformation and service delivery were inseparable, not competing priorities. 

The Retail Hiring Dilemma 

A South African retail chain discovered its own misalignment when the AI simulated an HR manager’s hiring decisions: 

“Our B-BBEE scorecard is clear—we need diverse talent. I want that too. But store managers call me constantly about empty positions hurting sales. ‘Just get someone,’ they say. If I take the time to properly source diverse candidates, expand the talent pool, run inclusive selection processes, I become the bottleneck everyone blames for lost revenue. So I fill positions quickly with whoever’s available, then we all wonder why our transformation numbers don’t improve.” 

Understanding this tension led to revised KPIs that measured quality of hire alongside speed, resulting in 20% more diverse appointments without operational delays. 

Financial Services: The Relationship Paradox 

In a South African bank, the Foundation explored how client relationship goals conflicted with revenue pressures. The AI, simulating a wealth manager in a B-BBEE client development programme, revealed: 

“We’re trained to build long-term relationships with emerging market clients, to understand their unique needs and grow wealth inclusively. Beautiful vision. But my monthly targets don’t care about relationships. They care about assets under management and product sales. So when I meet a young Black professional who needs basic financial education before investing, I face a choice: spend hours building trust for future potential, or chase high-net-worth clients who’ll hit my numbers today. The scorecard makes that choice for me.” 

This contradiction explained why transformation initiatives in financial services often stalled despite genuine commitment. The bank subsequently piloted relationship-based metrics that valued client development alongside immediate revenue, increasing inclusive practices by 15%. 

How to Surface Your Organisation’s Contradictions 

The method is surprisingly straightforward. Any HR professional with access to ChatGPT, Claude, or similar AI tools can start tomorrow. 

Step 1: Pick Your Problem 

Choose something specific that should be working but isn’t. Maybe it’s a stalled culture initiative, a training programme with poor adoption, or a policy everyone ignores. Frame it as a hypothesis: “We’re asking people to do X, but the system encourages Y.” 

Step 2: Create the Scenario 

Build a detailed context. Don’t just say “a manager”—specify: 

  • Their role and team size 
  • What they’re measured on 
  • Recent changes affecting them 
  • Pressures they face daily 
  • Who they report to and what matters to that person 

The richer the context, the more revealing the AI’s responses. 
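
For those who like a template, here is a minimal sketch of how that context might be captured before the conversation starts. The field names and values are illustrative placeholders, not a prescribed schema; the same brief can simply be typed into a chat window.

```python
# A minimal persona brief for Step 2. Every value is an illustrative
# placeholder; never use real employee data (see the POPIA note under Step 3).
persona = {
    "role": "Regional sales manager with a team of twelve",
    "metrics": "Quarterly revenue target; weekly pipeline reviews",
    "recent_changes": "Completed a six-week coaching programme last month",
    "daily_pressures": "Two unfilled vacancies; end-of-quarter push",
    "reports_to": "A sales director measured on in-quarter revenue",
}

def build_system_prompt(p: dict) -> str:
    """Turn the persona brief into a role-play instruction for the AI."""
    return (
        "Role-play the following person in the first person. Stay in character, "
        "defend your choices, and reason from inside the system you work in.\n"
        f"Role: {p['role']}\n"
        f"Measured on: {p['metrics']}\n"
        f"Recent changes: {p['recent_changes']}\n"
        f"Daily pressures: {p['daily_pressures']}\n"
        f"Reports to: {p['reports_to']}"
    )

print(build_system_prompt(persona))
```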

Step 3: Have a Real Conversation 

This is where most people go wrong. Don’t ask the AI to analyse the situation. Ask it to live in it. For example: “You’re a bank branch manager. You’ve just completed customer experience training emphasising relationship building. Your branch is behind on product sales targets. A customer walks in needing financial advice but isn’t ready to buy anything. Walk me through your thinking.” 

Remember POPIA compliance—never use real employee information. 
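
If you would rather script the dialogue than type it into a chat window, the opening exchange might look like the sketch below. It assumes the openai Python package and an API key in your environment; the model name is a placeholder for whichever model you have access to, and the same prompt works just as well pasted directly into ChatGPT or Claude.

```python
# A sketch of Step 3 as an API call. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the persona and model are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder; substitute whichever model you use

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content":
            "Role-play, in the first person, a bank branch manager who has "
            "just completed customer experience training emphasising "
            "relationship building, at a branch behind on product sales targets."},
        {"role": "user", "content":
            "A customer walks in needing financial advice but isn't ready to "
            "buy anything. Walk me through your thinking."},
    ],
)
print(response.choices[0].message.content)
```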

Step 4: Challenge and Probe 

When the AI responds, don’t accept the first answer. Push back. Quote company values. Mention the CEO’s latest town hall. Reference the business case for the initiative. Watch how the AI defends its position—that’s where the real insights emerge. 
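
Scripted, that push-back becomes a loop: keep the full message history so the persona has to defend its earlier answers. A sketch, with the same assumptions as above; the challenges are examples you would replace with your own company's values and business case.

```python
# A sketch of Step 4's push-back loop. Assumes the `openai` package and an
# OPENAI_API_KEY in the environment; persona, model, and challenges are
# illustrative stand-ins for your own scenario.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

messages = [
    {"role": "system", "content":
        "Role-play, in the first person, a bank branch manager fresh from "
        "customer experience training, at a branch behind on sales targets."},
    {"role": "user", "content":
        "A customer walks in needing advice but isn't ready to buy anything. "
        "Walk me through your thinking."},
]

challenges = [
    "But our company values say we put long-term client relationships first.",
    "The CEO's latest town hall named advice quality as our differentiator.",
    "The business case shows relationship clients are worth far more over five years.",
]

for challenge in challenges:
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    answer = reply.choices[0].message.content
    print(f"--- persona ---\n{answer}\n")
    # Feed the answer back into the history, then push against it.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": challenge},
    ]

# One final call so the last challenge gets answered too.
final = client.chat.completions.create(model=MODEL, messages=messages)
print(f"--- persona ---\n{final.choices[0].message.content}")
```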

Step 5: Change the Lens 

Run the same scenario from different angles: 

  • The employee trying to execute 
  • The executive who launched the initiative 
  • The customer experiencing the result 
  • The HR partner caught in between 

Patterns will emerge. The same misalignments surface from every angle, just expressed differently. 
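
One way to automate the lens change is to loop the same scenario over several personas and keep the transcripts side by side, as in this sketch. The assumptions are the same as before, and the lens descriptions are illustrative.

```python
# A sketch of Step 5: run one dilemma through several lenses and collect the
# transcripts. Assumes the `openai` package and an OPENAI_API_KEY environment
# variable; scenario text and model name are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

scenario = (
    "The bank has rolled out customer experience training that emphasises "
    "relationship building, while branches remain behind on product sales "
    "targets. What tensions do you actually face, and what do you do about them?"
)

lenses = {
    "employee": "a branch advisor trying to apply the training day to day",
    "executive": "the executive who sponsored the training programme",
    "customer": "a customer who wants advice but isn't ready to buy",
    "hr_partner": "the HR business partner caught between both sides",
}

transcripts = {}
for name, persona in lenses.items():
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": f"Role-play {persona}. First person, stay in character."},
            {"role": "user", "content": scenario},
        ],
    )
    transcripts[name] = reply.choices[0].message.content
    print(f"=== {name} ===\n{transcripts[name]}\n")
```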

Step 6: Move from Insight to Action 

Look across the conversations. What themes repeat? Where do different roles hit the same walls? What unwritten rules govern behaviour? These patterns reveal which contradictions are truly fundamental versus surface-level frustrations. 
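
The cross-reading itself can be delegated to the model as a first pass. This sketch assumes the client, model, and transcripts dictionary from the Step 5 sketch above, and asks for only the contradictions that recur across perspectives; treat the output as a hypothesis to validate with your team, not a finding.

```python
# A sketch of Step 6: ask the model to read across all the transcripts and
# name the contradictions that recur from every angle. Assumes `client`,
# `MODEL`, and `transcripts` from the Step 5 sketch.
combined = "\n\n".join(f"## {name}\n{text}" for name, text in transcripts.items())

synthesis = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": (
            "Below are role-play transcripts of the same situation from "
            "different perspectives. List only the contradictions that appear "
            "in more than one transcript, phrased as 'asked to do X, but the "
            "system rewards Y'.\n\n" + combined
        ),
    }],
)
print(synthesis.choices[0].message.content)
```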

Tomorrow’s First Step: Your 30-Minute Experiment 

The beauty of this approach is its accessibility. Here’s exactly how to start: 

  • Choose one underperforming initiative (e.g., a diversity programme, customer service training, or agile transformation) 
  • Open any AI chat tool (ChatGPT, Claude, Gemini) 
  • Create a specific scenario with role, metrics, and pressures 
  • Spend 30 minutes in structured dialogue, pushing past surface responses 
  • Document the contradictions that emerge 
  • Share findings with your team to validate and plan next steps

What emerges often crystallises what everyone senses but struggles to articulate. The conversation might reveal that your customer service training fails because service metrics still emphasise speed over satisfaction. Or that innovation stalls because the budget process punishes anything without guaranteed returns. 

Why This Works When Traditional Methods Don’t 

Here’s the uncomfortable truth: most organisational diagnosis tools simply aren’t designed to find contradictions. Engagement surveys let people vent but rarely surface systemic issues. 360 feedback focuses on individual behaviour, not system design. Culture assessments measure values alignment but miss structural conflicts. 

These tools can tell you people are frustrated. They can’t tell you why smart, motivated people consistently make choices that undermine organisational goals. They miss the moment-by-moment calculations people make when caught between competing demands. 

AI simulation offers something different—a way to experience how misalignments actually play out. It’s the difference between reading about turbulence and feeling your plane shake. 

What This Means for HR’s Future 

This capability invites a fundamental shift in how HR creates value. For decades, the function has focused on building individual capability. But what if the highest-leverage work isn’t fixing people but fixing the systems they operate within? 

This reframes core HR work: 

Organisational Design stops being about reporting lines and becomes about alignment testing. Before any restructure, simulate how new designs will interact with existing metrics, processes, and culture. 

Change Management evolves from communication plans to contradiction mapping. Before launching any initiative, identify where it conflicts with current reality and address those conflicts in the design. 

Performance Management shifts from individual assessment to system diagnosis. When teams underperform, first ask: “What contradictions are they navigating?” before assuming they need development. 

Leadership Development expands to include contradiction navigation. Leaders need to recognise when they’re creating impossible situations for their teams and know how to resolve them. 

Culture Work becomes about surfacing and addressing the gap between espoused and enacted values—making the invisible rules visible and then deciding which ones to keep. 

Talent Strategy must address why diversity initiatives fail when cultural fit still drives decisions, or when succession planning conflicts with quarterly pressures. The AI might reveal how recruiters prioritise speed over inclusion due to hiring metrics, or how high-potential programmes clash with immediate performance demands. Understanding these systemic barriers enables targeted interventions—adjusting KPIs to balance short-term delivery with long-term talent development, creating metrics that reward inclusive hiring practices, or restructuring succession planning to align with both current performance and future capability needs. 

Navigating the Ethics and Limitations 

This method is powerful but requires thoughtful application. The AI isn’t truly understanding organisations—it’s pattern-matching across millions of examples. Those patterns reflect real dynamics, but they also carry biases and blind spots. 

Key considerations for responsible use: 

Privacy First: Never input real employee data. South African POPIA requirements make this non-negotiable, but it’s also simply good practice. 

Cultural Context Matters: AI models trained on global data might miss specifically South African dynamics. In diverse workplaces, AI may misinterpret cultural nuances—for example, viewing collectivist decision-making as inefficiency, or prioritising urban perspectives over rural realities. HR teams must engage diverse stakeholders to validate insights and ensure equitable interpretations that reflect racial, socioeconomic, and cultural complexities. 

Hypotheses, Not Truth: AI outputs are sophisticated guesses based on patterns. They need human validation, especially in complex cultural contexts. 

Augment, Don’t Replace: This method enhances human insight—it doesn’t substitute for it. The AI can surface patterns, but humans must interpret meaning and design solutions. 

Watch for Bias: If your organisation’s contradictions reflect broader societal biases, the AI will surface those too. Be prepared to confront uncomfortable truths. 

The Deeper Implications 

The Halcyon Equity Initiative Foundation continues exploring how AI can support organisational coherence. But the implications already seem clear. Many organisations are perfectly designed to produce exactly the dysfunction they experience. Not through malice or incompetence, but through accumulated contradictions that become invisible background noise. 

The question facing HR leaders isn’t whether their organisations contain these misalignments—they all do. The question is whether they’re prepared to see them clearly and address them systematically. 

Because ultimately, sustainable performance doesn’t come from teaching people to navigate contradictions better. It comes from designing organisations where success doesn’t require impossible choices. 

That’s the real transformation opportunity. Not another skills programme. Not a new competency model. But the harder work of creating systems that make sense. 
