Future of AI

Caught in the Loop: AI, Graduate Recruitment and the Crisis of Authenticity

By Ella Robertson McKay, Managing Director, One Young World

The AI skills gap is not closing. It is accelerating. And contrary to many optimistic forecasts, this is not merely a matter of supply failing to meet demand. The gap is being widened by deeper, systemic contradictions – ones playing out most starkly in graduate recruitment, where the twin pressures of automation and authenticity are clashing in ways that are eroding trust on all sides.

The growing challenge of the graduate job hunt: AI to the rescue?

Across the UK and US, the number of applications for graduate roles is rising sharply, while the number of available jobs continues to fall. In the UK alone, graduate vacancies have declined by two-thirds since 2019. As rejections mount, the logic of the system changes. I remember it all too well: staying up late researching companies I didn’t even want to work for, shaping my personality around endless competency questions, and crying over rejections from roles I never truly wanted. The process didn’t just test my ability; it chipped away at my self-esteem. I was once told by two male law partners that I ā€œexpressed my opinions too stronglyā€ to be offered a job.

A friend’s theory was that if they rolled the dice enough times, they were bound to get a job offer eventually. If that’s how PPE students at Oxford feel, I can only imagine how much heavier the burden is for graduates without the same safety nets. In today’s context, the temptation to turn to large language models becomes overwhelming. You can spend two hours crafting a tailored cover letter or let ChatGPT draft one in seconds. When you know rejection is the most likely outcome either way, expedience starts to feel like strategy.

Caught in the AI echo

But recruiters know what AI sounds like. Some even brag about their ability to detect it. Generic phrasing, over-polished responses, and oddly impersonal enthusiasm are red flags rather than assets.

What we are seeing is the collapse of trust in slow motion: applicants feel they have to use AI to compete, while employers, sensing AI use, discount the applicants they believe used it. The result is a system in which both sides are performing, but neither is persuaded.

At One Young World, we see this play out on a global scale. Last year we received more than 70,000 applications for just 500 places on our flagship scholarship programme. Our team continues to have a human read every application, and we estimate that over half of the submissions involved some form of AI-generated content. In some cases, the tools are sophisticated enough to create fluent and compelling responses, but the technology is far from perfect. We even receive answers to the question ā€œWhy do you want to attend One Young World?ā€ that include the response: ā€œI am an AI chatbot and cannot attend conferences.ā€

The deeper concern is that even when AI-generated responses are technically correct, they tend to flatten individuality: stripping out personal cadence, diluting tone, and applying an identical structure that makes every application sound the same. What’s left is often polished but generic; paragraphs that read fluently but reveal nothing of the person behind them. Our Head of Programmes can practically smell ChatGPT, and recruiters surely can too.

The real issue at hand: a crisis of confidence and critical thinking

It’s more than a stylistic problem or lack of effort; our real concern is that we are seeing a loss of confidence among young people. Faced with rejection after rejection, many are becoming reluctant to write for themselves at all. They feel it is inevitable that AI will outsmart them and that their unique skills and experience are devalued. The damage is cumulative, not only to applicants’ self-worth, but to the quality of our talent pipelines.

Sam Altman, CEO of OpenAI, has referred to students treating ChatGPT as their ā€œoperating systemā€. It was meant as an observation, but it reads as a warning. We are raising a generation who are increasingly fluent in manipulating prompts, but less comfortable when it comes to grappling with ambiguity, failure or original synthesis. At precisely the moment when the world needs new thinking about how to use AI responsibly, we are outsourcing that very thinking to the tools themselves.

Inadequate organisational solutions

The irony is that for all this surface-level engagement, true organisational understanding of how to implement AI remains severely limited. According to Nash Squared, 89% of UK tech leaders say they are piloting or investing in AI, but the very nature of being midway through the revolution is that nobody has developed deep capabilities – yet. In many cases, AI is seen as an adjunct – a smarter spreadsheet, a faster search engine – rather than the disruptive force it is.

Nowhere is this more apparent than in sectors where high-status professionals feel under threat. Legal, publishing, and engineering institutions are vehemently resistant. Rather than reskill fee earners, they are framing AI as a project management tool, ring-fencing core functions as if they can be insulated from change. But disruption rarely respects boundaries. Pretending AI will leave skilled professions untouched is not just naive, it is a disservice to the very people those sectors claim to protect.

It is also telling that many industry associations are steering clear of public AI discourse altogether. Some are quietly commissioning research. Others are engaged in defensive lobbying. But few are offering serious guidance to their members on how to evaluate, adopt, and adapt to the tools that are already reshaping workflows and will fundamentally alter the work itself. The silence is creating a vacuum. And into that vacuum rushes confusion, misinformation, and a growing cynicism among the next generation of workers.

Preserving originality and credibility requires clarity

The way forward must include clarity. Clearer expectations around when AI is acceptable in applications. Transparent criteria from recruiters. Investment in AI literacy for all levels of employees that goes beyond prompt engineering. Open and regular company-wide consultations on the obstacles employees are facing and the opportunities they identify for AI use. And perhaps most urgently, a cultural shift that gives young people permission to sound like themselves.

Because the real skills gap we are facing is not only technical. It is human. Bridging it will require the courage to face reality, not retreat from it. The disruption AI will bring is likely to be deeper and more far-reaching than we can yet imagine. No chatbot, however advanced, can truly prepare us.
