
AI is fundamentally changing the way we work, improving efficiency, driving innovation and increasing productivity. In fact, few areas of business remain untouched, and recruitment is no exception.
This year’s ‘Future of Recruiting’ report from LinkedIn found that 37% of talent acquisition professionals are already actively integrating or experimenting with Generative AI (GenAI) tools, compared to 27% last year – and that figure is only likely to rise.
GenAI tools such as ChatGPT, DALL-E and Gemini have become increasingly popular, often being used to generate new text, video and audio content in a matter of minutes.
However, there are concerns around the technology's impact on the hiring process, particularly when it comes to candidates gaining an unfair advantage.
Concern over the ethical use of AI in recruitment assessments
Talogy recently sought the views of 560 hiring managers from across the globe and found that a significant proportion (65%) are concerned about candidates using GenAI to cheat on recruitment assessments.
Of course, using GenAI to cheat during an assessment is likely to undermine the psychometric validity of the outcome, allowing unsuitable candidates to gain disproportionate ground in the selection process.
This concern has led to multiple discussions around how to adapt hiring strategies and processes in order to maintain the integrity of the assessment process. Both the Society for Industrial and Organizational Psychology (SIOP) and the Society for Human Resource Management (SHRM) have issued recommendations for AI-based assessments to adhere to ethical practices. These recommendations emphasise key considerations such as building trust, prioritising transparency, encouraging accountability and protecting data privacy.
The core purpose of a talent assessment is to make an informed decision on whether the candidate is likely to succeed in the role and be a good fit for the organisation, so transparency and accountability are fundamental to success.
With this in mind, it is crucial for hirers and candidates alike to use GenAI with great care and due consideration.
Only a small proportion of job seekers are actually likely to use GenAI to cheat
Despite a significant number of hiring managers expressing concern, just 15% of the 702 job seekers and early career professionals we surveyed said they were likely to use GenAI when completing recruitment assessments.
At the end of the day, candidates are looking for roles and organisations that meet their needs and represent their values, so the majority understand that although GenAI may provide more opportunity to cheat, it is unlikely that this will serve them well in the longer term.
It is clear, however, that using GenAI to cheat in recruitment assessments is a real and acknowledged challenge, particularly in remote, unsupervised settings, and it is something that needs to be carefully monitored and addressed on an ongoing basis.
From the recruiter's perspective, it is imperative that future assessments continue to match the right candidate to the right job.
What can be done to minimise misuse of GenAI during assessments?
It can be fairly obvious when a candidate has had help from GenAI on an assessment. Common signs include the use of stilted or overly formal language, inconsistencies in answers, difficulty in expanding on details and excessive use of punctuation. However, as GenAI improves, these ‘tells’ may become harder to spot.
It’s a small step, but clearly stating that the use of GenAI is forbidden can effectively deter the majority of candidates from cheating. Our research found that 42% of candidates think GenAI tools will be allowed, so simply making it clear that this is not the case is a straightforward first step.
There are other, more specific initiatives that can be deployed such as:
- Monitoring score trends: check for significant differences in scores between candidates and assessments. For instance, watch for a sudden spike in scores, particularly on questions that require critical thinking or original written responses.
- Using technology: add simple features to the assessment, such as disabling copy and paste, or even consider remote proctoring to monitor test-takers. Perhaps ironically, there is AI-powered software that can help detect AI use, looking for behaviours such as switching between screens or unusual keystroke patterns.
- Changing test formats: interactive assessments, or those that require more complex responses, are harder to cheat on than those that simply ask for a ‘right or wrong’ answer.
- Introducing an honesty contract: Talogy recently conducted an experiment with more than 2,000 assessment participants which showed that when no honesty contract was in place, 28% of candidates used some sort of assistance (such as GenAI, search engines or asking family and friends). When an honesty contract was introduced, that figure dropped to just 13%.
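To make the first bullet concrete, the score-trend check can be sketched in a few lines of code. This is a minimal illustration only, assuming you hold a history of scores for a given assessment; the z-score threshold of 2.0 is an arbitrary example, not a recommended standard, and a flagged score is a prompt for human review, never proof of cheating.

```python
from statistics import mean, stdev

def flag_score_spikes(scores, z_threshold=2.0):
    """Flag scores that sit unusually far above the cohort average --
    a possible (not conclusive) sign of outside assistance.
    The threshold is illustrative and should be tuned per assessment."""
    if len(scores) < 3:
        return []  # too little history to judge
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []  # all scores identical; nothing stands out
    return [s for s in scores if (s - mu) / sigma > z_threshold]

# Hypothetical cohort of critical-thinking scores with one outlier
history = [62, 58, 65, 60, 63, 59, 61, 97]
print(flag_score_spikes(history))  # the 97 stands well above the rest
```

In practice a recruiter would run a check like this per assessment and per question type, since a spike on an essay question is more telling than one on multiple choice.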
Because the purpose of a talent assessment is to determine a candidate’s potential for success in a specific role and within a particular organisation, the human element it measures cannot realistically be replicated by AI.
Guidance, frameworks, legislation and policies continue to emerge
There is a growing need for proper guidance when it comes to the ethical use of AI in business and this extends beyond talent management.
In the US, AI legislation is currently underway across both federal and state jurisdictions, while in Europe, the EU’s AI Act, which began phased implementation in 2024, stands as the world’s first comprehensive legal framework on AI. In fact, most countries are developing – or have already implemented – ‘use of AI’ frameworks and policies.
For multinational firms with high-volume hiring strategies, navigating this complex and evolving patchwork of regulations across different markets presents a significant challenge.
Embracing the future of AI in recruitment
Despite presenting many new considerations, especially around integrity, the rapid evolution of AI in recruitment also opens up new opportunities for progress.
If we proactively address the challenges and remain open to integrating AI into our working practices, we can ensure that recruitment remains a fair, insightful and human-centric process.
The goal isn’t to replace human judgement, but to empower both recruiters and candidates to arrange better matches, more fulfilling careers and thriving workplaces.