Future of AI

AI will reshape the workforce – but are employers ready?

By Hannah Mahon, Partner, Eversheds Sutherland

AI has the potential to transform how work is done. By automating process-driven, repetitive tasks, AI will create new job opportunities, enhance productivity and free up the workforce to focus on more strategic and creative aspects, as well as revenue generation. Many experts predict that increasing AI use will prompt a reshuffling of the workforce, as demand for some types of job role increases while demand for others decreases. The World Economic Forum Future of Jobs Report 2025 predicts that, by 2030, AI and other technologies will create 170 million new jobs globally, while displacing 92 million.

As AI begins to reshape the workforce, how can employers best prepare for change? Although every organisation will be at a different place on their AI journey, with some more advanced than others, below are five key areas that all employers can consider now to ready their workforce for a smooth and compliant transition into their next phase of AI adoption.

Training and development 

As AI use increases, job roles will evolve and demand new skillsets. Upskilling and reskilling the workforce will be critical to future-proofing every organisation. All employers will need to consider: how is learning and development aligned to AI strategy, including risk training?

For those employers who fall within the scope of the EU AI Act (i.e. if their establishment or location is within the EU, or outside the EU but with AI system output used in the EU), from February 2025, the Act requires that all employees and operators of AI systems have a sufficient level of AI literacy. This includes having an understanding of the technical aspects, ethical considerations and practical applications of AI systems.

Managing employee relations

In some circumstances, there may be legal information and consultation obligations to comply with prior to the launch of new AI tools, and employers will need to understand if and when these obligations apply. Looking ahead, the UK government’s Plan to Make Work Pay commits to requiring consultation and negotiation on the introduction of surveillance technologies (for example, in the context of AI, tools which monitor employee productivity/performance). This could effectively prevent employers from implementing these types of tools without the agreement of relevant representatives, so employers should stay abreast of developments in this area.

Regardless of whether information and consultation obligations apply, carefully considered workforce messaging and communications strategies will be key to transparency and to building workforce trust and confidence when launching new AI tools, ensuring workers understand how AI is being used and why. An initiative designed to improve productivity could easily have the opposite effect if staff become disengaged.

This will become particularly important as AI agents are increasingly deployed within organisations and employees are working alongside, or as managers of, multiple AI agents. Whilst advances in AI are undoubtedly exciting, they can also be daunting for some employees who are concerned about what this means for their roles. It will be important to clearly communicate the remit of any AI agents and explain where these will augment roles and provide benefits (e.g. time-saving, automating routine tasks, as above).

How are new AI tools communicated to your workforce and how could this be improved as your organisation moves forward on its AI journey?

Policies 

Employers should ensure that existing policies are updated to reflect advances in technology and new ways of working. Depending on how it is being used, AI has the potential to impact all HR policies in some way. As a starting point, however, data privacy, IT and disciplinary policies will all likely require updates.

If your organisation permits the use of GenAI tools, does it have a GenAI policy setting out guidelines for employee use? For example, are there limits on what information employees can input into GenAI tools, considering issues such as data protection, confidentiality and, depending on the circumstances, the potential loss of legal privilege if legally privileged information is input? This could include ensuring that, where any internal or external lawyers use AI tools as part of their preparatory work (such as using AI for legal research or summarising documents), anything input, and the AI’s output, remains legally privileged (considering the “working paper” rules). Setting clear guidelines for use can help to reduce associated risks and confirm the consequences of employee misuse, including the potential for disciplinary action.

Employment contracts

As AI use increases, employers may need to ask employees to carry out their roles differently, particularly if aspects of their roles are taken on by AI and/or employees are required to focus on new areas. When implementing new AI tools, to what extent do employers have the authority to change job roles, or is some form of consent or consultation required?

Changes within managerial prerogative, such as altering methods of working, setting new organisational goals, or asking employees to use different technology, typically do not require employee consent as these changes do not usually alter any contractual terms and conditions of employment (but transparency and clear communication will still be important – see employee relations above). A large proportion of AI-related change could fall within this category.

For more significant changes to ways of working that affect an employee’s terms and conditions of employment – such as changes to key employment duties, hours/place of work (etc.) – employers will need to consider:

  • the extent to which the employment contract, or any collective agreement, permits the change to be made without seeking additional consent; or
  • whether employee consent is required to lawfully make the change to terms and conditions.

In the absence of employee consent, or the contractual right to make the change, employers may need to follow a legal process to effect the change (including but not limited to following Acas’ statutory Code of Practice on dismissal and re-engagement, if applicable).

Understanding this differentiation will become more critical for employers in light of new contractual change reforms proposed by the Employment Rights Bill (“the Bill”). The Bill proposes to introduce new “fire and rehire” provisions which, in broad terms, will make it very difficult for employers to change specified terms and conditions of employment (including pay, working hours, and more) in the absence of employee/union consent and where the change is not otherwise permitted by the contract, save for in very narrow circumstances (essentially where the organisation is facing extreme financial difficulties).

Critics of the Bill’s proposals argue that the new legislation could make it difficult for employers to change certain terms and conditions of employment for economic, technical or organisational reasons in the ordinary course of business, including where contractual changes may be needed to implement or reflect new technology. Employers may wish to consider and strengthen, where possible, the wording of current contracts, consultation processes and negotiation strategies in preparation for these reforms.

Assessing and mitigating employment law risks

Although AI is an exciting prospect, it also has the potential to create legal and commercial risks if it is not implemented lawfully and fairly. Key employment law risks include the potential for discrimination and bias, data protection and privacy considerations, employee relations (as above), industrial action, and more. The potential for such risks may only increase as the workforce is tasked with using, or interacting with, AI technology. AI may also increasingly be tasked with supporting decision-making relating to the workforce, including assessing performance, carrying out pay reviews, and sifting and selecting candidates, all of which can invite risk if employers do not put the right guardrails in place.

Testing new AI tools will be key to identifying legal risks and putting in place measures to mitigate these – both at the procurement/development stage and on an ongoing basis. Risk mitigation strategies include: human in the loop checking; robust governance frameworks; clear warranties and indemnities in contracts with suppliers; and having a channel for reporting concerns and ensuring these are promptly resolved.

Such strategies will be increasingly important as AI has more autonomy, including the use of AI agents.

Conclusion and a look to the future

AI’s integration into the workplace offers significant opportunities; however, as above, employers will need to carefully plan for, and manage, change. By investing in training, being transparent with the workforce about AI use and maintaining good employee relations, keeping policies up to date, considering the impact of any change on employment terms, and assessing and mitigating legal risks, employers can successfully navigate the transition to an AI-driven workforce and maintain a positive working environment.

As AI use increases and the workforce evolves, what might the future world of work look like? Could we see a future in which we are all managers of AI agents? This certainly seems a possibility. Could it be possible for certain AI tools to gain employment status, obtaining rights currently only afforded to the human workforce such as an employment contract, the right to raise a grievance, or join a trade union? This remains to be seen, but the workplace certainly has an exciting future where AI is set to take a central role.
