
Workplace surveillance has moved from the margins to the mainstream of modern working life.
A recent survey reported that around one-third of UK employers now use digital monitoring or “bossware” to track staff, alongside more traditional controls such as swipe cards and CCTV. The Institute for Public Policy Research has warned that this growing reliance on surveillance risks undermining workers’ rights and wellbeing. Together, these findings highlight why regulators and courts are increasingly being asked to consider whether technologies such as live facial recognition (LFR) are compatible with fundamental rights.
That question is currently before the courts. The Equality and Human Rights Commission (EHRC) has been granted permission to intervene in a judicial review of the Metropolitan Police’s use of LFR. The case will examine whether the practice complies with the European Convention on Human Rights, particularly the rights to privacy, freedom of expression and freedom of assembly. Although the claim concerns policing, its outcome is likely to have implications for private-sector employers considering biometric or AI-enabled monitoring.
How live facial recognition works
LFR compares facial images from live video footage to a watchlist and flags matches for human review. It is powerful, quick and scalable. Those same qualities make it potentially intrusive and high-risk if deployed without appropriate safeguards.
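In engineering terms, that matching step can be thought of as comparing numerical "embeddings" of faces against a watchlist and escalating anything above a similarity threshold. The sketch below is purely illustrative: it assumes a hypothetical upstream model has already converted each face into a fixed-length vector, and the watchlist entries, threshold and data are invented for the example rather than drawn from any particular vendor's system.

```python
import numpy as np

# Hypothetical watchlist: identity -> face embedding (a fixed-length vector
# assumed to come from an upstream face-recognition model, not shown here).
watchlist = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def cosine_similarity(a, b):
    """Score how alike two embeddings are (closer to 1.0 means more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_frame(frame_embedding, threshold=0.9):
    """Compare one live-frame embedding against every watchlist entry and
    return candidate matches above the threshold for human review."""
    candidates = [
        (identity, cosine_similarity(frame_embedding, reference))
        for identity, reference in watchlist.items()
    ]
    return sorted(
        [c for c in candidates if c[1] >= threshold],
        key=lambda c: c[1],
        reverse=True,
    )

# An embedding extracted from a live video frame (random here for illustration).
live_embedding = np.random.rand(128)
for identity, score in screen_frame(live_embedding):
    print(f"Possible match: {identity} (similarity {score:.2f}); refer to a human reviewer")
```

The design point to note is that the system only surfaces candidate matches; the final decision rests with a human reviewer.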
Employers may not be creating police-style watchlists, but any system that scans workforce biometrics is likely to raise questions from workers about transparency, necessity, proportionality and fairness.
The risk of bias
One of the most prominent issues with the technology to date has been its accuracy. Research has shown that facial recognition tools can produce higher error rates for people with darker skin tones. This is not just a technical quirk: it is a potential basis for unlawful discrimination.
The Employment Tribunal case of Manjang v Uber Eats illustrated this point. The claimant, supported by the EHRC, alleged that Uber Eats’ use of facial recognition technology (FRT) to verify worker identity left Black and non-white workers more vulnerable to losing their jobs.
The FRT used in that case differs from LFR. With FRT, the individual participates directly and knows why and how their data is used. LFR, by contrast, can automatically capture biometric data from anyone within range of the camera. Although the case settled before a final hearing, it highlighted how bias in AI-driven tools could underpin legal claims.
Regulatory enforcement in the private sector
In 2024, the UK’s data regulator, the Information Commissioner’s Office (ICO), ordered Serco Leisure to stop using facial recognition and fingerprint scanning to monitor staff attendance at leisure centres. The regulator found there was no lawful basis for the processing and that the system was neither necessary nor proportionate.
Serco said that the technology was “well received by colleagues”, but the ICO was unmoved. Following the enforcement notice, other operators reviewed or abandoned similar systems. This is a clear signal that regulators see workplace use of biometric data as high-risk and are prepared to act.
The legal framework
Employees cannot bring stand-alone claims under the Human Rights Act 1998 against private employers. However, human rights principles inform how regulators and tribunals assess fairness in other causes of action. Discrimination claims under the Equality Act 2010 and complaints under UK data protection law will often reflect the same underlying rights.
Much of the UK’s equality, human rights and data protection law was written in an earlier era, before the proliferation of biometric surveillance and algorithmic management became part of working life. Applying those laws to modern technologies often exposes tensions that legislators did not anticipate. For now, employers must work within a patchwork of overlapping obligations rather than a comprehensive AI statute.
Questions every employer should ask
In view of the ICO’s approach to this issue and the judicial review now pending against the Metropolitan Police, employers considering facial recognition in the workplace should address several key questions as part of the mandatory data protection impact assessment (DPIA):
- Necessity: If relying on legitimate interests as the lawful basis for processing, what specific problem is being solved, and is facial recognition the least intrusive, proportionate solution?
- Choice: Are employees free to opt out, with a workable alternative available?
- Accuracy and bias: What evidence exists that the system works equally well across different groups?
- Data minimisation: Is the biometric data limited to what is strictly necessary and securely stored?
- Accountability: Could the employer explain the rationale and safeguards to a regulator or tribunal?
Practical steps
There are a number of practical measures that may help further reduce risk:
- Consult early: Involve the organisation’s Data Protection Officer, HR and legal team at the design stage.
- Conduct robust DPIAs: Map data flows, identify risks, and document mitigation strategies.
- Offer genuine alternatives: Provide non-biometric options such as ID cards, without stigma or disadvantage.
- Test in context: Pilot systems, monitor error rates across demographic groups (a minimal sketch follows this list), and suspend use if necessary.
- Tighten vendor contracts: Secure commitments on accuracy, bias monitoring, audit rights and incident reporting.
- Communicate transparently: Ensure employees understand the purpose, safeguards and rights available to them.
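As a hedged illustration of the "test in context" step above, the sketch below shows one simple way a pilot could compare verification failure rates across demographic groups. The records, group labels and the doubling trigger are assumptions made for the example, not a legal or statistical standard, and any real exercise would need to sit within the DPIA and appropriate professional advice.

```python
from collections import defaultdict

# Illustrative pilot log: each record notes the worker's self-reported group
# (assumed to be collected lawfully and voluntarily) and whether the
# face-recognition check failed to verify them.
pilot_results = [
    {"group": "group_a", "failed": True},
    {"group": "group_a", "failed": False},
    {"group": "group_a", "failed": False},
    {"group": "group_b", "failed": False},
    {"group": "group_b", "failed": False},
    {"group": "group_b", "failed": False},
]

def failure_rates(records):
    """Return the share of failed verification attempts per demographic group."""
    totals, failures = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["failed"]:
            failures[record["group"]] += 1
    return {group: failures[group] / totals[group] for group in totals}

rates = failure_rates(pilot_results)
best = min(rates.values())

# Flag groups whose failure rate is materially worse than the best-performing
# group; the doubling trigger is an arbitrary illustration, not a legal test.
for group, rate in rates.items():
    if rate > 0 and (best == 0 or rate / best >= 2):
        print(f"Review needed: {group} fails {rate:.0%} of checks (best group: {best:.0%})")
```

Where such a disparity appears, the practical response suggested in the list above applies: suspend use, investigate, and document the decision.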
What comes next
The EHRC’s intervention in the judicial review of the Metropolitan Police’s use of LFR will bring further scrutiny of how the technology is deployed in the UK. At the same time, the ICO is already active in regulating workplace use, and litigation is likely to test bias and discrimination claims more directly.
The ICO has recently reiterated that any use of facial recognition must still satisfy the existing requirements of lawfulness, fairness and proportionality, and meet the other standards set out in data protection legislation. Independent bodies such as the Ada Lovelace Institute continue to call for a more comprehensive governance framework for biometric technologies, highlighting the gap between current piecemeal regulation and the scale of modern surveillance.
Employers should not assume that the absence of dedicated legislation governing AI in the workplace means freedom to experiment. In reality, existing frameworks already provide regulators and tribunals with tools to intervene where wrongdoing is alleged.
Key takeaway
Facial recognition technology can offer convenience and security, but it can expose businesses to legal, regulatory and ethical challenges. The risks are greatest when the technology is inaccurate, intrusive or imposed on staff without a clear alternative.
For UK employers, the lesson is clear. Deploying LFR requires careful consideration of the equality, data protection and reputational dimensions at stake. In this context, necessity, fairness and proportionality are not abstract principles: they are the tests against which use of the technology will be judged.



