
Will Clearview’s £7.5m fine become a turning point for facial recognition?

Facial recognition technology is being applied in a wide range of products and services. But Nigel Jones, ex Head of Legal at Google EMEA and co-founder of the award-winning Privacy Compliance Hub, argues we shouldn’t sleepwalk into a surveillance society

When the UK’s Information Commissioner’s Office (ICO) fined controversial facial recognition company Clearview AI £7.5m this week, the UK became the latest in a long line of countries to impose such penalties. The Italian, French and Australian data protection authorities have all fined Clearview and/or banned it from collecting and processing further images of their residents, and ordered it to delete those it already holds. The company has amassed a database of 20 billion images of people’s faces scraped from Facebook, other social media platforms and the wider web. Clearview also settled a case in the US earlier this month in which it agreed not to sell its database to most private companies.

Clearview’s fine has grabbed the headlines, but facial recognition software itself is nothing new. As the number of applications rapidly escalates, however, it threatens our fundamental human right to privacy. In Ireland, for example, the Minister for Justice has announced that police will gain new powers to use facial recognition and AI to identify criminals within minutes. Ireland might want to consider the experience of South Wales Police, which recorded a 95% false positive rate after testing its own facial recognition system for 55 hours. In 2019, the UK’s Police National Database held around 20 million facial images, many of them of people who had never been charged with or convicted of an offence.

A surveillance state

Is this right or acceptable? The law says we all have a fundamental human right to privacy, but if facial recognition cameras are used by institutions such as the Metropolitan Police at Notting Hill Carnival, at a Remembrance Sunday commemoration and at Westfield shopping centre, are our rights being respected?

Even in and around our homes, Ring doorbells and competitor products have evolved into a global CCTV network, providing the police with a new way to fight crime. As well as collecting video, these devices also have microphones, some of which are powerful enough to capture audio from people walking past on the pavement.

Likewise, surveillance technology is expanding beyond just recognising people’s faces. Biometric technologies already exist to identify someone’s fingerprints, voice, iris and even the way they walk. Mastercard recently announced a new feature called ‘smile to pay’, which it says will accelerate transaction times.

Creeping scope

When the Covid-19 pandemic was at its height, surveillance technology was accepted as a necessary evil to track the virus’s movements and limit its spread. Billions of people around the world had their movements logged by various test and trace apps, and authorities used facial recognition technology to check whether people were wearing masks on public transport or observing social distancing rules. In Australia, software was trialled to check whether people were staying home during quarantine. More than half (54%) of Brits surveyed in 2020 said they were happy to sacrifice some of their data privacy to shorten the length of lockdown.

But that doesn’t mean surveillance should automatically become a way of life. With the move to hybrid working, workplace surveillance technology, ranging from the monitoring of emails and web browsing to video tracking and key logging, has become commonplace. According to the trade union body the TUC, 60% of workers say they have been subject to some form of surveillance or monitoring by their employer, with three in ten saying this has been the case since the start of the pandemic.

In some instances, regulators are taking action where these practices breach existing GDPR legislation. H&M in Germany, for example, was handed a €35.2 million fine in 2020 for excessive surveillance of employees, and in the UK, Barclays is under investigation for its use of software to track staff computer activity.

Invasive technology isn’t progress

The swift expansion of artificial intelligence and machine learning means technology is moving beyond what many would have thought possible just five years ago. But just because this technology exists doesn’t mean it should be applied in every situation. The ICO stepped in last year after nine Scottish schools adopted facial recognition technology as a means of speeding up their lunch queues, encouraging a less intrusive and more proportionate approach.

Almost seven in 10 people say they’d avoid doing business with a brand whose data usage policies were too invasive. Privacy is a fundamental human right and, under the UK GDPR, it’s protected by law. Startups developing these new technologies need to do so through an ethical lens, asking questions such as ‘what does this mean for privacy?’ every step of the way.

It remains to be seen whether Clearview AI will pay the sizeable fine imposed by the ICO, but it certainly cannot operate in the UK in future under its current guise. Let’s hope this represents a line in the sand for other organisations looking to make money from biometric information. Let’s have a real national debate about the kind of society we want to live in.

Author

  • Nigel Jones

    Nigel Jones is the co-founder of the Privacy Compliance Hub, a no-nonsense platform created by two ex-Google lawyers that makes data privacy compliance easy for all organisations to understand and commit to.
