AI Algorithms and Facial Recognition Technologies: Should We Trust the Process?

Surveillance technologies have been with us for over a decade, mostly in the shape of commercial applications on our smartphones, but they now appear more pervasively as cameras and CCTV systems with facial recognition capabilities, whose presence in private and public places has multiplied rapidly.

AI algorithms in live facial recognition (LFR) are now at the center of the discussion on ethical implications and legal considerations, not just in their present applications but also in potential future deployments such as predictive policing.

In sharing this opinion, it is not my intention to frame AI algorithms as a threat by default; on the contrary, I think it is important to be more aware of their potential, both positive and negative.

Experts around the globe, including independent regulatory bodies, have raised concerns about the current state of the art, should the extensive use of such technologies not be properly implemented and regulated. AI algorithms in LFR are rapidly developing and spreading across CCTV and intelligent camera applications, but little has been done to address issues such as misidentification, bias, and the protection of individuals' fundamental rights.

Let me give you some context, using an example from the UK. It's the early morning of a winter day, and you are walking down the road, passing a police van stationed there to test facial recognition tools. It's a cold day, and frankly, you don't want to pass by the police hotspot, so you decide to cover your mouth and face while walking.

Just after you pass the van, you are immediately surrounded by a group of officers who decided to intervene based on a match alert from their facial recognition tool. They stop you; they check your ID, and you comply with their requests.

However, because you have been searched, questioned, and warned, you feel threatened, and there is a chance you might vehemently protest at being stopped for no good reason. As a result, you receive a £90 fine for disorderly behaviour, namely 'shouting profanities in public view'. Only after being fined do you learn that you were never wanted by the police and are free to go. This is exactly what happened in 2019 to a middle-aged man, wanted for nothing, walking down a public street in London.

Let's share some data

This episode is just one of many. From 2016 to 2020, both the Metropolitan Police and South Wales Police performed a variety of trials of LFR in public places. These are the main figures published by the police, following a request under the Freedom of Information Act made by Big Brother Watch:

  • Since 2016, the Metropolitan Police's live facial recognition surveillance has been 93% inaccurate; in 2020, the newly developed technology had a 100% failure rate (the sketch after this list shows how such a rate is typically derived)
  • In Central London in February 2020, the Metropolitan Police intervened in 71% of misidentifications, stopping innocent individuals and showing a strong presumption by officers to intervene despite the high rate of misidentification
  • Since June 2017, South Wales Police has used facial recognition technologies for surveillance purposes 70 times, with 88% inaccurate matches
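
To make these headline percentages concrete, here is a minimal sketch of how a figure like "93% inaccurate" is usually derived: the share of alerts raised by the system that turn out to be misidentifications. The counts used below are hypothetical, purely for illustration, and are not the actual police figures.

```python
def misidentification_rate(false_matches: int, true_matches: int) -> float:
    """Proportion of all alerts that were misidentifications."""
    total = false_matches + true_matches
    if total == 0:
        return 0.0  # no alerts raised, nothing to measure
    return false_matches / total

# Hypothetical trial outcome: 26 false alerts against 2 confirmed matches.
rate = misidentification_rate(false_matches=26, true_matches=2)
print(f"{rate:.0%} of alerts were misidentifications")  # -> 93%
```

Note that this measures the error rate among alerts only; it says nothing about the people the system correctly ignored, which is why a system can look impressive on paper yet still stop mostly innocent passers-by.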

It seems, so far, that AI technology in LFR has not delivered sufficient results and accuracy to justify the extensive use of live facial recognition in a civil democratic society.

At this point, the main question is:

  • What are the main issues arising in the UK on LFR used for public security purposes?

Let's move through the analysis step by step, looking ahead to potential solutions.

What are the main issues arising in the UK on LFR used for public security purposes?

The extended use of biometric LFR raises multiple ethical concerns for any liberal democracy, including the UK. To navigate this reasoning, it is key to understand the dichotomy between the need for security on the one hand, and individual privacy and autonomy on the other.

Striking a balance between these two values is no easy game, and a draconian shift in either direction would not produce a healthy society. Although this tension between privacy and security will continue in the years to come, there is still often a lack of understanding of the values and principles at stake.

Recalling one of the latest reports from the Ada Lovelace Institute, published in 2019 and focused on public attitudes to the use of facial recognition technologies in the UK, the main concerns relate to accuracy; validity; bias and discrimination; transparency; privacy and trust; and security.

As reinforced by Matthew Ryder QC, some of these require particular attention to better address the concerns about the use of LFR: a) accuracy; b) discrimination; c) framework.

  • Accuracy: is the technology reliable? As mentioned above, trials of LFR technologies used for policing purposes in the UK have reported mismatches and false positives. These high error rates reflect the serious consequences of deploying the technology outside controlled development environments.
  • Discrimination: does the technology discriminate between different categories of people? The police in the UK have repeatedly claimed that their LFR technologies do not discriminate along ethnic lines. This is a complex issue, so it is worth mentioning that two researchers from the University of Essex used a range of tools and expertise to conduct a detailed, interdisciplinary analysis of the Metropolitan Police Service (MPS)'s trials of LFR and produced a report on the technical evaluation of the LFR schemes. It emerged that the performance of the AI algorithms used in LFR by the MPS could generate divergent outcomes if some ethnicities are underrepresented in the databases or are not statistically significant (a per-group audit of this kind is sketched after this list).
  • Framework: is there a proper legal framework to guarantee the safe use of LFR for the wider public? At present, only a blurry combination of statutory law, common law principles, case law, and the policies of independent agencies and regulators shapes the legal framework in the UK. Considerable weight is given to the opinions and reports provided by independent agencies; nevertheless, they lack the authority to halt, or at least limit, LFR deployment, and their mandates fall short of delivering meaningful oversight on this matter.
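
To see what the underrepresentation concern means in practice, here is a minimal sketch of a per-group bias audit: comparing the share of false matches across demographic groups. The group labels and alert records below are hypothetical, invented only to illustrate the calculation.

```python
from collections import defaultdict

# Each record pairs a demographic group with whether the alert was correct.
hypothetical_alerts = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def false_match_rate_by_group(alerts):
    """Share of alerts per group that were misidentifications."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in alerts:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in false_match_rate_by_group(hypothetical_alerts).items():
    print(f"{group}: {rate:.0%} of alerts were false matches")
# A large gap between groups (here 33% vs 67%) is the signature of the
# underrepresentation problem: fewer examples of a group in the data,
# worse accuracy for that group.
```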

In light of this rapidly expanding use by the police, public authorities, and companies despite the lack of appropriate regulation, the Ada Lovelace Institute announced an independent review of the governance of biometric data, led by Matthew Ryder (a senior QC at Matrix Chambers and former Deputy Mayor of London for Social Integration, Social Mobility and Community Engagement).

In undertaking the review, the team of experts in law, ethics, technology, criminology, genetics, and data protection will examine the current regulatory framework for biometrics, including faceprints, to identify solutions that protect people from the misuse of their biometric data.

The "Ryder review", which is expected to report its findings in autumn 2021, aims to address three main aspects: i) updating and reassessing the regulatory and policy framework for biometric data; ii) ensuring an independent and impartial analysis led by data and evidence; iii) ensuring that proposals for regulatory reform take account of social justice and human rights.

Alongside the review just mentioned, the Citizens' Biometrics Council – which brings together a diverse range of members of the public from different social, economic, and political backgrounds, with different perspectives on technology – has given voice to the debate on biometrics, and especially facial recognition technologies, in a final report with three recommendations: i) developing more comprehensive legislation and regulation for biometric technologies; ii) establishing an independent, authoritative body to provide robust oversight; iii) ensuring minimum standards for the design and deployment of biometric technologies.

Should We Trust the Process driven by AI Algorithms?

In the end, what is interesting to acknowledge is that there is no unconditional support from the British public for police deployment of facial recognition technologies.

An interesting statistic emerged from a survey on public attitudes to the use of these tools: most people think the police should use LFR only if appropriate safeguards for individuals' rights and best practices are in place, and in a recent survey almost 55% of respondents said the government should limit police use of facial recognition to specific circumstances.

Direct human intervention in AI-generated reports, and the mitigation of outcomes within their context, should be regarded as fundamental factors in this development. The safeguards offered by Article 22 GDPR support this view, stating that "the data subject shall have the right not to be subject to a decision based solely on automated processing".
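
As an illustration of what such a safeguard could look like operationally, here is a minimal sketch of a human-in-the-loop gate: an LFR alert on its own never triggers an intervention; a human reviewer must confirm it first. The threshold value, field names, and review step are assumptions for the sketch, not any police force's actual workflow.

```python
from dataclasses import dataclass

@dataclass
class LfrAlert:
    subject_id: str
    similarity: float  # the model's match confidence, between 0.0 and 1.0

def human_confirms(alert: LfrAlert) -> bool:
    """Placeholder for a trained reviewer comparing the two images."""
    answer = input(f"Confirm match for {alert.subject_id} "
                   f"(similarity {alert.similarity:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def decide_intervention(alert: LfrAlert, threshold: float = 0.90) -> bool:
    # The automated score alone is never sufficient grounds to stop someone:
    # below the threshold the alert is discarded outright, and above it a
    # human must still confirm before any action is taken.
    if alert.similarity < threshold:
        return False
    return human_confirms(alert)
```

The design choice matters: the human review sits inside the decision path, not after it, so the system cannot act on automated processing alone, which is the spirit of the Article 22 safeguard.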

What does this tell us in a nutshell? Well, the British public clearly shows some lack of trust in the police and public authorities over the use of facial recognition technologies. Therefore, despite all the excitement about the latest AI solutions and advanced biometric systems in our daily lives, a blind belief in technology could have dangerous consequences.

To name a few: i) limiting our fundamental rights as individuals; ii) increasing discrimination between people; iii) creating a society where dystopian scenarios and reality as we know it merge into one, if we do not mitigate these technologies' interactions within a context that is of service to the community.

Author

  • Marco Mendola

    Marco is a Community & Customer Success @Majoto and Strategic Advisor @MetaCourt. He obtained a master's degree in law in Italy with a focus on legal informatics, followed by an Adv. LL.M. in Law & Digital Technologies at Leiden University in the Netherlands, where he researched surveillance. He is a lawyer completing his professional qualification in England and Wales. Founder of MM3 Legal, he is a legal tech enthusiast and legal design professional. His mission is to empower the community through educational content and to create better business relationships by delivering legal services where trust and accessibility are core values.

