Why biometric AI holds the key to the UK’s Online Safety Act

By Clive Summerfield, Founder & CEO of FARx

The next phase of the UK’s Online Safety Act, which prescribes how content platforms may serve children, has finally come into force following years of campaigning and political debate. As of July, any tech platform operating in the United Kingdom is legally obliged to remove harmful content, such as pornography and material encouraging self-harm, from children’s feeds.

This set of rules, referred to as the first Children’s Safety Codes of Practice (COPs) and created to protect children as well as adults online, also requires content-sharing platforms (social media sites, search engines and so on) to proactively remove illegal content, such as material relating to child sexual abuse or terrorism.

Despite overwhelming support for the Act, many campaigners say these new rules don’t go far enough. For example, content shared on messaging apps or through AI chatbots, which are increasingly capturing children’s attention, isn’t covered.

To enforce these rules, tech platforms now have a responsibility to verify the ages of their users. If they fail to meet the COPs, the UK regulator, Ofcom, can issue hefty fines of up to £18 million or 10% of global turnover, whichever is greater.

But what does the legislation actually look like in practice?

Enforcing the Act

Age verification could be a potential minefield for platform owners. Nowadays, email addresses, passwords and other biographical information are no proof that a user is truly the age they claim to be. It is also far too easy for anyone to create or obtain an email address, telephone number and so on to get around a simple biographical age-verification process.

The only way to truly verify someone’s age is through a biometric characteristic. The most familiar example is the face, analysed by recognition algorithms carefully tuned to estimate age; generally, these work effectively.

However, other biometric traits can also convey age, such as voice characteristics. Voice biometric analysis, for instance, can supplement face-based age verification, and fusing the two identifiers creates a clearer picture.
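
To make the fusion idea concrete, here is a minimal sketch in Python. Both estimator functions are hypothetical stand-ins for trained face and voice age-estimation models (they return dummy values here); the confidence-weighted average and the safety margin illustrate one common way scores from different modalities can be combined, not a prescribed method.

```python
# A minimal sketch of score-level fusion for age assurance.
# Both estimators are hypothetical placeholders for trained models;
# each returns (estimated_age_in_years, confidence between 0 and 1).

def estimate_age_from_face(face_sample) -> tuple[float, float]:
    return 24.0, 0.9  # placeholder: a real facial age-estimation model goes here

def estimate_age_from_voice(voice_sample) -> tuple[float, float]:
    return 22.0, 0.6  # placeholder: a real vocal age-estimation model goes here

def fused_age(face_sample, voice_sample) -> float:
    """Confidence-weighted fusion: the more reliable modality dominates."""
    f_age, f_conf = estimate_age_from_face(face_sample)
    v_age, v_conf = estimate_age_from_voice(voice_sample)
    return (f_age * f_conf + v_age * v_conf) / (f_conf + v_conf)

def passes_age_gate(face_sample, voice_sample, margin: float = 3.0) -> bool:
    # Illustrative safety margin: borderline estimates are routed to
    # stronger checks (e.g. ID documents) rather than waved through.
    return fused_age(face_sample, voice_sample) >= 18 + margin
```

The margin reflects the fact that age estimation carries error bars: a platform would rather escalate a borderline adult to document checks than admit a 16-year-old.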

Whilst the introduction of the Online Safety Act is a good start, it is not a silver bullet, something Ofcom itself admitted recently in an interview with the BBC. And whilst it is not Ofcom’s job to tell businesses how to do age verification, as a technologist I would argue that the only way to reliably confirm an adult is in front of the computer or smartphone is through biometric analysis, whether of the face, the voice or a combination of the two. It is crucial that tech platforms run these biometric tests alongside one another, so that users are verified as who they say they are at log-in and continuously verified throughout their session.

But how do you ensure someone is who they say they are for the entire time their account is logged in?

Continuous verification

Verifying age consists of two processes that should not be conflated:

1. Onboarding – verifying an individual’s age and identity when they first register (through matching IDs and selfies, for example)

2. Verification – ensuring, when someone logs in, that the person accessing the account is the person who was onboarded initially

Onboarding involves providing documentary evidence that you are who you claim to be, including proof of age. This process can be quite involved, but once your identity has been verified and your biometric data captured, biometric verification then simply confirms that you are still the same (biological) person.
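
The split matters in practice. Below is a heavily simplified Python sketch of the two steps, assuming a biometric sample can be reduced to a numeric embedding and compared by similarity; extract_embedding, document_checks_out, is_live and the 0.8 threshold are all illustrative placeholders, not a real vendor API.

```python
import math

enrolled: dict[str, list[float]] = {}  # user_id -> stored biometric template

def extract_embedding(sample) -> list[float]:
    return [0.1, 0.5, 0.2]  # placeholder: a real face/voice embedding model

def document_checks_out(id_document) -> bool:
    return True  # placeholder: real ID-document and proof-of-age checks

def is_live(sample) -> bool:
    return True  # placeholder: real liveness / anti-deepfake detection

def similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))  # cosine similarity

def onboard(user_id: str, id_document, live_sample) -> None:
    """Step 1: check the documents, prove liveness, then store a template."""
    assert document_checks_out(id_document) and is_live(live_sample)
    enrolled[user_id] = extract_embedding(live_sample)

def verify(user_id: str, login_sample, threshold: float = 0.8) -> bool:
    """Step 2: at each login, match the live sample to the stored template."""
    return similarity(extract_embedding(login_sample), enrolled[user_id]) >= threshold
```

The expensive document and liveness checks run once, at onboarding; every subsequent login only needs the cheap template match.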

Biometrics can be very useful during onboarding, not only to verify your identity but also to confirm that you are real and not a deepfake or a copy/recording, and that you are not masquerading under multiple different identities, for example by setting up multiple bank accounts under fake names.

But onboarding is only a small part of the process. The risk is that, even if an account owner’s age has been verified, children could still log into that adult’s account and be exposed to restricted content. Voice and face recognition software means that every time a user logs in, they are verified as the person who owns the account. This is critical if the legislation is to have the desired impact on young people.

Continuous biometric ID verification

Using biometric systems that integrate into an operating system, platforms can verify the identity of a user throughout the entirety of their session. To do this, the AI-driven system learns an individual’s appearance, behaviour and voice to build an identity profile against which the user can be verified. Then, once the user logs on, the system monitors them continuously, re-verifying them at intervals until they log out.
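
As a rough illustration, a continuous-verification loop might look like the sketch below. It reuses the verify() style of check from earlier; session_is_active, capture_face_sample and lock_session are hypothetical helpers, and the interval and failure tolerance are illustrative policy choices rather than prescribed values.

```python
import time

def session_is_active(user_id: str) -> bool:
    return True  # placeholder: query the platform's session store

def capture_face_sample():
    return object()  # placeholder: periodic, passive camera capture

def verify(user_id: str, sample) -> bool:
    return True  # placeholder: the template-matching check sketched above

def lock_session(user_id: str) -> None:
    print(f"Session for {user_id} locked")  # placeholder: force re-authentication

def monitor_session(user_id: str, interval_s: float = 30.0, max_failures: int = 2) -> None:
    failures = 0
    while session_is_active(user_id):
        if verify(user_id, capture_face_sample()):
            failures = 0                   # same person is still present
        else:
            failures += 1                  # tolerate transient misses (bad lighting etc.)
            if failures > max_failures:
                lock_session(user_id)      # sustained mismatch: lock the session
                break
        time.sleep(interval_s)
```

Tolerating a couple of consecutive failures matters: a face briefly out of frame should not end a session, but a sustained mismatch, such as a child taking over an adult’s account, should.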

It is extremely difficult to collect biometric information from children, and no digital platform will adopt a policy of doing so. However, this is not a stumbling block for the Act.

Children do not require biometric verification; only adults do. If you claim to be an adult and age verification confirms it, then as an adult you can consent to being biometrically enrolled and verified. If a child then accesses that account, they are rejected not because they are a child, but because they are not you.

While the Online Safety Act is certainly an important and groundbreaking piece of legislation, it relies on platform buy-in and legitimate interest from third parties to be effective. As with many rules, there will be shortcuts and loopholes for content-sharing companies to exploit; it is up to them to make the right decision and help enforce the Act. To do this, they need to adopt the next generation of biometric testing, verifying not only a person’s age but their identity.
