2023: the year for genuine implementation of AI-based identity verification

As we move into a new year, businesses must continue to adopt a range of advanced technologies to stay ahead of online safety threats and ensure the security of their users. The shift to a digital-first world has left not just consumers, but also businesses and governments, more exposed to online fraud and identity theft than ever before, and the challenging economic climate will only accelerate this as fraudsters look to take advantage of vulnerable people. In addition, as younger generations spend more time online, the need for robust age verification measures has never been more pressing.

As such, 2023 must be the year in which businesses adopt more advanced identity verification measures, both to protect their customers from fraud and to create a safer online ecosystem for young people. In the coming year, organisations will need to move away from outdated verification and authentication measures and towards those based on biometrics and AI. Here’s more on why and, importantly, how.

Online safety becomes a priority

Our recent survey found that two-thirds (67%) of UK parents with children under 18 are concerned for the safety of their children when they use the internet. This, coupled with the UK government facing tough criticism over the delay to the Online Safety Bill, shows that creating a safer online ecosystem for minors has never been more vital. 

In fact, 62% of UK adults support legislation that would require social media sites to implement robust age verification checks, going beyond the stronger age verification for adult-content sites, such as pornography, that the Bill currently focuses on. This is hardly surprising: more than half of parents whose children use the internet (54%) say they are not confident that social media sites have robust measures in place to stop underage users from setting up accounts.

In 2023, the government will continue to face pressure to pass the Online Safety Bill, particularly on its age-verification provisions. While legislation is of course an important factor in forcing businesses to act, businesses needn’t wait for the Bill to come into force before taking action. The demand for a safer online world is palpable, and in 2023, social media platforms and other businesses dealing in age-restricted content and products should implement these measures proactively rather than waiting to be told to do so.

Identity fraud continues to challenge financial services

Identity-related fraud will remain one of the biggest challenges facing financial services organisations in 2023. According to UK Finance, losses from card ID theft alone, where a criminal uses stolen or fake documents to open a card account in someone else’s name, rose 86% in the first six months of 2022 compared with the same period in 2021, from £11.5 million to £21.4 million.

Given the cost-of-living crisis, financial services organisations will need to do all they can to combat opportunistic fraudsters and protect their customers’ finances. Our recent global research found that over half of UK consumers (57%) would be more likely to engage with an online financial services provider if it had robust identity verification measures in place. Organisations should build on this consumer appetite in 2023 and put strong identity verification measures in place to stamp out identity-related fraud.

The answer?

To stay ahead of increasingly sophisticated threats to online security, enterprises need to pursue multimodal biometrics for identity verification, grounded in AI. The era of passwordless authentication is already well underway, and with concerns around online safety and identity-related fraud growing, businesses across sectors have every reason to adopt biometric identity verification. However, as these technologies have improved, fraudsters have found ways to bypass them, using techniques such as face morphs and deepfakes to impersonate individuals.

To combat this, businesses will need to adopt multimodal biometric systems, which combine multiple forms of biometric data, such as face, voice, and iris recognition, with machine learning algorithms to verify identities. This not only adds a further layer of security but also makes it far more difficult for fraudsters to impersonate individuals. By incorporating machine learning and AI into their identity verification processes, businesses can stay ahead of the constantly evolving threat landscape and protect both their interests and the security of their users.
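To illustrate the principle, here is a minimal, hypothetical sketch of score-level fusion, one common way multimodal systems combine modalities. The modality names, weights, threshold, and the verify_identity function are illustrative assumptions rather than a description of any particular product; a real system would learn its fusion model from labelled match and non-match data and would add liveness and deepfake checks on top.

```python
import numpy as np

# Illustrative fusion weights and decision threshold -- assumed values for
# this sketch; production systems tune these from labelled data.
FACE_WEIGHT = 0.6
VOICE_WEIGHT = 0.4
ACCEPT_THRESHOLD = 0.80

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between an enrolled template and a fresh capture embedding."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(enrolled: dict[str, np.ndarray],
                    capture: dict[str, np.ndarray]) -> tuple[bool, float]:
    """Score-level fusion of face and voice similarity scores.

    `enrolled` and `capture` map modality names ("face", "voice") to
    embedding vectors produced by upstream biometric models.
    """
    face_score = cosine_similarity(enrolled["face"], capture["face"])
    voice_score = cosine_similarity(enrolled["voice"], capture["voice"])
    fused = FACE_WEIGHT * face_score + VOICE_WEIGHT * voice_score
    return fused >= ACCEPT_THRESHOLD, fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = {"face": rng.normal(size=128), "voice": rng.normal(size=64)}
    # A genuine capture: small noise around the enrolled template.
    genuine = {k: v + 0.05 * rng.normal(size=v.shape) for k, v in template.items()}
    # An impostor capture: unrelated embeddings.
    impostor = {"face": rng.normal(size=128), "voice": rng.normal(size=64)}
    print(verify_identity(template, genuine))   # high fused score -> accepted
    print(verify_identity(template, impostor))  # low fused score -> rejected
```

Fusing at the score level is only one design choice; systems can also fuse at the feature or decision level, but the underlying point is the same: an attacker must now defeat several independent biometric signals at once rather than a single check.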

2023 is set to bring heightened awareness of the need for businesses to embrace robust identity verification measures, particularly those underpinned by AI, which speed up and strengthen the process. Demands for change from customers, and pressure from increasingly sophisticated hackers and fraudsters, will only add to the urgency. By leveraging the latest technologies, such as multimodal biometrics and machine learning algorithms, businesses can create a secure online environment that is safe for all users.
