Dealing with financial fraud has become an increasingly pressing challenge in recent years. Research from UK Finance reveals that more than £750m of bank customers’ funds were lost in the first half of 2021 alone. This figure represents a steep 20 percent increase in losses compared to the same period in 2020.
The steady rise is in part because criminal gangs are becoming more organised and sophisticated in their methods. They have shown an increasing understanding of the extended financial ecosystem and have expanded their tactics to target its more vulnerable elements.
The telephony channel is one such target, as voice services usually lack the more robust, layered authentication processes found in online platforms. As a result, voice has become a primary focus for fraudsters seeking to harvest account information and access financial accounts.
Most voice channels still rely on ineffective manual authentication processes that are easily defeated by modern fraud tactics. The speed, power and accuracy of AI offer one of the most promising means of protecting voice from this threat.
Why is telephony targeted by financial fraudsters?
Telephony is an attractive target for fraudsters because it has fallen behind the security advancements seen in other channels. Multi-factor authentication (MFA), for example, has long been a standard measure for online financial platforms, with a secondary channel making it harder for fraudsters to access accounts armed only with a set of stolen credentials.
Most voice services, meanwhile, lack any additional verification measures, relying instead on knowledge-based authentication (KBA): a set of questions covering personal details and ‘secret’ factors such as a password or a PIN.
In today’s digital world, it has never been easier for criminals to acquire the information they need to pass KBA processes. Techniques such as vishing, social engineering and man-in-the-middle attacks are used to steal account details from customers and bypass voice security. Criminals can also simply purchase the information they need from others – countless personal records are stolen every day, and full or partial personal records and credential sets are readily available on dark web marketplaces.
The nature of telephony also makes it an ideal target for fraud. Calls are highly anonymous, so imposters need relatively little information to assume the identity of a legitimate customer. Voice channels also need to balance authentication against customer experience: manual KBA processes are already time-consuming and often frustrating, and more stringent verification can rapidly alienate customers.
Why voice authentication holds the answer
Despite efforts to balance authentication with accessibility and good customer experience, manual processes can ironically be more of a barrier to legitimate callers than to imposters. Tellingly, findings from Pindrop’s 2022 Voice Intelligence Report show that fraudsters are better at answering KBA questions than genuine customers: criminals were able to pass the process an astounding 92% of the time. Real customers, meanwhile, are far more likely to forget or misplace their passwords and PINs, especially with multiple accounts to keep track of.
So, if manual processes aren’t working, how can we overcome the limits of telephony to create a more secure system?
The answer lies in voice authentication. Instead of relying on passphrases and PINs that are easily stolen or forgotten, it verifies the caller in moments by analysing their voice and call details to confirm their identity.
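For illustration only, the sketch below shows the general shape of such a check: a live ‘voiceprint’ (a fixed-length vector assumed to come from a speaker-embedding model, which is not shown here) is compared against the one captured at enrolment. The vectors and threshold are invented for the example; production systems use far richer signals and carefully tuned decision logic.

```python
# Illustrative sketch only: verify a caller by comparing a live voiceprint
# (an embedding of their speech) against the one captured at enrolment.
# The embedding model itself is assumed, not shown.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voiceprint vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(live_voiceprint: np.ndarray,
                  enrolled_voiceprint: np.ndarray,
                  threshold: float = 0.8) -> bool:
    """Accept the caller if their voiceprint closely matches the enrolled one.

    The threshold is a made-up value; real deployments tune it to balance
    false accepts (fraudsters let in) against false rejects (customers blocked).
    """
    return cosine_similarity(live_voiceprint, enrolled_voiceprint) >= threshold

# Example usage with dummy vectors standing in for real embeddings:
enrolled = np.array([0.12, 0.85, -0.33, 0.40])
live = np.array([0.10, 0.80, -0.30, 0.45])
print(verify_caller(live, enrolled))  # True: the voiceprints are a close match
```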
Voice authentication is not only much more secure than KBAs, but it also has the potential to be a fast, seamless process that boosts customer experience and call handling efficiency alike.
But for voice authentication to work, analysis must be delivered extremely rapidly, with no invasive processes to disrupt the call flow and put off legitimate customers.
This is where the analytic powers of AI come in.
Why AI is the key to delivering authentication over voice
Delivering effective voice authentication with a caller on the line demands analytic technology that can crunch through innumerable data points and deliver a verdict in near real time. A verdict on whether a caller is legitimate must be reached at the very start of the call to avoid disrupting the customer experience. What’s more, the result must be reliably accurate to keep out fraudsters.
AI-powered analytics is up to the task, analysing data points such as the caller’s voice, device and call metadata in moments and with a high degree of accuracy. Even if a fraudster has done their homework and stolen or bought all the necessary login credentials and personal data, they will give themselves away as an imposter the moment they start talking.
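As a rough illustration of how such a verdict might be assembled early in a call, the sketch below blends a few hypothetical call-time signals into a single risk score. The signal names, weights and threshold are assumptions made for the sake of the example, not a description of any vendor’s actual model.

```python
# Illustrative sketch: combine several call-time signals into one risk score
# so a verdict can be returned within the first seconds of a call.
from dataclasses import dataclass

@dataclass
class CallSignals:
    voice_match: float       # 0-1, similarity to the enrolled voiceprint
    device_match: float      # 0-1, does the device/SIM match known history?
    metadata_anomaly: float  # 0-1, oddities in carrier, routing or call origin

def risk_score(signals: CallSignals) -> float:
    """Weighted blend of signals; higher means more likely fraudulent."""
    weights = {"voice": 0.6, "device": 0.25, "metadata": 0.15}
    return (weights["voice"] * (1.0 - signals.voice_match)
            + weights["device"] * (1.0 - signals.device_match)
            + weights["metadata"] * signals.metadata_anomaly)

def decide(signals: CallSignals, reject_above: float = 0.5) -> str:
    """Return an early verdict: let the caller through or escalate checks."""
    return "step_up_verification" if risk_score(signals) > reject_above else "authenticated"

# A caller with stolen credentials but a non-matching voice is still escalated:
print(decide(CallSignals(voice_match=0.2, device_match=0.9, metadata_anomaly=0.1)))
```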
Voice authentication can help to create faster and more efficient processes that boost customer experience and enable call handlers to help more customers. One financial application, for example, has so far enrolled 17 million users in a voice-led call-back system. Once users have had their voices verified, they can arrange call-backs at a convenient time for support, or unlock their account if they are locked out.
The technology can also help firms engage with customers over such an anonymous channel. AI can infer valuable caller demographics, such as predicted age range and spoken language, which can be used to speed up call handling and provide better support.
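As a purely hypothetical illustration, such predicted attributes could feed simple routing logic like the sketch below; the queue names, language codes and age bands are invented for the example.

```python
# Illustrative sketch: route a call using predicted caller attributes,
# assumed to come from an upstream AI model (not shown).
def route_call(predicted_language: str, predicted_age_range: str) -> str:
    """Pick a queue from demographic predictions; all names here are hypothetical."""
    if predicted_language != "en":
        return f"queue_{predicted_language}_speakers"
    if predicted_age_range in ("65-74", "75+"):
        return "queue_extra_support"
    return "queue_general"

print(route_call("es", "35-44"))  # queue_es_speakers
print(route_call("en", "75+"))    # queue_extra_support
```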
Backed by the speed, power and accuracy of AI, the voice channel can be transformed from a perceived weak link in the chain into a near-impenetrable barrier against even the most sophisticated fraudsters.