Will the EU AI Act Help or Hinder Online Fraud Prevention?

As AI technologies continue to advance, they are emerging as powerful tools not only for online fraudsters but also for those dedicated to preventing fraud. Here, Tamas Kadar, CEO and co-founder of SEON, assesses how this balance can be maintained in light of recent regulations such as the EU's AI Act.

Artificial Intelligence (AI) is one of the most significant trends of the past few decades. With the introduction of solutions like ChatGPT, Gemini and Claude, AI has become ubiquitous. More broadly, AI systems and machine learning tools have begun to transform various sectors in real time. One notable area is fraud prevention: AI-assisted fraud prevention solutions can analyse more complex data and identify trends and patterns that might evade human detection.
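To make the idea of machine-spotted patterns concrete, here is a deliberately minimal, hypothetical sketch of one building block such systems use: flagging statistical outliers in transaction data. This is an illustration only, not SEON's method; real fraud prevention tools combine many signals and far more sophisticated models. The sample amounts and the z-score threshold are invented for the example.

```python
# Toy illustration: flag transactions whose amount is a statistical
# outlier relative to the rest of the batch (z-score method).
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Return the indices of amounts that deviate from the mean
    by more than `threshold` sample standard deviations."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical; nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# A run of ordinary payments with one suspiciously large one.
transactions = [42.0, 39.5, 41.2, 40.8, 38.9, 950.0, 40.1, 41.7]
print(flag_outliers(transactions))  # the 950.0 payment (index 5) is flagged
```

A human reviewer scanning thousands of rows could easily miss such a spike; a model scoring every transaction cannot, which is the core appeal of AI-assisted detection.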

However, as AI tools such as ChatGPT, Gemini and Claude have become more accessible to the general public, they have begun to offer both advantages and disadvantages in the realm of fraud prevention. Fraudsters have capitalised on the immense power these tools offer and have started using them for their own illicit purposes. A notable recent example is a deepfake version of Elon Musk, which the New York Times found had appeared in thousands of inauthentic ads, contributing to billions in fraud. Worrying developments like this highlight a broader concern about the potential misuse of AI tools and the urgent need to establish protocols to mitigate this risk, a topic of considerable debate, particularly as new regulations, such as the EU's AI Act, come into force.

A STEP IN THE RIGHT DIRECTION?

The general logic behind these measures is difficult to fault, as we've already seen the problems that AI-assisted fraud can cause. However, whether they will achieve their intended impact remains uncertain. Ideally, these new measures would limit the ability of malicious actors, such as online fraudsters, to exploit AI tools for their own purposes, whether that be through AI-assisted phishing scams or the use of audio and video deepfakes. However, there are concerns that implementing such policies might also restrict the ability of well-intentioned actors to continue refining and adapting AI technologies to combat the alarming rise in online fraud.

The EU's AI Act, which came into force in July, aims to regulate how companies develop, use and apply AI. First proposed in 2020, the law focuses on mitigating the negative impacts of AI proliferation by applying a risk-based approach to different applications of the technology. Within this framework, 'high-risk' AI systems include technologies such as autonomous vehicles and medical devices, as well as loan decision-making systems. The EU goes further still, categorising certain AI technologies as 'unacceptable risk' applications. These include so-called 'social scoring' systems: solutions that assess or categorise individuals or groups over time based on their social behaviour or on known, inferred or predicted personal traits.

Measures introduced by the EU are designed to enforce higher standards of transparency, accountability and ethical use of AI. Specifically in the context of online fraud, the Act outright bans AI practices that involve manipulation, deception or the exploitation of vulnerable groups. The establishment of the EU AI Office also promises enhanced oversight of AI trends, which could play a role in detecting and mitigating AI-assisted fraud more effectively in the future. While this strengthens the argument that these measures will support the fight against online fraud, it may not capture the complete picture.

BALANCING RISK AND REWARD

This is where the EU's provisions regarding what constitutes 'high-risk' AI applications come back into play. The regulations are especially stringent concerning any application that involves sensitive data processing, biometric identification or decision-making that affects individuals' access to essential services like banking and insurance. Given that many AI-based fraud prevention tools rely on personal data to assess the likelihood of individuals being fraudsters, these measures could potentially lead to some of these tools being classified as high-risk under the Act.

If this classification were to occur, developers would be required to implement rigorous risk management processes, including conducting conformity assessments and maintaining continuous post-market monitoring. This would increase both the cost and complexity of developing such tools, potentially hindering the introduction of solutions at a time when online fraud is escalating. With the looming threat of substantial fines for non-compliance – up to €35 million or 7% of global revenue – there are concerns that we may see more conservative approaches in the design of AI-based fraud prevention tools in the future.

ASSESSING THE IMPACT

As with most regulations on the scale of the EU's AI Act, it is challenging to fully assess its impact at this early stage. The concerns surrounding these measures underscore the very real threat of overregulation in this area, which could stifle the development of innovative AI tools that are essential for combating online fraud in the long term. As the first major regulation of its kind, some 'teething' problems are understandable, but where they arise, they must be addressed promptly for the sake of everyone operating online.

Ultimately, to ensure AI development remains a positive force in the domain of fraud prevention, we must remain vigilant and outspoken about the risks of overregulation in this field. As we have seen throughout the evolution of AI, this is a rapidly advancing area of technology. We cannot afford any missteps as we develop strategies to curb its most harmful aspects. It is crucial that any measures introduced remain flexible and adaptive, and that the channels of communication between regulators and those working in the public’s interest are both clear and precise.

For more information about SEON, please visit: https://seon.io/
