
Countdown to compliance: Preparing the financial sector for DORA and responsible AI

The financial sector has been the target of nearly one-fifth of reported cyberattacks over the past two decades, incurring $12 billion in direct losses, according to the IMF. With the impact of cyber threats set to intensify as the use of AI continues to grow, the need for more comprehensive cybersecurity measures is becoming increasingly apparent.

This is where the Digital Operational Resilience Act (DORA) comes in. The regulation aims to prepare the financial sector against cyber threats, placing significant emphasis on the responsible use of AI. But with just under six months until the deadline, time is running out for financial companies in Europe to bolster their digital resilience and become compliant.

Increasingly varied and stringent regulations have come into force in the EU to support the financial sector. Raising standards around data handling with the General Data Protection Regulation (GDPR) was just the beginning, quickly followed by the Digital Markets Act (DMA) and others. When successful, this type of regulation allows businesses in a sector to thrive, and the aim of DORA is no different.

With the clock ticking, it is important for businesses using AI in the financial services space to understand the threats posed by its proliferation and the purpose DORA serves. Using the time that remains to prepare before the regulation is implemented is crucial.

How serious is the AI threat at play?

Despite its potential to be used for the greater good, AI, like all technology, can also be put to nefarious ends. AI-powered cyberattacks have become a major threat, with nearly 74% of global security leaders stating that AI-powered threats are now a significant issue.

Financial institutions are increasingly likely to encounter malevolent actors using AI, with many methods of attack being developed to target different areas of the business. These include using AI to uncover software vulnerabilities in their systems or to generate advanced phishing attacks that target employees. Even when not directly targeted, financial services companies need to be on high alert to the potential for market manipulation driven by AI-powered algorithms.

Despite being aware that AI-driven attacks pose a significant threat to their organisations, many financial institutions are lagging behind and have not yet prepared their systems to face them. This may be because they see the danger not as immediate but as something to address later. In the auditing world, for example, only 28% of respondents perceived AI as a significant threat at the start of 2024, with most believing it would become one in two to three years.

In fact, we have already seen the consequences of the proliferation of AI in cyberattacks: a 2023 study found that 85% of security professionals were already attributing the rise in attacks to the use of generative AI. This view was also reflected by the UK’s National Cyber Security Centre (NCSC), which warned that “all types of cyber threat actors” are already leveraging AI to lower the barrier to entry for cybercriminals, making high-impact attack vectors such as ransomware more accessible.

Decoding DORA’s framework for responsible AI

Recognising AI’s role in driving digital transformation, DORA prioritises its responsible and secure use in order to foster trust. By doing so, the regulation aims to benefit various industries, but particularly finance. 

DORA directly addresses AI risks by mandating a comprehensive framework for responsible AI in financial services. A key component of this framework is algorithmic risk management. Financial institutions must establish frameworks that identify, assess, and mitigate risks associated with AI models, such as bias and a lack of explainability. By diversifying input datasets when training their AI models and implementing regular fairness checks throughout the development process, financial institutions can reduce the propensity for bias.
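
As an illustration, a recurring fairness check can be as simple as comparing approval rates across customer groups and flagging the model when they diverge. The sketch below is a minimal example; the column names, groups, and the 10% threshold are assumptions made for illustration rather than anything DORA prescribes.

```python
# Minimal sketch of a recurring fairness check on model outputs.
# Column names, groups, and the 10% threshold are illustrative assumptions,
# not values mandated by DORA.
from collections import defaultdict

def demographic_parity_gap(records, group_key="customer_segment", decision_key="approved"):
    """Return the largest gap in approval rates between groups, plus the rates."""
    approvals, totals = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        approvals[group] += 1 if row[decision_key] else 0
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: flag the model for review if approval rates diverge by more than 10%.
decisions = [
    {"customer_segment": "A", "approved": True},
    {"customer_segment": "A", "approved": True},
    {"customer_segment": "B", "approved": True},
    {"customer_segment": "B", "approved": False},
]
gap, rates = demographic_parity_gap(decisions)
if gap > 0.10:
    print(f"Fairness check failed: approval rates {rates} diverge by {gap:.0%}")
```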

Building upon GDPR, DORA mandates strong data governance practices to ensure the quality, integrity, and security of AI training data, which also helps to avoid biased or inaccurate outcomes. Financial institutions can put data validation and protective measures in place to safeguard sensitive information.
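
In practice, that can start with lightweight checks run over training data before it ever reaches a model. The sketch below shows one possible approach; the field names, value ranges, and the crude IBAN pattern are hypothetical, not a prescribed DORA control.

```python
# Illustrative pre-training data validation; field names and rules are
# assumptions for this sketch, not requirements taken from DORA.
import re

REQUIRED_FIELDS = {"transaction_id", "amount", "currency", "timestamp"}
IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")  # crude PII check

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one training record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and not (0 < float(record["amount"]) < 1_000_000):
        issues.append("amount outside plausible range")
    # Guard against sensitive identifiers leaking into free-text features.
    if IBAN_PATTERN.search(str(record.get("notes", ""))):
        issues.append("possible IBAN found in free-text field")
    return issues

record = {"transaction_id": "t-1", "amount": 250.0, "currency": "EUR",
          "timestamp": "2024-07-01T10:00:00Z", "notes": "refund"}
print(validate_record(record) or "record passed validation")
```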

Another key area is vendor oversight. Financial institutions must rigorously assess the digital resilience of third-party AI providers, allowing for better tracking of risk identification, mitigation, and reporting.

Finally, continuous monitoring of AI models is essential for detecting drift in performance or potential security vulnerabilities. DORA emphasises logging all actions related to AI models, including training data, model versions, and user interactions, to facilitate auditing and regulatory compliance. If any sector should already be comfortable with this level of record-keeping, it is financial services, where such logging can feed into the extensive monitoring already underway.
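
By way of example, the sketch below shows what prediction-time audit logging and a simple drift check could look like. The log schema, model version tag, and three-standard-deviation rule are assumptions made for illustration, not fields or thresholds specified by the regulation.

```python
# Sketch of audit logging plus a simple drift check for a deployed model.
# The log schema, model_version tag, and 3-sigma drift rule are illustrative
# assumptions, not DORA-defined requirements.
import hashlib, json, statistics
from datetime import datetime, timezone

def log_prediction(model_version: str, features: dict, score: float, log_file="ai_audit.log"):
    """Append an auditable record of a single model decision."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "score": score,
    }
    with open(log_file, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

def drifted(recent_scores: list[float], baseline_mean: float, baseline_std: float) -> bool:
    """Flag drift if the recent mean moves more than 3 standard deviations."""
    return abs(statistics.mean(recent_scores) - baseline_mean) > 3 * baseline_std

log_prediction("credit-risk-v1.3", {"amount": 250.0, "tenure": 4}, score=0.82)
print(drifted([0.80, 0.83, 0.79], baseline_mean=0.55, baseline_std=0.05))  # True
```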

How to prepare for regulation

Although 70% of financial institutions are already preparing for DORA compliance, only 40% feel fully confident in their current strategies. To prepare, financial companies that use AI must manage risks such as bias in their models, oversee AI vendors, maintain strong data governance practices, and closely monitor models in production.

Regardless of where your headquarters are, complying with DORA will make operating across borders easier. Although the regulation applies to the European Union, the fact that it tackles growing threats while driving further innovation makes compliance a no-brainer.

With regulation continuing to develop, future AI models will likely need to meet specific criteria, such as:

  • Explainability – AI models will need to be able to explain the decisions they make, and developers will have to prioritise this from the outset.
  • Removal of bias – Preventing discriminatory outcomes by identifying and mitigating bias in training data and model outputs.
  • Human oversight – Making sure that all critical decisions are reviewed by a person, as in the sketch after this list.
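
To make the human-oversight criterion concrete, the minimal sketch below routes low-confidence or high-value AI decisions to a human reviewer. The thresholds and field names are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical routing rule: escalate low-confidence or high-value AI
# decisions to a human reviewer. Thresholds and field names are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    customer_id: str
    amount: float
    score: float        # model confidence, 0..1
    explanation: str    # top factors, e.g. from a surrogate model

def route(decision: Decision, confidence_floor=0.90, amount_ceiling=50_000) -> str:
    """Auto-approve only high-confidence, low-impact cases; escalate the rest."""
    if decision.score < confidence_floor or decision.amount > amount_ceiling:
        return "human_review"
    return "auto_approve"

print(route(Decision("c-42", amount=120_000, score=0.97, explanation="income, tenure")))
# -> human_review
```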

When choosing a company to help with your own implementation of AI, it is important to identify what steps the provider itself is taking to comply with DORA before it comes into full effect. Questions that should be front of mind when entering such a partnership include: Are they monitoring and logging usage in compliance with DORA? How secure is their platform? What are they doing to mitigate algorithmic risk?

Compliance and adoption are key

DORA represents a pivotal moment for developing a more resilient financial sector in Europe. As the countdown to DORA’s enforcement begins, financial institutions should prioritise a holistic approach to AI governance, transparency, and accountability. By investing in robust AI systems, adhering to DORA’s comprehensive framework, and cultivating a culture of responsible innovation, the sector can continue to thrive.
