
When the US Department of Justice updated its corporate compliance guidance, for the first time it asked prosecutors to consider how companies manage risks linked to artificial intelligence. In other words, regulators now expect AI to play a role in compliance.
Some companies are already using AI in compliance work. Others are still working out how to make
it fit. But the direction of travel is obvious. Regulators no longer just want to see a compliance
programme. They want to see one that keeps pace with technology.
Is not using AI becoming a risk in itself?
One of AI’s key strengths is its ability to process large volumes of data quickly. In 2020, there were an estimated 64.2 zettabytes of data in the world. Today, there are around 181 zettabytes. If a single zettabyte were a stack of books, it would stretch from Earth to the Moon 20 times. Now multiply that by 181.
That volume already strains the ability of compliance teams to capture the necessary information while filtering out irrelevant data. This will only become
harder in the future. As AI tools become more proven and widely adopted, companies need to think
seriously about how to use them in their compliance processes.
Adequate systems have to keep pace with technology
This is about speed, onboarding third parties much faster and at a fraction of the cost, but also about accuracy and breadth. Unlike manual processes, which are prone to oversight and inconsistency, AI helps ensure that relevant data is not overlooked, whether it’s buried in a local news story or hidden in an obscure blog post.
AI enables businesses to move beyond structured lists and inconsistent manual searches. AI taps into
vast pools of unstructured data across the length and breadth of the internet. With that in mind, not
using AI could soon be seen as negligent. AI provides a more effective and efficient means of
preventing criminal behaviour, and it is both widely accessible and easy to implement.
Choosing not to adopt it risks falling behind technological progress and failing to meet modern standards. If a company enters a high-risk relationship that could have been identified with AI, stakeholders will ask why that tool wasn’t used.
Why traditional tools no longer cut it
For years, compliance teams relied on static databases, sent out questionnaires, and ran basic web searches. Screening databases made it easier to flag sanctioned individuals and politically exposed persons.
But these tools can’t handle the complexity of today’s risk landscape. Global supply chains, opaque
ownership structures, and rising ESG expectations require a more flexible approach. Databases are
often out of date.
They provide yes-or-no answers but miss the story behind a name. A third party might look clean on
paper but still pose a serious risk.
These tools are also limited in scope. They don’t capture local reputational issues, allegations that
haven’t led to formal charges, or complex ownership webs. They don’t learn from previous risks or
adapt to new patterns.
In short, they cannot deliver a risk profile that reflects the true exposure.
Technology has shaped the standard before
In the late 1990s, compliance teams gained access to sanctions screening at scale. By the early
2000s, this became expected. Regulations like the USA PATRIOT Act and the EU’s Money Laundering
Directives made screening mandatory.
At first, screening databases were innovative, and gold-standard compliance teams adopted them. Over time, they became the industry standard, and now a necessity. The same trajectory is now emerging with AI.
Screening tools deliver structured lists. AI delivers dynamic, context-rich insights that are essential not only to interpreting those databases but, more importantly, to understanding your counterpart.
It gathers data in real time and connects information that may appear unrelated. That makes it far
more suitable to today’s risk landscape. Risk today will not be the same as risk tomorrow. As
external conditions change, new risks can surface, requiring organisations to reassess existing
relationships. A third party that poses no risk today may become a reputational liability tomorrow.
Organisations that treat due diligence as a one-time check leave themselves vulnerable to emerging
risks that may only surface months or years later.
What prosecutors are looking for now
Under the UK Bribery Act, companies must have adequate systems and procedures in place to prevent criminal behaviour. Proactive prevention means systems must be designed to identify and mitigate risks before they result in criminal conduct. Traditional due diligence processes are static and fragmented. That is increasingly hard to justify when processes and systems powered by AI are widely available.
The DOJ’s recent guidance explicitly mentions AI for the first time, asking how companies manage AI
risk within compliance. That implies two things. One, regulators expect AI to be used. Two, they
expect companies to understand how it works.
Being unaware is no longer a defence. Compliance leaders must be able to explain not only their
governance framework, but also the rationale behind their choice of tools. This means demonstrating not just that a system was in place, but that it was chosen and maintained with care.
Existing tools introduce risk
Old tools don’t just miss threats. They can give a false sense of security. A database that says “no
risk” might be missing a recent update or a delisted profile.
Registries in some regions are unreliable. Corporate data may be hidden or out of date. Inconsistent
systems make it easy for risky entities to avoid detection.
A clean record might simply mean no one looked in the right place. Some databases even allow
people to remove themselves. This protects privacy but also creates a gap.
A former official with ties to corruption might be delisted and slip through checks. That’s a problem
if a company is relying on that database alone.
Web search isn’t enough
Teams often use web searches or adverse media checks to go deeper. But this is time-consuming and inconsistent. Processes are put in place, but search algorithms change, and the scope of research is deliberately restricted so that human analysts can cope with the vast quantity of data available.
Local news or non-English sources can be missed entirely. Automated keyword searches don’t cut it.
If something doesn’t match the search terms, it doesn’t show up.
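To make that failure mode concrete, here is a minimal sketch with invented names: an exact keyword match misses a variant transliteration of the same person, while even a crude fuzzy comparison would flag it for review. Production adverse media screening is far more sophisticated, but the underlying gap is the same.

    from difflib import SequenceMatcher

    headline = "Prosecutors charge Aleksandr Petrov in bribery case"
    name_on_file = "Alexander Petrov"

    # Exact keyword search: no hit, so the story never surfaces.
    print(name_on_file.lower() in headline.lower())  # False

    # A crude fuzzy comparison still flags the likely match for review.
    score = SequenceMatcher(None, "aleksandr petrov", name_on_file.lower()).ratio()
    print(round(score, 2))  # roughly 0.88, above a typical review threshold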
Critically, while manual research can be rigorous, it does not scale and is by nature slow. This often delays onboarding decisions, frustrating the business and the counterpart alike, or results in superficial checks when time runs short.
AI fits the risk-based model
Per FATF’s 2003 and later 2012 guidance, robust compliance programmes take a risk-based approach: they focus on what matters most. But if risks aren’t fully assessed early in the relationship, that approach loses much of its value. Organisations and individuals end up classed as high risk on criteria that don’t necessarily align with their overall risk profile.
When AI screens all counterparts, only those that may genuinely be risky are passed on for closer analysis. This is distinct from the traditional approach, which may use screening tools alone to assess the risk of a counterpart. Low-risk counterparts still move on quickly, while the resources spent on more complex or enhanced due diligence are reserved for the subjects that genuinely warrant it. This saves time, helps teams focus where they’re needed most and supports proportionality.
A small vendor in a high-risk geography may pose more risk than a large vendor in a low-risk one. AI enables compliance teams to calibrate their investigations accordingly.
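As a rough sketch of what that calibration might look like, a triage step can combine simple signals into a tier so that enhanced due diligence is reserved for the counterparts that warrant it. The factors, weights and thresholds below are invented for illustration, not a real risk model.

    # Illustrative triage only: factors, weights and thresholds are invented.
    # In practice an AI system would derive these signals from screening hits,
    # adverse media and ownership analysis.
    def triage(counterpart: dict) -> str:
        score = 0
        if counterpart["geography_risk"] == "high":
            score += 3
        if counterpart["opaque_ownership"]:
            score += 2
        if counterpart["adverse_media_hits"] > 0:
            score += 2
        if counterpart["politically_exposed"]:
            score += 1
        if score >= 4:
            return "enhanced due diligence"
        if score >= 2:
            return "standard review"
        return "fast-track onboarding"

    small_vendor = {"geography_risk": "high", "opaque_ownership": True,
                    "adverse_media_hits": 0, "politically_exposed": False}
    large_vendor = {"geography_risk": "low", "opaque_ownership": False,
                    "adverse_media_hits": 0, "politically_exposed": False}

    print(triage(small_vendor))  # enhanced due diligence
    print(triage(large_vendor))  # fast-track onboarding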
How to assess AI for compliance use
Not all AI tools are suitable for compliance work.
Some run on open platforms that may expose sensitive data. For example, if information is collected and held on the wrong entity, it could breach data privacy laws such as GDPR. To reduce this risk, look for tools that prioritise accuracy and entity resolution technology. While no AI will be perfect (yet), this needs to be a core focus for the technology. Secondly, look for closed-loop systems, and ensure that they have high data protection and privacy standards that protect against data leakage and potential breaches.
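To see why entity resolution matters here, consider a minimal sketch with invented records: matching on a name alone would attach a finding to the wrong person, so a resolver should demand corroborating attributes before linking data to a subject.

    from difflib import SequenceMatcher

    # Invented records for illustration. A finding about a "John Smith"
    # should attach to the subject only if corroborating details line up.
    subject = {"name": "John Smith", "dob": "1970-04-12", "country": "UK"}
    finding = {"name": "John Smith", "dob": "1988-09-30", "country": "US"}

    def same_entity(a: dict, b: dict) -> bool:
        name_score = SequenceMatcher(None, a["name"].lower(),
                                     b["name"].lower()).ratio()
        # Name similarity alone is never enough: require corroboration.
        return (name_score > 0.85
                and a["dob"] == b["dob"]
                and a["country"] == b["country"])

    print(same_entity(subject, finding))  # False: same name, different person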
Many tools are simply not built for business use and teamwork. It’s important to choose a tool that fits into your workflow, enables collaboration within your team and across your organisation, and keeps an audit trail that can be used in a regulatory setting.
Finally, transparency is key. A good tool should show exactly where its findings come from and how
it reached its conclusions. This level of transparency builds trust and supports regulatory compliance.
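One way to picture what that looks like in practice is a trail in which every finding carries the subject it was resolved to, its source and a timestamp, so a decision can be reconstructed later. The record format below is hypothetical, not any specific product’s.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Finding:
        subject: str      # resolved entity the finding relates to
        summary: str      # what was found
        source_url: str   # where it came from, so reviewers can verify it
        recorded_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    # Each entry shows exactly where a conclusion came from and when.
    trail = [Finding("Acme Trading Ltd",
                     "Local news report alleging procurement fraud",
                     "https://example.com/local-story")]
    for f in trail:
        print(f.subject, "|", f.summary, "|", f.source_url, "|", f.recorded_at)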
Test the tool before adopting it. Run it on third parties that are known to you to see if it surfaces the
right issues. This shows how well the AI performs in real–world scenarios. It also helps teams
understand what to expect from its output, and test how it applies to their workflow.
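A simple way to structure such a test is to compare what the tool surfaces against what you already know, then examine both the misses and the unexpected hits. The sketch below uses invented counterparts; the expected issues would come from your own prior knowledge.

    # Sketch of a pre-adoption test using counterparts you already know well.
    expected = {
        "Vendor A": {"sanctions exposure", "litigation history"},
        "Vendor B": set(),  # known to be clean
    }
    surfaced = {  # what the candidate tool reported
        "Vendor A": {"sanctions exposure"},
        "Vendor B": {"adverse media"},
    }

    for name, want in expected.items():
        got = surfaced[name]
        print(name, "| missed:", want - got, "| unexpected:", got - want)
    # Missed issues point to coverage gaps; unexpected hits need review,
    # since they may be false positives or genuinely new information.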
The operational gains are significant
AI tools are not only more accurate, but much faster. They can process and summarise thousands of
documents in minutes. They can also highlight connections and group risks into categories.
This speeds up onboarding. If a vendor seems risky, teams can act early or choose not to proceed. If
the risks are acceptable, mitigation steps can be planned at the start of the relationship.
But these obvious efficiencies only scratch the surface. The real impact is the ability to move the business faster. With automated due diligence, compliance leaders should ask what it would mean for their organisations to onboard third parties ten times faster. That prospect excites commercial teams, C-suite executives and senior leadership, and helps to secure buy-in for these new technologies.
Beyond compliance
AI doesn’t just improve compliance processes. It helps the wider business too. Fast-tracking low-risk vendors reduces delays for commercial teams, helping the business move quickly while still managing risk.
AI also helps justify decisions. Business leaders want to see evidence. AI tools can provide clear
reports that explain why a vendor was approved or rejected.
Better speed and documentation are major benefits. But AI also changes how compliance is seen.
When it supports both risk management and business needs, it becomes a valued partner, rather
than a blocker.
The baseline is moving, fast
Data volumes are growing too quickly for traditional tools to keep up. Most legacy systems only
handle structured data like names or dates. But risk often hides in unstructured data like articles,
blogs, or local news.
Compliance teams now need tools that can process this kind of information. AI can scan these
sources and highlight issues quickly. That means risks can be spotted earlier and acted on before
they escalate.
As AI adoption increases, expectations are shifting. Tools that are considered advanced today may
become standard tomorrow. Failing to keep pace could be seen as a weakness in your compliance
programme.
No system can remove all risk. But ignoring tools that clearly improve how you manage risk is
becoming harder to defend. AI helps compliance teams act faster, explain their decisions better, and
protect the business more effectively.
That’s what regulators, customers, and partners are starting to expect.