
Regulators are moving in on AI, and this time they are starting with the systems that influence real financial outcomes. From fraud screening to dispute management, companies that rely on automated decisions will soon need to prove how those systems work, and who is responsible when they go wrong.
AI is changing how decisions get made in payments and financial services. It can be used for a multitude of services, including assessing fraud, approving or rejecting transactions, or automating customer responses. Those decisions often determine whether a payment clears, a chargeback is filed, or a refund is denied, all of which carry legal and reputational risk. So it’s no wonder that regulators are paying attention.
The European Union is leading the way with the AI Act, classifying systems by risk. High-risk systems, including those used in financial decision-making, face detailed requirements: human oversight, testing for fairness, technical documentation and strict recordkeeping. The AI Act is expected to take full effect by 2026, with some obligations arriving earlier.
The United States is moving on a different path. While there is no single federal AI law, regulators are acting. The Federal Trade Commission is watching for deceptive or discriminatory algorithms. The Consumer Financial Protection Bureau is already challenging automated decisions that hurt consumers. More than a quarter of Fortune 500 companies now mention compliance with AI regulation as an important investment area in their SEC filings.
Everyone who touches the payments ecosystem needs to be prepared, including merchants. If an AI system makes a decision that blocks a payment, flags a customer for fraud or rejects a dispute, the business behind it will need to explain that decision. Regulators will want to know how the system works, who monitors it and how errors are caught, and so will consumers. These rules will apply whether the system is built in-house or provided by a vendor.
Why Is Compliance with AI Regulation Difficult?
The first challenge for organisations trying to comply with AI regulation is fragmentation. The EU’s AI Act is structured and centralised, while the US is developing rules through executive orders, federal agency actions and state-level bills. This patchwork makes it particularly hard for global or cross-border businesses to standardise their approach.
The second issue is classification and transparency. Many systems used in fraud detection or payments fall into the EU’s high-risk category, which triggers formal obligations, including regular testing, human review and detailed logs. It also limits the use of black-box systems that can’t explain how they reached a decision.
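What those obligations look like in practice can be made concrete with a small sketch. The snippet below shows one way a payments system might record each automated decision along with the reasons behind it and whether a human reviewed it; the log_decision helper, field names and JSON-lines format are illustrative assumptions, not requirements spelled out in the AI Act.

```python
# Illustrative sketch: recording an automated fraud decision in an auditable,
# explainable form. The helper, field names and file format are assumptions
# made for illustration, not a schema mandated by the AI Act or any regulator.
import json
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"

def log_decision(model_id: str, inputs: dict, decision: str,
                 reasons: list[str], reviewed_by_human: bool) -> None:
    """Append one record per automated decision, including why it was made."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,                     # the features the system actually used
        "decision": decision,                 # e.g. "approve", "decline", "refer"
        "reasons": reasons,                   # human-readable factors behind the decision
        "reviewed_by_human": reviewed_by_human,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a card-not-present transaction referred for human review.
log_decision(
    model_id="fraud-screen-v2",
    inputs={"amount": 1250.0, "card_present": False, "country": "FR"},
    decision="refer",
    reasons=["amount above high-value threshold", "card-not-present transaction"],
    reviewed_by_human=True,
)
```

A system that can produce records like this is, by construction, much harder to dismiss as a black box.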
Thirdly, documentation is becoming critical. Businesses must track which models are in use, what data they rely on and how they are evaluated. This level of transparency is difficult to achieve in complex environments with layered systems or outsourced tools. Without clear records, compliance audits become a liability.
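That record-keeping can start as a simple, structured model inventory. The sketch below is purely illustrative; the ModelRecord fields are assumptions about what a useful entry might contain, not a format prescribed by any regulator.

```python
# Illustrative sketch of a model inventory entry. Field names are assumptions
# about what a useful record might contain, not a format set by any regulator.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    model_id: str                  # identifier of the deployed model
    purpose: str                   # business function it serves
    version: str                   # version actually running in production
    training_data: str             # description of (or reference to) the data it relies on
    last_evaluated: date           # when accuracy/fairness checks were last run
    evaluation_metrics: dict = field(default_factory=dict)
    owner: str = ""                # accountable team or person
    vendor: str = ""               # blank if built in-house

inventory = [
    ModelRecord(
        model_id="fraud-screen-v2",
        purpose="transaction fraud screening",
        version="2.4.1",
        training_data="12 months of anonymised transaction history",
        last_evaluated=date(2025, 1, 15),
        evaluation_metrics={"false_positive_rate": 0.021},
        owner="payments risk team",
        vendor="",                 # built in-house in this example
    ),
]
```

Even a register this simple answers the first questions an auditor is likely to ask: which models are running, on what data, and when they were last checked.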
Last but not least, most companies are still catching up: their investment focus is on getting into the AI game and maximising the value of modern AI technology, rather than on governing it. BCG reports that only 4% of firms have mature AI practices and governance in place. Without proper oversight, even well-performing models can fail compliance tests. Managing that at enterprise scale takes more than good intentions. It requires clear policies, accountable owners and engineering support to match.
How Can Organisations Get Ahead?
Some organisations are already responding by adopting platforms that automate key aspects of governance. These systems can track model changes, run fairness checks, and log inputs and decisions across the AI lifecycle. This approach helps, but it falls short for most organisations: they still need to invest significant internal resources to integrate these platforms into their systems and to manage them on an ongoing basis to stay compliant.
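To ground one of those capabilities, a fairness check can be as simple as comparing outcome rates across groups and flagging large gaps for review. The sketch below uses a hypothetical decline-rate parity check with an invented 20% tolerance; real checks would rely on legally informed metrics and thresholds.

```python
# Illustrative sketch of a basic fairness check on automated decline decisions.
# The metric (decline-rate parity across a grouping attribute) and the 20%
# tolerance are illustrative assumptions, not a regulatory standard.
from collections import defaultdict

def decline_rate_by_group(decisions: list[dict], group_key: str) -> dict:
    """Compute the share of declined transactions per group."""
    totals, declines = defaultdict(int), defaultdict(int)
    for d in decisions:
        g = d[group_key]
        totals[g] += 1
        if d["decision"] == "decline":
            declines[g] += 1
    return {g: declines[g] / totals[g] for g in totals}

def parity_gap(rates: dict) -> float:
    """Largest absolute gap in decline rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

decisions = [
    {"decision": "decline", "card_region": "EU"},
    {"decision": "approve", "card_region": "EU"},
    {"decision": "approve", "card_region": "US"},
    {"decision": "decline", "card_region": "US"},
    {"decision": "approve", "card_region": "US"},
]

rates = decline_rate_by_group(decisions, "card_region")
if parity_gap(rates) > 0.20:   # illustrative tolerance, not a legal threshold
    print("Flag for review:", rates)
```

Building and maintaining checks like this for every model, on every platform, is exactly the integration burden noted above.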
A better approach is to adopt integrated solutions that provide a complete payment offering, such as chargeback prevention and management, with compliance built in. That means tools for strong governance, documentation and technical accountability that meet and adapt to evolving regulations in the UK, the US and the rest of the world.



