
Progress in fintech AI is no longer driven by spectacular presentations and flashy product demos, but by reliable systems that manage risk, capital, and customer operations with growing intelligence. Investors should bet on teams that apply so-called boring AI to improve internal operations and drive efficiency.
Rather than focusing on polished product roadmaps and impressive narratives, these teams prioritize execution: measurable results and sustainable margin growth.
Inspiring storytelling about AI is not always reflected in profits
Most financial institutions have already increased their AI budgets and are launching more and more pilots and projects in key business areas.
However, fewer than half of promising AI projects in finance achieve or exceed the expected ROI, and 60% of these companies face significant delays and scaling issues. Most initially impressive initiatives stall at the pilot stage; only a few produce a product that moves key business metrics.
This reflects a global trend that the professional community has been talking about for a long time: flashy AI solutions attract attention and look good on the business showcase, while work with measurable results remains behind the scenes. Real value is created where AI is deeply embedded in the infrastructure. In payments, risk assessment, operations, and compliance, artificial intelligence improves thousands of small decisions every day.
Where AI is already delivering measurable results
According to reviews and industry reports, fraud and AML prevention, credit scoring, algorithmic trading and portfolio construction, as well as customer analytics in the retail and wealth segments have demonstrated maturity and economic viability. These areas are united by a broad historical sample, the ability to accurately measure the effect, and a direct contribution to either reducing losses or increasing revenue.
In fraud and AML, smart models already flag more suspicious transactions with fewer false positives, which eases the workload of compliance teams while reducing both losses and compliance costs. In lending, models analyze transactions and customer behavior, enabling safe credit expansion and better portfolio profitability. In operations, AI cuts service costs and ultimately improves customer satisfaction. These projects, invisible at first glance, directly drive margin growth.
Risks that investors need to focus on
Professionals are now shifting their focus from questions such as “How accurate is the model?” to questions such as “What will happen if it fails, and who is responsible for it?”
Investors often treat AI as if it were ordinary software. But these are probabilistic systems, which means the cost of errors and legal consequences can outweigh the savings these products deliver.
If a model hallucinates when making decisions, the company suffers direct financial and reputational losses. Scandals periodically erupt in the media – back in 2024, Air Canada was held liable for incorrect information provided by its chatbot and ordered to compensate a passenger for damages. The tribunal rejected the argument that the chatbot made its decisions independently, and the company remained liable.
Another unpleasant fact for the market is that many companies are starting to make the same decisions because their models and data are similar. For example, they use the same cloud service, the same large models, and the same data sources. If such a provider experiences an outage, a leak, a change in terms, or availability issues, many players are exposed at once.
AI in finance also faces structural problems: quality data is often lacking or fragmented, systems are vulnerable to hacks, models can be wrong or simply miss critical information, and regulatory requirements change rapidly.
Generative AI cuts both ways, too. On the one hand, it speeds up routine tasks: sorting documents, drafting letters, responding to customers, and compiling reports. On the other hand, fraudsters now have the same capabilities.
As a result, the main risk for investors is underestimating the cost of mistakes. If AI is perceived only as a way to reduce costs and speed up processes, it is easy to overlook the fact that dependence on suppliers, system stability, and model risks directly affect the real risk-return ratio. And often this ratio turns out to be worse than it looks in the presentation.
What do we mean when we talk about responsible AI?
When we break down the concept of responsible AI, it means that the system is safe, transparent, and managed in a way that minimizes risks. Non-transparent and under-managed AI systems lead to violations, fines, and ultimately threats to financial stability. Research shows that transparency, user control, and clear escalation paths simultaneously reduce risks and increase customer trust.
When conducting AI due diligence, it is important for investors to look not only at the accuracy of the models, but also at the model risk management process, the clear distribution of responsibilities between product, risk, compliance, and engineering, and model passports describing objectives, data, and tests. It is also important to have a strategy to reduce dependence on a single supplier and the ability to transfer data and models when conditions change.
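To make the idea of a model passport concrete, here is one way such a record might look. This is an illustrative sketch only: the field names and values are assumptions, not an industry standard, and the model name is hypothetical.

```python
# Illustrative model passport: field names and values are assumptions,
# not a standard. It records the objective, data, owners, and tests
# that due diligence would want to see documented.
model_passport = {
    "model_name": "retail-credit-scoring-v3",  # hypothetical model
    "objective": "estimate probability of default for retail loan applicants",
    "owners": {
        "product": "lending",
        "risk": "model-risk team",
        "compliance": "financial-crime office",
        "engineering": "ml-platform",
    },
    "training_data": {"source": "internal transaction history", "period": "2019-2024"},
    "validation": ["backtest on holdout sample", "fairness checks", "stress scenarios"],
    "review_cadence": "quarterly",
    "fallback": "manual underwriting if the model is unavailable",
}

# A due-diligence check might simply verify that the passport is complete:
required = {"model_name", "objective", "owners", "training_data", "validation", "fallback"}
missing = required - model_passport.keys()
print("complete" if not missing else f"missing: {missing}")
```

The point is less the format than the discipline: every production model has a documented owner, data lineage, and test history that an outside reviewer can audit.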
Who is responsible? – the eternal question
To scale AI while maintaining human control, it is important to clearly define areas of responsibility and the extent to which models can be autonomous. Employees need to understand when to agree with an AI decision, when to challenge it, and what to do in cases of doubt.
In practice, this means setting thresholds at which AI decisions are manually reviewed, logging actions along with data and recommendations, and regularly training employees on how to work with AI tools and their limitations.
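The routing-and-logging pattern above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the model is assumed to return a confidence score between 0 and 1, and the threshold value and field names are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-decisions")

# Hypothetical threshold: decisions below it are routed to a human reviewer.
REVIEW_THRESHOLD = 0.85

def route_decision(transaction_id: str, model_score: float, recommendation: str) -> str:
    """Return 'auto' or 'manual_review', logging every decision with its inputs."""
    routing = "auto" if model_score >= REVIEW_THRESHOLD else "manual_review"
    # Log the action together with the data and the model's recommendation,
    # so that every automated decision leaves an auditable trail.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "transaction_id": transaction_id,
        "model_score": model_score,
        "recommendation": recommendation,
        "routing": routing,
    }))
    return routing

print(route_decision("tx-1001", 0.97, "approve"))  # high confidence: handled automatically
print(route_decision("tx-1002", 0.52, "approve"))  # low confidence: escalated to a human
```

In a real deployment the threshold would be calibrated per use case and revisited regularly, and the log would feed a monitoring and audit pipeline rather than stdout.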
In such a scheme, boring AI, which manages the flow of transactions, applications, and risks, becomes a stable investment asset. It provides scalable, reproducible results that are built into the business’s cash flows and protected by a combination of governance, quality data, and the trust of customers and regulators.
Author
Alexander Rugaev — a serial entrepreneur and venture capital expert with over 20 years of experience in technology, public markets, and startup development. He has founded and scaled multiple companies in AI, robotics, and blockchain, bridging early-stage innovation with institutional and public investors worldwide.


