
AI has become the headline technology in fintech, but the most valuable deployments rarely look like science fiction. They look like fewer bottlenecks, fewer manual checks, fewer handovers, and fewer costly mistakes.
At CoinsPaid, the team takes a deliberately pragmatic approach: use AI where it reliably compresses cycle time, improves consistency, and increases visibility, but keep humans firmly responsible for decisions that carry security, financial, or compliance risk.
We spoke with Aliaksei Tulia, Chief Technical Officer at CoinsPaid, about where AI already delivers measurable impact across engineering and operations, why “private-by-design” usage matters in payments, and what the next wave of agentic commerce could mean for trust and accountability.
Let’s start with the basics. How do you personally define “useful AI” in a payments company?
For me, “useful AI” in payments is not about replacing judgement. It’s about replacing waiting. Waiting for someone to summarise a document, write repetitive code, scan logs, classify files, draft a first version of a threat model, or answer a standard customer question.
These tasks still need oversight, but they don’t always need human creativity. When AI removes that routine, engineers and operations teams can spend more time on what actually requires people: architecture decisions, incident response, tricky edge cases, and improving the customer experience.
When AI takes a disciplined first pass, people can focus on the work that genuinely needs human input.
Why did CoinsPaid start using AI in the first place?
The honest answer is speed, but speed with control.
Every organisation has a long list of repetitive, high-volume tasks: reading and processing documents, writing boilerplate, preparing test cases, triaging requests, pulling information from internal knowledge bases. Those tasks consume time and create queues, even when they are not the core “value work”.
AI helps us compress those queues. Done properly, it reduces human error, improves consistency, and frees teams to focus on high-impact problems.
But in fintech, there’s a non‑negotiable condition: AI must not become a new risk surface. If you deploy it without governance, you can accidentally leak sensitive information, embed hidden errors into code, or create compliance ambiguity. So our approach has always been: guardrails first, then scale.
AI can increase velocity in the short term and increase maintenance cost in the long term. We have to measure both: not just “how fast did we ship”, but “what did we create to maintain later”.
Where in the company is AI already being used?
We focus on practical, day-to-day use cases. Two areas are already delivering real value:
1. Engineering: AI helps generate routine code, write tests, analyse requirements, identify logic gaps, and summarise large technical documents.
2. Operations and internal workflows: particularly document processing, where we built our own classification system to keep sensitive materials controlled.
None of this is futuristic. It’s operational. The goal is to make teams faster, not to create a new layer of complexity.
This document classifier you’ve built, what exactly does it do?
In payments and crypto, you handle a lot of sensitive material: client information, financial workflows, security artefacts, architecture diagrams, things that should never be casually uploaded into external tools.
Our classifier is designed to label documents quickly and consistently into four categories: public, internal, confidential, and secret. That label then determines what can happen next – where it can be stored, who can access it, and whether it is eligible for any AI-assisted processing.
The key point is that we built and deployed it inside our own environment, a private setup where the data does not leave our controlled infrastructure. That gives us governance, auditability, and a clear security boundary.
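The label-driven gate described above can be sketched in a few lines. This is an illustrative sketch, not CoinsPaid’s implementation: only the four labels come from the description; the destination names and their ceilings are assumptions standing in for a real data-classification policy.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """The four labels, ordered from least to most sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    SECRET = 3

# Illustrative policy: the most sensitive label each destination may receive.
# Real ceilings would come from the organisation's classification policy.
DESTINATION_CEILING = {
    "external_tool": Sensitivity.PUBLIC,       # nothing sensitive leaves the boundary
    "private_llm": Sensitivity.CONFIDENTIAL,   # self-hosted, inside controlled infra
    "internal_storage": Sensitivity.SECRET,
}

def may_process(label: Sensitivity, destination: str) -> bool:
    """Return True if a document with this label may flow to the destination."""
    ceiling = DESTINATION_CEILING.get(destination)
    if ceiling is None:
        return False  # unknown destinations are denied by default
    return label <= ceiling

# A confidential file may go to the private setup but never to an external tool.
assert may_process(Sensitivity.CONFIDENTIAL, "private_llm")
assert not may_process(Sensitivity.CONFIDENTIAL, "external_tool")
```

The deny-by-default branch matters as much as the table itself: any destination the policy has not explicitly considered is treated as outside the security boundary.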
And it’s not just “privacy for privacy’s sake”. LLMs introduce a new attack surface. Treating them just like a productivity tool is a mistake. You need threat modelling, access controls, logging, and clear rules about what data can be exposed to what system.
The newer version is more “enterprise-grade” in the sense that it detects new files, updated versions, and changes over time, and re-evaluates content automatically. That matters because document risk is not static. A file that was harmless last month can become sensitive after a change.
Developers love AI tools, but they can also create IP and security risk. How do you let engineers use AI without losing control?
You’re right, code is intellectual property, and in fintech the cost of leakage is high. So we treat AI usage like any other high-impact tooling: policies, infrastructure, and enforcement.
Before we scaled anything, we established three principles:
• Data classification rules are non-negotiable. If something is sensitive, it cannot be used in ways that increase exposure.
• Safe tooling and safe environments matter. Engineers need a controlled path, not a grey zone where everyone improvises.
• Humans remain accountable. AI can draft, but engineers approve and own the outcome.
In practice, we allow AI to support work such as test generation, routine coding patterns, requirements analysis, and summarising large documents. But we do not outsource complex architecture decisions, high-risk security design, or compliance-critical logic to an LLM. Those still require senior expertise and deliberate review.
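The split between AI-assistable and human-only work can be expressed as a simple routing rule. A hypothetical sketch: the task names and routing labels are assumptions for illustration, not CoinsPaid’s actual tooling; the allowed categories mirror the list above.

```python
# Illustrative allowlist reflecting the policy above: AI may produce the first
# draft for these task types; everything else goes straight to senior engineers.
AI_ASSISTABLE_TASKS = {
    "test_generation",
    "routine_coding",
    "requirements_analysis",
    "document_summary",
}

def route_task(task_type: str) -> str:
    """Return who produces the first draft for a given task type."""
    if task_type in AI_ASSISTABLE_TASKS:
        return "ai_draft_then_human_review"  # an engineer still approves and owns it
    return "human_only"  # architecture, security design, compliance-critical logic

assert route_task("test_generation") == "ai_draft_then_human_review"
assert route_task("security_architecture") == "human_only"
```

Note that even the AI-assisted path ends in human review, matching the principle that engineers approve and own the outcome.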
Have you measured any improvements or is this still “a nice idea”?
We measure it. Otherwise it’s just hype.
In some categories of repetitive work we’ve seen substantial cycle time reductions, particularly in test generation and routine frontend tasks. The magnitude varies by team, task type, and how mature the surrounding process is.
To be clear, not every task becomes faster. AI is not a shortcut for hard problems. But for the work that creates queues – the repetitive, time-consuming parts – it can be a big advantage.
Security is one of the most sensitive areas in fintech. Can AI genuinely help there?
Yes, if you use it correctly.
Security contains a lot of repetitive work: reviewing diagrams, mapping data flows, checking configurations, drafting threat models, writing first-pass risk notes. AI is very good at taking that first pass because it doesn’t get tired, and it doesn’t “forget” to check a section.
But our rule is simple: AI drafts, humans decide.
Security mistakes are too expensive. AI can hallucinate, misinterpret context, or miss a nuance that a security engineer catches instantly. So we use AI to accelerate analysis and documentation, then we apply human judgement and validation before any decision or change is made.
There’s also a hard reality: LLM accuracy often starts around 40–70% depending on the task and constraints. Getting to 90%+ takes serious effort. Final‑mile failure creates expensive rework. In fintech, 80% accuracy is a failure.
When that failure happens, the downside isn’t theoretical:
• cost of rework
• reputational risk
• regulatory exposure
• customer trust erosion
What about compliance and AML? Many people talk about “AI-powered compliance”. Where do you draw the line?
In compliance, determinism matters.
AI can help with supportive work: collecting information, summarising public sources, preparing an early due diligence pack, highlighting inconsistencies for review. That can save time.
But compliance decisions often require rule-based answers. Thresholds and obligations are defined by law and policy. AI is not deterministic, the same prompt can produce different answers. That’s not acceptable as the final decision-maker in AML.
So for us, AI is an assistant, not an authority. It speeds up preparation. It does not replace the combination of rules engines and accountable human judgement.
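The determinism point is easy to see in code: a rules-engine check gives the same answer for the same input, every time. A toy sketch with an illustrative threshold; real values are set by law and policy, not by this example.

```python
from decimal import Decimal

# Illustrative threshold only. Actual reporting and review thresholds are
# defined by regulation and internal policy, not hard-coded like this.
REVIEW_THRESHOLD = Decimal("10000")

def requires_enhanced_review(amount: Decimal) -> bool:
    """Deterministic rule: same input, same answer, fully auditable.

    This is the kind of final decision that stays with a rules engine.
    An LLM can help prepare the review pack, but it does not make this call.
    """
    return amount >= REVIEW_THRESHOLD

assert requires_enhanced_review(Decimal("10000"))
assert not requires_enhanced_review(Decimal("9999.99"))
```

Using `Decimal` rather than floats is deliberate: monetary thresholds must compare exactly, with no floating-point surprises at the boundary.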
There’s a growing debate about AI agents and agentic commerce. Do you think we’ll see AI systems initiating payments at scale?
We’re moving in that direction, but the hard part is not the technology. The hard part is trust, consent, and accountability. The industry is already experimenting with “agentic payments” concepts, where an AI agent can take steps towards checkout under clearly defined permissions.
What that signals is that the ecosystem is starting to design standards around verifiable user intent and auditable authorisation for agent-driven flows.
I don’t think AI will “replace” regulated financial infrastructure. Governments and regulators want more control and clearer accountability, not less. But AI will absolutely reduce manual coordination: fewer manual steps, better orchestration, faster dispute resolution – as long as we build the right guardrails.
The future here is not “autonomous money”. It’s delegated capability with strict boundaries.
Beyond tooling: what determines whether AI adoption actually works inside a company?
The hard truth is that AI exposes weak executive alignment faster than almost any other technology.
Where the exec team is cohesive, AI accelerates results. Where it isn’t, AI tends to amplify existing dysfunction.
That’s why governance isn’t only about security, it’s also about execution: ownership, decision‑making, review discipline, and measurement.
Looking ahead, what’s CoinsPaid’s AI strategy over the next couple of years?
Three directions:
1. Engineering speed without quality loss: expand AI support in testing, documentation, and routine implementation while strengthening review discipline and governance.
2. Client-facing self-serve tools: give users faster ways to resolve basic issues, retrieve reports, and validate transaction-related information without waiting for manual support.
3. Operational forecasting: reduce bottlenecks and improve processing reliability by anticipating load patterns and operational constraints earlier.
A simple way to say it is: AI won’t “move funds” for you, but it can help ensure the right processes are ready when clients need them.
Finally, which AI systems impress you most right now?
Different models excel at different tasks: some handle long context well; others are faster or more consistent for structured outputs.
I’m also excited by more structured, multi‑role workflows, where you can simulate an architect, a developer, and a tester to stress‑test an idea before you ship.
But regardless of model, my principle stays the same: in fintech, the model is never the product. The product is the safe, controlled system around it.
At CoinsPaid, AI is not a branding exercise. It’s a way to reduce waiting, increase consistency, and make teams and clients faster, while keeping humans accountable for the decisions that matter.
The companies that win won’t be the ones that talked most about AI. They’ll be the ones that built disciplined operating capability around it. Speed without governance is just future technical debt.



