Beyond the Algorithm: Why AI Must Be Regulated in Online Mortgage Advice

Written by Gerard Boon, Managing Director of Boon Brokers

Abstract: This article explores how AI is reshaping financial advice and why regulation must evolve to protect consumers from misinformation in today’s digital mortgage era.

Artificial intelligence is rapidly transforming how consumers access financial advice. From AI chatbots to algorithm-driven mortgage comparison tools, automation is becoming embedded in nearly every way people research, evaluate, and ultimately act on their financial decisions. But while this innovation brings undeniable benefits – speed, accessibility, and scale, to name a few – it also introduces a new layer of risk: the illusion of trust.

Recent research into consumer perceptions of online mortgage advice exposed a significant concern: 74% of respondents did not check whether their online source of mortgage advice was qualified or regulated, even when the advice was generated by AI or published on unregulated platforms.

The shocking misconception that easily obtained advice is equivalent to careful, considered research is more than just a gap in AI awareness – it represents a genuine threat to financial wellbeing. Why? Simply put, when consumers act on what they believe to be vetted advice, their confidence rests on an illusion of expertise. In reality, they could be following untrustworthy, unregulated, or even deliberately harmful financial advice with long-lasting consequences.

Within the mortgage sector, this issue is especially pressing. Unlike other areas of consumer finance, mortgage advice is deeply personal and should be tailored to individual needs, covering both financial affordability today and financial goals for the future. In short, it involves decisions with long-lasting consequences. A single misstep – an inappropriate fixed rate, a miscalculation of affordability, or misinformation on remortgage timelines and early repayment charges – can cost borrowers tens of thousands of pounds.

Unfortunately, AI-powered platforms – most commonly ChatGPT and Microsoft Copilot – now confidently offer generic “recommendations,” often without any expertise or real understanding of individual borrowers’ needs, goals, or obligations. It should be noted that AI is not a malicious force trying to lead borrowers astray. Rather, it is an innovative tool that still requires a clear understanding of how to use it responsibly.

We’ve encountered numerous clients who initially trusted automated advice tools, believing them to be objective and compliant. In reality, these tools offer only surface-level suggestions – a consequence both of the wide pool of misinformation they draw on and of the questions users actually type into them. Most crucially, they lack the regulatory obligation – and the ethical judgement – to assess long-term suitability or to warn users of potential risks. By this, I do not mean a list of 100 potential problems drawn from across the web – I mean the risks specific to a borrower’s individual financial profile and requirements. As a result, when clients arrive after engaging with these tools, it is often clear they were not fully informed.

With that said, there is no doubt that AI holds enormous potential for improving financial inclusion. It can lower barriers for those who struggle with literacy, language, or digital access. It can operate around the clock, offering support where human advisers might be unavailable. And when responsibly implemented, it can streamline administrative burdens and improve client onboarding. But inclusion without protection is insufficient. AI innovation must go hand-in-hand with responsibility: the power to reach more people carries a duty to provide regulated, safe information.

Compounding the issue is the evolving role of search engines and AI-driven interfaces in shaping consumer journeys. Increasingly, visibility – not regulated accuracy – is the primary driver of engagement. Content that plays the algorithm’s game is deemed the most optimised and therefore ranks highest, regardless of regulatory compliance or impartiality. A well-optimised but unregulated article can easily outrank FCA-compliant advice simply because it fits the algorithm’s preference for structure and keywords. This creates a feedback loop that prioritises influence over integrity.

This is where the mortgage industry must take a more active role in shaping the future of AI integration. We cannot afford to be passive bystanders. We must advocate for a regulatory framework that keeps pace with technological advancement. That means ensuring AI-generated financial guidance is clearly labelled, that disclaimers are visible and unambiguous, and that consumers are educated on the difference between regulated advice and general information.

Once again, AI should not be cast as a villain. Its development needs to align with shifts in consumer behaviour and regulatory expectations, and for those in the financial sector, that specifically means working collaboratively with regulators and developers. The current regulatory landscape is still catching up to the speed of innovation. In the UK, the FCA’s “Pro-Innovation Approach to AI” has begun exploring how AI intersects with the Consumer Duty, but much of the oversight remains reactive rather than proactive.

From my own business perspective, we’ve already seen how AI can support internal operations. At Boon Brokers, we’ve trialled AI tools to assist with document checks and administration, and to improve client communication. These tools are not replacements for human advisers – nor should they be. Instead, they serve as accelerators, freeing up time for our team to focus on what matters most: understanding the client’s full picture and offering tailored, regulated advice.

But even with these efficiencies, we remain cautious. Every AI tool we consider is evaluated through the lens of data protection, compliance, and client trust. We ask: does this tool enhance our service without compromising our standards? Can we explain its outputs? Are we confident it won’t introduce bias or misinformation? These are the questions every mortgage adviser should be asking – not just to satisfy regulators, but to uphold the trust that underpins our industry.

Looking ahead, the next five years must be defined by progression – not just in technology, but in transparency, accountability, and consumer empowerment. We need to invest in AI literacy, both within our teams and among the public. We need to push for clearer boundaries between marketing and advice. And we need to ensure that the tools we adopt serve the long-term interests of the people who rely on them.

AI is not inherently good or bad. It is a tool – powerful, scalable, and increasingly persuasive. But in the context of mortgage advice, where the stakes are high and the decisions deeply personal, we must treat it with the seriousness it deserves. Innovation without integrity is not progress. It’s a risk, and one we cannot afford to ignore.
