Last reviewed: 30 April 2026
A new warning from industry analysts and a landmark Treasury Committee report have raised the prospect of an AI-driven mis-selling scandal in UK financial services that could rival the scale of PPI — the payment protection insurance scandal that cost banks over £50 billion in redress. The concern centres on a governance gap: more than 75% of UK financial services firms now use AI, but regulatory frameworks have not kept pace with the risks.
The Treasury Committee's verdict
In January 2026, the House of Commons Treasury Select Committee published a major report on AI in financial services. It concluded that the Financial Conduct Authority (FCA), the Bank of England and HM Treasury are "exposing the public and the financial system to potentially serious harm" by adopting a wait-and-see approach to AI regulation.
The committee found that AI-driven decision-making in credit and insurance lacks transparency, and that automated financial guidance tools — including AI chatbots — are operating in a regulatory grey area. MPs expressed concern about the possibility of "opaque, AI-driven risks including model bias, automated mis-selling and new forms of systemic fraud."
What regulators are doing
The FCA has confirmed it does not plan to introduce AI-specific regulation. Instead, it is relying on existing frameworks — principally Consumer Duty (PRIN 2A) and the Senior Managers and Certification Regime (SM&CR) — to supervise how firms use AI in regulated activities. The regulator launched its AI Live Testing initiative in October 2025, with a second cohort expected to join in April 2026.
The FCA also launched the "Mills Review" in early 2026 — a formal strategic review led by Executive Director Sheldon Mills — examining the long-term impact of AI on retail financial services through to 2030. Responses were sought by February 2026 to inform future supervisory policy.
On 1 April 2026, the Bank of England and Prudential Regulation Authority published their response to the Treasury Committee, reiterating a technology-agnostic approach to regulation while acknowledging that risks from agentic AI in payments and financial markets warrant further monitoring.
The governance gap in practice
The core problem identified by the committee is accountability. When an AI model denies a credit application, misprices insurance, or delivers misleading financial guidance, it can be difficult to identify which individual — or which firm — is responsible under existing frameworks. The SM&CR requires named senior managers to own specific risk areas, but applying this to AI systems that operate autonomously across thousands of decisions daily is genuinely complex.
The committee has recommended that the FCA publish comprehensive, practical guidance for firms on how consumer protection rules apply to AI, including clarity on senior manager accountability for harm caused through AI, by the end of 2026.
What consumers should know
If you receive a financial decision with legal or similarly significant effects, such as a loan refusal, an insurance quote or an investment recommendation, that appears to have been made solely by an automated system, you have the right under Article 22 of the UK GDPR to request human intervention in that decision. You also have the right to complain to the Financial Ombudsman Service if you believe an AI-driven decision has caused you harm.
Frequently asked questions
Is AI financial advice regulated in the UK?
Yes, to the extent AI is used in regulated activities. The FCA's Consumer Duty and SM&CR apply. However, unregulated AI chatbots offering informal financial guidance exist in a grey area that regulators are actively reviewing.
What is the Mills Review?
A formal FCA review, led by Executive Director Sheldon Mills, examining how AI could reshape retail financial services through to 2030. It covers technology evolution, market impacts, consumer trends and whether existing regulatory frameworks remain fit for purpose.
Can I challenge an AI-based financial decision?
Yes. Article 22 of the UK GDPR gives you the right to request human intervention in decisions made solely by automated systems where those decisions have legal or similarly significant effects. Contact your financial provider in writing and escalate to the Financial Ombudsman Service if the matter is unresolved.
Are UK AI financial risks being stress-tested?
Not specifically. The Treasury Committee recommended that the Bank of England introduce AI-specific stress tests. The BoE has not yet committed to this but has indicated it will consider AI scenarios in future system-wide exercises.
How many UK financial firms use AI?
More than 75%, according to the Bank of England and FCA's 2024 AI survey, with a further 10% planning to adopt it within three years. Foundation models including large language models account for 17% of current AI use cases in the sector.
Sources: Treasury Select Committee — AI in Financial Services (HC 684), January 2026 | FCA — AI and the FCA: our approach | BoE/PRA response to Treasury Committee, April 2026 | Inside Global Tech — UK Financial Services Regulators' Approach to AI 2026 | Parliament.uk — TSC publishes regulator responses, April 2026.
This article is for informational purposes only. For regulated financial advice, consult a qualified FCA-authorised adviser.