May 2nd, 2026
On September 8th, 2025, the FCA published its approach to the use of AI in financial markets. The FCA believes it is best not to introduce additional AI-specific regulation and instead to rely on existing regulatory frameworks, which it considers already mitigate the risks posed by AI while giving firms the flexibility to adapt without being bound to detailed rules.
However, on January 20th, 2026, the Treasury Committee of the UK Parliament published a report arguing that this approach does not do enough to manage the risks presented by AI.
The FCA's mixture of principles-based and outcomes-based regulation makes it easier for financial institutions to establish effective processes that achieve the desired outcomes, allowing them to compete at the global scale expected of them. Under an outcomes-based approach, institutions can also demonstrate that they meet those outcomes with the help of AI. Looking at it from a financial crime perspective: in 2025, the National Crime Agency (NCA) received over 860,000 Suspicious Activity Reports (SARs). As I discussed in my previous post, one of the most common reasons for this volume is the tick-box compliance that firms display, which leads to low-quality SARs. With the help of AI, institutions could identify genuinely suspicious behaviour more accurately, leading to higher-quality SARs. Moreover, given the rapid growth of the AI industry, the FCA's approach looks like the right one, as prescriptive rules would struggle to keep pace with a tool that develops so quickly. The FCA itself already uses AI for supervision, which highlights the regulator's trust in the technology.
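To make the point about higher-quality SARs concrete, here is a minimal sketch of the kind of anomaly scoring a monitoring system might use to triage transactions before a human analyst reviews them. The features, data, and threshold are entirely hypothetical illustrations; nothing here reflects the FCA's or any firm's actual tooling.

```python
# Hypothetical sketch: scoring transactions for SAR triage with an
# anomaly detector, rather than tick-box rules alone.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Illustrative features per transaction: amount (GBP), hour of day,
# and the customer's transaction count over the past 24 hours.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 20, size=1000),                 # business hours
    rng.poisson(2, size=1000),                      # low daily frequency
])
suspicious = np.array([
    [9500.0, 3, 14],   # large amount, 3 a.m., unusually frequent
    [9900.0, 2, 12],
])

# Fit on historical "normal" activity; score new transactions.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# score_samples: lower scores indicate more anomalous transactions.
candidates = np.vstack([normal[:3], suspicious])
for features, score in zip(candidates, model.score_samples(candidates)):
    flag = "REVIEW" if score < -0.55 else "ok"  # illustrative cut-off
    print(f"amount={features[0]:8.2f} hour={int(features[1]):2d} "
          f"freq={int(features[2]):2d} score={score:+.3f} {flag}")
```

The point of a ranking like this is that analysts spend their time on the highest-scoring cases, which is how AI could raise SAR quality rather than volume.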
While it is true that AI has many advantages and the FCA's wait-and-see approach is currently the best way to regulate it, the Treasury Committee's criticism is fair. One of its most interesting critiques concerns accountability. The report highlights a trade association's argument that individuals should not be held liable for harm caused by AI when they lack the knowledge to understand its decisions. This is an important point: if AI makes a mistake, the question becomes who should be blamed. With AI acting with limited human intervention, it is easy to push blame elsewhere, especially in a financial crime setting, where AI missing high-risk transactions could lead to enforcement. It raises the question of who, specifically, under the Senior Managers and Certification Regime (SM&CR), should be held accountable for AI decision-making: the MLRO, the developers, or someone else entirely? The current regulations do not provide for this, and targeted rules are needed.

Moreover, AI requires training on large datasets of customer information, some of it sensitive. This raises issues of data handling and privacy, and of compliance with the UK GDPR. Customers may worry because they will not know how AI is using their data, which could lead to more harm than good. From an AML perspective, training AI on customer transaction data brings the Money Laundering Regulations 2017 (MLR 2017) into the conversation, as that legislation specifically governs data retention and usage. The MLR 2017 requires records to be retained for five years after the end of a business relationship, but with AI continuously learning from data, it is unclear how long a system effectively holds that data and whether this fits within the five-year period the legislation requires. This raises a question the FCA's current approach does not address: whether AI systems continuously trained on customer data can comply with the five-year retention requirement under the MLR 2017, and who is responsible for ensuring that they do.
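To illustrate the retention point, here is a minimal sketch of how a firm might check whether the records feeding a model are still inside the five-year window. The record layout and dates are hypothetical, and the sketch deliberately sidesteps the harder question the paragraph above raises: what a continuously trained model has already "learned" from data that must now be deleted.

```python
# Hypothetical sketch: checking training records against the MLR 2017
# five-year retention period, measured from the end of the relationship.
from datetime import date

RETENTION_YEARS = 5  # five-year period under the MLR 2017

def retention_deadline(relationship_end: date) -> date:
    """Date after which the record should no longer be held."""
    try:
        return relationship_end.replace(year=relationship_end.year + RETENTION_YEARS)
    except ValueError:  # 29 Feb rolled into a non-leap year
        return relationship_end.replace(year=relationship_end.year + RETENTION_YEARS, day=28)

# Hypothetical training records: (customer_id, relationship_end_date)
training_records = [
    ("cust-001", date(2020, 3, 15)),
    ("cust-002", date(2022, 11, 2)),
]

today = date(2026, 5, 2)  # the date of this post
for customer_id, ended in training_records:
    deadline = retention_deadline(ended)
    status = "expired - exclude and delete" if today > deadline else "within window"
    print(f"{customer_id}: retention until {deadline} -> {status}")
```

A check like this is easy for stored records; the unresolved regulatory question is who owns the equivalent obligation for the model itself.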
Overall, the FCA's approach is right, but it needs specific rules on the use of AI. Given the pace at which AI is growing and developing, it makes no sense to introduce prescriptive, rules-based regulation that would quickly become outdated and would also stifle growth in the financial sector, since institutions would have to tick every box before deploying their AI systems. With this in mind, I think the FCA should introduce targeted rules addressing accountability and data privacy, including who is responsible for the continued monitoring and governance of AI systems, so that institutions keep paying attention to their processes.
Do you believe the current regulations on AI are effective, or too lax?
Links:
https://www.fca.org.uk/firms/innovation/ai-approach#revisions