Beyond the Hype: BoE AI Roundtables Reveal the Practical Barriers to Bank Adoption
While the financial world remains captivated by the promise of Generative AI, the Bank of England (BoE) has released a sobering summary of its latest AI roundtables that highlights the “grit in the gears” of adoption.
The summary, following discussions with challenger banks, global giants, and insurers, reveals a sector that is supportive of current regulatory frameworks—specifically Supervisory Statement 1/23 on Model Risk Management—but is increasingly hitting bottlenecks in data, skills, and the sheer technical reality of “Agentic AI.”
The Death of the “Explainable” Model?
One of the most striking takeaways from the BoE roundtables is the growing tension between traditional validation and the nature of modern AI. Historically, regulators and risk functions have demanded to see the “inner workings” of a model—exactly how an input leads to an output.
Industry participants argued that this approach is becoming untenable as firms shift toward generative and agentic systems. The consensus is shifting: rather than trying to map every neuron of a complex model, risk management must evolve to focus on outcome-based testing and rigorous monitoring of the “guardrails” around AI systems. Essentially, if we can’t always explain how the AI arrived at an answer, we must become much better at catching it when it’s wrong.
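To make the contrast concrete, outcome-based testing treats the model as a black box and scores its outputs against explicit rules, rather than tracing how an input produced an output. The sketch below is purely illustrative: the rule names, thresholds, and stubbed model call are hypothetical assumptions for this example, not any firm’s actual framework.

```python
# Illustrative sketch only: all names and rules here are hypothetical,
# invented to show the shape of a black-box guardrail check.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class GuardrailResult:
    rule: str
    passed: bool

def check_output(output: str,
                 rules: List[Tuple[str, Callable[[str], bool]]]) -> List[GuardrailResult]:
    """Score a single model output against every guardrail rule.

    The model's internals are never inspected; only the output is tested.
    """
    return [GuardrailResult(name, predicate(output)) for name, predicate in rules]

# Example guardrails: a length bound and a crude banned-phrase filter.
rules = [
    ("max_length", lambda text: len(text) <= 500),
    ("no_guarantees", lambda text: "guaranteed return" not in text.lower()),
]

# A real system would call the model here; we stub it with a fixed string.
model_output = "This product is low risk with a guaranteed return of 12%."

results = check_output(model_output, rules)
violations = [r.rule for r in results if not r.passed]
print(violations)  # → ['no_guarantees']
```

In production the interesting work is in the rule set and the monitoring around it (sampling rates, escalation thresholds, audit logs), but the principle is the same: catch bad outcomes at the boundary instead of explaining every internal step.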
The “Caution” Bottleneck
Despite the push for innovation, second-line risk functions remain deeply cautious. This isn’t just bureaucracy; it’s a reflection of two critical constraints:
The Skills Gap: There is a persistent bottleneck in AI-specific risk expertise.
The Compliance Burden: Banks are struggling to demonstrate that they can meet supervisory expectations in a way that is “sustainable” as AI use cases proliferate from dozens to thousands.
A Call for “Standardised” AI Procurement
In a move that mirrors the industry’s struggles with cloud adoption, roundtable participants highlighted that negotiations with third-party AI providers are slowing down deployment.
The issue? Tech giants and AI startups are often unfamiliar with the granular compliance requirements of regulated finance. There is now a growing call for the Bank of England to convene industry players to agree on minimum standards for third-party AI providers, ensuring that “off-the-shelf” models don’t become a regulatory liability for the banks that buy them.
The Fragmented Frontier: EU vs. UK vs. US
For global firms, the “regulatory patchwork” remains the biggest hurdle to scaling AI. Participants noted significant friction between the UK’s principles-based approach, the US’s SR 11-7 guidance, and the EU AI Act.
This fragmentation isn’t just a legal headache; it’s a competitive drain. It prevents firms from scaling successful AI use cases across borders, forcing them to re-verify and rebuild controls for every jurisdiction. The message to the Bank of England was clear: use your seat at the international table to push for global convergence.
The Takeaway for Fintechs
For the Finextra community, the BoE’s findings signal a shift in the AI narrative. The “Proof of Concept” era is over. The next phase of AI in finance won’t be won by the cleverest algorithm, but by the firm that solves the “plumbing”—data quality, cross-border compliance, and a risk framework that can handle agentic systems without a human constantly in the loop.