
Bank of England AI Roundtables Summary

Bank of England / PRA News
Published February 16th, 2026
Detected March 13th, 2026

Summary

The Bank of England published a summary of its AI roundtables held with regulated firms in February 2026. The discussions focused on the responsible adoption of AI and ML, with participants generally supporting the existing regulatory framework and not seeing an immediate need for new AI-specific guidance.

What changed

The Bank of England has released a summary of its AI roundtables, which convened representatives from challenger banks, global systemically important banks, and insurers. The discussions, held in late 2025 and summarized in February 2026, aimed to understand constraints faced by regulated firms in adopting AI and ML technologies. Participants expressed support for the PRA's principles-based regulatory framework, finding it pragmatic for innovation, and did not currently see a need for specific AI guidance or sandboxes, noting existing FCA initiatives.

While the existing framework is viewed favorably, the summary highlights that second-line risk functions are proceeding with caution, potentially delaying AI deployment. Challenges include AI skills shortages and the need to demonstrate compliance. Traditional model risk management approaches are considered unsustainable for complex AI models like generative AI, suggesting a shift towards greater emphasis on testing, monitoring, and outcome-based guardrails. The document serves as an informational notice, with no immediate compliance actions or deadlines required for regulated entities.

Source document (simplified)

Summary of AI roundtables - February 2026

The Bank of England held roundtable meetings with representatives from regulated firms on the responsible adoption of artificial intelligence and machine learning (AI and ML), to better understand the constraints that firms may be facing.


Published on

16 February 2026

Introduction

As per the Bank’s approach to innovation in AI, DLT and quantum computing, we seek to engage with innovators and industry practitioners in various ways to better understand the latest technological developments and their implications for the financial sector. This includes via biennial AI surveys of the UK financial sector, the AI Consortium (a successor to the AI Public-Private Forum), the Cross Market Operational Resilience Group AI Taskforce, and the Bank’s Market Intelligence function.

To complement these initiatives, and in line with the Bank’s secondary growth objective, in late 2025, the Bank of England hosted three roundtables with participants from regulated firms to better understand the constraints firms may be facing in adopting AI, and how the Bank and PRA can support responsible AI adoption. Each roundtable was held with representatives from a different PRA-regulated sector: (1) challenger banks and UK-focussed larger banks; (2) global systemically important banks; and (3) insurers. Observers from the FCA and HMT were also present.

Below is a summary of the key points arising from the roundtable discussions, which were held under the Chatham House Rule.


Summary of key points

Across all three roundtables, participants from regulated firms expressed support for the PRA’s regulatory framework as it relates to AI. Participants noted that the PRA’s principles-based, outcomes-based policy and supervisory statements gave firms sufficient space to innovate within clear regulatory guardrails. Supervisory Statement 1/23 on Model Risk Management in particular was noted by several participants as pragmatic in enabling responsible AI adoption. Most participants did not yet see the need for detailed AI-specific regulatory guidance or rules, and most did not see a case for a Bank or PRA AI sandbox at this time; the FCA’s Supercharged Sandbox and AI Live Testing initiatives were seen as providing sufficient offerings for testing purposes.

Second-line risk functions continue to approach the use of AI with caution, which may delay AI deployment pipelines. There were mixed views on whether this level of caution was optimal or inevitable. Drivers could include both (a) bottlenecks in AI skills and expertise, given the dynamic and highly complex nature of the technology and the range of uses to which it was being put; and (b) a desire to ensure that compliance with supervisory expectations could be comprehensively demonstrated. As an example, several participants noted that firms’ traditional model risk management approach to validation would not be sustainable in its current form as generative AI and agentic systems proliferated. The traditional emphasis on understanding the inner workings of a model – i.e. how inputs mapped to outputs – was not tenable or fully effective for increasingly complex AI models. The concept of having a ‘human-in-the-loop’ was also challenged by the rise of agentic AI. Several participants suggested that risk management needed to evolve to put greater emphasis on testing, monitoring and setting guardrails around the outcomes of broader AI systems. Some participants suggested there would be value in sharing supervisory observations on good and bad practice, or convening industry experts to define, agree and share best practice. [1]

Firms operating in multiple jurisdictions need to navigate different regulatory approaches to AI. Participants noted key differences between the UK’s regulatory approach, the US’s approach (e.g. Supervisory Letter SR 11-7 on Guidance on Model Risk Management) and the EU AI Act. Fragmentation increased compliance costs, slowed AI adoption, and prevented firms from scaling AI use cases across borders. Several participants therefore encouraged the Bank to use its membership of various international fora to encourage global coordination and convergence.

Procurement and contract negotiations with third-party AI providers were slowed by inconsistent familiarity with regulated firms’ compliance requirements. Some participants thought the market would eventually solve that problem, i.e. minimum standards would emerge over time, but that there was an opportunity cost in the meantime. Several participants therefore noted that the Bank could explore convening financial and technology firms to agree minimum standards for third-party AI providers to the regulated financial sector. Some participants noted that as AI models become embedded in agentic systems throughout their firm’s core business processes, substituting between AI providers may become more challenging.

Data protection laws – along with emerging data sovereignty regimes in other jurisdictions – were a challenge to deploying and scaling AI use cases. Several participants noted that the legal requirement to complete Data Protection Impact Assessments in certain situations slowed their AI deployment pipeline. [2] Participants noted that new data location requirements could prevent scaling AI solutions across borders.

Data quality can also be a barrier to the use of AI, particularly in some areas of insurance. Some insurers have relatively little data on their individual customers, owing to the infrequency of customer engagement (e.g. annual policy renewal, or when a claim is submitted), in contrast to banks’ visibility of their customers’ transactions. The near-term prospects of, for example, hyper-personalised insurance products using AI were therefore limited in some areas.

  1. To note, in November, the PRA published slides with supervisory observations on firms’ compliance with SS1/23 in the context of their use of AI and machine learning.
  2. To note, the ICO has published guidance on when firms are required to do a DPIA, as well as specific guidance on DPIAs and data protection law more broadly in the context of AI deployment.



Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
Various UK Agencies
Published
February 16th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor

Who this affects

Applies to
Insurers, Banks
Geographic scope
UK

Taxonomy

Primary area
Financial Services
Operational domain
Compliance
Topics
Artificial Intelligence, Risk Management
