
GPAI Taskforce Meets on Safety and Security Chapter Measures

EC Digital Strategy News (AI Act)

Summary

The Signatory Taskforce for the General-Purpose AI (GPAI) Code of Practice met on March 27 to discuss two topics under the Safety and Security Chapter: aggregate forecasts of risk tiers and harmful manipulation risk scenarios. Providers of GPAI models with systemic risk are required to include in their frameworks estimates of timelines for when they reasonably foresee that a model will exceed the highest systemic risk tier already reached by any of their existing models. The AI Office will provide a concrete approach to aggregate forecasting; formats discussed included standardised forecasting exercises conducted across providers, with suggested cadences ranging from semi-annual to annual. Signatories also discussed categorising harmful manipulation risk scenarios by context of exposure, such as GPAI chatbots, third-party applications, agents, and disseminated AI-generated content.

“Taking all views into account, the AI Office will provide a concrete approach to this aggregate forecasting and respond to remaining open questions.”

EC, verbatim from source
Published by EC on digital-strategy.ec.europa.eu. Detected, standardized, and enriched by GovPing. Review our methodology and editorial standards.

About this source

GovPing monitors EC Digital Strategy News (AI Act) for new telecom & technology regulatory changes. Every update since tracking began is archived, classified, and available as free RSS or email alerts — 15 changes logged to date.

What changed

The GPAI Signatory Taskforce held its third meeting to discuss implementation of the Safety and Security Chapter of the GPAI Code of Practice. The meeting addressed Measure 1.1(2)(c) on aggregate forecasts, which requires providers of models with systemic risk to estimate when their models will exceed the highest systemic risk tier already reached, and Measure 3.1, which lists forecasting of general trends (such as algorithmic efficiency, compute use, data availability, and energy use) as a method for gathering model-independent information for the risk assessment. The Taskforce also discussed harmful manipulation risk scenarios under Appendix 1.4(4), with Transluce presenting approaches to categorise scenarios by exposure context.

Providers of GPAI models with systemic risk should monitor AI Office guidance on standardised aggregate forecasting formats and risk scenario categorisation. The AI Office indicated it will respond to open questions about compliance implications and exercise cadence. These discussions inform implementation of the EU AI Act's GPAI provisions, though no binding compliance obligations arise from this meeting summary alone.

Meeting

Date
2026-03-27

Archived snapshot

Apr 27, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.

The March 27 meeting of the Signatory Taskforce for the General-Purpose AI (GPAI) Code of Practice focused on two topics under the Safety and Security Chapter: aggregate forecasts of risk tiers and harmful manipulation.

Aggregate forecasts (Measure 1.1(2)(c) of the Safety and Security Chapter) require signatories that are providers of models with systemic risk to include, in their frameworks, estimates of timelines for when they reasonably foresee that their model will exceed the highest systemic risk tier already reached by any of their existing models. The relevant provision further states that such estimates ‘may take into account aggregate forecasts, surveys, and other estimates produced with other providers’.

Measure 3.1 of the Safety and Security Chapter mentions 'forecasting of general trends' (e.g. forecasts concerning the development of algorithmic efficiency, compute use, data availability, and energy use) as an example of a method to gather model-independent information for the risk assessment. This creates a concrete opportunity for structured forecasting exercises, conducted across providers of GPAI models with systemic risk, using a standardised framework.
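To make that concrete, the short sketch below illustrates one way a 'general trend' forecast could be produced, using a hypothetical training-compute series and a simple log-linear extrapolation. The data, the chosen variable, and the fitting method are illustrative assumptions only; Measure 3.1 prescribes none of them.

```python
# Illustrative sketch only: Measure 3.1 names trend forecasting as a method,
# but prescribes no dataset, variable, or model. Here a hypothetical series
# of training-compute observations is extrapolated with a log-linear fit.
import math

# Hypothetical (year, training compute in FLOP) observations.
observations = [(2022, 1e24), (2023, 4e24), (2024, 2e25), (2025, 8e25)]

def log_linear_forecast(data, target_year):
    """Fit log10(compute) = a + b * year by least squares, then extrapolate."""
    xs = [year for year, _ in data]
    ys = [math.log10(flop) for _, flop in data]
    x_mean = sum(xs) / len(xs)
    y_mean = sum(ys) / len(ys)
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
        (x - x_mean) ** 2 for x in xs
    )
    a = y_mean - b * x_mean
    return 10 ** (a + b * target_year)

print(f"Extrapolated 2027 compute: {log_linear_forecast(observations, 2027):.2e} FLOP")
```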

To facilitate the discussion, the Forecasting Research Institute presented an introduction to the topic of forecasting. The Taskforce went on to discuss possible formats this could take. For instance, the signatories that are providers of GPAI models with systemic risk could individually answer a set of questions related to risk forecasts for the specified systemic risks (Appendix 1.4) twice a year. These individual forecasts could then be aggregated and anonymised to provide an industry-wide estimate. Signatories raised questions, such as the appropriate cadence for such an exercise (with suggestions ranging from semi-annual to annual) and the implications for compliance. Taking all views into account, the AI Office will provide a concrete approach to this aggregate forecasting and respond to remaining open questions.
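Pending the AI Office's concrete approach, the sketch below shows one minimal way such an aggregation and anonymisation step could work: per-provider timeline forecasts are pooled, identities are dropped, and only summary statistics are reported industry-wide. The provider names, forecast values, and chosen statistics are all hypothetical.

```python
# Minimal sketch of aggregating and anonymising provider forecasts; this is
# not an AI Office specification. Inputs are hypothetical per-provider
# estimates of years until a model exceeds the highest systemic risk tier.
from statistics import median, quantiles

forecasts = {  # provider names are placeholders, discarded below
    "provider_a": 2.0,
    "provider_b": 3.5,
    "provider_c": 1.5,
    "provider_d": 4.0,
    "provider_e": 2.5,
}

def aggregate_anonymised(per_provider):
    """Drop provider identities and return industry-wide summary statistics."""
    values = sorted(per_provider.values())  # identities are discarded here
    q1, _, q3 = quantiles(values, n=4)      # interquartile range
    return {
        "median_years": median(values),
        "iqr_low": q1,
        "iqr_high": q3,
        "n_responses": len(values),
    }

print(aggregate_anonymised(forecasts))
# {'median_years': 2.5, 'iqr_low': 1.75, 'iqr_high': 3.75, 'n_responses': 5}
```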

Concerning harmful manipulation – a specified systemic risk in Appendix 1.4(4) of the Safety and Security Chapter – the Taskforce discussed approaches to concretise relevant risk scenarios. Establishing risk scenarios for each identified systemic risk is central to the risk assessment (Measures 2.2 and 3.3 of the Safety and Security Chapter).

Performing model evaluations that are sufficiently informative for the risk assessment requires that the measured model properties are relevant for a pathway to harm that is, in turn, relevant for the systemic risk in question. For model evaluations to be sufficiently specific to the risk of harmful manipulation, they should also be targeted at relevant risk scenarios. This means that the evaluation setting (e.g., system integration and user assumptions) should reflect conditions of such risk scenarios.

To kick-start signatories’ discussion, Transluce presented an introduction, based on recent stakeholder input, on how such risk scenarios could be approached. For example, risk scenarios could be categorised according to the context of exposure, reflecting whether the user is interacting with a GPAI chatbot, a third-party application (such as a financial service), an agent, or disseminated AI-generated content, or whether the model interacts with an evaluator directly.
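One way to picture this categorisation is as a small data model in which each risk scenario is keyed by its exposure context and carries the evaluation-setting assumptions described above. The sketch below is an illustrative assumption: the exposure contexts come from this meeting summary, but the type and field names are hypothetical, not Code of Practice text.

```python
# Illustrative data model only: the exposure contexts are taken from the
# meeting summary; the types and fields are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class ExposureContext(Enum):
    GPAI_CHATBOT = auto()             # user interacts with the model directly
    THIRD_PARTY_APPLICATION = auto()  # e.g. a financial service built on the model
    AGENT = auto()                    # model acts with a degree of autonomy
    DISSEMINATED_CONTENT = auto()     # user encounters AI-generated content second-hand
    DIRECT_EVALUATOR = auto()         # model interacts with an evaluator directly

@dataclass
class RiskScenario:
    """A harmful-manipulation risk scenario (Appendix 1.4(4)) keyed by context."""
    context: ExposureContext
    pathway_to_harm: str     # why the measured model property matters for this risk
    evaluation_setting: str  # system integration and user assumptions to reproduce

scenario = RiskScenario(
    context=ExposureContext.THIRD_PARTY_APPLICATION,
    pathway_to_harm="model steers users of a financial app toward harmful decisions",
    evaluation_setting="model behind the app's API, prompted as a retail user",
)
```

Keying each scenario to a context value in this way makes explicit which conditions an evaluation setting is meant to reproduce.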

The AI Office thanked the participants for sharing their input on the implementation of these measures and invited them to propose topics of interest to be considered in the preparation of the next meeting of the Signatory Taskforce.


Last update

27 April 2026


Named provisions

Safety and Security Chapter; Measure 1.1(2)(c); Measure 3.1; Appendix 1.4; Appendix 1.4(4)

Mentioned entities

Forecasting Research Institute, Transluce, AI Office


About this page

What is GovPing?

Every important government, regulator, and court update from around the world. One place. Real-time. Free.

What's from the agency?

Source document text, dates, docket IDs, and authority are extracted directly from EC.

What's AI-generated?

The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.


Classification

Agency
EC
Published
April 27th, 2026
Instrument
Notice
Branch
Executive
Legal weight
Non-binding
Stage
Final
Change scope
Minor

Who this affects

Applies to
Technology companies
Industry sector
5112 Software & Technology
Activity scope
AI model governance; Risk assessment; Systemic risk forecasting
Geographic scope
European Union (EU)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Compliance
Topics
Data Privacy; Consumer Protection
