
AI Act: Code of Practice on AI-Generated Content

EC Digital Strategy News (AI Act)
Detected February 25th, 2026

Summary

The EU AI Office convened working groups and workshops to gather stakeholder feedback on a draft Code of Practice for marking and labelling AI-generated content under the AI Act. Discussions focused on disclosure obligations, marking techniques, and responsibilities across the AI value chain.

What changed

The EU AI Office is developing a Code of Practice on the transparency obligations for AI-generated content mandated by Article 50 of the AI Act. This update covers the recent meetings and workshops held by the working groups (WG1 on Marking and Detection, WG2 on Disclosure of Deep Fakes and AI-Generated Text) to collect feedback from providers of generative AI systems, developers, industry representatives, civil society and academic experts. Key areas of discussion included the distinction between deceptive and non-deceptive AI use, responsibilities of different actors in the value chain, technical marking and detection methods, watermarking, metadata, interoperability, and alignment with existing EU legislation.

While the code is still in draft and the consultation is ongoing, regulated entities, particularly providers of generative AI systems and online platforms, should monitor its development. The discussions point to potential future obligations on the marking and labelling of AI-generated content, with an emphasis on clarity, usability and proportionality. Compliance officers should track the emerging standards and best practices, as these will likely inform future regulatory expectations and may affect content creation and distribution processes within the EU.

What to do next

  1. Monitor the development and finalization of the Code of Practice on Marking and Labelling of AI-Generated Content.
  2. Review internal processes for AI-generated content creation and disclosure in light of ongoing discussions on transparency obligations.

Source document (simplified)

The AI Office convened a series of meetings and workshops to collect stakeholder feedback on the first draft of the forthcoming Code of Practice on Marking and Labelling of AI-Generated Content.

Independent experts appointed as chairs and vice-chairs are responsible for drafting the code. The meetings brought together providers of generative AI systems, developers of marking and detection tools, industry representatives, civil society organisations, academic experts and other stakeholders across the AI value chain participating in the code of practice process.

Round of meetings of the working groups

Working Group 2

The meeting of the Working Group on Disclosure of Deep Fakes and AI-Generated Text (WG2), held on 12 January 2026, examined section 2 of the draft code, which covers the disclosure obligations for deep fakes and AI-generated text. The discussion focused in particular on the distinction between deceptive and non-deceptive uses; responsibilities across the value chain, including providers, deployers and intermediaries; and possible exceptions, proportionality and risk considerations. Participants highlighted the need to avoid information fatigue for users, while emphasising clarity, usability and context-appropriate transparency.

Working Group 1

On 14 January 2026, the Working Group on Marking and Detection Techniques (WG1) examined section 1 of the code, which aims to facilitate implementation of providers' obligations. Discussions focused on state-of-the-art marking techniques, including watermarking and metadata-based solutions; detection capabilities and their technical limitations; interoperability and standardisation challenges; and robustness, resilience against removal and governance aspects. Stakeholders highlighted feasibility considerations and the importance of aligning technical solutions with evolving international standards.

Workshops

Three additional workshops were organised with the Code of Practice participants and observers of the drafting process. During the workshops, chairs and vice-chairs addressed questions submitted in advance.

WG1 workshop

On 21 January 2026, a WG1 workshop explored implementation challenges across different model architectures, trade-offs between transparency, innovation and system performance, and cooperation between providers and downstream actors. Participants stressed the need for practical and technologically neutral guidance capable of accommodating diversity in system design, while ensuring meaningful and effective transparency.

WG2 workshop

On 22 January 2026, a WG2 workshop continued discussions on labelling obligations for deep fakes and AI-generated text. It examined the proposed taxonomy distinguishing "AI-generated" from "AI-assisted" content, proportionality in labelling requirements, and the responsibilities of online platforms and other intermediaries. Alignment with other EU legislation and the importance of coherent messaging to users across regulatory frameworks were also discussed.

Joint WG1 and WG2 workshop

On the same day, a joint WG1 and WG2 workshop explored the interplay between marking and labelling obligations. Participants reflected on how technical marking solutions can support disclosure obligations, the complementary roles of watermarking, metadata, labelling and provenance systems, and the balance between standardisation and flexibility in implementation. The need for coordination across actors to ensure end-to-end transparency throughout the AI value chain was also considered. There was a call for coherence between technical and legal requirements and a balanced approach that favours user awareness without creating disproportionate compliance burdens.

Second draft of the code of practice

Written contributions from stakeholders, received by 23 January 2026, will inform the second draft of the code of practice, which will be published in early March.

Download the minutes of the meetings and the participant lists below.

Downloads

1. Minutes for WG2 Disclosure of Deep Fakes and Certain AI-Generated Text - 12 January 2026
2. Minutes for WG1 Marking and Detection Techniques for Providers - 14 January 2026
3. Minutes for WG1 Workshop - 21 January 2026
4. Minutes for WG2 Workshop - 22 January 2026
5. Minutes for WG1 and WG2 Workshop - 22 January 2026

Last update

24 February 2026


Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
Various EU Institutions
Instrument
Consultation
Legal weight
Non-binding
Stage
Draft
Change scope
Substantive

Who this affects

Applies to
Technology companies, Legal professionals, Consumers
Geographic scope
EU-wide

Taxonomy

Primary area
Data Privacy
Operational domain
Legal
Topics
AI Regulation, Content Moderation, Transparency
