EU AI Act Omnibus: New Compliance Deadlines and Deepfake Ban
Summary
Members of the European Parliament have reached a preliminary agreement on amendments to the EU AI Act, including extended compliance deadlines for high-risk systems and a ban on non-consensual deepfakes. The agreement aims to provide legal certainty and allow more time for technical standards and guidance development.
What changed
Members of the European Parliament (MEPs) have reached a preliminary political agreement on amendments to the EU Artificial Intelligence Act, referred to as the AI omnibus. Key changes include extending compliance deadlines for high-risk AI systems, with requirements for Annex III systems applying from December 2, 2027, and Annex I systems from August 2, 2028. The agreement also introduces a ban on AI systems that generate non-consensual explicit deepfakes, following investigations into platforms like X. These amendments are part of a broader digital simplification package aimed at fostering competitiveness and innovation.
The practical implications for regulated entities involve adapting to new compliance timelines for high-risk AI systems. The extended deadlines are intended to allow more time for the development of technical standards and guidance. Companies should review their AI systems against the updated requirements for Annex I and Annex III, and ensure compliance with the new deepfake ban. The preliminary deal will be voted on by committees on March 18, 2026, with finalization expected thereafter. Failure to comply with the AI Act's provisions can lead to significant penalties.
What to do next
- Review and update compliance strategies for high-risk AI systems based on the new deadlines (December 2, 2027, for Annex III systems; August 2, 2028, for Annex I systems).
- Implement measures to prevent the generation of non-consensual explicit deepfakes.
- Monitor final adoption of the AI Act amendments and any associated technical standards or guidance.
Source document (simplified)
Published
12 March 2026
Contributors:
Joe Duball
News Editor
IAPP
Lexie White
Staff Writer
IAPP
Members of the European Parliament are moving toward finalizing a political agreement on amendments to the EU Artificial Intelligence Act. The preliminary deal among MEPs, reached during a shadow meeting on 11 March, will be reflected in a report to be voted on by the Committee on Civil Liberties, Justice and Home Affairs and the Committee on Internal Market and Consumer Protection on 18 March.
The European Commission's Digital Omnibus on AI is part of the broader digital simplification package the EU is considering to foster competitiveness and innovation. The AI package was separated from proposed changes to other digital rules to expedite consideration of AI Act amendments that might impact upcoming legal deadlines, with requirements related to high-risk systems, transparency and governance taking force 2 Aug.
The preliminary compromise reportedly contains notable extensions to compliance deadlines for high-risk requirements. According to a press release from MEPs, "requirements for systems listed in Annex III would apply from 2 December 2027, while those in Annex I would apply from 2 August 2028."
"The aim is to provide legal certainty and allow more time for technical standards, guidance and national authorities to prepare," MEPs said.
The agreement and subsequent report also feature "clearer conditions for using sensitive personal data to detect and correct bias in high-risk systems, under strict safeguards," and measures to ban AI systems from generating non-consensual explicit deepfakes.
In an email to the IAPP, Irish MEP and AI Omnibus rapporteur Michael McNamara said, "some technical negotiations are still ongoing" leading up to the 18 March vote.
Deepfake ban
According to Politico, MEPs agreed to provisions banning an AI system that "alters, manipulates or artificially generates realistic images or videos so as to depict sexually explicit activities or the intimate parts of an identifiable natural person, without that person's consent."
The proposed ban would reportedly not apply to companies that "have put effective safety measures (in place) to prevent the generation of such depictions and to avoid misuse."
The proposed prohibitions come after the EU launched an investigation into the alleged ability of social platform X's AI tool Grok to create and share AI-generated explicit deepfakes of users, including children. X announced it has implemented safety measures to "geoblock" explicit AI-generated content in "jurisdictions where such content is illegal."
The U.K. made similar efforts to prevent AI-generated deepfakes after the incident involving Grok, proposing amendments to the Crime and Policing Bill that would prevent AI tools from creating harmful or illegal content.
EU lawmakers noted the ban aims to advance consumer protections and expand children's online safety efforts. German member of the European Parliament Sergey Lagodinsky told Politico the EU's efforts "are not only about Grok. It is about how much power we are willing to give AI to degrade people."
Stakeholder pulse check
While the new compliance dates for high-risk requirements are welcome, Marco Leto Barone, policy director for Europe at the Information Technology Industry Council, told the IAPP there are "worrying signals" emerging from the political agreement.
"The agreement rolls back several helpful provisions in the Commission's proposal. Particularly, shortening the grace period for the AI Act's generative AI transparency requirements to only 3 months will result in legal uncertainty and create compliance burden," he said.
Barone said the decision to reinstate registration requirements for certain non-high-risk AI systems ultimately misses "an opportunity for meaningful simplification."
Forty-eight EU-based trade associations wrote to MEPs and the Council of the European Union outlining the need for additional regulatory rollback in the AI Omnibus, noting there is still work to be done to ensure "unnecessary regulatory burdens are removed from Europe's industrial base and digital companies." They argued for immediate delays to the 2 Aug. deadlines while proposing exemptions from AI Act requirements for organizations already covered by AI rules under existing sectoral frameworks.
"The fast-paced negotiations on the AI omnibus risk becoming a missed opportunity to address the challenges industrial companies, from healthcare and manufacturing to energy and automotive, face when implementing the AI Act in practice," the letter stated. "Many companies are already regulated under robust sectoral frameworks but are now caught in a double or even triple layer of regulation, and classified as high-risk under the AI Act despite existing sector-specific oversight."
Tags: AI and machine learning, Law and regulation, Risk management, Government, EU AI Act, AI governance