EU Digital Omnibus Makes Deidentification Statements Inevitable
Summary
IAPP published an analysis examining how the proposed EU Digital Omnibus regulation would codify Recital 26's anonymization standard into binding Article 4 definitions. The analysis notes the regulation shifts responsibility to organizations, requiring them to document and demonstrate why they cannot re-identify individuals from deidentified datasets — effectively requiring companies to prove a negative regarding identification capabilities.
What changed
The EU Digital Omnibus proposes to elevate the contextual, risk-based standard from Recital 26 into the binding definition of personal data in Article 4. This codification would address inconsistent enforcement approaches where supervisory authorities defaulted to absolutism whenever theoretical re-identification was possible. Organizations processing deidentified datasets will need to proactively document and justify why identification is not reasonably likely, considering costs, time, technology and purpose.
Affected parties — primarily companies processing large datasets containing EU personal data — will face new documentation obligations requiring detailed technical and organizational assessments of identification impossibility. The change does not represent deregulation but rather a redistribution of compliance responsibility onto data processors. Privacy and legal teams should prepare updated policies, technical documentation frameworks and risk assessment methodologies to support these new justification requirements.
What to do next
- Monitor EU Digital Omnibus legislative developments
- Review data deidentification practices and documentation
- Prepare to articulate and defend identification impossibility assessments
Archived snapshot
Apr 16, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
ANALYSIS
Published 8 April 2026
Contributors:
Noemie Weinbaum
AIGP, CIPP/A, CIPP/C, CIPP/E, CIPP/US, CIPM, CIPT, CDPO/FR, FIP
Senior Managing Counsel, Privacy and Compliance
UKG
Flora Garcia
CIPP/E, CIPP/US, CIPT, FIP
Former Chief Privacy Officer
Wayfair, McAfee, Time
Roy Kamp
AIGP, CIPP/A, CIPP/E, CIPP/US, CIPM, CIPT, FIP
Legal Director
UKG
If Ella Fitzgerald and Louis Armstrong were singing about EU data law today, their famous tomatoes-and-potatoes duet might need a new bridge.
“You say anonymization, I say deidentification.”
“You say personal data, I say not for me.”
“Let’s not call the whole thing off — but let’s stop borrowing each other’s keys.”
In a previous article, we explored how anonymization remains the high note of the EU General Data Protection Regulation world: rare, demanding and binary. Either the melody can no longer be traced back to a person, or it can. Most datasets that claim to be anonymized are, in reality, something more modest and far more common: pseudonymized — also known as deidentified. They are still playing the same tune, just behind a curtain.
That distinction mattered because the GDPR has, until now, treated personal data as a yes-or-no proposition. There has been no “mostly anonymous” refrain, no comfortable middle ground. And for years, that rigidity was justified by a single, deceptively compact provision: Recital 26.
Recital 26 has always set the tempo. It tells us that data protection does not turn on theoretical possibilities, whether someone, somewhere, given unlimited time, money and technology, could re-identify a natural person in a dataset, but on whether a particular natural person is identifiable by means reasonably likely to be used, taking into account the costs, time, technology and purpose of those means.
The recital — and remember, recitals are not binding but provide strong, example-based color on the original legislative intent — embeds a contextual, risk-based and fundamentally relative concept of personal data into the GDPR’s DNA. The Court of Justice of the European Union has been faithfully riffing on that theme for more than a decade, spanning decisions from Breyer v. Bundesrepublik Deutschland to EDPS v. Single Resolution Board.
Yet recital-level guidance and case law, however clear on paper, proved insufficient in practice. Supervisory authorities have often defaulted to absolutism. If someone, somewhere, could identify a person, the data was treated as personal data everywhere. The relative concept became a theoretical solo few dared to play.
With the proposed Digital Omnibus, the EU does not abandon Recital 26. On the contrary, it finally drags its logic out of the recital grey zone and into the main score of Article 4 itself. And in doing so, it addresses two developments that now converge in the new definition of personal data.
The first is a quiet but decisive neutralization of a doctrinal wobble in recent CJEU case law. The second is a practical consequence that many organizations have not yet fully absorbed: Companies will increasingly need to articulate, document and defend how and why they cannot identify individuals in the data they process. That requires a detailed technical understanding, not Jedi mind tricks; it forces companies to prove a negative, to show they hold no decoder ring that could reveal the natural persons behind the data.
This is not deregulation; it is redistribution of responsibility.
For more than a decade, the court has insisted that personal data is a relative concept. Whether information relates to an identifiable person depends on whether a given entity has means reasonably likely to be used to identify that person. This logic, already present in Recital 26, was crystallized in Breyer and later reaffirmed in SRB. Sherlock Holmes would, most certainly, have approved. You assess the clues you actually have, not those that exist somewhere else in the world — or that could exist somewhere else in the world.
If Entity B can link a dataset to an individual, the data is personal for Entity B. If Entity A cannot, and has no realistic access to additional information, the data is not personal for Entity A. That is the relative concept in action, both in practice today and under the Omnibus once adopted.
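The Entity A/Entity B asymmetry can be illustrated with keyed pseudonymization. The Python sketch below is purely illustrative — the entity names, the secret key and the HMAC scheme are our assumptions, not anything prescribed by the GDPR or the Omnibus — but it shows why the same token can be personal data for the key holder and, arguably, not for a recipient without the key:

```python
import hmac
import hashlib

# Illustrative only: Entity B pseudonymizes identifiers with a secret key
# that it keeps; Entity A receives only the resulting tokens.
SECRET_KEY = b"held-by-entity-b-only"  # never shared with Entity A

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# Entity B can regenerate the token for any known identifier, so it can
# re-link tokens to people: the data remains personal for Entity B.
token = pseudonymize("alice@example.com")

# Entity A holds only `token`. Without the key, reversing an HMAC-SHA256
# output is not computationally feasible, so under the relative concept
# the same token may not be personal data in Entity A's hands.
```

The same bytes, two legal characterizations: what changes is not the data but the means reasonably likely to be used by the entity holding it.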
And then came the confusion
In Gesamtverband Autoteile-Handel eV v. Scania and again in SRB, the court suggested that information might nevertheless become personal data for a sending entity if it is transferred to a recipient that can identify the individual. Read literally, this appeared to introduce a form of backward attribution. Identifiability seemed to flow upstream.
A recipient’s capabilities — and the scope of their existing engagement or data sets — could retroactively alter the legal nature of the sender’s data. While this is a familiar songbook for experienced data professionals, the degree of futureproofing that this requires may still be unrealistic in practice.
That reasoning sits uneasily with Recital 26. If identifiability depends on the means reasonably likely to be used by the entity in question, why should another entity’s means suddenly matter? The court’s likely intention was narrower. It wanted the GDPR to apply to the act of transferring data where identification becomes foreseeable, not to reclassify all of the sender’s other processing activities. But the wording left room for doubt, and doubt turns a classy tune into trashy noise in operational privacy.
This uncertainty might have remained a doctrinal or academic discussion if it were not for the EU Data Act. While the Data Act focuses on data subjects’ combined product and service data rather than the data required for data holders to operate a business, the clarifications within the act will have broad impacts.
Under Chapter 2 of the Data Act, data holders must make data available to users and third parties as applicable or as requested by the data subject, especially when the data subject wishes to enter into a new vendor relationship. Much of the product and service data is technical or industrial in nature and not personal data in the hands of the data holder.
If data holders were required to determine, for every potential recipient, whether that recipient could link the data to an identifiable individual, the system would quickly become unworkable. In fact, given how interconnected vendors and systems are, this interpretation would destroy one of the EU’s freedoms — the free movement of data — and undermine the EU’s digital strategy as part of the Digital Single Market. In reality, though, data holders would rarely be able to make such determinations with confidence. Sharing the data would expose them to GDPR risk. Refusing to share could expose them to Data Act sanctions.
"Hotel California," privacy edition: Your data can check out any time you like, but you can never be sure the data is allowed to leave.
This is the context in which the third proposed new sentence of Article 4(1) GDPR, as contained in the Digital Omnibus proposals, must be understood. By clarifying that information does not become personal for an entity merely because a subsequent recipient has means reasonably likely to identify a person, the European Commission is not restating Recital 26. It is correcting a doctrinal spillover that threatened to undermine the practical effectiveness of the EU’s broader digital framework and one that helps to better align the goals of the GDPR with the practicalities of the modern data ecosystems.
Other people’s keys do not unlock your door. Or, as Sherlock Holmes might put it, you do not solve a case with evidence you cannot access.
As confirmed by a Commission representative at the IAPP Europe Data Protection Congress 2025, this provision is explicitly aimed at neutralizing the Scania/SRB transfer logic to the extent it risked creating a compliance deadlock under the Data Act. The GDPR's scope is clarified not to weaken protection, but to ensure coherence across EU digital law and across the member states.
Notably, the European Data Protection Board-European Data Protection Supervisor Joint Opinion on the Digital Omnibus does not contest this move. It does not attempt to revive an absolute concept of personal data. It does not argue that pseudonymized or technical data should be treated as personal data for everyone, everywhere. Instead, it emphasizes legal certainty, accountability and supervision. It accepts that some data will fall outside the GDPR for some entities and positions regulators to scrutinize how those determinations are made.
Where the second convergence becomes unavoidable
If Recital 26’s relative concept is now firmly embedded in the operative text of the GDPR, organizations will increasingly need to explain why they cannot identify individuals in the data they process. Assertions will no longer be enough. Deidentification will need a documented narrative.
Much like transfer impact assessments after "Schrems II," organizations will be expected to articulate what data they hold; what identifiers have been removed or transformed; what additional information they do not possess; what technical, legal and organizational barriers exist; and why identification is not reasonably likely for them in that specific context. The issue is not whether anonymization can be claimed, but whether nonidentifiability can be demonstrated.
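One way to make that documented narrative concrete is as a structured record. The sketch below is purely illustrative — the class name, field names and the defensibility check are our invention, not a regulatory template — but it captures the elements the paragraph above lists:

```python
from dataclasses import dataclass

@dataclass
class IdentifiabilityAssessment:
    """Hypothetical record of a nonidentifiability narrative for one
    dataset, assessed from the perspective of one entity."""
    dataset: str
    data_held: list[str]                 # what data the entity actually holds
    identifiers_removed: list[str]       # identifiers removed or transformed
    additional_info_not_held: list[str]  # e.g. key tables the entity lacks
    barriers: list[str]                  # technical, legal, organizational
    context: str                         # purpose and intended recipients
    reasonably_likely_to_identify: bool  # conclusion for THIS entity

    def is_defensible(self) -> bool:
        """A claim of nonidentifiability needs documented support, not
        just an assertion."""
        return not self.reasonably_likely_to_identify and bool(
            self.identifiers_removed and self.barriers
        )

# Illustrative usage: all values below are invented examples.
assessment = IdentifiabilityAssessment(
    dataset="product_telemetry_2026",
    data_held=["event logs", "coarse location"],
    identifiers_removed=["account ID", "device ID", "IP address"],
    additional_info_not_held=["pseudonymization key table"],
    barriers=["no access to the key", "contractual ban on re-linking"],
    context="aggregate analytics, shared only with internal BI",
    reasonably_likely_to_identify=False,
)
```

The point of such a structure is the same as the article's: the conclusion ("not personal data for us") is only as good as the recorded facts behind it.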
This development extends far beyond classic pseudonymization. For years, arguing that IP addresses, device identifiers or technical logs were not personal data in a given context carried significant enforcement risk, even where CJEU case law and Recital 26 supported that conclusion. The SRB judgment, the Omnibus proposal and the Commission’s clarifications together send a different signal. The relative concept is not an exception. It is the rule.
That does not make the GDPR disappear. It makes it situational. And with situationality comes responsibility. Anonymization remains the Fitzgerald moment: dazzling, rare and binary, a high and unchanged bar. But the Commission's proposal makes clear that organizations do not always need to reach that bar: Data can fall outside the GDPR where an organization cannot realistically identify individuals. In that sense, the change is less about anonymization and more about making nonidentifiability a defensible position. Rather than lowering the anonymization bar, the Commission's proposal acknowledges that Recital 26 was always right but not loud enough.
Or, to stay with the Eagles, you can check out any time you like, but you had better be able to explain how you locked the door.
The first two new sentences in Article 4(1) codify what Recital 26 and the CJEU have been saying or suggesting for years: “Information relating to a natural person is not necessarily personal data for every other person or entity, merely because another entity can identify that natural person. Information shall not be personal for a given entity where that entity cannot identify the natural person to whom the information relates, taking into account the means reasonably likely to be used by that entity.”
The third sentence fixes what case law may have unintentionally destabilized: “Such information does not become personal for that entity merely because a potential subsequent recipient has means reasonably likely to be used to identify the natural person to whom the information relates.” Together, these sentences close the door on the absolute concept of personal data through precision, not deregulation.
They also usher in a new compliance reality — one defined by better explanations and more logic.
In privacy, as in music, clarity matters. Not every tune is jazz. Not every dataset is anonymous. And sometimes the most important note is the one that never plays, provided you can explain why.
Tags:
AI and machine learning Data security Law and regulation
About this page
Source document text, dates, docket IDs, and authority are extracted directly from IAPP.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.