
South Korea Revises Pseudonymization Guidelines for AI


Summary

On 31 March 2026, the Personal Information Protection Commission released revised Pseudonymized Information Processing Guidelines, shifting toward a risk-based, contextual approach to pseudonymization rather than fixed technical thresholds. The guidelines explicitly allow AI development and service improvement to qualify as "scientific research" when involving hypothesis-setting, data analysis, validation and iterative refinement, with concrete examples including fraud detection systems, medical imaging analysis, chatbots and intelligent CCTV. This builds on prior 2024 and 2025 regulatory guidance clarifying conditions under which personal data may be used for AI development. A July 2025 South Korean Supreme Court decision reinforced this approach by holding that pseudonymization does not constitute "processing" for the purpose of a data subject's right to request suspension of processing.

“Rather than loosening its framework, South Korea is turning pseudonymization into a regulatory gateway — a legal condition that enables certain forms of data use without consent.”

IAPP, verbatim from source
Published by IAPP on iapp.org. Detected, standardized, and enriched by GovPing.


What changed

The 2026 Pseudonymized Information Processing Guidelines shift pseudonymization from a purely technical safeguard to a governed legal condition that enables secondary data use for AI development without consent. Organizations may now define expandable purposes for closely related downstream uses of the same dataset, reflecting the iterative nature of AI model development. The South Korean Supreme Court's July 2025 ruling that pseudonymization does not constitute "processing" for suspension-of-processing requests further limits data subjects' ability to block data use at an early stage. Technology companies and organizations using pseudonymized data for AI training should review whether their current pseudonymization practices align with the PIPC's risk-based contextual approach and document how their AI development activities satisfy the scientific research criteria.

Archived snapshot

Apr 23, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.


ANALYSIS

Published 22 April 2026

Across jurisdictions, regulators are exploring different ways to support artificial intelligence development without undermining data protection. South Korea is taking a particularly distinctive path. On 31 March 2026, the Personal Information Protection Commission released its revised Pseudonymized Information Processing Guidelines, signaling an approach that tests how far the boundaries of data protection can be extended while formally preserving its core principles.

Rather than loosening its framework, South Korea is turning pseudonymization into a regulatory gateway — a legal condition that enables certain forms of data use without consent. This shift is not driven by legislation alone. Through recent regulatory guidance and a notable Supreme Court decision, South Korea is shaping a system in which pseudonymization does more than reduce risk: it determines who can use data, under what conditions, and for which purposes.

Pseudonymization as a built-in legal gateway

To understand this shift, it is important to examine how pseudonymization is positioned in South Korean law. The Personal Information Protection Act permits the use of pseudonymized data without consent for purposes such as statistics, scientific research and public interest recordkeeping, and structures data combination around pseudonymization.

Crucially, in South Korea, pseudonymization is not merely a safeguard layered on top of a separate legal basis. It is embedded in the law as a condition that enables secondary use itself.
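
Although the article's focus is legal, the technical operation at the center of this regime is easy to sketch. Below is a minimal illustration in Python, assuming direct identifiers are replaced with keyed-hash (HMAC-SHA256) tokens and the secret key is held by a separate custodian; the field names, key, and truncation length are this sketch's assumptions, not requirements of the PIPA or the Guidelines:

```python
import hmac
import hashlib

def pseudonymize(record: dict, key: bytes, id_fields: set[str]) -> dict:
    """Replace direct identifiers with keyed-hash tokens.

    The key must be stored separately from the dataset: without it,
    the tokens cannot be linked back to the original identifiers.
    """
    out = {}
    for field, value in record.items():
        if field in id_fields:
            token = hmac.new(key, str(value).encode(), hashlib.sha256).hexdigest()
            out[field] = token[:16]  # truncated for readability (illustrative choice)
        else:
            out[field] = value
    return out

key = b"held-by-a-separate-custodian"  # illustrative only
patient = {"name": "Kim Minji", "rrn": "990101-2345678", "diagnosis": "J45"}
pseud = pseudonymize(patient, key, {"name", "rrn"})
# Attributes needed for analysis ("diagnosis") survive; direct identifiers do not.
```

Because the tokens are deterministic under a fixed key, records can still be linked across datasets, which is the property the PIPA's data-combination regime governs, while the custodian of the key controls any re-identification.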

This marks a key difference from practice under the EU General Data Protection Regulation. In the EU, particularly in AI training contexts, the primary question is whether a lawful basis — such as legitimate interest — can be established, with pseudonymization functioning as a measure that supports that justification. In South Korea, by contrast, pseudonymization operates as a gateway into a legal regime that permits certain types of processing without consent.

The significance of the 2026 Guidelines lies not in creating this structure, but in operationalizing a legal design that already existed, translating it into a framework applicable to real-world data use, including AI development.

PIPC: From legal design to operational framework

The central feature of the 2026 Guidelines is the shift toward a risk-based, contextual approach to pseudonymization. Rather than defining it through fixed technical thresholds, the Guidelines emphasize factors such as processing environments, access controls, intended use and residual re-identification risks.

This reframes pseudonymization from a purely technical state into a governed condition. At the same time, it signals a clear administrative direction: enabling AI development through the structured use of pseudonymized data.
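
One way to read this shift is that adequacy becomes a function of context rather than a fixed technical state. A hypothetical sketch in Python makes the idea concrete; the factor names, the pass/fail rule, and the risk ceiling are all this author's illustration, not criteria stated by the PIPC:

```python
from dataclasses import dataclass

@dataclass
class ProcessingContext:
    """Contextual factors of the kind the 2026 Guidelines emphasize (illustrative model)."""
    isolated_environment: bool  # processing environment
    access_restricted: bool     # access controls
    purpose_documented: bool    # intended use recorded
    residual_risk: float        # estimated re-identification risk, 0..1

def adequate_pseudonymization(ctx: ProcessingContext, risk_ceiling: float = 0.2) -> bool:
    # Hypothetical rule: all organizational safeguards must be in place AND
    # residual re-identification risk must fall below an agreed ceiling.
    safeguards = (ctx.isolated_environment
                  and ctx.access_restricted
                  and ctx.purpose_documented)
    return safeguards and ctx.residual_risk <= risk_ceiling

ctx = ProcessingContext(True, True, True, residual_risk=0.1)
# adequate_pseudonymization(ctx) evaluates to True under these assumed inputs;
# weaken any safeguard, or raise the residual risk, and the same data fail the check.
```

The point of the sketch is that the same dataset can count as adequately pseudonymized in one environment and not in another, which is what distinguishes a governed condition from a one-time technical threshold.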

Importantly, the Guidelines explicitly align this framework with the realities of AI development. They clarify that AI development and service improvement may qualify as "scientific research" when they involve hypothesis-setting, data analysis, validation and iterative refinement. They also provide concrete examples, including fraud detection systems, medical imaging analysis, chatbots and intelligent CCTV.

In addition, the Guidelines allow organizations to define expandable purposes for closely related downstream uses of the same dataset, reflecting the iterative and cumulative nature of AI model development.

These developments build on earlier regulatory efforts. In 2024, PIPC clarified that publicly available personal data may be used for AI development under the legitimate interest provision in certain circumstances. In 2025, it issued guidance on generative AI development and deployment, aiming to reduce uncertainty for businesses working with large language models.

The timing of the revision is also telling. While the 2024 Guidelines envisaged a three-year review cycle, pointing to a revision around 2027, the 2026 update arrived earlier than expected — signaling an accelerated and more proactive regulatory response to the demands of AI development.

Taken together, these measures suggest that PIPC is not merely interpreting existing law, but actively shaping how that law operates in practice to enable AI data use.

The Supreme Court: Limiting ex ante resistance

This trajectory is reinforced by judicial interpretation. In July 2025, the South Korean Supreme Court held that pseudonymization does not constitute "processing" for the purpose of a data subject's right to request suspension of processing.

The Court emphasized that pseudonymization is, by nature, a measure designed to reduce identification risks and referred to the legislative purpose of promoting data use in emerging sectors such as AI, cloud computing and the Internet of Things.

The practical effect is clear. By excluding pseudonymization from the scope of this right, the Court narrows one potential avenue for data subjects to block data use at an early stage.

In doing so, the judiciary also contributes to a broader shift: interpreting the pseudonymization framework in a way that reduces friction in data use and supports data-driven innovation.

A new form of privacy governance

This does not mean that safeguards disappear. Pseudonymized data remain regulated, and core obligations still apply.

But in practice, once data are lawfully collected, pseudonymization opens a broad pathway for their reuse in AI development without additional consent — and, following the 2025 Supreme Court decision, one that data subjects have limited practical ability to halt ex ante.

This is where the distinctiveness of the South Korean model becomes clear. Legislative design, regulatory guidance and judicial interpretation are converging toward a common direction: treating pseudonymization not only as a safeguard, but as a mechanism for enabling and structuring lawful data use, with limited scope for data subjects to intervene in practice.

Korea is moving toward a model in which legal frameworks are actively interpreted to enable AI training — testing the limits of data use within privacy law.

Rather than resolving the tension between AI and privacy by weakening one side, Korea is experimenting with how far existing legal structures can be extended to accommodate data-driven innovation. The opportunity is clear. So is the risk. As pseudonymization evolves from a protective measure into a gateway to legality, it begins to reshape the architecture of data protection itself.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.


Contributors:

Kyoungsic Min, AIGP, CIPP/E, FIP
Privacy Counsel and Asia Regional Lead, VeraSafe

Tags: Data security, Identity and verification, Law and regulation, AI governance, Privacy

Related Stories

South Korea overhauls PIPA and ties fines to CEO accountability (ANALYSIS, 12 March 2026)
South Korea's PIPC flexes its muscles: What to know about AI model deletion, cross-border transfers and more (ANALYSIS, 4 June 2025)
AI for HR in Canada and the US: What's new for 2026 and what employers are doing (ANALYSIS, 22 April 2026)
SECURE Data Act: Analysis of the new federal privacy bill (ANALYSIS, 22 April 2026)


About this page

GovPing collects government, regulator, and court updates from around the world in one place, in real time, and for free. Source document text, dates, docket IDs, and authority are extracted directly from IAPP. The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors; always verify against the source document.

Classification

Agency: IAPP
Published: 22 April 2026
Instrument: Notice
Branch: International
Legal weight: Non-binding
Stage: Final
Change scope: Minor

Who this affects

Applies to: Technology companies
Industry sector: 5112 Software & Technology
Activity scope: Pseudonymization governance, AI data training
Geographic scope: KR

Taxonomy

Primary area: Data Privacy
Operational domain: Compliance
Compliance frameworks: GDPR
Topics: Artificial Intelligence, Healthcare
