International Data Protection Authorities Joint Statement on AI Imagery Risks
Summary
Sixty-one international data protection authorities have issued a joint statement outlining privacy risks associated with AI-generated imagery. The statement addresses concerns about the creation of realistic images and videos of individuals without consent, particularly highlighting potential harms to children. It urges responsible AI development and deployment.
What changed
Sixty-one international data protection authorities, including the ICO, have jointly issued a statement addressing the significant privacy risks posed by AI-generated imagery. The statement highlights concerns regarding the creation of realistic images and videos depicting identifiable individuals without their knowledge or consent, with a particular focus on potential harms to children. It emphasizes the need for responsible innovation that prioritizes people's identity, dignity, and safety, and calls for meaningful safeguards to ensure autonomy, transparency, and control in AI systems.
While this statement is non-binding, it signals a united global regulatory stance and sets expectations for developers and deployers of AI technology. Regulated entities, particularly those developing or deploying AI systems that generate imagery, should review the joint statement to understand the concerns expressed and ensure their practices align with the principles of data protection and responsible innovation. The ICO has indicated that it will take action where obligations are not met, signalling potential future enforcement if risks are not adequately mitigated.
What to do next
- Review the joint statement on AI-generated imagery risks.
- Assess AI systems for potential privacy risks related to image generation, especially concerning identifiable individuals and children.
- Ensure AI development and deployment practices incorporate safeguards for autonomy, transparency, and control.
Source document (simplified)
International Data Protection Authorities issue joint statement on privacy risks of AI-generated imagery
- Date: 23 February 2026
- Type: Statement

Data protection authorities from across the globe have today published a Joint Statement on AI-Generated Imagery. The statement represents the united position of 61 authorities and has been issued in response to serious concerns about artificial intelligence (AI) systems that generate realistic images and videos depicting identifiable individuals without their knowledge and consent. The signatories are especially concerned about potential harms to children.
William Malcolm, Executive Director Regulatory Risk & Innovation, said:
“People should be able to benefit from AI without fearing that their identity, dignity or safety are under threat. AI already plays a large role in all our lives, and everybody has a right to expect that AI systems handling their personal data will do so with respect. Responsible innovation means putting people first: anticipating the risks and building in meaningful safeguards to ensure autonomy, transparency, and control.
“Public trust is foundational to the successful adoption and use of AI. Joint regulatory initiatives like this show global commitment to high standards of data protection in AI systems and help provide regulatory certainty. We expect those developing and deploying AI to act responsibly. Where we find that obligations have not been met, we will take action to protect the public.”