
Dutch DPA Urges Accelerated AI Regulation and Supervision

Favicon for www.autoriteitpersoonsgegevens.nl Dutch DPA News
Published March 5th, 2026
Detected March 16th, 2026

Summary

The Dutch Data Protection Authority (AP) is urging the government to accelerate AI regulation and supervision, citing increasing risks and a deteriorating AI Impact Barometer. The AP warns that current enforcement capabilities are insufficient to address unsafe and discriminatory algorithms.

What changed

The Autoriteit Persoonsgegevens (AP) has issued a strong call for the Dutch government to expedite the implementation of AI regulation and supervision, as detailed in their latest AI Impact Barometer report. The AP highlights a significant increase in risks associated with AI deployment, with four out of nine indicators on their barometer now showing red, indicating a deteriorating picture. Specific concerns include the lack of progress in supervision design, standards development, government algorithm registration, and incident visibility. The AP emphasizes that current enforcement is inadequate to address issues like discrimination and safety risks posed by algorithms, particularly in high-risk areas such as recruitment and selection.

Organizations deploying AI urgently need clarity on the applicable rules, which are becoming increasingly important. The AP warns that fundamental rights are at risk due to growing pressure to adopt AI without robust safeguards. The report specifically points to insufficient transparency and explainability in AI used for recruitment and selection, noting that these systems are classified as high-risk under the EU AI Act and must meet strict requirements from August 2026. The AP is also concerned that some organizations may attempt to circumvent the EU AI Act by misclassifying their systems, underscoring the need for immediate action to prevent future scandals and ensure proper protection of fundamental rights.

What to do next

  1. Review current AI deployment for compliance with existing and upcoming EU AI Act requirements.
  2. Assess AI systems used in recruitment and selection for transparency, explainability, and non-discriminatory outcomes.
  3. Monitor further developments and guidance from the Dutch DPA and EU regarding AI regulation.

Source document (simplified)

AP: AI Impact Barometer turns red – action required

05 March 2026

Themes: AI & algorithmic risks: developments in the Netherlands · Coordination of algorithmic and AI supervision · EU AI Act

The Autoriteit Persoonsgegevens (AP) is urging the new government to accelerate the implementation of regulation on artificial intelligence (AI) and its supervision. Organisations that wish to deploy AI urgently need clarity about the applicable rules. These rules are already in place and are becoming increasingly important. The AP warns of significant risks posed by unsafe and discriminatory algorithms, against which enforcement is currently inadequate.

These findings are presented in the sixth edition of the Report AI & Algorithms Netherlands (RAN), which the AP publishes twice a year.

As the coordinating supervisor for algorithms and AI, the AP analyses the main risks and societal effects associated with these technologies. The AP translates these findings into nine indicators in the AI Impact Barometer. The barometer now shows a deteriorating picture. In the previous report, two of the nine indicators had already turned red; this has now doubled to four.

The AP is concerned about the lack of progress in the design of supervision, the development of standards, the registration of government algorithms, and the visibility of incidents.

Aleid Wolfsen, Chairman of the AP: “Five years after the Dutch childcare benefits scandal, the lessons are clear, but the follow-up is lagging behind. This is mainly due to a lack of implementation of robust rules for algorithms and AI, and the effective enforcement of those rules. As the pressure to adopt AI continues to grow, we must ensure that fundamental rights are properly protected. Anyone who wants to prevent a new scandal must act now.”

Increase in serious risks

The AP observes that the risks associated with the deployment of AI and algorithms have long been underestimated. As a result, people may face discrimination or, in some cases, risks to safety in society. This is also reflected in several findings in the RAN.

The AP notes that many employers are using AI in recruitment and selection. The AP warns that explainability and transparency in recruitment and selection remain insufficient in many respects, for example in the use of online assessments. The use of AI in recruitment and selection should be both accurate and non-discriminatory, and its outcomes must be explainable to candidates.

However, exploratory studies and practical tests show that transparency and explainability often fall short, particularly in online and game-based assessments. These tools are frequently used for an initial screening, even though it is often unclear how they predict the suitability of candidates. It is also frequently unclear how decisions are reached and how candidates can challenge them. As a result, some candidates may have little chance of being selected from the outset. Under the EU AI Act, AI systems used in recruitment and selection are classified as high-risk systems and must comply with strict requirements from August 2026 onwards.

Avoiding responsibility

The risks associated with AI are increasing rapidly. New developments, such as agentic AI, require organisations and supervisory authorities to prepare quickly. At the same time, some organisations attempt to circumvent the EU AI Act by classifying their systems as ‘ordinary algorithms’. Yet these are precisely the types of systems that fall within the scope of the strict rules of the AI Act, intended to protect people from harmful or discriminatory outcomes. Each week, the AP sees new systems being registered in the government’s algorithm register as algorithms, even though they are in fact AI systems. This misclassification increases the risks faced by individuals.

Commercial organisations also sometimes seek to avoid their responsibilities, to the detriment of customers and users. Key risks that have emerged in 2025 and 2026 include the rapid spread of deepfakes, AI-enabled fraud, psychological harm caused by chatbots, and the growing gap between AI security measures and the pace of technological development.

The AP also points to recent AI-related incidents, such as the proliferation of AI-generated voice-cloning tools and problems with the Grok AI chatbot, which reportedly made it possible to generate highly realistic nude images of individuals. Such developments pose serious risks to the protection of fundamental rights and to cybersecurity.

Urgent call to the new cabinet

The purpose of the EU AI Act is clear: to promote trustworthy AI in Europe by ensuring that systems are safe, respect fundamental rights, and at the same time enable innovation. The AP warns that these goals may not be achieved due to ongoing indecision, putting the timely introduction of AI rules at risk.

The AP therefore calls on the new government to accelerate the implementation of the AI Act. This includes adopting the Dutch implementing legislation, designating supervisory authorities, securing structural funding for supervision, and clarifying how the rules will be applied in practice. In addition, the AP argues that the Netherlands should actively advocate at the European level for the swift conclusion of discussions on the deferral and simplification of the regulation.

Publications

Report AI & Algorithms Netherlands - March 2026

PDF, 4 MB

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
AP Netherlands
Published
March 5th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Substantive

Who this affects

Applies to
Employers Government agencies Manufacturers Technology companies
Geographic scope
Netherlands

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Compliance
Topics
Data Privacy Consumer Protection
