PCPD Alerts on OpenClaw and Agentic AI Privacy Risks
Summary
The Hong Kong Office of the Privacy Commissioner for Personal Data (PCPD) has issued an alert regarding the privacy and security risks associated with agentic AI, specifically mentioning OpenClaw. The PCPD reminds organizations and the public to implement adequate security measures when using such AI tools to prevent data breaches and cybersecurity threats.
What changed
The Hong Kong Office of the Privacy Commissioner for Personal Data (PCPD) has issued a notice highlighting significant privacy and security risks associated with agentic AI tools like OpenClaw. Unlike standard AI chatbots, agentic AI possesses higher default access rights, enabling it to read, write, and execute tasks on local devices and servers, potentially leading to unauthorized access, data breaches, and system takeovers if not properly secured. The PCPD emphasizes that vulnerabilities in system design, inadequate security reviews of plugins, and overly permissive access rights can expose vast amounts of personal data.
Organizations and individuals are urged to exercise caution when using agentic AI. Key recommendations include granting only the minimum necessary access rights, avoiding arbitrary data sharing, especially sensitive personal information, and ensuring the use of the latest official versions of the AI software downloaded from trusted sources. The notice aims to proactively inform users about potential dangers and promote safe AI deployment to safeguard personal data privacy and cybersecurity.
What to do next
- Grant minimum necessary access rights to agentic AI tools.
- Avoid providing sensitive personal data to agentic AI arbitrarily.
- Download and use only the latest official versions of agentic AI from official channels.
Source document (simplified)
Media Statements
The PCPD Issues Alert over the Privacy Risks of OpenClaw and Agentic AI and Reminds Organisations and the Public to Use AI Safely
Date: 16 March 2026
The Office of the Privacy Commissioner for Personal Data (PCPD) noted that the security risks related to the use of OpenClaw and other agentic artificial intelligence (AI) have provoked discussion recently. The PCPD is also concerned about the matter and reminds organisations and members of the public that before deploying or using OpenClaw and other agentic AI, they should pay attention to and understand the personal data privacy and security risks involved to avoid personal data breaches, malicious system takeovers and cybersecurity threats. They are also reminded to adopt adequate and effective security measures to safeguard personal data privacy.
The PCPD pointed out that, compared to AI chatbots, which are generally used for text replies, content summaries or content generation, agentic AI is more versatile in terms of functionality. Agentic AI is typically deployed on a local device or server with high-level access rights. It can read and write local files, allocate system resources, interact with external services, or even act autonomously on the user's behalf to execute multi-step tasks according to a pre-defined workflow, such as handling emails, making restaurant reservations and settling payments. These processes do not require the real-time involvement of users.
Therefore, from the perspective of protecting personal data privacy, agentic AI generally poses higher risks than ordinary AI chatbots. For instance:
- The default access rights of agentic AI are generally higher than those of AI chatbots, allowing it to access files, emails and account credentials on devices, as well as content saved in browsers. If the settings of the relevant access rights lack stringent restrictions, the agentic AI may access a vast amount of personal data of users or other individuals, increasing the risks of unauthorised access to or reproduction of personal data by third parties, and even data breaches. At the same time, agentic AI may also misinterpret user commands and delete important data by mistake, such as erasing all of a user's email records;
- If there are vulnerabilities in the system design or security controls of an agentic AI that has high-level access to multiple systems and data sources, significant risks will be posed to personal data privacy and data security as a whole; and
- If the agentic AI allows users to install Plugins or Skills, and some of those Plugins or Skills have not undergone rigorous security reviews, malicious code might be embedded in them. Hackers may then exploit these vulnerabilities to gain unauthorised access and take over user accounts, or further take control of the entire computer system, leading to leakage of personal data or other sensitive data.
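The plugin risk above can be partly mitigated by verifying a plugin's integrity before loading it. Below is a minimal sketch in Python, assuming the plugin vendor publishes an official SHA-256 checksum through a trusted channel; the function names here are illustrative, not part of any real agentic AI product.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_plugin(path: Path, official_digest: str) -> bool:
    """Return True only if the plugin file matches the official digest.

    A loader should refuse to install any plugin for which this check
    fails, since a mismatch may indicate tampering or embedded
    malicious code.
    """
    return sha256_of(path) == official_digest.lower()
```

A checksum check does not replace a security review of the plugin's behaviour, but it does block the common case of a tampered or unofficial copy being installed in place of the reviewed version.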
The PCPD suggests that when collecting, using and processing personal data with agentic AI, organisations and members of the public should pay particular attention to the following:
- Grant the minimum access rights to agentic AI: Users should carefully consider the nature and sensitivity of the personal data involved. Do not provide personal data to agentic AI arbitrarily, especially confidential or sensitive personal data, such as identification documents, bank account numbers and passwords. Only the minimum access rights necessary to complete the tasks should be granted to agentic AI. Avoid granting administrator account rights to agentic AI;
- Use the latest official version: Users should only download the latest versions of agentic AI from official channels and should avoid using third-party versions or outdated versions to reduce the risks of data breach incidents arising from unpatched system vulnerabilities;
- Adopt adequate measures to ensure system security and data security, such as separating the runtime environment of agentic AI from local devices or servers, strengthening network controls, strictly managing Internet-facing surfaces, lowering access rights and establishing effective protection mechanisms;
- Install and use Plugins or Skills with caution: Verify that the relevant programmes are official versions to ensure their security; review the programmes to check whether malicious code is embedded, and refrain from using them if their security cannot be ascertained; and
- Conduct continuous risk assessments: Users should continuously assess the risks involved in using agentic AI and watch out for any request by the agentic AI to execute high-risk operations. If the decisions made by agentic AI are likely to have a significant impact on individuals, users should consider adopting a “human-in-the-loop” approach to retain final control over decision-making processes, such as the transmission of data and modification of system configurations.
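The "human-in-the-loop" recommendation above can be sketched as a confirmation gate that intercepts high-risk actions before an agent executes them. The following Python sketch is illustrative only: the action names and the risk classification are hypothetical, and a real deployment would classify actions according to its own risk assessment.

```python
# Hypothetical set of high-risk action names; a real system would derive
# this from its own risk assessment (e.g. data transmission, config changes).
HIGH_RISK = {"send_data_externally", "modify_system_config", "delete_files"}


def execute(action: str, confirm=input) -> str:
    """Run low-risk actions directly; require explicit human approval
    before any high-risk action proceeds.

    `confirm` defaults to the interactive `input` prompt, but can be
    replaced (e.g. with a UI dialog) in a real deployment.
    """
    if action in HIGH_RISK:
        answer = confirm(f"Agent requests high-risk action '{action}'. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"blocked: {action}"
    return f"executed: {action}"
```

The key design choice is that the agent cannot bypass the gate: every action is routed through `execute`, and the default for high-risk actions is refusal unless a human explicitly approves.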
Organisations can refer to the guidance titled “Artificial Intelligence: Model Personal Data Protection Framework” (Model Framework) published by the PCPD when collecting, using and processing personal data with AI tools. The Model Framework reflects prevailing international norms and best practices, including recommendations on formulating policies and frameworks on AI governance, with a view to enhancing the protection of personal data privacy and complying with the relevant requirements of the Personal Data (Privacy) Ordinance.