
OECD Responsible AI Guidance for Businesses

Source: OECD AI Wonk Blog
Published February 19th, 2026
Detected March 23rd, 2026

Summary

The OECD has launched new Due Diligence Guidance for Responsible AI, providing businesses with a government-backed tool to ensure their AI systems are trustworthy. The guidance addresses risks throughout the AI value chain, including data privacy, environmental impact, and ethical considerations for workers.

What changed

The Organisation for Economic Co-operation and Development (OECD) has released new Due Diligence Guidance for Responsible AI. This guidance offers businesses an internationally agreed-upon, government-backed framework to ensure their AI systems are trustworthy, addressing concerns across the entire AI value chain. It highlights risks such as data privacy breaches, environmental costs associated with AI operations (energy and water consumption), and the potential for AI misuse, including the spread of misinformation.

Companies involved in investing in, developing, or using AI should review this guidance to understand their responsibilities in managing AI risks. While non-binding, adherence to the guidance can help demonstrate a commitment to responsible AI practices, fostering trust among markets and societies. The document emphasizes the need for businesses to ensure decent work for data enrichment workers and secure sensitive data used in AI training.

What to do next

  1. Review the OECD Due Diligence Guidance for Responsible AI.
  2. Assess AI systems and processes for risks related to data privacy, environmental impact, and worker conditions.
  3. Implement measures to ensure AI systems are trustworthy and align with responsible AI principles.

Source document (simplified)

Intergovernmental

The OECD’s new responsible AI guidance: A compass for businesses in a complex terrain

Rashad Abelson, Barbara Bijelic

February 19, 2026 —

4 min read

Companies hoping to take advantage of AI’s opportunities need to be trustworthy. Whether investing in, developing, or using AI, the OECD’s new Due Diligence Guidance for Responsible AI provides businesses with an internationally agreed, government-backed tool to demonstrate that markets and societies can trust their AI systems.

Recent international reporting underscores a growing consensus: AI is not just a technological shift. It is a major geopolitical, economic, and societal phenomenon that demands coordinated action amongst all actors, including companies.

AI has the potential to transform society through productivity, economic value and solutions to complex challenges, but for these benefits to materialise, AI needs trust. So far, the technology seems to advance faster than its guardrails. The gap between AI systems and appropriate safeguards is now one of the defining challenges for policymakers and global businesses alike. Both are under pressure to balance AI innovation and diffusion with safety and risk management, and success depends on getting that balance right.

Risks throughout the AI value chain are continually evolving

Risks to people and the environment can manifest at any point along the AI value chain. The OECD actively tracks and categorises risks through its AI Incidents and Hazards Monitor.

Here are a few examples. At one end of the AI value chain, there are the people who label, clean, and moderate the vast datasets required to train AI models. They can face low wages, long hours, and suffer psychological distress from exposure to harmful content. Companies need to ensure decent work for data enrichment workers.

The environmental costs of running AI systems can also be significant, particularly the energy and water consumed by the data centres that power AI development and deployment; this demand may also drive up energy prices.

Data privacy is another critical concern. AI models are trained on massive datasets that may include personal or sensitive information. If these datasets are not properly anonymised and secured, it can lead to data breaches. If AI models “memorise” and reproduce sensitive data in their outputs, they can expose confidential details, creating legal and ethical dilemmas.

At the other end of the AI value chain, the potential for AI misuse poses risks such as reputational harm and the spread of misinformation. AI-generated deepfakes, for instance, can be used to create realistic but fabricated content, damaging reputations or manipulating public opinion. Similarly, AI can be used to generate and disseminate mis- and disinformation at speed and scale, eroding trust in institutions and potentially influencing events.

Worldwide, governments, consumers, and markets are calling for responsible and trustworthy AI. This is one of the reasons for the surge in mandatory and voluntary AI risk management frameworks, responsible AI initiatives, global agreements, academic research and statements from industry leaders and investors. However, this surge in frameworks is also increasing complexity for companies, as risk management is defined differently across jurisdictions and understanding of AI-related risks is evolving.

OECD Due Diligence Guidance for Responsible AI: A flexible, whole-of-value-chain approach to support businesses in navigating evolving risks and rules

This is why the OECD has now developed the first internationally agreed, government-backed Due Diligence Guidance for Responsible AI. Backed by all the OECD’s member countries, plus 17 partner governments and the EU, this Guidance helps enterprises navigate the complex terrain of AI risk management. It is designed to help businesses ensure that the AI systems they develop and use are trustworthy, deployed safely and responsibly, and aligned with broad societal values.

Concretely, this Guidance offers:

  • A step-by-step framework for enterprises to set up internal management systems capable of proactively identifying and responding to risks related to human rights, labour standards, and environmental impacts.
  • Comprehensive coverage of all risk areas from the leading international standards on which it is built, notably the OECD Guidelines for Multinational Enterprises on Responsible Business Conduct (MNE Guidelines) and the OECD Recommendation on Artificial Intelligence (AI Principles).
  • Recommendations and implementation examples for every enterprise in the AI value chain, from data suppliers and infrastructure providers to financiers and end-users. The guidance emphasises a “whole-of-value-chain” approach to support secure and resilient AI value chains that are more resistant to supply chain shocks and interference.
  • A roadmap of related provisions in existing frameworks, indicating how each step complements and relates to relevant provisions from AI risk management frameworks. This feature helps enterprises understand how implementing this guidance can help them meet expectations from multiple sources and navigate the current landscape of AI risk management frameworks.

Responsibility and trust can give a competitive edge

Responsibility and innovation not only coexist but also reinforce each other. Companies that show a commitment to responsible AI and actively address potential risks can gain trust from investors, customers, regulators, and policymakers. This trust translates into a competitive edge. Rather than hindering innovation, responsible AI practices can accelerate growth by reducing obstacles and preventing costly reputational damage, legal issues, and societal harm.

Responsible and trustworthy AI is becoming increasingly crucial for accessing global markets as international regulatory and voluntary risk management frameworks evolve. Companies in the AI value chain that meaningfully implement the Guidance’s recommendations can position themselves advantageously for cross-border expansion, potentially avoiding the substantial costs of retrofitting systems to meet various regional requirements.

As AI continues to develop rapidly, frameworks and best practices for responsible AI are likely to evolve as well. To help stakeholders keep pace, the OECD will launch an online navigation tool later this year with updates on new frameworks and use cases.


Rashad Abelson, Technology Sector Lead, OECD Centre for Responsible Business Conduct

Barbara Bijelic, Head of Regulation and Standards, OECD Centre for Responsible Business Conduct

Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.


Named provisions

Risks throughout the AI value chain are continually evolving

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
GP
Published
February 19th, 2026
Instrument
Guidance
Legal weight
Non-binding
Stage
Final
Change scope
Substantive

Who this affects

Applies to
Employers Manufacturers Technology companies
Industry sector
5112 Software & Technology 5182 Data Processing & Hosting
Activity scope
AI Development AI Deployment Data Management
Geographic scope
European Union (EU)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Compliance
Topics
Business Ethics Data Privacy Environmental Impact
