
WHO Guidance on Responsible AI in Mental Health

WHO News
Published March 20th, 2026
Detected March 20th, 2026

Summary

The WHO has released guidance based on expert recommendations for the responsible use of AI in mental health. The guidance emphasizes recognizing generative AI use as a public mental health concern, integrating mental health into AI impact assessments, and co-designing AI tools with experts and individuals with lived experience.

What changed

The World Health Organization (WHO), in collaboration with the Delft Digital Ethics Centre (DDEC), has published guidance outlining recommendations for the responsible development and deployment of Artificial Intelligence (AI) in mental health services. The guidance, stemming from a workshop of international experts, identifies generative AI use for emotional support as a public mental health concern requiring a coordinated response from governments, health systems, and industry. Key recommendations include integrating mental health impact assessments into AI monitoring, ensuring tools are co-designed with mental health experts and users, and grounding AI applications in evidence and cultural context.

This guidance signals a shift towards greater regulatory scrutiny of AI applications in sensitive areas like mental health. Compliance officers should note the emphasis on recognizing AI's impact as a public health issue and the call for integrated impact assessments. Companies developing or deploying AI for mental health support must prioritize co-design with end-users and experts, ensure tools are evidence-based and culturally appropriate, and be prepared for increased oversight regarding safety, accountability, and human well-being. While non-binding, these recommendations set a strong precedent for future regulatory frameworks and industry standards.

What to do next

  1. Recognize generative AI use in mental health as a public health concern.
  2. Integrate mental health impact assessments into AI solution monitoring.
  3. Co-design AI tools for mental health support with experts and individuals with lived experience.

Source document (simplified)


Towards responsible AI for mental health and well-being: experts chart a way forward

20 March 2026 | Departmental update
On 29 January 2026, over 30 international experts in artificial intelligence, mental health, ethics, and public policy gathered for an online workshop organized by the Delft Digital Ethics Centre (DDEC) at the Delft University of Technology (TU Delft) – the first WHO Collaborating Centre on AI for health governance, including ethics.

Held as an official pre-summit event of the India AI Impact Summit 2026, with support from the World Health Organization, the workshop convened researchers, policy-makers, clinicians, and advocates. Dr Alain Labrique, Director of WHO's Department of Data, Digital Health, Analytics and AI, noted: “As AI increasingly interacts with people in moments of emotional vulnerability, we as WHO and its stakeholders must ensure these systems are designed and governed with safety, accountability and human well-being at their core.”

Central among these challenges is the growing use of generative AI tools – neither designed nor tested for mental health – for emotional support, particularly by young people, and the potentially serious risks this may pose. “We are at a critical juncture”, Sameer Pujari, WHO’s AI Lead, remarked. “The pace of AI adoption in people's daily lives has far outstripped investment in understanding its impact on mental health. Closing that gap requires coordinated action and dedicated resources from both the public and private sectors.”

Underscoring the importance of cross-disciplinary collaboration, Dr Kenneth Carswell of WHO’s Department of Noncommunicable Diseases and Mental Health added: “Minimizing risks from generative AI for mental health while maximizing benefits requires bringing together the voices of those most affected, clinical and research expertise, governance and regulatory frameworks, and data to inform understanding. WHO is committed to ensuring that users’ well‑being stays at the centre as these tools evolve.”

Key recommendations

The workshop distilled these discussions into three principal recommendations:

  • first, generative AI use should be recognized as a public mental health concern, with commensurate responses across government, health systems, and industry that address all generative AI solutions, not only those intended for mental health;
  • second, mental health should be integrated into impact assessments and monitoring of AI solutions to better understand their effects on determinants of health, short-term clinical measures, and long-term outcomes, such as emotional dependence. One workshop participant stressed: “We need independent investments to test these effects”;
  • third, AI tools used for mental health support should be co-designed with mental health experts and people with lived experience, including youth. Tools must be grounded in the best available evidence and tailored to cultural, linguistic, and contextual factors. Workshop participants emphasized the importance of consumer empowerment, while TU Delft’s Dr Caroline Figueroa highlighted the urgent need for consensus on crisis referral frameworks and accountability systems.

Collaborating Centres: a strategic pillar for responsible AI

More broadly, the workshop illustrated how the WHO Collaborating Centre mechanism has become a critical pillar in implementing the WHO’s vision for responsible AI in health. Through this mechanism, WHO mobilizes world-class academic expertise and convenes diverse international stakeholders to generate evidence-based recommendations in support of its standard-setting role. As Dr Stefan Buijsman, managing director of the DDEC, noted: “As a WHO Collaborating Centre, we can increase impact by collaborating with experts around the world, domain experts, and governments.”

Looking ahead: building a global consortium

WHO is establishing a Consortium of Collaborating Centres on AI for Health, a network of leading institutions across all six WHO regions, to support Member States in the responsible adoption of AI. A pre-convening of candidate consortium members took place on 17–19 March 2026 at TU Delft, where institutions aligned on shared priorities and agreed on initial collaboration mechanisms to build the collaborative infrastructure needed to ensure that AI governance in health is grounded in evidence, ethics, and the needs of diverse populations worldwide.

Related

  • Harnessing artificial intelligence for health
  • WHO’s work on digital health
  • WHO’s work on mental health

Named provisions

  • Key recommendations
  • Collaborating Centres: a strategic pillar for responsible AI
  • Looking ahead: building a global consortium

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
WHO
Published
March 20th, 2026
Instrument
Guidance
Legal weight
Non-binding
Stage
Final
Change scope
Substantive

Who this affects

Applies to
Healthcare providers; Drug manufacturers; Technology companies
Industry sector
6211 Healthcare Providers; 5112 Software & Technology; 3254 Pharmaceutical Manufacturing
Activity scope
Mental Health Support; AI Development
Geographic scope
International

Taxonomy

Primary area
Healthcare
Operational domain
Compliance
Compliance frameworks
NIST CSF; FDA 21 CFR Part 11
Topics
Artificial Intelligence; Public Health; Ethics
