
Regulation of AI in Drug Development: US, EU, and UK Approaches

JD Supra Trade Law
Published March 6th, 2026
Detected March 7th, 2026

Summary

Regulators in the US, EU, and UK are developing approaches to govern the use of AI in drug development, focusing on patient safety and research integrity. Key initiatives include common principles agreed upon by the EMA and FDA, and international harmonization efforts for AI/ML-enabled medical devices.

What changed

This article discusses the evolving regulatory landscape for Artificial Intelligence (AI) in drug development across the U.S., EU, and UK. It highlights the risks associated with opaque AI models, such as hallucinations and bias, which could impact patient safety. Regulatory bodies are actively creating frameworks to ensure AI drives innovation while safeguarding public health and research integrity. Key developments include common principles for AI use in drug development agreed upon by the EMA and FDA on January 14, 2026, and ongoing international harmonization efforts for AI/ML-enabled medical devices, building on prior work like the Good Machine Learning Practice (GMLP) for Medical Device Development and Predetermined Change Control Plans (PCCPs).

Companies involved in drug development utilizing AI should be aware of these emerging regulatory trends and principles. While this document outlines guiding principles and collaborative efforts rather than specific mandates, it signals a growing focus on AI governance in the pharmaceutical and medical device sectors. Compliance officers should monitor the specific guidance and policies issued by the FDA, EMA, and MHRA, particularly concerning data governance, risk-based performance assessments, and the development of AI/ML-enabled medical devices. The emphasis on international harmonization suggests a need for a globally consistent approach to AI compliance in this field.

What to do next

  1. Monitor FDA, EMA, and MHRA guidance on AI in drug development and medical devices.
  2. Review internal data governance and risk assessment processes for AI systems used in drug development.
  3. Stay informed about international harmonization efforts for AI/ML-enabled medical devices.

Source document (simplified)

March 6, 2026

Black boxes, white coats and red tape: Regulating the use of AI in drug development

Kellie Combs, Hannah Kerr-Peterson, Michael Purcell, and Lincoln Tsang, Ropes & Gray LLP

AI systems are being used throughout the medicines lifecycle to analyse large volumes of data. These systems often rely on complex, opaque model architectures that autonomously train on large data sets, presenting unique risks. The actual risk posed by an AI system depends on the specific context of its use and the extent to which it impacts decision-making. If an AI system were to hallucinate or integrate bias during the drug development process, the consequences for patient safety could be severe. To address these practical challenges, regulators worldwide are developing regulatory tools and new approaches to safeguard the public health and integrity of research as well as regulatory decision-making while promoting AI adoption. This article examines how the U.S., EU, and UK are taking bold steps to ensure AI drives innovation in drug development without compromising safety and public trust.

  • Common principles for the use of AI in drug development. On 14 January 2026, the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) agreed to a set of common principles for the use of AI in drug development. These principles are intended to identify areas where the international regulators, international standards organisations, and other collaborative bodies could work to advance good practice in drug development. They include alignment with human-centric values, implementation of appropriate data governance, and the conduct of risk-based performance assessments of complete systems.
  • International harmonisation of AI/ML-enabled medical device policies. In recent years, interest surrounding decentralised clinical trials has increased. Through the use of medical devices, apps, wearables, and telemedicine, patients can be monitored remotely, improving convenience of participation as well as generating evidence of the drug’s performance in the real world. To the extent any of the medical devices used in this setting are AI/ML-enabled, developers will need to consider the ongoing effort between international regulatory authorities to develop and establish a harmonised set of principles to govern the development and use of AI in the health care and life sciences industries. Previously, international regulators have collaborated to jointly develop:
    • Good Machine Learning Practice (GMLP) for Medical Device Development – Guiding Principles (October 2021): FDA, the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) and Health Canada collaborated in 2021 to develop guiding principles to promote safe, effective, and high-quality medical devices that use AI and machine learning. The principles are aimed at addressing the unique nature and constantly evolving abilities of AI-enabled products to inform the development of regulatory frameworks and guidelines. In January 2025, the International Medical Device Regulators Forum released a final document leveraging and expanding upon these principles.
    • Predetermined Change Control Plans (PCCPs) for Machine Learning-Enabled Medical Devices – Guiding Principles (October 2023): PCCPs are written plans intended to enable regulators to authorise pre-specified automatic and manual modifications to device software during the initial authorisation review process, so that re-submission and review at the time of modification is not required. To support the GMLP guiding principle that deployed models be monitored for performance and that re-training risks be managed, FDA, MHRA, and Health Canada jointly identified five guiding principles to support the development of robust and effective PCCPs.
    • Transparency for Machine Learning-Enabled Medical Devices – Guiding Principles (June 2024): Intended to build upon the identified GMLP principles, FDA, MHRA, and Health Canada jointly developed a set of principles to promote transparency—defined as the degree to which appropriate information about a machine learning-enabled medical device (including its intended use, development, performance and, when available, logic) is clearly communicated to relevant audiences. In defining the importance of transparency and establishing guideposts for key considerations, the transparency principles state that effective transparency (i) ensures that information that could impact risks and patient outcomes is communicated; (ii) considers the information that the intended user or audience needs and the context in which it’s used; (iii) uses the best media, timing and strategies for successful communication; and (iv) relies on a holistic understanding of users, environments, and workflows.
  • FDA Guidance Documents. While the U.S. does not currently have a statutory or regulatory framework that expressly addresses AI, if an AI/ML-enabled product meets the definition of “medical device” under the Federal Food, Drug and Cosmetic Act, including when such product is used to support the development of drug products, FDA will exercise jurisdiction to regulate it. However, because FDA’s traditional regulatory paradigm is not designed for adaptive AI/ML-enabled technology, FDA has indicated its intention to engage in regulatory flexibility to enable innovation while balancing safety and effectiveness. The Agency has issued numerous non-binding guidance documents, action plans, and papers aimed at addressing considerations specific to AI-enabled medical devices. FDA’s guidance documents focus on various topics including, among other things, recommendations related to PCCPs, lifecycle management considerations for AI-enabled medical devices, and considerations for the use of AI to support regulatory decision-making for drugs and biological products.
  • FDA’s Next Steps. In remarks given at the Consumer Electronics Show on 6 January 2026, FDA Commissioner Dr. Martin Makary stated that, among its other digital health initiatives, FDA is currently developing a new regulatory framework for AI that is “smarter” and “more forward thinking.” Dr. Makary emphasized that FDA’s intention is to take a “common sense” approach that includes, among other things, a shift in a “deregulatory direction for low-risk products” that would allow companies to bring products to market faster while leveraging post-market monitoring to ensure continued safety and accuracy. Dr. Makary further stated that FDA intends to issue additional guidance specifically targeting AI-enabled products that proactively establishes rules and guidelines for development of such products.
  • The EU Pharma Package. On 11 December 2025, the Council and the European Parliament reached political agreement to reform the regulatory framework governing medicinal products. The Pharma Package will overhaul the existing framework, revising key areas such as regulatory data and marketing protections, orphan incentives, supply shortages, and antimicrobial resistance. Although the final texts have not been published, the EMA has indicated that the reforms will support “the broader use of AI in the lifecycle of medicines in regulatory decision-making, and creates additional possibilities for testing innovative AI driven methods for medicines in a controlled environment.” The Pharma Package is expected to enter into force in 2026 and will be subject to a two-year transition period before the new law is fully implemented.
  • The proposed EU Biotech Act. On 16 December 2025, the European Commission proposed a Biotech Act aimed at strengthening the EU’s biotechnology sector by fostering a more competitive, innovative and efficient ecosystem. The legislative proposal recognises AI as a key enabler for biotechnology innovation, with provisions to ensure coherence with EU digital policies such as the AI Act and the EU Cybersecurity framework. The Act mandates the EMA to issue nonbinding guidance on the deployment and use of AI across the entire medicinal product lifecycle, including pre-clinical research, clinical trials, manufacturing, and post-authorisation monitoring, and establishes trusted testing environments and data quality accelerators for AI-enabled biotechnology. It also promotes responsible experimentation, data quality, and interoperability to advance safe and effective AI-driven health biotechnology solutions. Final adoption is unlikely to occur before late 2026.
  • The AI Act. Regulation (EU) 2024/1689, known as the AI Act, aims to promote trustworthy AI across the EU. The AI Act categorises uses of AI into four risk classifications; aside from the highest-risk class, which is banned, each ascending class must comply with increasingly stringent requirements relating to data governance, record keeping, transparency, accuracy, and security. While most AI systems used during drug development will not fall under the high-risk classification, compliance with the enhanced obligations of the AI Act will be necessary if, for example, the AI system undergoes a notified body conformity assessment, is considered software as a medical device, or is intended to be used as a component whose proper functioning is essential for the safety of a product. There is a general call, and the EU legislature recognises the need, to simplify the regulatory framework for AI-enabled medical devices.
  • The UK’s National Commission into the Regulation of AI in Healthcare. On 26 September 2025, the MHRA established a non-statutory body, the National Commission into the Regulation of AI in Healthcare, to review existing regulations and recommend improvements for a new regulatory framework governing AI in health care. The Commission brings together global AI leaders, clinicians, and regulators to advise the MHRA on the development of a new regulatory framework for AI in health care, to be published in 2026. To ensure the Commission’s recommendations reflect the full breadth of perspectives, the MHRA launched a Call for Evidence to invite contributions from across the UK and internationally.

The unprecedented pace of development in this field is significant, but concerns about the potential risks to patient safety and public trust highlight the need for caution. The initiatives outlined above focus on promoting the safe and responsible use of AI, aiming to create an international regulatory environment that accelerates the path from discovery to approval while maintaining rigorous standards for safety, transparency, and accountability. For these efforts to succeed, close alignment of these initiatives with established and proposed legislative frameworks will be crucial.



DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
Attorney Advertising.

© Ropes & Gray LLP



Published In:

AI Act, Artificial Intelligence, Biotechnology, Clinical Trials, Digital Health, EU, European Medicines Agency (EMA), Food and Drug Administration (FDA), Health Technology, International Harmonization, Machine Learning, Medical Devices, Pharmaceutical Industry, Regulatory Oversight, Regulatory Reform, Regulatory Requirements, Risk Management, UK, Health, International Trade, Science, Computers & Technology


Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
Various
Published
March 6th, 2026
Instrument
Guidance
Legal weight
Non-binding
Stage
Final
Change scope
Substantive

Who this affects

Applies to
Drug manufacturers Pharmaceutical companies Medical device makers
Geographic scope
International

Taxonomy

Primary area
Pharmaceuticals
Operational domain
Compliance
Topics
Artificial Intelligence Medical Devices Regulatory Harmonization
