FDA Warns Drug Maker for AI Misuse in Manufacturing
Summary
FDA issued a Warning Letter to a drug manufacturer for using an AI tool to generate drug product specifications, procedures, and master production or control records intended to satisfy FDA requirements. The agency cited the company for failure to ensure AI-generated documents were adequately reviewed and validated by its quality unit for accuracy and compliance with current Good Manufacturing Practice requirements, and for overreliance on the AI tool for compliance. This Warning Letter is described as the first issued related to company use of AI as a compliance tool, expanding FDA scrutiny beyond AI as a regulated product to its use in regulated product manufacturing and quality.
“Reliance on AI is not a defense against regulatory violations. AI can be used as a tool (e.g., in document creation or compliance support), but the ultimate responsibility for compliance lies with the regulated entity.”
About this source
JD Supra is the legal industry's open library where US law firms publish client alerts and regulatory analysis. The Healthcare section aggregates everything from partners covering CMS reimbursement, HIPAA enforcement, FDA compliance, healthcare M&A, fraud and abuse, payer-provider disputes, telehealth, and the fast-moving state regulation of healthcare AI. Around 250 alerts a month. Watch this if you run a hospital legal department, advise digital health startups, manage payer compliance, or track how state Medicaid agencies and HHS-OIG actually enforce the rules they publish. The signal-to-noise ratio is genuinely good because firms only publish when they have something concrete to say to their clients. GovPing pulls each alert with the firm name, author, and topic.
What changed
FDA cited the company for using an AI tool to generate GMP documentation without adequate quality unit review and validation, and for overreliance on AI to identify regulatory requirements. The agency stated that reliance on AI is not a defense against regulatory violations.

Pharmaceutical and life sciences companies deploying AI across FDA-regulated operations should ensure that any AI-generated compliance documents, procedures, or recommendations are thoroughly reviewed and approved by qualified human personnel. Companies should critically examine their current use of AI in compliance functions and establish robust AI governance frameworks with clear policies, defined roles, and meaningful training programs to guide appropriate use of AI across their organization.
Archived snapshot
Apr 24, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
April 24, 2026
FDA's Warning Letter Suggests Growing Scrutiny of AI Overreliance
Michele Buenafe, Ariel Seeley | Morgan Lewis - As Prescribed
A recently issued Food and Drug Administration (FDA) Warning Letter citing a drug manufacturer for improper use of artificial intelligence (AI) suggests FDA’s scrutiny of AI is expanding. Although not the first FDA Warning Letter related to AI, prior Warning Letters focused on issues surrounding the regulatory status of the AI systems themselves, namely whether a given AI system was a medical device subject to FDA oversight. This Warning Letter, however, indicates FDA is now scrutinizing the use of AI in other contexts, such as regulated product manufacturing and quality (in this case, for pharmaceuticals).
As life sciences companies rapidly deploy AI across their FDA-regulated business operations, they should bear in mind that they remain fully responsible for any AI-generated outputs and work product, including any errors, omissions, or oversights.
FDA’s Findings: AI Use and Compliance Failures
FDA’s Warning Letter indicates that the drug manufacturer informed FDA that it had used an AI tool to generate “drug product specifications, procedures, and master production or control records” intended to satisfy FDA requirements. FDA cited the company for several failures related to its use of AI, including:
- Failure to ensure that AI-generated documents are adequately reviewed and validated by the company's quality unit for accuracy and compliance with the relevant cGMP requirements
- Overreliance on the AI tool for compliance. In one telling example, company representatives allegedly attributed their lack of awareness of certain process validation requirements to the failure of their AI system to flag such requirements.

Notably, this is the first time FDA has issued a Warning Letter related to company use of AI as a compliance tool, demonstrating that the agency's focus on AI has expanded beyond AI as a regulated product and that other FDA centers (beyond the Center for Devices and Radiological Health, or CDRH) are also paying attention to AI. This Warning Letter sends an unambiguous message: Reliance on AI is not a defense against regulatory violations. AI can be used as a tool (e.g., in document creation or compliance support), but the ultimate responsibility for compliance lies with the regulated entity.
Implications and Recommendations
For any company deploying AI, this Warning Letter should serve as a wake-up call, not only because FDA is watching, but because it brings to the forefront broader considerations about what it means to appropriately and responsibly deploy AI in a regulated industry.
Three takeaways merit particular attention:
- Human oversight is non-negotiable: AI can be a valuable tool for enhancing compliance, but it cannot act as a substitute for the expertise and judgment of qualified human professionals. Any AI-generated compliance documents, procedures, or recommendations must be thoroughly reviewed and approved by authorized personnel in accordance with applicable FDA laws and regulations.
- Accountability cannot be outsourced to technology: Manufacturers remain accountable for compliance failures, even when those failures stem from technology-driven processes. Companies should critically examine their current use of AI and other automated systems in compliance functions to ensure that appropriate human validation and oversight mechanisms are in place.
- AI governance is a compliance imperative: Companies should ensure that they have robust AI governance frameworks, including clear policies, defined roles, and meaningful training programs, to guide the appropriate and effective use of AI across their organization.
Conclusion
The recent Warning Letter demonstrates that FDA is scrutinizing companies' use of AI and serves as a reminder of the risks associated with overreliance on AI. As AI adoption accelerates across the pharmaceutical and life sciences sectors, companies must ensure that their personnel are exercising proper judgment instead of deferring unreservedly to AI-generated outputs.
Ultimately, the lesson of this Warning Letter is straightforward: FDA is watching and will continue to hold companies and their personnel responsible for regulatory compliance. It is the company and its employees that will bear the consequences should something go wrong. AI is a tool, and it should be used to support, rather than supplant, human oversight and expertise.
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
Attorney Advertising.
© Morgan Lewis - As Prescribed
About this page
Source document text, dates, docket IDs, and authority are extracted directly from Morgan Lewis.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.