In This Article
- Executive Summary
- The Purolea Warning Letter: What Actually Happened
- The G-AI-P Framework: FDA and EMA Set the New Bar
- Four Lessons Pharma IT and Quality Leaders Must Act On
- A Practical Framework: Building AI Governance for GxP
- What This Means for Life Sciences Organizations Today
- Conclusion
- References & Sources
Executive Summary
On April 2, 2026, the FDA issued Warning Letter #722591 to Purolea Cosmetics Lab in Livonia, Michigan — the agency’s first-ever warning letter citing AI over-reliance as a current good manufacturing practice (cGMP) violation. The company had used AI agents to generate drug specifications, procedures, and production records without the human quality review required under 21 CFR 211.22(c). Their defense, in effect: the AI agent never told us that was required.
The regulatory context has shifted. The FDA and EMA jointly released the Good AI Practice (G-AI-P) principles in January 2026, codifying expectations around traceability, validation, human oversight, and lifecycle management for AI systems used in drug development and manufacturing.
The implications are clear. AI-generated content must be reviewed by a qualified human before entering the quality system. “The AI said so” is not a defense. Over-reliance on AI is itself a cGMP violation — even when the output happens to be correct.
This article unpacks what the Purolea letter says, how it connects to the G-AI-P framework, and what life sciences organizations must do now to build AI governance that holds up to regulatory scrutiny.
The Purolea Warning Letter: What Actually Happened
The FDA inspected Purolea Cosmetics Lab over three days in late October 2025. The facility, located in Livonia, Michigan, produced drug products under cGMP regulations. What investigators found, and what the subsequent April 2, 2026 warning letter documented, marks a watershed in how regulators view artificial intelligence in regulated manufacturing.
Purolea had deployed AI agents to generate significant portions of its quality documentation. According to the warning letter, AI tools produced drug specifications, manufacturing procedures, and production records — the documents that anchor cGMP compliance. The company had not performed the process validation required before distributing these products to the market.
When FDA investigators asked why validation had been skipped, the company’s response was essentially that the AI agent had not flagged the requirement. As RAPS reported, this admission crystallized the core compliance failure: the company had delegated a regulatory obligation to a tool, then treated the tool’s silence as an authoritative answer.
The letter contained other observations as well — the Redica Systems analysis notes contamination risk issues, insect findings, and inadequate separation of operations. But the AI citation is the historic piece. It established, for the first time in an enforcement action, that a company’s failure to exercise human judgment over AI output is not a technical lapse. It is a violation.
The FDA’s language is worth quoting directly. The agency wrote that the company “relied on AI without ensuring appropriate oversight” and that the company’s quality unit “did not review records generated through AI tools to verify they were correct or compliant.” Those two findings, taken together, mean the quality system had been effectively outsourced to an algorithm without accountability controls. That, the FDA concluded, is a failure of the quality control unit’s responsibilities under 21 CFR 211.22.
The broader message from the compliance community’s early analysis is that Purolea’s specific facts — a small, relatively obscure facility — do not limit the letter’s reach. Every pharmaceutical, biotech, and medical device company using AI anywhere near a GxP workflow is now on notice.
The G-AI-P Framework: FDA and EMA Set the New Bar
Three months before the Purolea warning letter, on January 14, 2026, the FDA and EMA jointly released the Guiding Principles of Good AI Practice for Drug Development — commonly referred to as G-AI-P. The framework codifies expectations for how AI is designed, deployed, validated, and monitored across the drug development lifecycle.
As the Causaly analysis observes, the G-AI-P principles represent an explicit signal from regulators: AI is subject to the same rigor as any other GxP system. The assumption that AI sits outside traditional validation because it is “just a tool” has been retired.
The ten principles group naturally into four themes.
Theme 1: Design and Scope
The first three principles cover how AI systems are framed and documented before they are deployed.
- Well-defined role and scope. There must be no ambiguity about what the AI does, what decisions it informs, and where its authority ends.
- Traceability of data sources. Training data provenance must be documented — where the data came from, how it was collected, what representational biases it may carry.
- Documentation of processing choices. Every design decision (model selection, feature engineering, hyperparameters, data splits) must be auditable; a sketch of such a design record follows this list.
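To make these design-time obligations concrete, a design record can be captured as structured, version-controlled data rather than free text. The sketch below is a minimal illustration under assumed names: the class, its fields, and the example values are ours, not anything prescribed by G-AI-P.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ModelDesignRecord:
    """Illustrative design-time record spanning G-AI-P Theme 1.

    Field names are hypothetical; adapt them to your own QMS templates.
    """
    model_id: str
    version: str
    # Principle 1: well-defined role and scope
    intended_use: str               # what the AI does
    decisions_informed: list[str]   # which decisions it supports
    out_of_scope: list[str]         # where its authority ends
    # Principle 2: traceability of data sources
    training_data_sources: list[str]
    collection_method: str
    known_biases: list[str]
    # Principle 3: documentation of processing choices
    model_architecture: str
    feature_engineering: str
    hyperparameters: dict[str, object] = field(default_factory=dict)
    data_split_strategy: str = ""


record = ModelDesignRecord(
    model_id="deviation-classifier",
    version="1.2.0",
    intended_use="Suggest a deviation category for human review",
    decisions_informed=["deviation classification (draft only)"],
    out_of_scope=["batch release", "CAPA approval"],
    training_data_sources=["site deviation log, 2019-2024"],
    collection_method="QMS export, de-identified",
    known_biases=["under-represents rare deviation types"],
    model_architecture="gradient-boosted trees",
    feature_engineering="TF-IDF over deviation descriptions",
    hyperparameters={"n_estimators": 400, "max_depth": 6},
    data_split_strategy="time-based split; last 12 months held out",
)
```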
Theme 2: Oversight and Validation
The next three principles address how AI is governed in use.
- Human oversight is essential. AI is decision-support, not an autonomous replacement for qualified humans. This language is the direct regulatory counterpart to what the Purolea letter enforced.
- Risk-based validation. Validation must include the human-AI interaction, not just the model in isolation.
- Context-appropriate metrics. AI performance must be measured against the actual use case, not generic benchmarks.
Theme 3: Monitoring and Transparency
The seventh, eighth, and ninth principles focus on ongoing accountability.
- Verification and ongoing monitoring. Models require continuous oversight: performance drift, edge cases, operational exceptions. A minimal drift-check sketch follows this list.
- Transparency in capabilities and limitations. Users must understand what the model can and cannot do, including failure modes.
- Bias assessment and mitigation. Systematic evaluation of model behavior across populations, contexts, and inputs.
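As a rough illustration of what ongoing monitoring can look like, the sketch below applies a simple control-chart rule to one hypothetical metric: the monthly agreement rate between the model's suggestion and the qualified reviewer's final disposition. The metric, the three-sigma threshold, and the numbers are all assumptions; a real program would define these in a validated monitoring SOP.

```python
import statistics

# Hypothetical monthly agreement rates between the model's suggested
# classification and the qualified reviewer's final disposition.
baseline_agreement = [0.94, 0.95, 0.93, 0.96, 0.94, 0.95]  # validation period
recent_agreement = [0.91, 0.89, 0.88]                      # last three months

baseline_mean = statistics.mean(baseline_agreement)
baseline_sd = statistics.stdev(baseline_agreement)
recent_mean = statistics.mean(recent_agreement)

# Control-chart style rule: flag drift when the recent mean falls more
# than three standard deviations below the validated baseline.
if recent_mean < baseline_mean - 3 * baseline_sd:
    print(f"DRIFT ALERT: agreement {recent_mean:.1%} vs baseline {baseline_mean:.1%}")
    print("Open a quality event and assess revalidation per SOP.")
else:
    print("Within validated performance range.")
```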
Theme 4: Lifecycle
The tenth principle addresses what happens after deployment.
- Lifecycle management. AI models change — they are retrained, updated, tuned. Each change is a potential revalidation event. The compliance obligation does not end at go-live.
The EMA Reflection Paper on AI in the Medicinal Product Lifecycle, which informed the joint framework, reinforces a critical point: AI decisions in regulated processes must comply with data integrity principles (ALCOA+). Attributable, legible, contemporaneous, original, accurate — plus complete, consistent, enduring, and available. If your AI system cannot produce a record that meets ALCOA+ for every decision it contributes to, it is not GxP-ready.
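As a thought experiment in what that implies, here is a minimal sketch of a record for a single AI-assisted decision, with each field mapped to the ALCOA+ attribute it supports. The field names and the content hash are illustrative assumptions, not a validated audit-trail design.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class AIDecisionRecord:
    """One AI-assisted decision, captured to support ALCOA+ review."""
    record_id: str
    model_id: str
    model_version: str     # original: ties the output to the exact model
    ai_input: str          # complete: everything the model was given
    ai_output: str         # complete: everything the model produced
    reviewer_id: str       # attributable: a named, qualified human
    reviewed_at: str       # contemporaneous: ISO 8601 timestamp
    disposition: str       # accurate: approved / modified / rejected
    reviewer_comment: str = ""


def seal(record: AIDecisionRecord) -> str:
    """Content hash for tamper evidence, supporting 'enduring' and
    'consistent'; a real system would use a validated audit-trail
    mechanism, not an ad hoc digest."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


rec = AIDecisionRecord(
    record_id="REC-0001",
    model_id="spec-drafter",
    model_version="2.3.1",
    ai_input="Draft assay specification for product X, 50 mg tablet",
    ai_output="Assay: 95.0-105.0% of label claim (HPLC)",
    reviewer_id="j.smith (QA)",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
    disposition="modified",
    reviewer_comment="Tightened range per product registration",
)
print(seal(rec))
```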
Four Lessons Pharma IT and Quality Leaders Must Act On
Read together, the Purolea warning letter and the G-AI-P framework deliver four lessons that should reshape how life sciences organizations deploy AI.
Lesson 1: “The AI Said So” Is Not a Defense
The regulatory accountability model has not changed. Humans are accountable for cGMP decisions. A model’s output — no matter how well-engineered — does not discharge that accountability. The Purolea case makes this explicit: the company’s reliance on an AI agent’s silence, treated as authoritative guidance, was itself the violation.
For quality leaders, this means every AI-assisted workflow requires a named human reviewer with the authority and training to override the AI’s output. For IT leaders, it means AI systems must produce records that capture both what the AI did and what the human reviewer approved, disputed, or modified.
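One way to make that structural rather than procedural is a hard gate in code: AI output cannot be committed to the quality system without a named reviewer and a permitting disposition. The sketch below is a minimal illustration; the function, disposition values, and exception are hypothetical names, not a product API.

```python
class UnreviewedAIOutputError(Exception):
    """Raised when AI output reaches the quality system without review."""


PERMITTING_DISPOSITIONS = {"approved", "approved_with_changes"}


def commit_to_quality_system(ai_output: str,
                             reviewer_id: str | None,
                             disposition: str | None) -> str:
    """Hard gate: no named reviewer and disposition, no commit.

    The point is that the gate is structural, not a policy reminder
    the workflow can skip under time pressure.
    """
    if not reviewer_id:
        raise UnreviewedAIOutputError("No named reviewer on record.")
    if disposition not in PERMITTING_DISPOSITIONS:
        raise UnreviewedAIOutputError(
            f"Disposition {disposition!r} does not permit commit.")
    # ... write to the document management system here ...
    return f"Committed under reviewer {reviewer_id} ({disposition})."


print(commit_to_quality_system(
    "Cleaning procedure rev. 4 draft", "a.patel (QA)", "approved"))
```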
Lesson 2: AI Over-Reliance Is Itself a Violation
Even when the AI’s output is technically correct, the absence of documented human review is citable. This is a subtle but important shift. Regulators are no longer only examining outcomes; they are examining process. The question has moved from “did you get the right answer?” to “can you demonstrate how you verified the answer was right?”
The implication for organizations: documentation of the review process matters as much as the review itself. A quality reviewer who approves AI output without leaving an audit trail creates the appearance of an unreviewed AI decision, which carries functionally the same regulatory exposure as no review at all.
Lesson 3: Validation Must Include Human-AI Interaction
Traditional computer system validation (CSV) validates the system. AI validation must validate the workflow — the combined human-AI decision process. A model that performs well in isolation may perform poorly when its outputs are interpreted, edited, or rubber-stamped by humans under operational pressure.
This means validation protocols must now simulate realistic operational conditions, including time pressure, fatigue, and reviewer skill variation. It also means validation needs to assess the review process itself — are reviewers catching errors? Are they over-approving? Are they under-approving in a way that creates bottlenecks?
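One way to quantify that assessment is to seed known-defective AI outputs into a validation run and score the review step itself. The sketch below shows the arithmetic on hypothetical data; the run size, defect count, and metric names are assumptions.

```python
# Hypothetical validation run. Each pair is (output_was_defective,
# reviewer_approved); defective outputs were deliberately seeded.
run = [
    (True, False), (True, False), (True, True),   # 3 seeded defects, 1 missed
    (False, True), (False, True), (False, True),
    (False, False), (False, True), (False, True), (False, True),
]

defects = [pair for pair in run if pair[0]]
clean = [pair for pair in run if not pair[0]]

catch_rate = sum(not approved for _, approved in defects) / len(defects)
over_approval = sum(approved for _, approved in defects) / len(defects)
false_rejects = sum(not approved for _, approved in clean) / len(clean)

print(f"Seeded-defect catch rate: {catch_rate:.0%}")     # are errors caught?
print(f"Over-approval rate:       {over_approval:.0%}")  # rubber-stamping signal
print(f"False-reject rate:        {false_rejects:.0%}")  # bottleneck signal
```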
Lesson 4: Lifecycle Governance Is Non-Negotiable
Models drift. Training data ages. Use cases evolve. The go-live validation is not the final compliance step — it is the starting line.
Organizations need SOPs that treat AI model changes with the same rigor as other GxP change controls. Every retraining, every model version update, every material change in input data is a potential revalidation trigger. The G-AI-P lifecycle management principle puts regulatory weight behind what has long been a quiet challenge in AI operations.
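In practice, "defined in advance" can be as simple as a controlled lookup that maps each change type to its predefined action, with anything unlisted escalated rather than waved through. The sketch below is illustrative; the trigger names and actions are assumptions, not SOP language.

```python
# Hypothetical change-control triggers, defined in advance per SOP.
REVALIDATION_TRIGGERS = {
    "model_retrained": "full revalidation",
    "model_version_updated": "full revalidation",
    "input_data_source_changed": "impact assessment, then risk-based revalidation",
    "intended_use_expanded": "full revalidation plus tier reassessment",
    "vendor_model_updated": "impact assessment per vendor change notice",
}


def assess_change(change_type: str) -> str:
    """Map a proposed change to its predefined revalidation action.

    Unknown change types are escalated rather than silently allowed:
    in a GxP context, 'not listed' must not mean 'not controlled'.
    """
    return REVALIDATION_TRIGGERS.get(
        change_type, "escalate to quality: trigger not classified")


print(assess_change("model_retrained"))         # -> full revalidation
print(assess_change("prompt_template_edited"))  # unlisted -> escalated
```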
A Practical Framework: Building AI Governance for GxP
A framework is only useful if it translates into operational controls. Below is the tiered model Sakara Digital recommends to clients evaluating AI for GxP use cases; the tiers reflect the level of risk a given AI use case introduces into the regulated workflow.
| Tier | Use Case Type | Example | Required Controls |
|---|---|---|---|
| Tier 1 | Decision-support, non-GxP | Internal knowledge search, meeting summarization, marketing copy assistance | AI governance SOP, appropriate use policy, training, general audit trail |
| Tier 2 | Decision-support, GxP-adjacent | Drafting SOPs, summarizing clinical data, assisting document review, coding variance investigations | All Tier 1 controls, plus mandatory human review workflow, validated review templates, role-based reviewer qualification, decision-level audit trail (ALCOA+) |
| Tier 3 | Autonomous or direct GxP action | Automated release decisions, AI-generated specifications entering the quality system, autonomous deviation classification | All Tier 2 controls, plus full CSV-equivalent validation including human-AI interaction, lifecycle change control for all model updates, periodic revalidation, bias and drift monitoring, formal regulatory risk classification |
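Because the controls are cumulative, a tier assignment translates directly into a control set in which each tier inherits everything below it. The sketch below mirrors the table above in code; the labels are shorthand, not controlled terminology.

```python
from enum import Enum


class Tier(Enum):
    TIER_1 = "decision-support, non-GxP"
    TIER_2 = "decision-support, GxP-adjacent"
    TIER_3 = "autonomous or direct GxP action"


# Controls introduced at each tier; higher tiers inherit lower ones.
TIER_CONTROLS = {
    Tier.TIER_1: ["AI governance SOP", "appropriate-use policy",
                  "training", "general audit trail"],
    Tier.TIER_2: ["mandatory human review workflow", "validated review templates",
                  "role-based reviewer qualification",
                  "decision-level audit trail (ALCOA+)"],
    Tier.TIER_3: ["CSV-equivalent validation incl. human-AI interaction",
                  "lifecycle change control for model updates",
                  "periodic revalidation", "bias and drift monitoring",
                  "formal regulatory risk classification"],
}


def required_controls(tier: Tier) -> list[str]:
    """Return the cumulative control set for a tier."""
    ordered = [Tier.TIER_1, Tier.TIER_2, Tier.TIER_3]
    controls: list[str] = []
    for t in ordered[: ordered.index(tier) + 1]:
        controls.extend(TIER_CONTROLS[t])
    return controls


print(required_controls(Tier.TIER_2))
```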
Required Controls by Tier
Across all three tiers, six controls form the foundation of an AI governance program aligned with both the G-AI-P principles and the enforcement posture the Purolea letter signals.
AI Governance SOP
A written policy describing approved AI uses, prohibited uses, tier assignments, approval workflows, and accountability owners. The SOP is itself a controlled GxP document.
Human Review Workflows
Role-based review procedures with documented qualifications, review templates, and decision audit trails. Reviews must be attributable to a named individual and time-stamped.
Validation Protocols
Validation that includes both the model and the human-AI interaction. Tier 3 use cases require CSV-equivalent validation with formal intended-use statements and performance specifications.
Training Requirements
Role-specific training for AI operators and reviewers. Training must cover model capabilities, limitations, common failure modes, and the reviewer’s authority to override AI output.
Audit Trail (ALCOA+)
Every AI decision that contributes to a GxP outcome must produce a record meeting data integrity principles. This includes the AI input, the output, the reviewer, and the disposition.
Lifecycle Change Control
Model updates, retraining events, and vendor changes require formal change control. Revalidation triggers must be defined in advance.
The Human-in-the-Loop Maturity Model
Not every organization will reach Tier 3 maturity on day one. The maturity model below describes where organizations typically sit today and where G-AI-P compliance requires them to move.
- Level 1: Ad hoc. AI is used informally, with no inventory, policy, or documented review.
- Level 2: Aware. AI use is inventoried and governed by policy, but human review is informal and inconsistently recorded.
- Level 3: Controlled. Every AI output entering a GxP-adjacent workflow receives documented review by a named, qualified human.
- Level 4: Validated. Human-AI workflows are formally validated, with reviewer qualification, ALCOA+ audit trails, and lifecycle change control.
- Level 5: Optimized. Governance is continuously monitored, with drift detection, periodic revalidation, and metrics on reviewer performance.
Level 3 is the current regulatory minimum for any organization deploying AI in GxP-adjacent workflows. Level 4 is the minimum for Tier 3 use cases. Organizations below Level 3 that are using AI in regulated contexts should consider this a priority remediation area.
What This Means for Life Sciences Organizations Today
The Purolea letter is a leading indicator, not an outlier. Regulators have telegraphed the framework, issued the first enforcement action, and set expectations for how AI governance must integrate with existing quality systems. What remains is for organizations to translate the signal into action.
Short-Term Actions (Next 30 Days)
Inventory AI Usage Across GxP Processes
Identify every AI tool in use — including shadow IT, department-level trials, and vendor-embedded AI features. Map each tool to the tier it would fall under. You cannot govern what you have not catalogued.
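Even a spreadsheet-grade inventory is enough to start, provided every entry gets a proposed tier. The sketch below shows the shape of such an inventory and the first triage question to ask of it; the tools, owners, and tier assignments are hypothetical.

```python
# Hypothetical inventory entries: (tool, owner, how it entered use, proposed tier).
INVENTORY = [
    ("general-purpose chatbot", "R&D", "shadow IT, protocol drafting", 2),
    ("LIMS anomaly flagger", "QC lab", "vendor-embedded feature", 2),
    ("meeting summarizer", "all staff", "sanctioned pilot", 1),
    ("deviation auto-classifier", "Quality", "proposed project", 3),
]

# First triage pass: anything mapped to Tier 2 or above without a documented
# human review workflow is an immediate remediation item under 211.22(c).
for tool, owner, context, tier in INVENTORY:
    flag = "REMEDIATE" if tier >= 2 else "ok"
    print(f"[Tier {tier}] {tool:<26} {owner:<9} {context:<30} {flag}")
```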
Review Quality Unit SOPs
Assess whether existing SOPs address AI output review. If the quality unit’s review procedures assume human-authored records, they likely do not meet 21 CFR 211.22(c) expectations for AI-assisted records.
Assess Training Gaps
Identify roles that interact with AI in GxP workflows. Evaluate whether those individuals have the training to evaluate AI output, recognize failure modes, and exercise override authority.
Medium-Term Actions (Next Six Months)
Formalize an AI Governance Framework
Establish a written AI governance SOP, tier definitions, approval workflows, and accountability ownership. Integrate the framework with the existing QMS rather than running it as a parallel program.
Update Validation Protocols
Extend validation to cover human-AI interaction for any Tier 2 or Tier 3 use case. Develop templates for validation protocols, reviewer qualification, and post-deployment monitoring.
Implement Lifecycle Management
Define revalidation triggers, change control procedures for model updates, and periodic review cadences. Establish a relationship with AI vendors that includes change notification commitments.
Strategic Posture
AI governance is moving from a compliance cost to a competitive differentiator. Organizations that build mature governance early will deploy AI faster, because their controls will let them expand into Tier 2 and Tier 3 use cases with confidence. Organizations that delay will face a choice between limiting AI to low-value Tier 1 work or absorbing regulatory risk to do anything more. The Purolea letter has priced that risk — and the bill is public, permanent, and searchable.
Conclusion
The Purolea warning letter marks the beginning of an enforcement era, not its full expression. The FDA has signaled how it reads AI in regulated contexts, the joint G-AI-P framework has set the expectations, and the first shot across the bow has been fired. What comes next is a steady cadence of inspections, observations, and letters that will test how well the industry has absorbed the lesson.
The organizations that will move fastest and most compliantly through this period are the ones that recognize AI governance is not a separate initiative. It is an extension of the quality system, an extension of data integrity principles, and an extension of the same disciplined approach that has governed pharmaceutical manufacturing for decades. The technology is new. The regulatory logic is not.
If you would like to continue exploring how AI governance, compliance, and digital transformation intersect in life sciences, subscribe for more insights from Sakara Digital.
References & Sources
- U.S. Food and Drug Administration. Warning Letter: Purolea Cosmetics Lab, Warning Letter #722591. April 2, 2026.
- Regulatory Affairs Professionals Society (RAPS). FDA warns firm for inappropriate use of AI in drug manufacturing. April 2026.
- U.S. Food and Drug Administration and European Medicines Agency. Guiding Principles of Good AI Practice for Drug Development (G-AI-P). January 14, 2026.
- Redica Systems. The FDA’s First AI Warning — Over-Reliance Is a cGMP Violation. 2026.
- ComplianceG. AI in GxP: Key Lessons from FDA’s April 2026 Warning Letter. 2026.
- Causaly. The FDA’s New Guiding Principles for AI in Drug Development. 2026.
- U.S. Code of Federal Regulations. 21 CFR 211.22 — Responsibilities of the Quality Control Unit.
- European Medicines Agency. Reflection Paper on the Use of Artificial Intelligence in the Medicinal Product Lifecycle.
#SakaraDigital #AIGovernance #GxPCompliance #PharmaAI #cGMP #LifeSciences #RegulatoryCompliance