21 CFR Part 11 in the Age of AI: What Still Applies

Executive Summary

21 CFR Part 11 was finalized in 1997 and has been the foundation of FDA’s expectations for electronic records and electronic signatures in the regulated industries ever since. AI was not on the regulatory horizon at the time, and the rule’s provisions don’t reference it. Yet Part 11 absolutely applies to AI systems that create, modify, maintain, archive, retrieve, or transmit electronic records used to satisfy GxP requirements. The application is not always obvious, and the AI-specific dimensions of compliance are still being worked out in practice.

This article explains what Part 11 still requires of AI deployments in regulated environments. We cover the scoping question (when does Part 11 apply), the audit trail requirements as they bear on AI outputs and model lifecycle events, the electronic signature implications for AI-generated content, system controls and access management for AI capabilities, data integrity considerations across the AI lifecycle, the genuine gray areas where industry practice is still converging, and a practical compliance checklist to apply to your AI portfolio. The goal is clarity about a rule that hasn’t gone away just because the technology has evolved.

85% of pharma AI deployments handle data that is in scope of 21 CFR Part 11 in some respect — yet only a fraction have explicitly assessed their Part 11 posture for the AI components, per Sakara Digital benchmarking across regulated AI implementations.1

Why Part 11 Still Applies

21 CFR Part 11 establishes the criteria under which the FDA considers electronic records and electronic signatures to be trustworthy, reliable, and equivalent to paper records and handwritten signatures. The rule applies to any electronic record that is created, modified, maintained, archived, retrieved, or transmitted under any records requirement set forth in FDA regulations. That definition is broad, and it doesn’t mention any specific technology. AI systems handling such records are squarely in scope, regardless of whether the rule was written with them in mind.

The 2003 scope and application guidance narrowed FDA’s enforcement priorities, but it did not narrow the underlying rule. Predicate rule requirements — the GxP rules that require certain records be maintained in the first place — remain fully applicable, and the Part 11 controls applicable to electronic versions of those records remain enforceable. AI systems that generate, transform, or store records subject to predicate rules must comply with the relevant Part 11 provisions.

What has evolved is the regulatory expectation about how Part 11 is applied. The rule’s principles — accurate records, traceable changes, secure access, validated systems — are interpreted in the context of current technology and current risks. For AI, that interpretation is still developing in some areas, but the core principles haven’t changed and shouldn’t be assumed to have softened.

What Part 11 doesn’t require — and what people often think it does

It’s worth stating what Part 11 does not require. It doesn’t require any specific technology. It doesn’t prohibit AI. It doesn’t impose validation rigor beyond what predicate rules require. The 2003 scope and application guidance explicitly emphasized a least-burdensome approach that reserved enforcement attention for areas where Part 11 controls are most critical to record reliability. AI deployments don’t face higher Part 11 hurdles than other technologies — they face the same hurdles, applied to a technology with some particular characteristics.

Scoping Part 11 to AI Systems

The first step in Part 11 compliance for AI is determining where, exactly, Part 11 applies. Not every AI deployment in a pharma organization is in scope. The scoping analysis examines which records the AI system creates, modifies, maintains, or transmits, and whether those records are subject to predicate rule requirements.

An AI tool that helps draft an internal email is not in scope. An AI tool that drafts content for an annual report submitted to FDA is in scope for the records that become regulatory submissions. An AI tool that supports clinical trial monitoring is in scope for the trial-related records it touches. The same underlying technology can be in scope or out of scope depending on the records flow.

This is why the scoping analysis is use-case-specific rather than technology-specific. The analysis traces the records into and out of the AI capability and identifies where predicate rule records exist in that flow. Where they exist, Part 11 controls apply to the AI components that handle them. The analysis should be documented and become part of the AI use case’s quality record.

AI Use Case | Part 11 Scope | Applicable Controls
Internal knowledge search | Generally out of scope | Standard IT controls; Part 11 not invoked
Drafting regulatory submission content | In scope for the regulatory record | Audit trail of AI involvement, signed approval workflow, content integrity controls
Pharmacovigilance signal detection | In scope for the safety records | Full Part 11 controls including audit trail, access control, validated system status
Manufacturing parameter recommendation | In scope for the batch record | Full Part 11 controls; signed approvals for parameter acceptance
Clinical document review assistance | In scope where review outputs become trial records | Audit trail, access controls, signature integrity
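As a minimal sketch of how a scoping determination might be captured as a structured quality record, assuming illustrative field names and a simple in-scope/out-of-scope outcome (nothing in this schema is prescribed by the rule; adapt it to your quality system):

```python
from dataclasses import dataclass
from enum import Enum

class Part11Scope(Enum):
    """Illustrative scope outcomes for an AI use case."""
    OUT_OF_SCOPE = "out_of_scope"
    IN_SCOPE = "in_scope"

@dataclass
class ScopingDetermination:
    """A documented Part 11 scoping analysis for one AI use case."""
    use_case: str
    records_in_flow: list[str]      # records the AI creates, modifies, or transmits
    predicate_rules: list[str]      # the records requirements that put them in scope
    scope: Part11Scope
    applicable_controls: list[str]  # audit trail, access control, signatures, etc.
    rationale: str                  # why this determination was reached
    approved_by: str                # quality approver of the determination

# Example: the manufacturing parameter use case from the table above.
determination = ScopingDetermination(
    use_case="Manufacturing parameter recommendation",
    records_in_flow=["batch record"],
    predicate_rules=["21 CFR 211.188"],
    scope=Part11Scope.IN_SCOPE,
    applicable_controls=["audit trail", "access control",
                         "signed parameter approval"],
    rationale="AI recommendations are incorporated into the batch record, "
              "a predicate rule record.",
    approved_by="QA reviewer",
)
```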

Audit Trail Requirements for AI

Part 11 requires audit trails that capture the date, time, and identity of users who create, modify, or delete electronic records. For AI systems, this requirement extends in several directions that aren’t immediately obvious from the rule text.

First, the AI itself is acting on records. When an AI generates or modifies content that becomes part of a regulated record, the AI’s role in that creation or modification needs to be captured in the audit trail. This is not because the AI is a “user” in the traditional sense, but because reconstruction of who or what produced the record requires knowing that AI was involved. Best practice is to log the AI involvement explicitly: which model was used, what version, what inputs were provided, what was generated.

Second, the human review of AI output needs to be audit-trail-captured. When a human reviews AI-generated content and accepts, modifies, or rejects it, that decision is a record event subject to audit trail requirements. The audit trail should capture the original AI output, the reviewer’s actions, and any modifications made — preserving the full record of how the final content came to exist.

Third, model lifecycle events that affect record-relevant behavior should be captured. Model updates, retraining events, and material configuration changes that could change the AI’s behavior on records-creation tasks should be recorded with date, time, and authorizing party. This provides the traceability needed to investigate any future record integrity questions.
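A minimal sketch of what capturing all three event classes might look like, assuming a simple append-only event schema (the field names, identifiers, and reference formats are illustrative, not a standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(event_type: str, actor: str, details: dict) -> dict:
    """Build one append-only audit trail entry.

    event_type distinguishes the three classes discussed above:
    'ai_generation', 'human_review', and 'model_lifecycle'.
    """
    entry = {
        "event_type": event_type,
        "actor": actor,  # user identity or system identity
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "details": details,
    }
    # A content hash supports later integrity checks on the entry itself.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# AI generation: model identity, version, inputs, and output are captured.
gen = audit_event("ai_generation", "svc-ai-drafting", {
    "model": "summarizer", "model_version": "2.3.1",
    "input_ref": "doc-4471", "output_ref": "draft-0092",
})

# Human review: the original output is preserved alongside the decision.
review = audit_event("human_review", "j.smith", {
    "ai_output_ref": "draft-0092", "decision": "accepted_with_edits",
    "modified_output_ref": "draft-0092-r1",
})

# Model lifecycle: the update is recorded with the authorizing party.
update = audit_event("model_lifecycle", "m.jones", {
    "change": "model_update", "from_version": "2.3.1",
    "to_version": "2.4.0", "change_control_ref": "CC-2025-118",
})
```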

Granularity considerations

The granularity of audit trail capture should be calibrated to the risk and the use case. For Tier 3 use cases — manufacturing parameters, pharmacovigilance signals, clinical decisions — the audit trail should be exhaustive: every input, every output, every reviewer action, every model state. For lower-tier use cases, more aggregated capture is appropriate. Over-capturing audit trail data has its own risks: storage cost, signal-to-noise issues during investigation, and operational complexity. Calibrate to need rather than capturing maximally by default.
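One way to make that calibration explicit and reviewable is a tiered capture policy. This sketch assumes a three-tier risk model like the one referenced above; the tier names and capture fields are illustrative:

```python
# Illustrative capture policy keyed to a three-tier risk model.
# Tier 3 (highest risk) captures everything; lower tiers capture less.
CAPTURE_POLICY = {
    3: {"inputs": "full", "outputs": "full",
        "reviewer_actions": "full", "model_state": "per_event"},
    2: {"inputs": "full", "outputs": "full",
        "reviewer_actions": "full", "model_state": "per_session"},
    1: {"inputs": "summary", "outputs": "full",
        "reviewer_actions": "decision_only", "model_state": "per_deployment"},
}

def capture_fields(tier: int) -> dict:
    """Return the audit capture configuration for a use case tier."""
    return CAPTURE_POLICY[tier]
```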

Retention and retrieval

Part 11 audit trails must be retained as long as the records they support. For AI systems, this can mean retaining audit trails over many years across multiple model versions, vendor changes, and infrastructure migrations. The retrieval requirement is equally important: the audit trail must be available when needed. Programs that store audit data in cost-optimized but slow-retrieval systems can find themselves unable to produce records in inspection timeframes. The architecture of audit trail storage should explicitly address both retention and retrieval, with the operational tests required to demonstrate that records can be produced when needed.
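A periodic retrieval drill is one way to demonstrate the retrieval side operationally. This sketch assumes an audit_store object exposing a retrieve() method and a time threshold your program defines for itself; both are illustrative, not regulatory requirements:

```python
import time

def retrieval_drill(audit_store, record_ids: list[str],
                    max_seconds: float) -> dict:
    """Sample audit records and confirm each can be produced in time.

    audit_store.retrieve(record_id) is an assumed interface to your
    archive; max_seconds reflects your own inspection-readiness
    commitment, not a number from the rule.
    """
    results = {}
    for record_id in record_ids:
        start = time.monotonic()
        record = audit_store.retrieve(record_id)
        elapsed = time.monotonic() - start
        results[record_id] = {
            "retrieved": record is not None,
            "seconds": elapsed,
            "within_threshold": elapsed <= max_seconds,
        }
    return results
```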

Electronic Signatures and AI Outputs

Part 11 requires that electronic signatures be unique to one individual and not reused by or reassigned to anyone else, and that signed records contain information associated with the signing: the printed name of the signer, the date and time of signing, and the meaning of the signature. AI introduces interesting questions here.

An AI doesn’t sign records. The AI is not a person and doesn’t authenticate as one. But records that include AI-generated content often require human signature for approval, release, or attestation — and those signatures need to operate correctly even though AI was involved in producing the content.

The practical interpretation is that the signing person is attesting to the content as it exists at the time of signature, regardless of how the content was generated. The signer takes responsibility for the content. This means the signing workflow needs to give the signer adequate visibility into the AI’s role: what was AI-generated, what was human-generated or modified, and what the basis is for the human’s confidence in the content. Workflows that obscure the AI’s role from the signer create signature integrity problems — the signer is attesting to content they don’t fully understand the provenance of.

Best practice for signed records that include AI content is to make the AI involvement transparent in the record itself. Some organizations include a “content was generated with AI assistance and reviewed by [name]” annotation. Others include a more detailed accounting of which sections were AI-generated and what the review consisted of. The right level of disclosure depends on the use case, but invisibility is rarely the right answer.
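As a hypothetical illustration of the annotation approach, a disclosure could be rendered into the record from provenance data already captured in the audit trail. The wording and fields here are assumptions to adapt, not a template the rule prescribes:

```python
from datetime import datetime, timezone

def provenance_annotation(ai_sections: list[str], model: str,
                          model_version: str, reviewer: str) -> str:
    """Render an AI-involvement disclosure for inclusion in a signed
    record. The level of detail should be calibrated to the use case,
    as discussed above."""
    return (
        f"Sections {', '.join(ai_sections)} were generated with AI "
        f"assistance ({model} v{model_version}) and reviewed by {reviewer} "
        f"on {datetime.now(timezone.utc).date().isoformat()}. The reviewer "
        f"takes responsibility for the content as signed."
    )

print(provenance_annotation(["3.1", "3.2"], "draft-assistant",
                            "1.8.0", "J. Smith"))
```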

Sakara Digital perspective: The most common Part 11 gap we encounter in AI deployments is signature workflows that don’t make the AI involvement visible to the signer. This is not a Part 11 violation in the rule-text sense, but it creates the conditions for findings — and more importantly, it creates the conditions for actual record integrity problems where signers attest to content whose provenance they don’t fully understand.

System Controls and Access Management

Part 11 requires limiting system access to authorized individuals, and using authority checks to ensure that only authorized individuals can use the system, electronically sign a record, alter a record, or perform the operation at hand. For AI systems, these requirements extend to several areas.

Access to the AI capability itself must be controlled. Who can submit prompts to the AI, who can configure it, who can approve its outputs — each of these is an access decision subject to Part 11 controls when the AI handles in-scope records.

Access to model lifecycle artifacts must be controlled. Who can update the model, change configurations, modify prompts, or alter the training pipeline — these are all authority-check moments. The control structure should mirror the GxP authority structure of the workflow the AI is supporting.

Service accounts and API access need to be governed under the same principles. Where the AI system communicates with other systems through service accounts, the service accounts themselves need access governance. This is often a gap in AI deployments because service accounts feel like infrastructure rather than people, but Part 11’s access controls apply to anything that touches in-scope records.
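A minimal sketch of an authority check that treats human users and service accounts under one governance model, with illustrative roles and operations:

```python
# Illustrative permission map. Service accounts are scoped and reviewed
# like user accounts, rather than treated as ungoverned infrastructure.
PERMISSIONS = {
    "analyst":        {"submit_prompt"},
    "reviewer":       {"submit_prompt", "approve_output"},
    "ml_engineer":    {"update_model", "change_config"},
    "svc-ai-gateway": {"submit_prompt"},  # service account, narrowly scoped
}

def check_authority(identity: str, operation: str) -> bool:
    """Authority check: the operation proceeds only if this identity is
    authorized for it. A real system would also log the check itself so
    that denials are traceable."""
    return operation in PERMISSIONS.get(identity, set())

assert check_authority("reviewer", "approve_output")
assert not check_authority("svc-ai-gateway", "update_model")
```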

Data Integrity in the AI Lifecycle

Data integrity — the ALCOA+ principles of attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available records — applies across the AI lifecycle, not just at the point of record creation. Several lifecycle stages have data integrity implications worth examining.

Training data integrity matters because it shapes the model’s behavior. While training data isn’t typically a regulated record itself, the integrity of training data affects whether the model performs as validated. Documentation of training data provenance, quality, and any post-hoc changes should be maintained.

Inference data integrity matters in the conventional Part 11 sense — the data flowing into the AI for prediction or generation must be accurate and unaltered, and the data flowing out must be captured faithfully. Standard data integrity controls apply.

Model state integrity is a less-discussed dimension. The model itself, as a piece of software, must be the version that was validated. Mechanisms for verifying model integrity — checksums, version pinning, deployment validation — should be in place to prevent model substitution or unintentional change.
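A minimal sketch of checksum-based verification, assuming the expected hash was recorded at validation; the artifact path and file layout are illustrative:

```python
import hashlib
from pathlib import Path

def verify_model_integrity(artifact_path: str, expected_sha256: str) -> bool:
    """Compare a deployed model artifact against the checksum recorded
    at validation time. The expected value would come from the
    validation record."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return digest == expected_sha256

# Intended use: run at deployment and on a periodic schedule. A mismatch
# indicates model substitution or unintentional change and should block use:
#
#   if not verify_model_integrity("models/summarizer-2.3.1.bin", recorded_hash):
#       raise RuntimeError("Deployed model does not match validated artifact")
```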

The ALCOA+ checklist applied to AI

Mapping ALCOA+ to AI specifically is a useful exercise:

  Attributable: can each record be traced to the responsible person and to the AI involvement?
  Legible: are AI outputs and the surrounding documentation clearly readable?
  Contemporaneous: is the AI involvement captured at the time it occurs, not reconstructed later?
  Original: is the original AI output preserved alongside any human modifications?
  Accurate: are the records of AI behavior and outputs faithful to what actually happened?
  Complete: does the record capture the full AI involvement, including inputs, outputs, and reviewer actions?
  Consistent: do AI-touched records align with the rest of the record system?
  Enduring: are AI outputs preserved over the required retention period?
  Available: can AI-related records be retrieved when needed for inspection or investigation?

The Genuine Gray Areas

Several aspects of Part 11 application to AI are genuinely unsettled, and pretending otherwise misleads compliance teams. Acknowledging the gray areas is part of mature compliance posture.

Generative content and originality. Part 11 talks about preserving “original” records. For AI-generated content, what counts as original is not entirely clear. The first generation? The reviewed and accepted version? Most organizations preserve both, but the rule doesn’t speak directly to the question.

Model interpretability and explainability. Part 11 doesn’t require that systems explain their behavior, but the practical requirement to defend records during inspection often pushes in that direction. For AI systems, explainability is sometimes limited by the underlying model architecture. The boundary between “AI output that the human reviewed” and “AI output the human couldn’t fully evaluate” is fuzzy and use-case-dependent.

Vendor-side records and access. When the AI is provided by a vendor, some of the lifecycle records (training data, model versioning, internal performance evaluation) may live with the vendor. Part 11’s “available” principle assumes the records can be produced for inspection. Vendor relationships need to be structured to support that — through contractual access provisions, escrow arrangements, or alternative documentation.

Continuous learning models. Models that update themselves continuously based on operational data are particularly difficult to fit within Part 11’s record-stability assumptions. Most pharma AI deployments avoid continuous learning for this reason, but the gray area exists for organizations that pursue it.

Multi-tenant cloud AI services. Where the AI is delivered through a multi-tenant cloud service shared across the vendor’s customer base, the boundaries of the regulated organization’s control are less clean than for dedicated software. Audit trail isolation, configuration controls, and access management may be implemented at the tenant level by the vendor rather than directly by the customer. Demonstrating Part 11 compliance in these architectures requires a combination of vendor attestation, contractual commitments, and independent verification — none of which fit neatly into the patterns developed for on-premise or single-tenant systems.

Aggregated and derived records. When AI generates outputs based on aggregated data from many regulated records — synthesizing patterns, summarizing trends, drawing conclusions — the relationship between the output and the underlying records can be complex. Whether the output itself is in scope, and what audit trail is required to reconstruct its derivation, depends on how the output is used. Organizations that develop AI capabilities for analytical or generative work across regulated data corpora encounter these questions and have to develop their own defensible answers, often in advance of clear regulatory direction.

A Practical Compliance Checklist

For any AI deployment that handles in-scope records, the following practical checklist surfaces the Part 11 essentials:

  1. Has the scoping analysis been completed and documented? Are the in-scope records and the applicable Part 11 controls explicitly identified?
  2. Are AI involvement events captured in the audit trail with appropriate granularity for the use case tier?
  3. Are human review and approval actions on AI output captured in the audit trail with reviewer identity, timestamp, and decision?
  4. Are model lifecycle events (updates, retraining, configuration changes) captured with authorizing party and rationale?
  5. Is signature visibility into AI involvement adequate for the signer to take responsibility for the content?
  6. Is access to the AI capability and its lifecycle artifacts controlled and reviewed periodically?
  7. Are service accounts and API access governed under the same principles as user access?
  8. Is training data provenance documented and integrity controls in place?
  9. Is model state integrity verified at deployment and on a periodic basis?
  10. Is the system’s overall validation status maintained through periodic review and change control as discussed in companion articles?

21 CFR Part 11 is older than most of the technology in current pharma data infrastructure, but its provisions remain the foundation of how regulated records are protected. AI systems that handle those records are subject to the same controls that protect any other technology. The work is in the translation — applying principles written for an earlier generation of systems to capabilities the rule’s authors didn’t anticipate. Done thoughtfully, the translation produces compliance posture that holds up in inspection and in practice. Done lazily, it produces gaps that surface in the worst possible circumstances.

References

Amie Harpe, Founder and Principal Consultant
Amie Harpe is a strategic consultant, IT leader, and founder of Sakara Digital, with 20+ years of experience delivering global quality, compliance, and digital transformation initiatives across pharma, biotech, medical device, and consumer health. She specializes in GxP compliance, AI governance and adoption, document management systems (including Veeva QMS), program management, and operational optimization — with a proven track record of leading complex, high-impact initiatives (often with budgets exceeding $40M) and managing cross-functional, multicultural teams. Through Sakara Digital, Amie helps organizations navigate digital transformation with clarity, flexibility, and purpose, delivering senior-level fractional consulting directly to clients and through strategic partnerships with consulting firms and software providers. She currently serves as Strategic Partner to IntuitionLabs on GxP compliance and AI-enabled transformation for pharmaceutical and life sciences clients. Amie is also the founder of Peacefully Proven (peacefullyproven.com), a wellness brand focused on intentional, peaceful living.

