Table of Contents
- The Enforcement Reality: Where We Are Now
- Risk Classification: Where Does Your AI Portfolio Land?
- The High-Risk Compliance Framework: Eight Core Obligations
- Integrating AI Act Requirements into Your Existing Quality System
- GPAI Models and Downstream Deployer Obligations
- Building AI Governance Infrastructure
- A Practical 90-Day Readiness Roadmap
- Conclusion: The Competitive Advantage of Early Compliance
Executive Summary
The EU AI Act is no longer a future concern — it is an active regulatory reality. With prohibited practices already in force since February 2025 and the critical August 2, 2026 deadline for high-risk AI systems fast approaching, life sciences organizations face an unprecedented convergence of AI governance requirements on top of existing MDR, IVDR, and FDA frameworks.
This article maps the enforcement timeline, identifies which AI tools in a typical life sciences portfolio qualify as high-risk, and provides a concrete 90-day readiness roadmap. The organizations that treat the EU AI Act as an integration challenge — not a parallel compliance program — will find themselves with a structural advantage over competitors scrambling at the deadline.
Key takeaway: The EU AI Act does not replace ISO 13485, MDR, or FDA requirements — it layers on top of them. The path to compliance is integration, not duplication.
The Enforcement Reality: Where We Are Now
The EU AI Act entered into force on August 1, 2024, marking the culmination of three years of legislative development and signaling a fundamental shift in how artificial intelligence will be governed across the European Union. Unlike previous technology regulations, where enforcement followed only after lengthy implementation gaps, the AI Act operates on a tiered timeline that has already activated the most serious prohibitions — and the window for high-risk compliance preparation is now measurably finite.
Understanding where enforcement stands today requires mapping the regulation’s phased rollout against the current date. As of April 2026, two of the four major activation milestones have already passed. Prohibited AI practices — those deemed unacceptable risks to fundamental rights, including real-time biometric surveillance in public spaces, social scoring systems, and manipulation of vulnerable individuals — became enforceable on February 2, 2025. Any life sciences organization operating AI systems that could be interpreted as falling into prohibited categories has been subject to enforcement since that date.
General Purpose AI (GPAI) model obligations followed on August 2, 2025. This milestone matters enormously for life sciences companies that deploy large language models, foundation models, or any AI system built on top of third-party GPAI providers. The compliance obligations for GPAI providers are distinct from those of downstream deployers — a distinction that carries significant operational and legal implications that we will address in detail.
The most consequential milestone for the life sciences sector is the August 2, 2026 deadline for high-risk AI systems. This is when the comprehensive compliance framework — covering conformity assessments, technical documentation, human oversight mechanisms, quality management requirements, and transparency obligations — becomes mandatory for the categories of AI systems most commonly deployed in life sciences settings. With less than four months remaining, organizations that have not yet begun structured compliance programs are running out of runway.
There is one additional timeline provision relevant to certain product configurations: AI systems embedded in products regulated under existing EU safety legislation — including medical devices under MDR and in vitro diagnostics under IVDR — have an extended transition until August 2, 2027. However, this extension applies only to specific embedded configurations and should not be interpreted as a general exemption for AI systems in the medical device space. Legal counsel familiar with both the AI Act and MDR/IVDR frameworks should confirm whether any given AI-enabled medical device qualifies for this extended timeline.
The Regulatory Enforcement Landscape in 2026
The EU AI Act does not operate in isolation. In June 2025, the FDA launched ELSA — an AI-powered inspection targeting and analysis platform — bringing autonomous regulatory intelligence to the US enforcement landscape as well. While ELSA operates within FDA jurisdiction rather than the EU regulatory framework, its existence signals a broader global trend: regulators are themselves deploying AI to improve inspection efficiency, increase targeting precision, and identify compliance gaps that human reviewers might miss. Life sciences organizations that delay compliance improvements should anticipate that both EU and US regulatory bodies are becoming more capable, not less, of identifying non-compliant AI deployments.
EU member states are in the process of designating national competent authorities and market surveillance bodies under the AI Act framework. The European AI Office, established within the European Commission in early 2024, provides oversight coordination for GPAI models and cross-border enforcement. National authorities will handle enforcement for most high-risk AI systems deployed within their jurisdictions. Penalty authority is substantial: violations of high-risk obligations carry fines of up to €15 million or 3% of global annual turnover, while violations of prohibited practices can reach €35 million or 7% of global annual turnover.
Risk Classification: Where Does Your AI Portfolio Land?
The EU AI Act’s risk-based architecture organizes AI systems into four categories: prohibited practices, high-risk systems, limited-risk systems subject to transparency obligations, and minimal-risk systems with no specific mandatory requirements. For life sciences organizations, the critical classification challenge is determining which AI tools, algorithms, and automated decision systems in their portfolios fall into the high-risk category under Annex III of the regulation.
Annex III enumerates specific high-risk categories across eight domains. Several of these are directly and unambiguously applicable to life sciences operations:
High-Risk Categories Directly Relevant to Life Sciences
Diagnostic Imaging Algorithms: AI systems used in the interpretation of medical imaging — CT scans, MRI, X-ray, pathology slides, ophthalmology images — that assist in or make clinical determinations are covered by the AI Act's high-risk provisions as safety components of medical devices. If your radiology AI tool provides decision support that influences a diagnosis, treatment recommendation, or clinical pathway, it is almost certainly high-risk under the AI Act regardless of its separate classification under MDR.
Clinical Decision Support Systems (CDSS): AI systems that analyze patient data to generate clinical recommendations, risk scores, or treatment pathway suggestions are high-risk under the AI Act. This includes sepsis prediction tools, deterioration scoring algorithms, medication dosing recommendations, and any AI-assisted diagnostic application. The high-risk classification applies whether the system is deployed in a hospital setting, integrated into an electronic health record, or delivered as a standalone clinical software product.
Patient Triage Tools: AI systems used in emergency settings or primary care intake to prioritize patients by urgency, likelihood of deterioration, or treatment priority fall under high-risk classification. The stakes of errors in triage — patients with acute conditions receiving delayed care — meet the threshold the regulation establishes for high-risk designation.
Remote Monitoring Platforms: AI systems that continuously analyze patient-generated data from wearables, home monitoring devices, or digital therapeutics to detect clinical events, flag deterioration, or generate care team alerts are high-risk. This category has expanded rapidly with the growth of digital health and decentralized clinical trials.
AI-Driven Clinical Trial Recruitment Software: AI systems that screen and select patients for clinical trial enrollment using automated eligibility assessment, medical record analysis, or predictive modeling to identify suitable candidates fall under high-risk classification. Given the centrality of trial design to drug approval and the risk of systematic bias in AI-driven enrollment, this classification is both appropriate and consequential.
The Classification Edge Cases
Beyond these clearly defined high-risk categories, life sciences organizations must carefully evaluate AI systems that might initially appear to fall outside the high-risk threshold:
Pharmacovigilance AI: AI systems that process adverse event reports, signal detection algorithms that identify safety signals in post-market surveillance data, and literature monitoring tools that flag new safety evidence operate in a space that sits at the intersection of high-risk AI classification and established pharmacovigilance regulatory requirements. The regulatory analysis here requires careful examination of the specific function of the AI system and the degree to which it influences safety-critical decisions without direct human review.
Regulatory Intelligence Tools: AI systems used to analyze regulatory submissions, generate submission content, or assess regulatory pathways present classification ambiguity. If these tools directly influence regulatory strategy decisions with significant downstream consequences, legal counsel should assess whether they approach high-risk thresholds.
Manufacturing Quality AI: AI systems used in batch release decisions, in-process controls, or quality attribute prediction in pharmaceutical manufacturing operate in a space governed by GMP frameworks but also potentially subject to AI Act classification as high-risk safety-critical AI in product safety contexts.
Classification Principle: When in doubt about whether an AI system crosses the high-risk threshold, apply the “safety-critical decision support” test: if the AI system’s output directly influences a decision that could result in harm to an individual’s health, safety, or fundamental rights, treat it as high-risk and build your compliance program accordingly. The cost of over-classification is higher compliance overhead; the cost of under-classification is regulatory exposure, potential market withdrawal, and substantial penalties.
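To make the screening test concrete, here is a minimal Python sketch of how an intake review might operationalize it. The field names and decision logic are illustrative simplifications for internal triage, not a legal determination; anything the screen flags still goes to qualified regulatory review.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Facts about an AI system gathered during intake review."""
    name: str
    influences_clinical_decision: bool   # diagnosis, triage, dosing, trial eligibility...
    output_reviewed_by_human: bool       # qualified human review before the decision takes effect
    could_harm_health_or_rights: bool    # plausible harm pathway if the output is wrong

def screen_high_risk(profile: AISystemProfile) -> str:
    """Apply the 'safety-critical decision support' screening test.

    This is a conservative first-pass screen, not a legal determination:
    anything flagged here still goes to legal and regulatory review.
    """
    if profile.influences_clinical_decision and profile.could_harm_health_or_rights:
        return "treat-as-high-risk"
    if profile.could_harm_health_or_rights and not profile.output_reviewed_by_human:
        return "treat-as-high-risk"
    return "refer-to-classification-review"   # never auto-clear a system as minimal-risk

# Example: a sepsis prediction tool embedded in the EHR
sepsis_tool = AISystemProfile("sepsis-predictor", True, True, True)
print(screen_high_risk(sepsis_tool))  # -> treat-as-high-risk
```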
The High-Risk Compliance Framework: Eight Core Obligations
For AI systems that qualify as high-risk under Annex III, the EU AI Act mandates compliance with eight core obligation domains. These are not aspirational guidelines — they are enforceable requirements that must be documented, implemented, and maintained as conditions of market access in the EU.
Obligation 1: Risk Management System
Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system throughout the entire lifecycle of the AI system. This system must identify and analyze foreseeable risks associated with the AI system’s intended purpose, evaluate the probability and severity of harm from those risks, adopt appropriate risk management measures, and test the residual risk against acceptable thresholds. For life sciences organizations already operating under ISO 14971 for medical device risk management or ICH Q9 for pharmaceutical quality risk management, this obligation creates a natural integration point — but the AI-specific risk profile (including model degradation, distributional shift, and adversarial inputs) requires additions to existing frameworks that most organizations have not yet made.
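As a sketch of what that extension might look like in practice, the following illustrative Python fragment layers AI-specific risk categories onto a conventional probability-times-severity register. The category names and 1-to-5 scales are assumptions for illustration; they are not prescribed by the Act or by ISO 14971.

```python
from dataclasses import dataclass
from enum import Enum

class AIRiskCategory(Enum):
    # AI-specific categories layered onto an ISO 14971-style register
    MODEL_DRIFT = "model performance degradation over time"
    DISTRIBUTIONAL_SHIFT = "deployment population differs from training population"
    ADVERSARIAL_INPUT = "deliberate manipulation of inputs or outputs"
    TRAINING_DATA_BIAS = "systematic bias in training data"

@dataclass
class AIRiskRecord:
    system: str
    category: AIRiskCategory
    probability: int          # 1 (remote) .. 5 (frequent), illustrative scale
    severity: int             # 1 (negligible) .. 5 (catastrophic), illustrative scale
    mitigation: str

    def risk_priority(self) -> int:
        """Simple probability x severity score, as in a conventional FMEA."""
        return self.probability * self.severity

record = AIRiskRecord(
    system="deterioration-scoring-v2",
    category=AIRiskCategory.DISTRIBUTIONAL_SHIFT,
    probability=3,
    severity=4,
    mitigation="quarterly re-validation against current patient population",
)
print(record.risk_priority())  # -> 12; compare against the documented acceptance threshold
```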
Obligation 2: Data and Data Governance
Training, validation, and testing datasets for high-risk AI systems must meet quality criteria covering relevance, representativeness, freedom from errors, and statistical properties that ensure the AI system performs as intended across the full range of intended users, purposes, and geographic or demographic contexts. Life sciences organizations must document their data provenance, describe the data collection and labeling methodology, and assess datasets for known or foreseeable biases. This obligation intersects directly with existing data integrity requirements under 21 CFR Part 11, EU GMP Annex 11, and ALCOA+ principles — but extends them specifically to AI training data in ways that most existing data governance frameworks do not address.
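A simple first-pass representativeness check illustrates the kind of dataset assessment this obligation implies. The subgroup names, counts, and 5% tolerance below are invented for illustration; a real bias assessment would also examine labeling methodology, outcome base rates, and missing-data patterns.

```python
def representativeness_gaps(
    training_counts: dict[str, int],
    reference_share: dict[str, float],
    tolerance: float = 0.05,
) -> list[str]:
    """Flag subgroups whose share of the training set deviates from the
    intended deployment population by more than `tolerance`."""
    total = sum(training_counts.values())
    gaps = []
    for group, expected in reference_share.items():
        observed = training_counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps.append(f"{group}: {observed:.1%} in training vs {expected:.1%} expected")
    return gaps

# Illustrative numbers only
print(representativeness_gaps(
    training_counts={"age_18_44": 5200, "age_45_64": 3100, "age_65_plus": 700},
    reference_share={"age_18_44": 0.35, "age_45_64": 0.35, "age_65_plus": 0.30},
))
# -> flags age_18_44 (over-represented) and age_65_plus (under-represented)
```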
Obligation 3: Technical Documentation
Providers must prepare and maintain comprehensive technical documentation before placing a high-risk AI system on the market or putting it into service. This documentation must enable competent authorities to assess compliance with the regulation and must cover the AI system’s general description, development process, training methodologies, performance metrics, technical specifications, instructions for use, and a description of the oversight measures implemented. For life sciences organizations accustomed to maintaining Design History Files (DHF) under QSR and Technical Documentation under MDR, the technical documentation obligation has meaningful parallels — but the AI-specific content requirements (model architecture, training data statistics, performance validation methodology) require new documentation disciplines.
Obligation 4: Record-Keeping and Logging
High-risk AI systems must be designed and built with automatic logging capabilities that ensure traceability throughout the system’s operational life. Logs must capture sufficient information to enable post-market monitoring, investigation of incidents, and regulatory audit. The retention requirements for AI system logs must be aligned with applicable sectoral regulations — which in the pharmaceutical and medical device context typically means multi-year retention that extends well beyond what most enterprise AI deployments maintain by default.
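A minimal sketch of what application-level decision logging might look like follows. The field names are assumptions, but the pattern (reference the input rather than copying it, capture the model version and the overseeing operator, timestamp every record for long retention) reflects the substance of the obligation.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")
logging.basicConfig(level=logging.INFO)

def log_ai_decision(system_id: str, model_version: str, input_ref: str,
                    output: str, confidence: float, operator_id: str) -> None:
    """Emit one traceability record per AI-influenced decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,      # pointer into the source system, not raw patient data
        "output": output,
        "confidence": confidence,
        "operator_id": operator_id,  # who was exercising human oversight
    }
    logger.info(json.dumps(record))

log_ai_decision("triage-assist-v3", "3.2.1", "encounter/8841",
                "priority=urgent", 0.91, "rn-204")
```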
Obligation 5: Transparency and Information to Deployers
Providers must supply deployers with instructions for use that cover the AI system’s intended purpose, capabilities, limitations, performance levels across different population groups, any foreseeable misuse, and the human oversight measures that must be in place. This obligation creates new requirements for AI vendors selling to life sciences organizations: their product documentation must explicitly address the compliance requirements that their life sciences customers face. Organizations should be actively negotiating these documentation requirements into their vendor contracts now.
Obligation 6: Human Oversight
High-risk AI systems must be designed and deployed in ways that allow human operators to effectively oversee the AI system during its operation. This means the system must be understandable to the degree necessary for operators to recognize and respond to anomalies, the system must allow operators to intervene or override, and the deployment context must ensure that qualified individuals are assigned oversight responsibilities. The human oversight obligation is not satisfied by a generic “human in the loop” assertion — it requires documented oversight protocols, trained operators, and defined intervention procedures.
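One way to make an oversight protocol concrete is a documented disposition gate that routes low-confidence or anomalous outputs to a human review queue. The sketch below is illustrative; the confidence threshold and queue mechanics are assumptions that each deployment would define and validate.

```python
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    """Route AI outputs through a documented human-review decision point."""
    confidence_floor: float = 0.85           # below this, a human must confirm
    review_queue: list = field(default_factory=list)

    def dispose(self, output: str, confidence: float, anomaly: bool) -> str:
        if anomaly or confidence < self.confidence_floor:
            self.review_queue.append((output, confidence))
            return "held-for-human-review"
        return "released-with-oversight-logged"  # oversight is logged, never skipped

gate = OversightGate()
print(gate.dispose("dose=2.5mg", confidence=0.62, anomaly=False))  # -> held-for-human-review
print(gate.dispose("dose=5.0mg", confidence=0.97, anomaly=False))  # -> released-with-oversight-logged
```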
Obligation 7: Accuracy, Robustness, and Cybersecurity
High-risk AI systems must achieve appropriate levels of accuracy for their intended purpose, must demonstrate robustness against foreseeable errors, faults, and inconsistencies, and must be resilient against attempts by third parties to alter their use, outputs, or performance through adversarial manipulation. The cybersecurity requirement is particularly significant: many life sciences organizations have robust information security programs that have not yet been extended to specifically assess adversarial vulnerabilities in AI systems.
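A crude perturbation stability check, sketched below, illustrates the kind of robustness evidence this obligation calls for. It is a proxy only (structured adversarial testing goes much further), and the stub model and noise level are invented for illustration.

```python
import random

def perturbation_robustness(model, inputs: list[list[float]], noise: float = 0.01,
                            trials: int = 20) -> float:
    """Fraction of predictions that stay stable under small random input
    perturbations; a rough proxy, not a substitute for adversarial testing."""
    stable = 0
    total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            noisy = [v + random.gauss(0.0, noise) for v in x]
            total += 1
            if model(noisy) == baseline:
                stable += 1
    return stable / total

# Stub "model": a thresholded score, illustrative only
model = lambda x: "high-risk" if sum(x) > 1.0 else "low-risk"
print(perturbation_robustness(model, [[0.4, 0.7], [0.1, 0.2]]))  # -> close to 1.0
```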
Obligation 8: Quality Management System
Providers of high-risk AI systems must put in place a quality management system that encompasses all aspects of compliance with the regulation, including policies, procedures, and documentation requirements. This is the obligation with the most direct and immediate integration opportunity for life sciences organizations: the quality management system required by the EU AI Act maps closely to the QMS frameworks already mandated under ISO 13485, 21 CFR Part 820, and EU GMP. The challenge is extending existing QMS frameworks to encompass AI-specific requirements — not building a parallel quality system from scratch.
Integrating AI Act Requirements into Your Existing Quality System
One of the most important strategic insights for life sciences organizations approaching EU AI Act compliance is this: you do not need to build a new compliance program. You need to extend the compliance programs you already have. The AI Act’s eight obligations for high-risk systems map with meaningful precision to the existing quality and regulatory infrastructure that sophisticated life sciences organizations maintain. The integration approach is both more efficient and more sustainable than a parallel-track compliance architecture.
| EU AI Act Obligation | Existing QMS / Regulatory Control | Integration Gap to Address |
|---|---|---|
| Risk Management System | ISO 14971 / ICH Q9 risk assessments; FMEA processes | Add AI-specific risk categories: model drift, distributional shift, adversarial inputs, training data bias |
| Data and Data Governance | Data integrity program; 21 CFR Part 11 / Annex 11; ALCOA+ documentation | Extend to cover training data provenance, labeling methodology, bias assessment, validation set construction |
| Technical Documentation | Design History File (DHF); Technical File (MDR); Device Master Record | Add model architecture documentation, training methodology, performance metrics by subgroup, validation protocol |
| Record-Keeping and Logging | Batch records; audit trails; electronic record retention policies | Specify AI decision logging requirements; confirm retention alignment with regulatory minimum periods |
| Transparency / Instructions for Use | IFU documentation; labeling controls; supplier qualification documentation | Update IFU templates to include AI capability/limitation disclosure; add AI oversight requirements to vendor specs |
| Human Oversight | Procedural controls; batch release SOP; operator training records | Define AI-specific oversight protocols; formalize operator intervention procedures; document override mechanisms |
| Accuracy, Robustness, Cybersecurity | Validation protocols (IQ/OQ/PQ); change control; information security management | Add adversarial robustness testing; extend cybersecurity assessment to AI-specific attack vectors |
| Quality Management System | ISO 13485 QMS; 21 CFR Part 820 QSR; EU GMP quality system | Extend QMS scope statement to include AI systems; add AI-specific procedures to existing QMS structure |
The integration approach outlined above has several practical advantages over building a parallel compliance program. First, it leverages existing regulatory relationships: auditors and inspectors from national competent authorities will be assessing EU AI Act compliance against a background that includes your existing regulatory history. An organization that presents AI compliance within a recognizable, well-maintained QMS framework will communicate competence and maturity far more effectively than one presenting a hastily assembled standalone AI compliance program.
Second, the integration approach reduces documentation burden. Life sciences quality organizations already maintain extensive procedure libraries, validation documentation, and audit trails. Extending these systems to include AI-specific requirements is far more efficient than maintaining separate documentation ecosystems.
Sakara Digital Perspective: The organizations we advise that make the fastest progress on EU AI Act compliance are those that assign a QA leader — not an IT leader — as the responsible owner for AI Act integration. The regulatory vocabulary, documentation discipline, and audit readiness culture that experienced QA professionals bring to AI Act compliance are directly transferable. The technical content can be learned; the quality culture is harder to build from scratch.
GPAI Models and Downstream Deployer Obligations
The EU AI Act creates a meaningful legal distinction between providers of General Purpose AI models and the downstream deployers who integrate those models into their own products and services. Understanding this distinction — and its operational implications — is one of the most pressing practical tasks for life sciences organizations that use commercially available foundation models, LLMs, or AI platforms built on GPAI foundations.
What Qualifies as GPAI
A General Purpose AI model is defined under the AI Act as an AI model trained with large amounts of data using self-supervision at scale that displays significant generality and is capable of performing a wide range of distinct tasks. This definition encompasses the major commercial LLMs (GPT-4o, Claude, Gemini), as well as multimodal foundation models and large-scale vision models. Most commercially deployed AI platforms used in life sciences — including AI writing assistants, document analysis tools, and many regulatory intelligence platforms — are built on GPAI foundations.
What GPAI Providers Are Required to Do
As of August 2, 2025, GPAI providers are required to maintain technical documentation of their models, make information available to downstream deployers regarding the model’s capabilities, limitations, training data characteristics, and known risks, and comply with EU copyright law in their training data practices. GPAI providers with systemic risk (generally defined by training compute exceeding 10^25 FLOPs) face additional obligations including adversarial testing, incident reporting, and cybersecurity measures.
Critical Warning — The Downstream Deployer Trap: GPAI provider compliance does NOT cover your use case. When OpenAI, Anthropic, Google, or any other GPAI provider publishes their EU AI Act compliance documentation, that compliance applies to their model — not to your application. If you have built a clinical decision support tool, patient risk scoring algorithm, or any other high-risk AI application on top of a GPAI foundation model, you are the provider of a high-risk AI system and you bear the full compliance obligations of that role. The GPAI provider’s compliance is necessary but not sufficient for your regulatory position.
This downstream deployer gap is one of the most frequently misunderstood aspects of EU AI Act compliance in the life sciences sector. Organizations that have relied on their AI vendors’ compliance representations to satisfy their own regulatory obligations are exposed. The vendor’s compliance covers the foundation model; your compliance must cover the system you have built on top of it.
Practical Implications for Life Sciences Deployers
If your organization deploys AI tools built on GPAI foundations for any purpose that qualifies as high-risk under Annex III, you must:
- Conduct your own conformity assessment for the complete AI system as deployed in your environment
- Maintain your own technical documentation covering the complete system — including how the GPAI component is configured, fine-tuned, prompted, or constrained for your use case
- Establish your own human oversight procedures specific to your deployment context
- Implement your own logging and monitoring at the application layer (a minimal sketch follows this list)
- Register the AI system with EU authorities if required under your national competent authority’s guidance
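To illustrate what application-layer compliance might look like when a high-risk tool wraps a foundation model, here is a hypothetical Python sketch. The `call_foundation_model` function is a placeholder for whatever vendor client library you actually use; the constraint text, field names, and forced human-review disposition are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("gpai_app_layer")
logging.basicConfig(level=logging.INFO)

# Placeholder: in practice this calls your GPAI vendor's client library.
def call_foundation_model(prompt: str) -> str:
    return "stub response"

SYSTEM_CONSTRAINTS = (
    "You are assisting with clinical-trial eligibility screening. "
    "Flag uncertainty explicitly; never state a final eligibility decision."
)

def screened_completion(prompt: str, case_ref: str) -> dict:
    """Application-layer wrapper: constrain the GPAI call, log the exchange,
    and force downstream human review — the deployer-turned-provider's
    obligations, regardless of the GPAI vendor's own compliance."""
    response = call_foundation_model(f"{SYSTEM_CONSTRAINTS}\n\n{prompt}")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_ref": case_ref,   # pointer to the source record, not raw patient data
        "prompt": prompt,
        "response": response,
        "disposition": "pending-human-review",  # never auto-finalized
    }
    logger.info(json.dumps(record))
    return record
```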
Contracts with GPAI providers should be reviewed and updated to ensure they provide the technical information, access to model documentation, and notification of material changes that you need to maintain your own compliance obligations. This is an active procurement and legal task, not a passive one.
Building AI Governance Infrastructure
Compliance with the EU AI Act’s specific obligations is a minimum standard. Organizations that position themselves as trusted partners in life sciences digital transformation — whether as technology vendors, consultants, or pharmaceutical innovators — need to build AI governance infrastructure that demonstrates genuine organizational commitment to responsible AI, not just checkbox compliance.
AI governance maturity in life sciences can be understood as a five-level progression: Level 1 (ad hoc: AI tools adopted without formal governance), Level 2 (aware: partial inventories and informal controls), Level 3 (structured: a complete AI registry, documented classification process, and defined oversight roles), Level 4 (managed: governance performance measured and audited), and Level 5 (optimizing: continuous improvement embedded in the quality culture).
Most life sciences organizations entering structured EU AI Act compliance programs today will find themselves at Level 1 or Level 2. The goal of a 90-day readiness program is to reach Level 3 — structured governance — by the August 2026 deadline, with a roadmap to reach Level 4 within the following six to twelve months.
Core Governance Infrastructure Components
AI System Registry: A centralized inventory of all AI systems in use across the organization, with classification status (prohibited, high-risk, limited-risk, minimal-risk), deployment context, responsible owner, and compliance status tracking. This registry is the operational foundation for all other governance activities. It cannot be delegated entirely to IT; it requires active participation from quality, regulatory affairs, and business functions.
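A minimal sketch of what a registry record might look like appears below. The field names and status values are illustrative assumptions, and in practice the registry would live in a validated system rather than application code.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"
    UNCLASSIFIED = "unclassified"   # default until the classification process runs

@dataclass
class RegistryEntry:
    system_name: str
    vendor: str
    deployment_context: str
    responsible_owner: str          # a named person, not a department
    risk_class: RiskClass = RiskClass.UNCLASSIFIED
    compliance_status: str = "not-assessed"

registry = [
    RegistryEntry("radiology-assist", "VendorX", "diagnostic imaging reads",
                  "j.smith (QA)", RiskClass.HIGH_RISK, "gap-assessment-complete"),
    RegistryEntry("meeting-notes-bot", "VendorY", "internal meeting summaries",
                  "t.lee (IT)", RiskClass.MINIMAL_RISK, "no-action-required"),
]
high_risk = [e.system_name for e in registry if e.risk_class is RiskClass.HIGH_RISK]
print(high_risk)  # -> ['radiology-assist']
```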
AI Classification Process: A documented, repeatable process for evaluating new AI systems against EU AI Act risk categories before deployment. This process should be integrated into the existing change control and software validation workflows. Every new AI tool, platform, or algorithmic system should pass through classification before organizational deployment.
AI Governance Policy: A board-level or executive-level policy statement establishing the organization’s commitments with respect to AI governance, including EU AI Act compliance, human oversight principles, data ethics standards, and accountability structures. This policy provides the authority for the operational governance program.
Responsible AI Owner Roles: Defined accountability for AI governance across the organization, including an AI Compliance Lead responsible for regulatory obligations, function-level AI Owners responsible for the AI systems in their areas, and a cross-functional AI Governance Committee with representation from Quality, Regulatory Affairs, Legal, IT, and relevant business functions.
Incident Response and Post-Market Monitoring: Documented procedures for detecting, reporting, and investigating AI system incidents — including performance degradation, unexpected outputs, and serious incidents with patient safety implications. These procedures must be integrated with existing pharmacovigilance and complaint handling systems for medical device and pharmaceutical AI deployments.
A Practical 90-Day Readiness Roadmap
Given that the August 2, 2026 high-risk compliance deadline is now less than four months away, organizations must move from planning to execution immediately. The following 90-day roadmap provides a structured approach to achieving baseline compliance readiness — defined as all high-risk AI systems having a documented compliance plan and the organizational infrastructure to execute it.
Days 1–21: Inventory and Classification
Conduct a comprehensive AI system inventory across all business functions — clinical, regulatory affairs, quality, manufacturing, commercial, and research. For each AI system identified, document the vendor, deployment context, data inputs, decision outputs, and primary user population. Apply the EU AI Act risk classification framework to each system. Flag all systems that are potentially high-risk for immediate escalation to legal and regulatory review. Identify systems that may qualify for the embedded products extended transition (August 2027) and confirm eligibility with legal counsel. The output of this phase is a complete, classified AI system registry with compliance status assessment for each system.
Days 22–45: Gap Assessment and Prioritization
For each high-risk AI system identified in Phase 1, conduct a structured gap assessment against all eight EU AI Act compliance obligations. Map existing QMS controls, documentation, and procedures to each obligation. Document specific gaps — areas where the existing quality system does not fully address the AI Act requirement without modification or extension. Prioritize gaps by compliance risk and remediation effort. Develop a remediation roadmap for each high-risk AI system, with owners, timelines, and resource requirements. Engage AI vendors for high-risk systems to obtain required technical documentation and assess contractual provisions for adequacy. The output of this phase is a prioritized gap register and remediation plan.
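The following Python fragment sketches one way to structure and prioritize the gap register. The scoring scales and sort logic (highest compliance risk first, then lowest effort, so quick defensible wins land early) are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    system: str
    obligation: str        # one of the eight AI Act obligation domains
    description: str
    compliance_risk: int   # 1 (low) .. 5 (high), illustrative scale
    effort: int            # 1 (days) .. 5 (quarters), illustrative scale

def prioritize(gaps: list[Gap]) -> list[Gap]:
    """Highest compliance risk first; among equals, lowest effort first."""
    return sorted(gaps, key=lambda g: (-g.compliance_risk, g.effort))

gaps = [
    Gap("triage-assist-v3", "Record-Keeping and Logging",
        "decision logs retained 90 days; sectoral minimum is multi-year", 5, 2),
    Gap("triage-assist-v3", "Human Oversight",
        "no documented override procedure", 4, 1),
    Gap("recruit-screen", "Data Governance",
        "training data provenance undocumented", 5, 4),
]
for g in prioritize(gaps):
    print(g.system, "|", g.obligation)
```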
Days 46–75: Remediation Execution and Documentation
Execute the highest-priority remediation items from Phase 2. This includes extending QMS procedures to cover AI-specific requirements, developing or updating technical documentation for high-risk AI systems, establishing human oversight protocols for each high-risk system, implementing logging and monitoring capabilities where gaps were identified, and updating training records and personnel competency documentation for AI system operators. Parallel-track activities in this phase include establishing the AI system registry as a living governance tool, drafting or finalizing the AI governance policy for executive approval, and initiating vendor contract reviews for GPAI-dependent high-risk systems. The output of this phase is a materially remediated compliance posture for all in-scope high-risk AI systems.
Days 76–90: Validation, Testing, and Readiness Confirmation
Conduct internal audit of compliance program completeness for all high-risk AI systems. Test logging and monitoring capabilities to confirm they capture required information. Test human oversight mechanisms to confirm operators can effectively exercise oversight, detect anomalies, and intervene as required. Review all technical documentation for completeness against EU AI Act Annex IV requirements. Complete conformity assessments for high-risk systems — either through internal review (where permitted) or through engagement of a notified body. Document the conformity assessment outcomes and prepare the EU Declaration of Conformity where required. Brief executive leadership on compliance posture, remaining gaps, and ongoing monitoring requirements. The output of this phase is a confirmed compliance readiness posture for the August 2026 deadline, with documented evidence supporting each compliance claim.
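As an illustration of the completeness review, the sketch below checks a documentation manifest against a set of required categories. The category names are our paraphrase of Annex IV-style content, not the annex text itself; always verify against the regulation.

```python
# Illustrative paraphrase of Annex IV-style documentation categories;
# always verify against the regulation's actual text.
REQUIRED_DOCS = {
    "general_description", "development_process", "training_methodology",
    "performance_metrics", "risk_management_file", "instructions_for_use",
    "human_oversight_measures", "post_market_monitoring_plan",
}

def documentation_gaps(present: set[str]) -> set[str]:
    """Return required documentation categories with no evidence on file."""
    return REQUIRED_DOCS - present

on_file = {"general_description", "training_methodology", "instructions_for_use"}
print(sorted(documentation_gaps(on_file)))
```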
Sakara Digital Perspective on 90-Day Execution: The organizations most likely to succeed in this compressed timeline are those that resist the urge to build a perfect compliance program and instead focus on documented, defensible progress. Regulators understand that the EU AI Act represents a new compliance domain. An organization that presents a well-structured, evidence-based compliance effort — even if some gaps remain open with documented remediation plans — is in a far better position than one that has done nothing. Documentation of the compliance journey matters as much as the compliance outcome.
Conclusion: The Competitive Advantage of Early Compliance
The EU AI Act creates a regulatory floor — a minimum standard of AI governance that all market participants operating in the EU must meet. But for sophisticated life sciences organizations, it also creates an opportunity. Organizations that build genuine AI governance infrastructure — not just paper compliance — will find that this infrastructure accelerates their ability to deploy and scale AI responsibly, builds trust with regulators and clinical partners, and positions them as preferred partners for AI-enabled innovation.
The regulatory environment for AI in life sciences is only going to intensify. The EU AI Act represents the most comprehensive framework enacted to date, but it will not be the last. FDA’s ongoing work on AI/ML-based Software as a Medical Device (SaMD), the global harmonization of AI governance through bodies like the ICH and IMDRF, and the proliferation of national AI governance frameworks across major pharmaceutical markets all point toward an environment where AI governance competence is a core organizational capability, not an occasional compliance exercise.
The August 2, 2026 deadline is the immediate forcing function. But the organizations that will thrive in the AI-enabled future of life sciences are those that treat this deadline as the beginning of a governance capability build, not the finish line. The work done now to inventory AI systems, extend quality management frameworks, build human oversight protocols, and establish governance infrastructure will compound in value over time — creating competitive advantages that are difficult for late movers to replicate.
The question is not whether the EU AI Act applies to your organization’s AI portfolio. For any organization with meaningful AI deployments in EU-facing life sciences operations, it does. The question is whether your organization will lead on compliance or follow. The organizations that move now have the time and resources to do this well. Those that wait until Q3 2026 will be executing under crisis conditions, making rushed decisions, and spending far more for far less.
References
- European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (EU AI Act). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
- ZS Associates. (2025). EU AI Act Implications for Life Sciences Organizations. ZS Life Sciences Regulatory Intelligence. https://www.zs.com
- USDM Life Sciences. (2025). EU AI Act Compliance Framework for Pharmaceutical and Medical Device Companies. USDM White Paper Series. https://www.usdm.com
- Clifford Chance. (2024). The EU AI Act: A Comprehensive Guide for Life Sciences Companies. Clifford Chance LLP Client Briefing. https://www.cliffordchance.com
- Iliomad Health Data. (2025). EU AI Act High-Risk Classification: Life Sciences Sector Analysis. Life Sciences AI Regulatory Review. https://www.iliomadhealthdata.com
- European AI Office. (2025). GPAI Code of Practice: First Draft Guidelines for General Purpose AI Model Providers. European Commission AI Office. https://digital-strategy.ec.europa.eu/en/policies/ai-office
- International Organization for Standardization. (2016). ISO 13485:2016 — Medical Devices: Quality Management Systems. ISO. https://www.iso.org/standard/59752.html
- European Commission. (2017). Regulation (EU) 2017/745 on Medical Devices (MDR). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32017R0745
- FDA. (2025). ELSA AI Platform Launch: Enhancing Inspection Targeting Through Artificial Intelligence. FDA News and Events. https://www.fda.gov
- McKinsey & Company. (2025). AI Governance in Life Sciences: From Compliance to Competitive Advantage. McKinsey Center for Government. https://www.mckinsey.com
- IMDRF. (2024). Artificial Intelligence/Machine Learning-Based Software as a Medical Device: Action Plan. International Medical Device Regulators Forum. https://www.imdrf.org
- Deloitte. (2025). EU AI Act Readiness Assessment for Pharmaceutical Companies: Findings from 2025 Compliance Survey. Deloitte Life Sciences Regulatory Practice. https://www.deloitte.com