Why Most Pharma AI Investments Underperform (And How to Change That)

Executive Summary

Most pharma AI investments underperform their original business case. Studies put the failure rate between 60% and 80%, depending on how “failure” is defined. The technology works in most cases. The organization does not absorb the technology in the way the business case assumed.

This article identifies the five most common failure patterns in pharma AI investments — patterns that recur across organizations of every size and at every level of AI maturity. We cover the diagnostic that leaders can run on their own portfolios, the correctives that actually move the needle, and what leadership has to do differently to position the organization in the top quartile of AI performers.

60-80% of pharma AI investments fail to meet their original business case targets, per cross-industry analyses. The failure mode is rarely technological — it’s organizational, governance-related, or rooted in flawed initial economics.1

The Scope of the Problem

Speak to any pharma executive who has been close to an AI program for more than two years and you’ll hear some version of the same story. The pilots showed promise. The early adopters were enthusiastic. The economic projections were optimistic. Eighteen months later, the use cases are running at half the expected adoption, the cost is higher than projected, and the original sponsors have gone quiet.

The pattern is so common that it’s worth treating as the baseline expectation rather than the exception. Pharma AI underperforms not because of a particular failure of skill or strategy at any given organization, but because of structural conditions that almost all pharma organizations share. Recognizing this is the first step to designing a program that beats the baseline.

The Five Recurring Failure Patterns

Pattern 1: The economic case never accounted for the full operational cost

The most common failure. Implementation costs were budgeted; validation, change management, integration, and ongoing operational overhead were not — or were dramatically underestimated. The use case lands but its true total cost of ownership erodes the economic case before it has a chance to deliver.
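The erosion is easy to see with simple arithmetic. The sketch below uses entirely hypothetical figures (every number is an assumption for illustration, not data from any engagement) to show how costs left out of the original case can turn a headline ROI into a marginal one.

```python
# Illustrative only: hypothetical figures showing how unbudgeted
# operational costs erode an AI business case.

def roi(annual_benefit, annual_cost):
    """Annual ROI as net benefit divided by cost."""
    return (annual_benefit - annual_cost) / annual_cost

annual_benefit = 2_000_000   # projected value delivered per year
budgeted_cost = 800_000      # implementation + licensing, as budgeted

# Costs commonly missing from the original case (hypothetical values):
validation = 250_000         # validation and documentation
change_management = 200_000  # training and adoption support
integration = 150_000        # interfaces to existing systems
operations = 300_000         # monitoring, revalidation, ongoing support

true_cost = (budgeted_cost + validation + change_management
             + integration + operations)

print(f"ROI as budgeted: {roi(annual_benefit, budgeted_cost):.0%}")  # 150%
print(f"ROI at true TCO: {roi(annual_benefit, true_cost):.0%}")      # 18%
```

The use case still "works" in both scenarios; only the denominator changed.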

Pattern 2: Adoption stalled in a key user segment

Pilot users were enthusiastic. Production users — the broader population the system was designed for — were skeptical, untrained, or culturally resistant. The system runs but is not actually used at the level the business case assumed. The economic value never materializes.

Pattern 3: The governance burden was underestimated

The use case lands in production. Validation requirements escalate. Quality requires documentation that wasn’t budgeted. Each model update triggers a revalidation event. The administrative cost of running the AI in a regulated environment exceeds projections, often by multiples.

Pattern 4: The vendor relationship became fragile

The vendor selected during the pilot does not scale, raises pricing aggressively, or pivots their product in ways that break the use case. The organization spends a year managing vendor risk that wasn’t part of the original investment thesis.

Pattern 5: The original sponsor moved on

The executive sponsor of the AI program leaves the organization or changes roles. The use case loses its political champion. Funding gets harder to defend. The program drifts even though the use case itself is still working.

What Distinguishes the High Performers

The pharma organizations that are getting AI right share a small number of disciplined practices. None are technological. All are organizational.

  • Honest economics from day one. They model total cost of ownership rigorously. They stress-test benefits. They publish realistic time-to-value curves rather than optimistic ones.
  • Investment in the capability foundation. They fund AI governance, data infrastructure, change management, and operating model design as first-class line items, not afterthoughts.
  • Disciplined portfolio management. They run AI investments as a portfolio with quarterly reviews, gate decisions, and a willingness to sunset use cases that don’t perform.
  • User-centered design at scale. They invest seriously in the user experience and adoption support, recognizing that adoption is the single biggest variable in realized ROI.
  • Vendor relationships managed as strategic risk. They monitor vendor health, have contingency plans, and avoid critical dependencies on any single provider.

Sakara Digital perspective: The high performers we’ve worked with are not characterized by superior technology or better vendor selection. They are characterized by disciplined organizational practices that turn AI from a technology project into a sustained capability. The discipline is replicable; the technology is increasingly commoditized.

A Diagnostic for Your Current AI Portfolio

Run the following diagnostic on your current AI portfolio. Each item is a yes or no question; “no” or “uncertain” indicates a remediation area.

  1. Can you produce a current ROI calculation for each use case, refreshed within the last 90 days?
  2. Have you documented the total cost of ownership for each use case, including validation, change management, and ongoing governance?
  3. Do you have an explicit user adoption metric for each use case, with a target and a current value?
  4. Is each use case mapped to a current executive sponsor with continuity of accountability?
  5. Do you have documented sunset criteria for each use case, that is, the conditions under which it would be retired?
  6. Have you stress-tested vendor concentration risk in your AI portfolio?
  7. Is your AI governance documented at a level that would survive a regulatory inspection?

Organizations that score 5 or more “yes” answers are typically in the top quartile of pharma AI maturity. Organizations scoring 3 or fewer have substantial remediation work ahead.
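The scoring above can be sketched as a short script for teams that want to run the diagnostic across many use cases. The question wording is condensed from the list above; the "middle of the pack" label for a score of 4 is an assumption, since the article only defines the top-quartile and remediation bands.

```python
# Minimal sketch of the seven-question portfolio diagnostic.
# Answers are "yes", "no", or "uncertain"; only "yes" scores a point,
# since "no" and "uncertain" both indicate a remediation area.

DIAGNOSTIC = [
    "ROI calculation refreshed within the last 90 days?",
    "Total cost of ownership documented (validation, change mgmt, governance)?",
    "Explicit user adoption metric with target and current value?",
    "Mapped to a current executive sponsor?",
    "Documented sunset criteria?",
    "Vendor concentration risk stress-tested?",
    "Governance documented to inspection standard?",
]

def score(answers):
    """Count 'yes' answers across the seven questions."""
    return sum(1 for a in answers if a.strip().lower() == "yes")

def maturity(answers):
    s = score(answers)
    if s >= 5:
        return "top quartile"
    if s <= 3:
        return "substantial remediation ahead"
    return "middle of the pack"  # assumption: band not named in the article

answers = ["yes", "yes", "uncertain", "yes", "no", "yes", "yes"]
print(maturity(answers))  # 5 "yes" answers -> "top quartile"
```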

The Correctives That Actually Work

For organizations that recognize themselves in the failure patterns, the correctives are concrete and incremental. None require a strategic reset; all of them require executive attention.

  • Refresh the economic case. Take each currently running AI use case and rebuild the ROI math with actual data. Be honest about what’s working and what isn’t.
  • Make adoption a tracked metric. Add adoption to the executive dashboard. Make it visible. Make it a leading indicator.
  • Fund the foundation. If AI governance, data infrastructure, or change management capability is underfunded, the use cases will continue to underperform. The fix is upstream.
  • Run portfolio reviews. Quarterly. With actual data. With explicit gate decisions. Be willing to sunset use cases that don’t merit continued investment.
  • Reduce vendor concentration. Where any single vendor underwrites multiple critical use cases, build optionality.

What Leadership Has to Do Differently

The single most important leadership behavior is the willingness to look honestly at AI portfolio performance and act on what it shows. Most pharma AI portfolios contain at least one use case that the organization quietly knows is underperforming but hasn’t addressed, because the political cost feels higher than the financial cost.

Reversing that pattern is straightforward in concept and hard in practice. It requires the senior team to treat AI underperformance the way they treat any other capital investment underperformance: with concern, with discipline, and with the assumption that course correction is normal and expected.

Organizations that develop this muscle outperform their peers. Those that don’t end up with the same set of underperforming AI use cases everyone else has, and lose ground to the rare pharma competitor that is actually getting AI right.

Amie Harpe Founder and Principal Consultant
Amie Harpe is a strategic consultant, IT leader, and founder of Sakara Digital, with 20+ years of experience delivering global quality, compliance, and digital transformation initiatives across pharma, biotech, medical device, and consumer health. She specializes in GxP compliance, AI governance and adoption, document management systems (including Veeva QMS), program management, and operational optimization — with a proven track record of leading complex, high-impact initiatives (often with budgets exceeding $40M) and managing cross-functional, multicultural teams. Through Sakara Digital, Amie helps organizations navigate digital transformation with clarity, flexibility, and purpose, delivering senior-level fractional consulting directly to clients and through strategic partnerships with consulting firms and software providers. She currently serves as Strategic Partner to IntuitionLabs on GxP compliance and AI-enabled transformation for pharmaceutical and life sciences clients. Amie is also the founder of Peacefully Proven (peacefullyproven.com), a wellness brand focused on intentional, peaceful living.
