
The 10/20/70 Rule: Why Pharma AI Success Is 70% People and Process

Executive Summary

The 10/20/70 rule, popularized by BCG and corroborated by McKinsey QuantumBlack across multiple industry studies, holds that successful AI deployments allocate roughly 10% of effort to algorithm and model work, 20% to technology and data infrastructure, and 70% to people, process, and change. In life sciences, the ratio is, if anything, more pronounced: regulated environments multiply the change management surface and amplify the cost of getting the human side wrong.

This article explains where the rule comes from, why it applies with particular force in pharma, what each bucket actually contains, and how to recognize and correct the most common misallocation pattern: the inverse 70/20/10, where a program over-invests in algorithms and under-invests in adoption. We close with a practical sequence of corrective actions that any pharma AI sponsor can take in the next quarter to rebalance their portfolio.

Roughly 70% of pharma AI program effort should be allocated to people, process, and change management to maximize the probability of capturing projected business value, per BCG research synthesized with Sakara Digital benchmarking across regulated deployments.

Where the 10/20/70 Rule Comes From

The 10/20/70 rule emerged from BCG’s analysis of AI transformation programs across industries, with subsequent confirmation from McKinsey QuantumBlack, Deloitte, and Gartner. The empirical observation: programs that allocated effort and budget across these proportions consistently outperformed programs that concentrated investment in any single bucket — particularly programs that concentrated in algorithms.

The rule isn’t a prescription for how every dollar must be spent; it’s a heuristic for where leverage actually lives. The algorithm and model work is real and important — but it is rarely the binding constraint on whether the program produces business value. The data and infrastructure work matters more, but it too is rarely sufficient. What separates programs that capture value from programs that don’t is almost always the human work: how the affected functions absorb the change, how processes evolve, how decision rights shift, how training and trust develop.

In pharma, the ratio holds especially hard because the regulatory and quality context multiplies the change management surface. A new AI capability doesn’t just need to be adopted by the people using it — it needs to be governed, validated, documented, and integrated into the QMS in ways that themselves require sustained organizational work. Pharma’s structural conditions make the 70% bucket larger in absolute terms than the industry average would suggest, even if the ratio itself remains directionally similar.

It’s worth noting what the rule is not. It’s not a claim that algorithms don’t matter; obviously the model has to work. It’s not a claim that data is unimportant; data quality remains a binding constraint for many use cases. It’s a claim about where the marginal investment yields disproportionate returns once a basic technical foundation is in place. For most pharma organizations starting their AI journey, that foundation already exists — the gap is in the human and process work that turns the foundation into business value.

Why the Ratio Holds Especially Hard in Pharma

Three structural features of pharma make the 70% bucket dominate the outcome more than in most industries.

Regulated workflows are codified. A clinical operations team doesn’t have informal latitude to redesign their workflow around a new AI tool. The workflow is documented, validated, trained against, and audited. Changing it requires SOP updates, change controls, retraining, requalification, and inspection-readiness work. The technology change is a small fraction of the total effort. Compare this to a SaaS company, where the workflow change can be made by editing a wiki page and announcing it on Slack, and you see why pharma’s 70% is heavier than in most industries.

Trust thresholds are higher. Pharma professionals are trained to be skeptical of unvalidated outputs. A clinical writer or regulatory specialist who trusts an AI suggestion uncritically is failing at their job. Building genuine trust — the kind that produces appropriate use rather than rejection or overreliance — is slow, evidence-based work that can’t be shortcut by better algorithms. Trust is built through transparent performance data, side-by-side validation periods, gradual responsibility expansion, and consistent demonstration that the system performs to the standard the work requires.

Cross-functional dependencies are dense. An AI capability deployed in clinical operations depends on data from biostatistics, validation from quality, infrastructure from IT, governance from a cross-functional committee, and adoption from the line teams. Each interface is a place where the program can stall. The 70% includes the relational and process work that keeps these interfaces healthy. In flatter, less regulated organizations, fewer interfaces mean fewer places to lose momentum; in pharma, the dense interface map makes the human work that holds it together correspondingly larger.

A fourth, less-discussed factor: pharma’s career and incentive structures. Affected functions often have decades-tenured experts whose professional identity is tied to the workflow being changed. Effective change management has to engage that identity directly rather than treating it as resistance to be managed. Programs that get this right unlock deep expertise; programs that get it wrong create durable opposition.

The 10%: Algorithms and Models

The algorithm and model work is real but bounded. In pharma AI today, the typical program is using foundation models or vendor-provided capabilities — not building proprietary models from scratch. The 10% includes:

  • Model selection and benchmarking against use case requirements
  • Prompt engineering, fine-tuning, or RAG configuration where appropriate
  • Performance evaluation and quality measurement
  • Bias and fairness testing for use cases with patient-impact dimensions
  • Versioning and lifecycle management for the models in scope

The 10% is not a license to underinvest. The model work has to be done well — but it’s a smaller portion of the total than most program plans assume. Programs that spend 50% of their budget on model tuning and 10% on adoption are inverting the ratio. Equally important, the 10% can be staffed in part with vendor or contract resources for many pharma use cases; the 70% almost always requires internal staff who carry organizational context and continuity.

One pattern worth flagging: the 10% can balloon when use cases are not well-defined. Teams spend disproportionate time tuning models for use cases whose requirements haven’t been clarified. The corrective is upstream — disciplined use case definition with clear success criteria — not more algorithm work. Model tuning is rarely a substitute for clarity about what the model is supposed to do.

The 20%: Technology, Data, and Infrastructure

The 20% covers the technical scaffolding that makes AI usable at enterprise scale: data pipelines and quality, integration with source systems, identity and access management, monitoring and observability, lifecycle management infrastructure, and the validation tooling that keeps the system inspection-ready over time.

This bucket is the one most pharma organizations underestimate at the start of programs and rediscover under pressure during scale-up. The infrastructure is the difference between a pilot that ran for six months on a vendor sandbox and an enterprise deployment that 5,000 people use daily for years. The infrastructure investment also creates leverage — once it exists for one use case, additional use cases can ride on top of it at lower marginal cost. This is one reason early infrastructure investment tends to pay back across the portfolio rather than against any single use case.

Data is the most under-resourced sub-bucket

Within the 20%, data work is consistently the most under-resourced. Programs assume that the data they need exists, is clean, is governed, and is accessible. In practice, none of these assumptions tend to hold for the data that powers high-value AI use cases. Cleaning, normalizing, documenting, and governing the data is itself a multi-quarter effort that benefits every AI use case downstream — but only if it’s funded as a first-class line item.

The pharma-specific dimension is that much of the most valuable data is governed by regulatory, privacy, and contractual constraints that complicate use. Trial data has consent considerations. Manufacturing data has trade secret considerations. Adverse event data has patient privacy considerations. Each of these has to be addressed before AI can use the data — and the addressing itself is largely human and process work, even though it’s labeled as data infrastructure.

Observability is the second most under-resourced

Observability — knowing what the AI is doing, how it’s performing, and where it’s drifting — is the second most under-resourced sub-bucket. Pilots run blind on these dimensions and the gap is rarely visible until production. Building observability into the architecture from the start is a small additional cost; bolting it on later is materially expensive. The right time to invest in observability is when the use case is being designed, not when the production team starts asking why performance feels off.

Integration patterns matter more than they look

A third under-resourced sub-bucket is integration patterns. Pilots typically run with manual or one-off integration that doesn’t scale. Production deployments require enterprise integration patterns that handle failure modes, security boundaries, and operational monitoring. The integration work is rarely glamorous and rarely featured in vendor demos, but it determines whether the use case can run for years rather than months. Programs that under-invest in integration discover the cost during the second year, when the pilot architecture starts straining under production load and the team faces a choice between rebuilding and limping.

The 70%: People, Process, and Change

The 70% is where the program either captures value or doesn’t. It includes everything that makes the AI capability part of how the organization actually works.

| Sub-bucket | What It Includes | Typical Underinvestment Pattern |
|---|---|---|
| Process redesign | Updating SOPs, decision rights, hand-offs, exception paths | Treated as a documentation update rather than a redesign |
| Training | Role-based training, scenario practice, ongoing reinforcement | One-time vendor training that doesn’t stick |
| Change management | Stakeholder engagement, communications, champion network, feedback loops | Sponsor-led announcement followed by silence |
| Trust building | Transparent performance reporting, side-by-side validation, gradual responsibility expansion | “Just use it” without evidence-building |
| Governance | Tier classification, validation, change control integration, inspection readiness | Lightweight pilot governance carried forward into production |
| Operating model | Day-to-day ownership, support tiers, lifecycle stewardship | No clear owner once the project team disbands |

Sakara Digital perspective: The single most powerful diagnostic for whether a pharma AI program is going to land is the depth and seriousness of its change management plan. Programs with a robust 70% — funded, staffed, and treated as central rather than peripheral — consistently outperform programs with a glossy technology stack and an afterthought adoption strategy.

Why each sub-bucket is harder than it looks

Process redesign in pharma isn’t a documentation exercise. SOPs that govern regulated workflows have decades of refinement embedded in them. Changing them touches training systems, audit trails, validated workflows, and organizational muscle memory. A serious process redesign for a Tier 2 use case takes 3-6 months from first draft to operational readiness, and that’s with a competent change team. Trying to compress it into weeks produces SOPs that don’t survive contact with the work.

Training is similarly underestimated. Effective training for AI-augmented workflows isn’t a one-hour session and a job aid. It’s role-based scenarios that develop judgment about when to trust AI output, when to challenge it, and how to escalate disagreement. This kind of judgment doesn’t develop in a single session; it develops over weeks of practice with feedback. Programs that allocate single-session training and then declare adoption complete are setting up for the failure pattern they later complain about.

Trust building is the most subtle of the sub-buckets. Trust isn’t earned by capability claims; it’s earned by sustained evidence of capability over time. Programs that publish honest performance data — including the cases where the AI got it wrong — build deeper trust than programs that promote only successes. Counterintuitively, transparency about failure modes accelerates adoption because it gives users a credible mental model of when the AI is reliable.

The Most Common Misallocation Pattern

The most common misallocation we encounter is the inverse 70/20/10: the program spends 60-70% of its effort on technology and model work, 20-25% on data and infrastructure, and 10% or less on the human side. This pattern is over-represented in programs led by IT or data science functions without strong business sponsorship.

The symptoms are predictable. The technology works. The model performs well in benchmarks. The pilot users are enthusiastic. Production adoption stalls at 15-30% of intended usage. The economic case never materializes. Twelve months later, the use case is in pilot purgatory or quietly retired.

Why the misallocation persists

The misallocation persists because the 10% and 20% buckets are easier to plan, easier to estimate, easier to demonstrate, and feel more “real” than the 70%. You can show a working algorithm; it’s harder to show a process that’s been redesigned and absorbed. Technology work also has clearer professional ownership — IT and data science have well-defined roles for it — while the 70% sits in the seams between functions and is everyone’s responsibility and no one’s accountability.

Correcting the pattern requires explicit executive intervention. Left to its own devices, the program will under-resource the 70% and over-resource the 10% and 20%, because that’s the path of least organizational resistance.

The political dimension

There’s also a political dimension that’s worth naming. Algorithm work and infrastructure work are visible deliverables that demonstrate progress to executive sponsors. Change management work is less photogenic — its success often shows up as the absence of problems rather than as a positive outcome. Programs that need to demonstrate visible progress to maintain executive support tend to over-invest in the visible buckets at the expense of the durable but less visible ones. This is a leadership problem more than a program management problem; it requires sponsors who actively reward investment in the human work even when the deliverables are less tangible.

Rebalancing Your AI Investment

If your current portfolio looks more like 60/30/10 than 10/20/70, the corrective work is concrete:

  1. Audit the actual allocation. Look at where staff time and budget are going on each in-flight AI use case. The numbers usually surprise sponsors.
  2. Establish change management as a funded line item. Not “absorbed by the project team” — funded, staffed, and tracked. For Tier 2 and Tier 3 use cases, change management headcount should match or exceed implementation engineering headcount.
  3. Make adoption a primary success metric. Track it weekly. Make it visible at the executive dashboard level. Treat shortfalls with the same seriousness as cost or schedule shortfalls.
  4. Invest in process redesign, not documentation. SOPs need to evolve, not just get updated. Decision rights need to be re-examined. Hand-offs need to be redesigned. Schedule the work and assign owners.
  5. Fund trust-building activities explicitly. Transparent performance reporting, side-by-side validation periods, gradual responsibility expansion. These activities create the conditions for genuine adoption rather than nominal compliance.
  6. Build or hire change capability. Pharma change management for AI is a specialized skill. Programs that try to staff it from generalist project managers consistently underperform programs that bring in or develop dedicated change practitioners with pharma experience.
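The allocation audit in step 1 can be sketched as a simple calculation over time-tracking data. Everything below (the use case names, the hours, and the ten-point flag threshold) is a hypothetical illustration of the idea, not real program data or prescribed tooling:

```python
# Illustrative sketch: auditing effort allocation across the 10/20/70 buckets.
# All entries and thresholds are invented examples for illustration only.

from collections import defaultdict

# Hypothetical time-tracking entries: (use_case, bucket, staff_hours)
ENTRIES = [
    ("doc-summarization", "algorithms", 1200),
    ("doc-summarization", "infrastructure", 400),
    ("doc-summarization", "people_process", 300),
    ("deviation-triage", "algorithms", 800),
    ("deviation-triage", "infrastructure", 500),
    ("deviation-triage", "people_process", 250),
]

# Target shares from the 10/20/70 rule
TARGET = {"algorithms": 0.10, "infrastructure": 0.20, "people_process": 0.70}

def audit(entries):
    """Compare actual effort shares per bucket against the 10/20/70 targets."""
    totals = defaultdict(float)
    for _, bucket, hours in entries:
        totals[bucket] += hours
    grand = sum(totals.values())
    report = {}
    for bucket, target in TARGET.items():
        actual = totals[bucket] / grand if grand else 0.0
        report[bucket] = {"actual": round(actual, 2), "target": target,
                          "gap": round(actual - target, 2)}
    return report

if __name__ == "__main__":
    for bucket, row in audit(ENTRIES).items():
        # Flag any bucket more than 10 points off target (arbitrary threshold)
        flag = "OVER" if row["gap"] > 0.10 else "UNDER" if row["gap"] < -0.10 else "ok"
        print(f"{bucket:15s} actual={row['actual']:.0%} target={row['target']:.0%} [{flag}]")
```

Run against the hypothetical entries above, the audit surfaces exactly the inverse pattern described earlier: algorithms far over target, people and process far under.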

Sustaining the Discipline Over Time

Rebalancing is a one-time corrective; sustaining the discipline is the harder challenge. Pharma AI portfolios that hold their balance over years share a few practices.

They report on the 70% in the same dashboards as the 10% and 20%. Adoption metrics, training completion, change management milestones, and trust indicators get the same executive visibility as technical performance and budget consumption. What gets reported gets managed.

They build a community of practice across use cases that shares learnings on the human side. Pharma AI programs are usually too siloed by function — what the clinical team learns about adoption doesn’t reach the manufacturing team. A cross-functional community of practice closes that gap and accelerates organizational learning.

They invest in succession for the human-side roles. Change practitioners, training designers, and adoption owners are flight risks if the work isn’t recognized. Sustaining the 70% requires sustaining the people who deliver it.

They review their allocation annually and recalibrate. Use cases mature, infrastructure becomes more leveraged, and the appropriate ratio shifts over time. A mature program may have a different optimal ratio than an early-stage one, and the framework should accommodate that evolution.

They invest in measurement of the human side. Adoption rates, training completion, time-to-proficiency, user satisfaction, and the qualitative signals from use case retrospectives all get measured systematically. Programs that don’t measure the human side can’t manage it; programs that measure it well can identify and correct misallocation before it becomes visible in the form of stalled use cases.
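A minimal sketch of what systematic human-side measurement might look like, assuming a hypothetical per-user usage log. The record format, the three-uses-per-week activity threshold, and the user data are all invented for illustration:

```python
# Illustrative sketch: computing simple human-side metrics from usage logs.
# The log schema, threshold, and records are hypothetical assumptions.

from datetime import date

# Hypothetical per-user records: (user, trained_on, first_use, weekly_uses)
USERS = [
    ("u1", date(2025, 1, 6), date(2025, 1, 20), 12),
    ("u2", date(2025, 1, 6), date(2025, 3, 3), 2),
    ("u3", date(2025, 1, 6), None, 0),  # trained but never adopted
]

ACTIVE_THRESHOLD = 3  # weekly uses that count as "active" (assumption)

def adoption_rate(users):
    """Share of trained users who are active in the current period."""
    active = sum(1 for *_, uses in users if uses >= ACTIVE_THRESHOLD)
    return active / len(users)

def median_days_to_first_use(users):
    """Median gap between training and first use, for users who ever adopted."""
    gaps = sorted((first - trained).days
                  for _, trained, first, _ in users if first is not None)
    mid = len(gaps) // 2
    return gaps[mid] if len(gaps) % 2 else (gaps[mid - 1] + gaps[mid]) / 2
```

Metrics this simple are enough to spot the stall pattern early: a low adoption rate or a lengthening time-to-first-use is visible months before a use case lands in pilot purgatory.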

They protect the 70% during budget pressure. When budget cycles tighten, the 70% is the easiest bucket to cut because its outcomes are slower to show and harder to defend than algorithm or infrastructure work. Programs that survive multiple budget cycles do so because their leadership defends the 70% explicitly when pressure mounts. This requires the executive sponsor to be aligned with the framework, not just the program team.

The 10/20/70 rule isn’t a rigid budget formula. It’s a reminder that the leverage in AI value capture lives mostly in the human work — and that programs are systematically biased toward under-investing in exactly the bucket that determines outcome. Acting on that insight is one of the highest-return things a pharma AI sponsor can do.

Amie Harpe, Founder and Principal Consultant
Amie Harpe is a strategic consultant, IT leader, and founder of Sakara Digital, with 20+ years of experience delivering global quality, compliance, and digital transformation initiatives across pharma, biotech, medical device, and consumer health. She specializes in GxP compliance, AI governance and adoption, document management systems (including Veeva QMS), program management, and operational optimization — with a proven track record of leading complex, high-impact initiatives (often with budgets exceeding $40M) and managing cross-functional, multicultural teams. Through Sakara Digital, Amie helps organizations navigate digital transformation with clarity, flexibility, and purpose, delivering senior-level fractional consulting directly to clients and through strategic partnerships with consulting firms and software providers. She currently serves as Strategic Partner to IntuitionLabs on GxP compliance and AI-enabled transformation for pharmaceutical and life sciences clients. Amie is also the founder of Peacefully Proven (peacefullyproven.com), a wellness brand focused on intentional, peaceful living.

