There's a phrase that keeps surfacing in our conversations with clinical safety officers, digital health teams, and NHS transformation leads: "We need to demonstrate clinical value."
It's understandable. Budgets are tight. Scrutiny is intense. Every digital health initiative must justify its existence in terms of patient outcomes, efficiency gains, or clinical effectiveness. The pressure to show results is immense.
But here's what we've learned after thirty years of building systems in regulated healthcare environments: clinical value is downstream of operational maturity. You cannot reliably demonstrate improved outcomes if your underlying infrastructure is fragile. You cannot build a safety case on shifting foundations. You cannot earn regulatory approval for a system that fails under load.
We call this principle "left of the regulator" — the work that must happen before regulatory scrutiny becomes relevant.
The Sequence Problem
Healthcare technology projects typically follow a familiar pattern:
- Clinical hypothesis — "We believe this intervention will improve outcomes"
- Pilot deployment — "Let's test it with a small cohort"
- Outcome measurement — "Did it work?"
- Scale-up — "Now let's roll it out everywhere"
The problem is that this sequence buries operational maturity somewhere between steps 3 and 4 — if it appears at all. By the time organisations discover that their pilot success doesn't translate to production reality, they've already committed politically and financially.
We've seen this pattern repeatedly:
- A remote monitoring platform that demonstrated a 23% reduction in A&E admissions during a six-month pilot — but crashed under the load of trust-wide deployment because its data pipeline couldn't handle concurrent connections
- An AI triage tool that showed excellent sensitivity in controlled testing — but produced inconsistent results when exposed to the variety of data quality found across multiple GP systems
- A digital care pathway that saved clinicians 40 minutes per patient in the pilot ward — but created new risks when deployed to wards with different consent architectures
In each case, the clinical value was real but conditional. It depended on operational conditions that the pilot environment provided but the production environment did not guarantee.
What "Left of the Regulator" Means
Regulatory approval — whether DCB0129 for clinical safety, NICE evidence standards, or MHRA device certification — assumes certain operational foundations are in place. The regulator asks: "Is this system safe and effective?" But that question only makes sense if the system exists in a stable, observable, governable state.
The Four Operational Prerequisites
Before regulatory scrutiny becomes relevant, four foundations must be solid:
- Resilience — The system degrades gracefully under stress. Failures are contained. Recovery is automated where possible and well understood where manual.
- Observability — You can see what the system is doing. Metrics, logs, and traces provide the visibility needed to detect problems, diagnose root causes, and verify fixes.
- Governance — Responsibilities are clear. Change control is enforced. Audit trails exist. The system's behaviour can be explained and defended.
- Consent Architecture — Data flows respect patient preferences. Sharing boundaries are enforced at the infrastructure level, not just the policy level.
These aren't optional extras. They're the substrate on which clinical value is built. Without them, any demonstrated benefit is fragile — likely to evaporate when conditions change.
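Enforcing a sharing boundary "at the infrastructure level" means the consent check lives in the data path itself, not only in a policy document. A minimal sketch of the idea — `ConsentRegistry`, `share_records`, and the purpose names are illustrative, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Illustrative in-memory record of which data-sharing purposes each
    patient has agreed to. A real system would back this with an
    auditable, persistent store."""
    _grants: dict = field(default_factory=dict)  # patient_id -> set of purposes

    def grant(self, patient_id: str, purpose: str) -> None:
        self._grants.setdefault(patient_id, set()).add(purpose)

    def permits(self, patient_id: str, purpose: str) -> bool:
        return purpose in self._grants.get(patient_id, set())

def share_records(records: list, registry: ConsentRegistry, purpose: str) -> list:
    """Enforce the boundary in the data path: records for patients without
    a matching consent grant never leave this function."""
    return [r for r in records if registry.permits(r["patient_id"], purpose)]

registry = ConsentRegistry()
registry.grant("p-001", "remote-monitoring")

records = [
    {"patient_id": "p-001", "reading": 72},
    {"patient_id": "p-002", "reading": 95},  # no consent recorded
]
shared = share_records(records, registry, "remote-monitoring")
```

The point of the sketch is the shape, not the detail: when the filter sits inside the only function that releases data, a policy lapse elsewhere cannot widen the flow.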
The Pilot Trap
Pilots are seductive. They offer quick wins, manageable scope, and demonstrable results. But they also create a dangerous illusion: that success in controlled conditions predicts success at scale.
The pilot trap works like this:
- You deploy in a single ward, practice, or trust
- You benefit from high engagement — early adopters who are invested in success
- You operate with dedicated support — the project team is on call
- You work within a known data landscape — edge cases are identified and handled manually
- You demonstrate impressive outcomes
Then you scale, and everything changes:
- Engagement varies — many users are sceptical or busy
- Support is stretched — the project team can't be everywhere
- Data quality is inconsistent — edge cases multiply
And the outcomes that justified the pilot cannot be reproduced.
The problem isn't that the clinical hypothesis was wrong. It's that the operational maturity required to realise that hypothesis at scale was never established.
Building Left to Right
At Inference Clinical, we advocate for a different sequence:
The Left-to-Right Approach
- Operational foundation — Build the infrastructure that will support clinical use at scale. Prove resilience, establish observability, implement governance.
- Safety case development — Document hazards, controls, and residual risks. Make the case that the system is safe to operate.
- Limited deployment — Deploy with operational maturity intact. The pilot tests clinical hypotheses, not infrastructure.
- Outcome measurement — Collect evidence of clinical value in conditions that will persist at scale.
- Scale-up — Expand deployment with confidence that the operational foundations will hold.
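The observability work in step 1 does not have to start large. A deliberately small sketch of the kind of aggregate, patient-free metrics a pipeline might expose — `PipelineMetrics` and its outcome names are illustrative assumptions:

```python
import time
from collections import Counter

class PipelineMetrics:
    """Minimal observability sketch: count processing outcomes and expose
    a health summary, without recording any patient-identifiable data."""

    def __init__(self):
        self.counts = Counter()
        self.started = time.monotonic()

    def record(self, outcome: str) -> None:
        # e.g. "processed", "failed", "rejected" — names are illustrative
        self.counts[outcome] += 1

    def health(self) -> dict:
        total = sum(self.counts.values())
        failed = self.counts["failed"]  # Counter returns 0 for missing keys
        return {
            "uptime_s": round(time.monotonic() - self.started, 1),
            "processed": self.counts["processed"],
            "failure_rate": failed / total if total else 0.0,
        }

metrics = PipelineMetrics()
for outcome in ["processed"] * 98 + ["failed"] * 2:
    metrics.record(outcome)
status = metrics.health()
```

Even something this simple answers the question a pilot alone never asks: is the pipeline quietly failing, and how would we know?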
This approach takes longer at the start. It requires investment before clinical outcomes can be demonstrated. It demands patience when stakeholders are asking for quick wins.
But it produces durable results. The clinical value demonstrated in step 4 persists through step 5 and beyond. The safety case submitted to regulators describes a system that actually exists in production, not a pilot that will be re-engineered later.
What This Looks Like in Practice
Consider a remote monitoring system for patients with chronic heart failure. The clinical hypothesis is that earlier detection of deterioration will reduce hospital admissions.
The traditional approach:
- Deploy wearables to 50 patients
- Route alerts to a dedicated nursing team
- Measure admission rates over six months
- Demonstrate a 30% reduction
- Celebrate and plan trust-wide rollout
The left-of-regulator approach:
- Build the data pipeline that will handle 5,000 patients
- Implement the observability that will detect pipeline failures
- Design the alert routing that works with existing on-call structures
- Establish the consent architecture that respects patient preferences at scale
- Document the safety case for the system as it will operate in production
- Then deploy to 50 patients
- Measure admission rates
- Demonstrate the same 30% reduction — but with confidence it will persist
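Alert routing "that works with existing on-call structures" implies a delivery path that degrades gracefully rather than silently dropping alerts. A simplified sketch, assuming a hypothetical list of `(name, send_fn)` channel pairs tried in priority order:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("alert-router")

def route_alert(alert: dict, channels: list) -> str:
    """Try each delivery channel in priority order. An alert that cannot
    be delivered anywhere is logged for manual follow-up, never lost."""
    for name, send in channels:
        try:
            send(alert)
            log.info("alert %s delivered via %s", alert["id"], name)
            return name
        except Exception:
            log.warning("channel %s failed for alert %s", name, alert["id"])
    log.error("alert %s undeliverable; queued for manual review", alert["id"])
    return "dead-letter"

def pager(alert):
    # Primary on-call route, simulated as unavailable for this sketch
    raise ConnectionError("pager gateway down")

def ward_dashboard(alert):
    pass  # secondary route: delivery succeeds

routed_via = route_alert(
    {"id": "a-17", "priority": "high"},
    [("pager", pager), ("dashboard", ward_dashboard)],
)
```

Note what the fallback chain buys you: the failure of a single channel becomes an observable event rather than a missed deterioration.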
The second approach costs more up front. But it avoids the painful discovery that your pilot success was a mirage. It shortens the time from pilot to scale. It reduces the risk of regulatory rejection. And it builds institutional confidence in digital health investments.
The Role of the Clinical Safety Officer
Clinical Safety Officers often find themselves trapped between two pressures: the demand to accelerate deployment and the responsibility to ensure safety. The left-of-regulator approach resolves this tension.
When operational maturity is established early, the CSO's role shifts from gatekeeping to enabling. Instead of discovering infrastructure gaps during safety case review, they're validating that known foundations are in place. Instead of blocking deployments that lack basic observability, they're confirming that observability was designed in from the start.
This is why we built DCB CoLab to integrate operational checks with safety case development. The platform doesn't just help you write hazard logs — it helps you verify that the controls you've documented actually exist in your infrastructure.
The Governance Dividend
Organisations that invest in operational maturity early gain a compounding advantage. Each subsequent project builds on proven foundations. Regulatory submissions become faster because the operational story is already documented. Clinical safety cases become stronger because they describe real systems, not aspirational architectures.
We've seen trusts reduce their time from pilot to deployment by 60% after establishing mature operational foundations. Not because they cut corners, but because they stopped re-learning the same lessons with each project.
Getting Started
If your organisation is planning a digital health initiative, ask these questions early:
- Resilience: What happens when this system fails? Who is notified? How fast can we recover?
- Observability: Can we see what the system is doing in production? Can we diagnose problems without access to individual patient data?
- Governance: Who owns this system? How are changes controlled? Where is the audit trail?
- Consent: How do patient preferences flow through this system? Are sharing boundaries enforced technically or just procedurally?
If the answers are vague or deferred, you're not ready for a pilot. You're ready for operational foundation work.
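Those four questions can even be expressed as a simple go/no-go gate. This is a deliberately naive sketch — the area names and boolean answers stand in for real, documented evidence:

```python
def readiness_gaps(answers: dict) -> list:
    """Return the operational areas that still lack a concrete answer.
    `answers` maps each area to True once the corresponding question has
    a specific, documented response — the keys are illustrative."""
    required = ["resilience", "observability", "governance", "consent"]
    return [area for area in required if not answers.get(area, False)]

gaps = readiness_gaps({
    "resilience": True,
    "observability": True,
    "governance": False,  # owner and change control not yet agreed
    "consent": False,     # boundaries enforced procedurally only
})
ready_for_pilot = not gaps
```

A vague or deferred answer maps to `False`, and any gap means the next step is foundation work, not a pilot.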
Clinical value is real and important. But it's downstream. Get the sequence right, and the outcomes follow.