A Tuesday morning, somewhere in the near future

Scenario: With SafeMesh

Dr Rachel Dunmore opens her Clinical Safety Officer dashboard at 08:14. It is a Tuesday. She has seventeen minutes before her first meeting.

The top of the screen shows a single number in amber: 23.

Twenty-three patients currently in-clearing across her Trust’s inter-organisational pathways. Transfers that have been initiated — responsibility formally offered to a receiving clinician or organisation — and not yet accepted. Some of these will have been in motion for minutes. That is normal. Transfers take time.

She filters by age. Most are under six hours. Two are under 24. One is flagged red.

She clicks it.

Patient 7741. Discharged from cardiology eleven days ago. Referral sent to the community heart failure nurse team on the day of discharge. Transfer-initiated event recorded in the system at 14:32 on the 7th. The clinical threshold for this pathway is 72 hours — after which, if the transfer has not been accepted, a governance alert fires.

The alert fired on the 10th. It has been open for eight days.

The community team has no record of ever receiving the referral. The cardiology team’s system shows it as sent. The gap between those two facts is where patient 7741 has been living for eleven days — technically discharged, practically without a responsible clinician, and until this moment, invisible to everyone with the authority to act.

Dr Dunmore opens a resolution pathway. Within the hour, patient 7741 has an assigned clinician, a home visit booked, and a formal accountability record that will survive any subsequent review.


Scenario: Without SafeMesh

Patient 7741 gets a letter from the hospital saying they have been referred. They wait. After two weeks they call the GP. The GP has no record of a cardiology referral arriving in their system — and in any case the referral went to a community team, not to them. The community team has no record either. Everyone has acted in good faith. The referral was sent. Something broke in transit — a system integration failure, an inbox that was never checked, a routing rule that silently dropped the message. Nobody knew. Nobody was watching.

This is not a hypothetical. Variations of this scenario are embedded in serious incident reports across the NHS at a frequency that should concern everyone who has read them. The harm does not always present dramatically. Sometimes the patient deteriorates slowly. Sometimes the missed follow-up is discovered only when something worse happens. Sometimes it is never discovered at all.

The question at the heart of every such review is always the same: who was responsible for this patient at this moment, and did they know?

There is no widely deployed NHS infrastructure that continuously measures responsibility transfer state across organisational boundaries in real time, before harm occurs. The Clearing Metric is how we build it.


The pre-clinical vetting void

Healthcare incident analysis has a recurring pattern. When you trace serious harm back to its origin, you rarely find a clinician who behaved negligently in isolation. What you find instead is a gap — a moment where one clinician believed responsibility had transferred, and the next clinician either did not know they were supposed to receive it, did not accept it, or accepted it without the information required to act safely.

This gap is what I call the pre-clinical vetting void. It sits between institutions. Between systems. Between legal and constitutional jurisdictions, as it does when an NHS GP refers into a Local Authority social care team. It sits between the end of an acute admission and whatever comes next. It sits, quietly, in every referral pathway in the NHS, at the moment between “sent” and “accepted.”

The void is not new. What is new is its scale. As health systems become increasingly atomised — integrated care pushing services into community settings, digital-first pathways routing patients through multiple providers, neighbourhood health linking organisations that have never shared governance infrastructure before — the number of inter-organisational transfers has grown dramatically. Each transfer creates a pre-clinical vetting void. Each void is a period during which harm is more likely. The atomisation of care is, without intervention, the scaling of risk.

The natural response is to improve communication. Better referral letters. Faster acknowledgements. Clearer handover documentation. These have value. But they treat the symptom, not the cause. Communication assumes that someone knows they are supposed to communicate. The problem is structural: responsibility transitions are not modelled, not recorded, and not measured. They are assumed.

Healthcare systems are good at recording what clinicians did. The EPR captures actions, observations, prescriptions, and outcomes. What it does not capture is who was responsible at any given moment. Whether the obligation to follow up survived the shift change. Whether the referral that was sent was actually received and accepted, or just received.

The paperwork trail is not the same as the accountability trail. And in harm scenarios, what courts and inquiries demand is the accountability trail.


Why existing systems do not solve this

A reasonable reader will ask: isn’t this just referral tracking under a different name? Worklist management? Message acknowledgement?

It is not, and the distinction is precise.

By responsibility transfer, I mean the formally recorded transition of clinical accountability from one holder to another, including the interval in which the offer has been made but acceptance is still pending. That interval is what existing systems do not model.

A message can be delivered, received, and acknowledged without any accountable clinician having accepted responsibility to act. These are different events. A referral that arrives in a team’s inbox is not a responsibility transfer. A referral that arrives, is read, and is assigned to a named clinician who accepts it — that is a responsibility transfer. Current NHS systems routinely record the first event. They rarely record the second, and almost never compute the gap between them across organisational boundaries.
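The distinction between message events and responsibility events can be made concrete. The sketch below is illustrative only — the enum names and values are assumptions, not SafeMesh's actual event taxonomy — but it shows why the two families of events must be modelled separately:

```python
from enum import Enum

class MessageEvent(Enum):
    """Transport-level facts: the message moved."""
    SENT = "sent"
    DELIVERED = "delivered"
    ACKNOWLEDGED = "acknowledged"  # may be automatic; says nothing about accountability

class ResponsibilityEvent(Enum):
    """Governance-level facts: accountability moved."""
    TRANSFER_INITIATED = "transfer-initiated"  # responsibility formally offered
    TRANSFER_ACCEPTED = "transfer-accepted"    # a named clinician accepts it
```

A system that records only `MessageEvent` values can prove a referral arrived. Only the second family of events can prove anyone took responsibility for it.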

There are four specific gaps:

Referral systems track messages, not accountable acceptance. They can tell you a message was delivered. They cannot tell you whether anyone accepted clinical responsibility for the patient it concerns.

Local workflow tools do not cross constitutional boundaries. A Trust’s EPR worklist is not visible to the community team receiving the referral. A GP practice system has no access to the hospital’s discharge tracking. These systems are operationally complete within their own walls and functionally blind outside them.

Acknowledgement is not acceptance. An auto-acknowledgement from a team inbox confirms message receipt. It does not constitute a clinical decision that any named person has accepted responsibility for this patient.

Incident review is retrospective. When something goes wrong, you can trace back through the evidence to establish what happened. That is important. But it does not help patient 7741, who needed an intervention before harm occurred.

The Clearing Metric is designed to answer a different question: not “what happened?” but “what is happening right now, and where is it going wrong?”


Why the financial sector solved this first

The language I use here — “clearing” — is borrowed deliberately. In financial infrastructure, clearing is the process by which a transaction moves from initiated to settled. In BACS, in SWIFT, in securities clearing, there is a defined period between the point at which a payment is sent and the point at which it is irrevocably received. During that window, the transaction is in-clearing. Both parties know this. Systems track it. If it gets stuck, the system knows and can act.

The reason financial clearing is robust is that the industry built measurement infrastructure for it. You cannot manage a payment clearing system without knowing how many transactions are currently in-clearing, how long each has been there, and what the rate of successful settlement looks like across time. Those numbers exist, they are reliable, and decisions are made against them.

Healthcare has never built the equivalent. Every inter-organisational responsibility transfer is, in governance terms, a transaction. One clinician initiates it. Another must accept it. In between, there is a defined window during which the transfer is in progress and both parties have obligations. If the receiving clinician does not accept within a clinically appropriate threshold, something has gone wrong. But nothing in common NHS deployment measures this systematically. There is no clearing rate. There is no mean clearing time. There is no stuck ratio. The transactions are invisible.

The Clearing Metric is the infrastructure for making those transactions visible.


Three states

The model is simple. Every responsibility transfer has three possible states at any given moment.

Balanced

The healthy state. Every debit has a matching credit. Every transfer-initiated event has a corresponding transfer-accepted event. No patients are in the pre-clinical vetting void. The clearing metric is zero.

In-clearing

The normal operational state during an active transfer. A clinician has sent a referral. The receiving clinician has not yet accepted. This is expected. It is time-bounded. The clock is running.

Stuck

Where the risk concentrates. An in-clearing transfer has exceeded its clinical threshold. Something has failed. The patient is in the void and the system needs to act.

The Clearing Metric continuously computes these states across all active transfers, aggregates them, and produces a real-time picture of system safety.
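The three-state model is simple enough to express directly. The following is a minimal sketch, not SafeMesh's implementation — the `Transfer` structure and field names are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Transfer:
    """One responsibility transfer: a debit awaiting its matching credit."""
    initiated_at: datetime
    accepted_at: Optional[datetime]  # None while the transfer is still open
    threshold: timedelta             # clinical threshold for this pathway

def transfer_state(t: Transfer, now: datetime) -> str:
    """Classify a transfer as balanced, in-clearing, or stuck at a given moment."""
    if t.accepted_at is not None:
        return "balanced"            # debit matched by a credit
    if now - t.initiated_at <= t.threshold:
        return "in-clearing"         # open, but within the clinical threshold
    return "stuck"                   # open and past threshold: the void
```

On this model, patient 7741 — transfer initiated on the 7th, 72-hour threshold, never accepted — classifies as stuck from the 10th onwards, which is exactly when the governance alert fires.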


What the metric measures

Five numbers, continuously computed.

Clearing Volume

The raw count of responsibility transfers currently in-clearing. How many patients, right now, are between clinicians? This exists in SafeMesh as a live figure per domain, per organisation pair, per care pathway.

Clearing Rate

The fraction of transfers that complete within their clinical threshold, expressed as a ratio over time. A clearing rate that has dropped from 97% to 94% over three weeks is a different situation from one stable at 94% for a year. Trend matters as much as the number.

Mean Clearing Time

The average duration from transfer-initiated to transfer-accepted, computed by domain, by organisation pair, and by urgency band. An insurer running a digital consultation platform needs to know mean clearing time for GP-to-specialist referrals within their network. A Trust needs to know whether discharge-to-primary-care clearing times are getting better or worse.

Stuck Ratio

The percentage of in-clearing transfers that have exceeded their threshold at any given moment. A rising stuck ratio is an early warning of systemic stress — capacity pressure, a broken referral pathway, an integration failure — before it becomes a pattern of harm.

Trend Direction

The longitudinal signal. Is the system getting safer or less safe over time? This is what enables the question “is our investment in improving discharge governance working?” to be answered with evidence rather than assertion.
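The first four numbers fall out of a single pass over the active and completed transfers. The sketch below is illustrative — the `Transfer` shape and function name are assumptions, and a production computation would segment by domain, organisation pair, and urgency band as described above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Transfer:
    initiated_at: datetime
    accepted_at: Optional[datetime]  # None while the transfer is still open
    threshold: timedelta             # clinical threshold for this pathway

def clearing_metrics(transfers: list, now: datetime) -> dict:
    open_ = [t for t in transfers if t.accepted_at is None]
    closed = [t for t in transfers if t.accepted_at is not None]
    stuck = [t for t in open_ if now - t.initiated_at > t.threshold]
    within = [t for t in closed if t.accepted_at - t.initiated_at <= t.threshold]
    return {
        # Clearing Volume: transfers currently in-clearing (open, any age)
        "clearing_volume": len(open_),
        # Clearing Rate: share of completed transfers that cleared within threshold
        "clearing_rate": len(within) / len(closed) if closed else None,
        # Mean Clearing Time: average initiated-to-accepted duration
        "mean_clearing_time": (
            sum((t.accepted_at - t.initiated_at for t in closed), timedelta()) / len(closed)
            if closed else None
        ),
        # Stuck Ratio: share of open transfers past their threshold
        "stuck_ratio": len(stuck) / len(open_) if open_ else 0.0,
        # Trend Direction is longitudinal: it comes from comparing
        # snapshots of this dict over time, not from a single pass.
    }
```

Trend Direction is deliberately absent from the single-pass computation: it is derived by anchoring successive snapshots and comparing them across time.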


What the metric is not

A clearing rate on its own is a process metric, not a quality metric. This distinction matters and the Clearing Metric is designed around it.

A clearing rate of 100% would mean every transfer completed within threshold. It would say nothing about whether the right clinician accepted the transfer, whether they had the information needed to act safely, or whether the underlying clinical need was met. A system gaming the metric by auto-accepting all referrals regardless of capacity would score perfectly and would be clinically dangerous.

This is why the Clearing Metric is always presented alongside a coverage figure. Coverage is the proportion of responsibility transfers on a given pathway that are visible to the measurement system — where a governance event was actually recorded and tracked. A clearing rate reported at 5% coverage is a proof of concept. A clearing rate at 40% coverage is meaningful but partial. The target is universal coverage across all inter-organisational transfers on a pathway, and that is a multi-year infrastructure project.

The metric also has anti-gaming architecture built in. Auto-accept patterns are detectable — transfers consistently accepted within seconds of initiation, with no variation and no pattern of clinical assessment events, trigger a distinct hazard class. Scope-narrowing is detectable — if a domain is improving its clearing rate by progressively restricting which transfers it accepts, that shows up in coverage and in the gap between transfers initiated and transfers received. The metric is designed to be informative in the presence of gaming, not to be gamed into meaninglessness.
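One way an auto-accept detector of this kind might look is sketched below. This is a heuristic illustration, not SafeMesh's detection logic; the sample size and time thresholds are invented for the example:

```python
from datetime import timedelta
from statistics import pstdev

def flags_auto_accept(accept_delays: list,
                      min_samples: int = 20,
                      fast: timedelta = timedelta(seconds=30),
                      max_spread: timedelta = timedelta(seconds=10)) -> bool:
    """Heuristic: a pathway whose transfers are consistently accepted within
    seconds of initiation, with almost no variation, looks machine-accepted
    rather than clinically assessed. All thresholds here are illustrative."""
    if len(accept_delays) < min_samples:
        return False  # not enough evidence to call a pattern
    secs = [d.total_seconds() for d in accept_delays]
    # Every acceptance near-instant, and the spread implausibly tight
    return max(secs) <= fast.total_seconds() and pstdev(secs) <= max_spread.total_seconds()
```

A detector like this would route flagged pathways into the distinct hazard class described above, rather than letting their perfect clearing rate stand unchallenged.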


Why it matters for clinical safety governance

Under the DCB 0129 framework, organisations are expected to maintain appropriate clinical safety governance over deployed health IT systems, and the Clinical Safety Officer plays a central role in that assurance. The tools currently available for this role are largely static: hazard logs, risk registers, periodic audit. They are not designed for continuous operational visibility.

The Clearing Metric is, at its core, a clinical safety instrument. It answers the question the CSO actually needs: right now, in this system, in this domain, across these organisations — how safe is the responsibility transfer infrastructure? Is it getting safer or less safe? Where are the concentrations of stuck transfers?

For a Trust’s Clinical Safety Officer, the clearing metric is the difference between “we have a referral pathway” and “we have evidence that our referral pathway is working within safe parameters.” The former is a process assertion. The latter is measurable safety governance.

For commissioners and ICB leadership, the clearing metric enables governance at the right level of abstraction — not the operational detail of individual referrals, but the systemic question: are inter-organisational transitions across our integrated care system working? Where are the structural failures?

For the CQC, and for any future extension of clinical safety standards beyond the current DCB framework, continuous evidence of this kind replaces periodic audit with ongoing transparency.


Why it matters commercially

The metric is not only a safety instrument. It is also operational and contractual infrastructure. Any organisation making commitments about referral handling, discharge follow-up, authorisation time, or inter-provider transfer windows needs evidence of how responsibility is actually clearing in practice. Without that, those commitments rest on assumption. With it, they become measurable, reportable, and contractually auditable.


What exists now

In SafeMesh, the underlying infrastructure needed to compute these measures already exists. The Responsibility Ledger — our double-entry accountability record, where every transfer-initiated event is a debit and every transfer-accepted event is a credit — is built. The operational index, which maintains queryable projections of in-clearing and stuck states, is built. The Evidence Fabric, where metric snapshots are anchored with constitutional timestamps for longitudinal analysis, is built.
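The double-entry idea can be shown in miniature. This is a sketch of the concept only — the Responsibility Ledger's actual structure, persistence, and anchoring are not shown here, and the class and method names are assumptions:

```python
from datetime import datetime

class ResponsibilityLedger:
    """Minimal double-entry sketch: every transfer-initiated event posts a
    debit; the matching transfer-accepted event posts the credit. Unmatched
    debits are, by construction, the transfers currently in-clearing."""

    def __init__(self):
        self.debits = {}   # transfer_id -> initiated timestamp
        self.credits = {}  # transfer_id -> accepted timestamp

    def initiate(self, transfer_id: str, at: datetime) -> None:
        self.debits[transfer_id] = at

    def accept(self, transfer_id: str, at: datetime) -> None:
        if transfer_id not in self.debits:
            raise ValueError("credit without a matching debit")
        self.credits[transfer_id] = at

    def in_clearing(self) -> list:
        """Transfer IDs with a debit but no matching credit yet."""
        return sorted(set(self.debits) - set(self.credits))
```

The point of the double-entry discipline is that the in-clearing population is never computed from a side channel: it is simply the set of debits without credits, so the amber number on Dr Dunmore's dashboard cannot drift out of sync with the underlying record.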

The Clearing Metric service is the layer that turns those underlying records into live operational metrics, APIs, and dashboard views. It is the last piece needed before the question “how many patients are between responsible clinicians right now?” has a reliable, real-time, system-wide answer.

That question has rarely had a satisfactory answer in the NHS. There is no good technical reason why. The Clearing Metric is how we change that.

Julian Bradder

Founder & CEO, Inference Clinical

Julian is Founder and CEO of Inference Clinical, an infrastructure company that enables safe, fast, legal transfer of responsibility, clinical intent, and pathway history across clinical pathways, locations, organisations and constitutions.