If you work in healthcare transformation, you've heard the pitch a hundred times. Better integration. Shared records. Seamless pathways. Data flowing freely between organisations.
It's the right direction. Fragmentation genuinely harms patients. But there's a structural problem that rarely gets addressed: when you connect systems and dissolve organisational boundaries, you also dissolve the informal mechanisms that used to make accountability clear.
In a single organisation, everyone roughly knows who's responsible for what. The consultant owns the patient. The ward sister runs the floor. The GP holds the list. It's implicit, but it works — mostly — because the boundaries are visible and the relationships are known.
Integration removes those boundaries. And unless you replace them with something structural, you create a new failure mode: responsibility that dissolves into the space between organisations, where everyone assumes someone else is watching.
This is not a technology problem. It's a governance problem. And it won't be solved by better training, clearer policies, or more assurance committees.
It requires infrastructure that makes responsibility transfer explicit, verifiable, and enforceable.
That's what Minimum Viable Responsibility Transfer is about.
What other industries already know
Healthcare isn't the first sector to face this problem. Integration and real-time coordination are standard in aviation, rail, and financial services. Each of those industries learned — usually through catastrophic failure — that governance has to be built into the system, not layered on top.
In aviation, when an aircraft is handed from one air traffic control sector to another, the transfer doesn't happen because a message was sent. It happens because the receiving controller explicitly accepts the handoff. Until that acceptance is registered, the original controller remains responsible. The system won't let responsibility float.
In payment systems, liability transfers through explicit acknowledgement messages with millisecond timestamps. If the receiving institution doesn't confirm acceptance within the defined window, the transaction fails. Silence is treated as failure, not consent. When disputes arise, the system provides definitive evidence of who was responsible at any given moment.
In rail, trains are physically stopped if they approach a red signal too fast — not because we don't trust drivers, but because the consequences of human error are too severe to rely on vigilance alone.
These aren't theoretical frameworks. They're operational realities in systems that move aircraft, money, and trains every second of every day. The engineering patterns are well-established. The question is why healthcare hasn't adopted them.
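To make the pattern concrete, here is a minimal sketch in Python of the acknowledgement-with-deadline idea described above. It is an illustration, not the actual protocol of any air traffic or payment system; the class names and the fifteen-minute window are assumptions. A transfer stays with the sender until the receiver explicitly accepts it, and a missed acceptance window escalates rather than defaulting to success.

```python
# Illustrative sketch of acknowledgement-with-deadline; names and window are invented.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class TransferState(Enum):
    PENDING = "pending"      # sent, but responsibility has NOT moved
    ACCEPTED = "accepted"    # receiver explicitly confirmed; responsibility moved
    ESCALATED = "escalated"  # acceptance window missed: silence is failure


@dataclass
class Transfer:
    sender: str
    receiver: str
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    state: TransferState = TransferState.PENDING
    acceptance_window: timedelta = timedelta(minutes=15)  # assumed, for illustration
    accepted_by: str | None = None
    accepted_at: datetime | None = None

    def accept(self, acknowledged_by: str) -> None:
        """Responsibility moves only on a positive, recorded acceptance."""
        if self.state is not TransferState.PENDING:
            raise ValueError(f"cannot accept a transfer in state {self.state.value}")
        self.state = TransferState.ACCEPTED
        self.accepted_by = acknowledged_by
        self.accepted_at = datetime.now(timezone.utc)

    def responsible_party(self, now: datetime) -> str:
        """Answer 'who is responsible right now?' -- never 'nobody'."""
        if self.state is TransferState.ACCEPTED:
            return self.receiver
        if now - self.sent_at > self.acceptance_window:
            self.state = TransferState.ESCALATED  # fail safely instead of floating
        return self.sender  # the sender keeps responsibility until acceptance
```

The property that matters is that there is no state in which nobody is responsible: until acceptance is registered, the sender still owns the transfer, and a missed deadline produces an escalation rather than a silent success.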
The clinical handover problem
Consider what happens when a patient is discharged from hospital to community care.
The hospital team completes their documentation. A discharge summary is generated — sometimes automatically, sometimes manually transcribed from multiple systems. It's sent electronically, or printed and scanned, or faxed, or uploaded to a portal.
At no point does the system verify that:
- The receiving clinician has acknowledged responsibility
- The information was reviewed, not just received
- The patient is now structurally "owned" by the community team
The discharge happens because the process completed. The hospital's systems show the patient as discharged. The community team may or may not have seen the summary. If something falls through the gap — a medication change missed, a follow-up not scheduled, a deterioration not monitored — the failure only becomes visible when the patient is harmed.
HSSIB investigations repeatedly identify this pattern. The systems worked. The processes were followed. The information was "sent." But responsibility never actually transferred. It dissolved somewhere in the pathway between organisations.
Why administrative controls aren't enough
Healthcare IT is full of what look like safety mechanisms but are actually administrative controls with a digital veneer.
Alerts that inform but don't prevent. Soft stops that can be clicked through. Warnings that record a decision was made but don't verify it was appropriate. Policies that require clinicians to check, confirm, and document — relying on attention and memory in environments designed to overwhelm both.
The safety engineering hierarchy is clear on this. Administrative controls — training, policies, warnings, procedures — are the weakest form of risk mitigation. Not because people ignore them, but because they fail under exactly the conditions that characterise modern clinical work: high cognitive load, constant interruption, fragmented information, time pressure.
When the system informs but doesn't intervene
A child received a tenfold medication overdose. The EPR displayed a warning. The clinician overrode it. The system recorded the override. The child was harmed. The system did what it was designed to do. It informed. It logged. It enabled the workflow to continue. What it didn't do was intervene. It had no concept of a "hard stop" — a structural constraint that prevents progression until specific conditions are met.
That's not a configuration problem. It's an architectural one.
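A minimal sketch of what that architectural difference looks like, assuming invented names (soft_stop_check, hard_stop_check) and the tenfold threshold from the example above; this is an illustration, not any real EPR's API.

```python
# Sketch of the soft-stop / hard-stop distinction; not any real EPR's interface.
class HardStop(Exception):
    """Raised when an order cannot proceed, regardless of who clicks what."""


def soft_stop_check(prescribed_mg: float, usual_mg: float, override: bool) -> bool:
    """Informs, logs, and lets the workflow continue if the warning is overridden."""
    if prescribed_mg > 10 * usual_mg:
        print(f"WARNING: {prescribed_mg} mg is more than ten times the usual dose")
        return override  # a single click re-enables progression
    return True


def hard_stop_check(prescribed_mg: float, usual_mg: float) -> None:
    """A structural constraint: no argument exists that lets this order through."""
    if prescribed_mg > 10 * usual_mg:
        raise HardStop(
            f"{prescribed_mg} mg exceeds ten times the usual dose; "
            "the order must be revised before prescribing can continue"
        )
```

The constraint lives in the structure of the system rather than in the prescriber's attention: there is simply no parameter that lets the unsafe order proceed until it has been revised.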
What Minimum Viable Responsibility Transfer requires
Minimum Viable Responsibility Transfer is the minimum set of system-level conditions required for clinical responsibility to move safely from one party to another — without assumption, inference, or hindsight.
It's not a product. It's not a standard. It's a design principle — derived from how accountability actually works in other safety-critical industries, applied to the specific constraints of healthcare.
At minimum, responsibility transfer requires:
- Explicit acceptance — the receiving party must actively acknowledge they are now responsible. A message being sent is not acceptance. A record being viewable is not acceptance. Workflow progression is not acceptance. The system must capture a positive confirmation that accountability has shifted.
- Time-bounded obligations — responsibility cannot wait indefinitely for someone to notice it. If acceptance doesn't happen within a defined window, the system must escalate or fail safely. Silence is failure, not consent.
- Verifiable state — it must be possible to reconstruct, at any future point, exactly who was responsible at any given moment. Not just what happened, but who owned it when.
- Non-overridable constraints for critical risks — some errors are too severe to rely on warnings. The system must be capable of hard stops that prevent progression, not just soft stops that inform and log.
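To show how the third condition differs from ordinary audit logging, here is a minimal sketch of a responsibility ledger: an append-only record of explicit acceptances that can answer "who was responsible at this moment" without inference. The names (ResponsibilityLedger, owner_at) are invented for illustration; a real implementation would persist, sign, and protect these entries.

```python
# Illustrative sketch of "verifiable state": an append-only ownership record
# that can reconstruct who was responsible at any moment. Names are invented.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class OwnershipEntry:
    owner: str           # e.g. "acute ward team" or "community nursing team"
    accepted_at: datetime
    accepted_by: str     # the named individual who confirmed acceptance


class ResponsibilityLedger:
    """Append-only: ownership changes only by explicit, timestamped acceptance."""

    def __init__(self, initial_owner: str) -> None:
        self._entries = [OwnershipEntry(initial_owner,
                                        datetime.now(timezone.utc),
                                        accepted_by=initial_owner)]

    def transfer(self, new_owner: str, accepted_by: str) -> None:
        # No entry is ever edited or removed; the history is the evidence.
        self._entries.append(
            OwnershipEntry(new_owner, datetime.now(timezone.utc), accepted_by))

    def owner_at(self, moment: datetime) -> str:
        """Reconstruct, after the fact, exactly who owned the patient at `moment`."""
        owner = self._entries[0].owner
        for entry in self._entries:
            if entry.accepted_at <= moment:
                owner = entry.owner
        return owner
```

Asked who owned the patient at any given time, the ledger gives a single answer derived from recorded acceptances, not one reconstructed afterwards from workflow events and memory.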
This isn't about removing clinical judgement. Clinicians will always need to make complex decisions under uncertainty. MVRT is about ensuring that the transfer of responsibility — the moment when one party stops being accountable and another starts — is never ambiguous, never assumed, never left to inference.
When responsibility is explicit, time-bounded, and verifiable, escalation happens earlier, duplication falls, and clinicians spend less time compensating for uncertainty.
Why this matters now
NHS policy is moving decisively toward integration. Integrated Care Systems. Shared care records. Virtual wards. Cross-organisational pathways.
All of this increases the surface area across which responsibility can dissolve. Every interface between systems, every handover between organisations, every transition between care settings is a point where accountability can become ambiguous.
If governance remains procedural — policies, training, committees — it cannot scale with integration, and it cannot fail safely.
The alternative is governance that's infrastructural. Built into the systems. Enforced by architecture. Capable of proving, not just claiming, that responsibility transferred correctly.
That's the shift this series is about.
Safe Responsibility Transfers Series
- The Architecture of Trust (current)
- When Safety Becomes Machine Authority
- The Handover That Never Happened
Next in this series: When Safety Becomes Machine Authority — why training isn't enough, and why the hardest decisions in clinical safety are the ones we automate.