I've spent thirty years building infrastructure in regulated sectors. Finance, government, defence. Industries where accountability isn't optional and where "we assumed someone else was handling it" doesn't survive the post-incident review.
Coming to healthcare from outside the clinical professions, I keep seeing one pattern in the literature, in the serious incident reports, in conversations with Clinical Safety Officers: the failures that cause the most harm rarely look like failures at the time.
A patient gets discharged. The immediate problem is handled. The letter goes out. Everyone involved believes they've done the right thing.
Then, somewhere in the following days or weeks, something goes wrong.
When you trace it back, nobody did anything obviously wrong. The GP assumed the hospital was still monitoring. The hospital assumed the GP had picked it up. The patient assumed someone was watching. All three were acting reasonably. All three were wrong.
This pattern is familiar to me. I've seen it in financial services when trades crossed settlement boundaries. I've seen it in government programmes when responsibility moved between departments. The mechanics are always the same: accountability that seemed solid turns out to have been resting on assumptions that nobody tested.
The difference in healthcare is that the consequences aren't regulatory fines or reputational damage. They're people getting hurt.
The comfortable diagnosis
When things go wrong in healthcare, we tend to reach for visible causes. A missed diagnosis. A delayed referral. A clinician under pressure. A patient who didn't present typically.
These are emotionally satisfying explanations because they suggest the problem is local and correctable. Fix the training. Fix the staffing. Fix the guideline. Move on.
But anyone who reads serious incident reports in volume starts to notice something uncomfortable. Most of the harm occurs in cases where clinicians follow guidance, patients engage appropriately, and systems technically work. The failure isn't in the decision. It's in what happens after the decision, and before the next one.
In finance, we learned this lesson the hard way. The 2008 crisis wasn't primarily about individual bad decisions. It was about what happened in the gaps between institutions, in the spaces where everyone assumed someone else was managing the risk. Healthcare is still learning this lesson, and the tuition fees are measured in lives.
A very ordinary gap
Consider a situation that happens thousands of times a day across the NHS and private sector.
A patient is assessed, reassured, and sent home. Monitoring is implied rather than explicit. The patient is told what to watch for. A follow-up may happen if needed.
At that moment, something important changes. The clinician no longer has active authority over the patient's care. The patient doesn't yet have a clear escalation path. The system has recorded an outcome, but not a responsibility state.
Everyone assumes continuity. Nobody verifies it.
The patient believes they're still under care. The clinician believes the episode is complete unless reactivated. The organisation believes the process has closed.
All three beliefs can coexist. All three can be held in good faith. And all three can be wrong simultaneously.
The state management problem
In the systems I've built in other sectors, we'd call this a state management problem. The record shows what happened, but it doesn't capture the current state of obligations and authorities. It's like a bank knowing every transaction that occurred but not being able to tell you the current balance. Technically comprehensive, practically useless for the question that matters.
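To make the analogy concrete, here is a minimal sketch. The event shapes are hypothetical, not drawn from any real EPR or banking system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Transaction:          # what a bank records
    amount: float           # signed: deposits positive, withdrawals negative

@dataclass
class ClinicalEvent:        # what most EPRs record
    when: datetime
    what: str               # "assessed", "discharged", "letter sent", ...

def balance(ledger: list[Transaction]) -> float:
    # A bank's ledger is a history, but it folds into a current state:
    # the balance is fully derivable from the transactions.
    return sum(t.amount for t in ledger)

def who_is_responsible(record: list[ClinicalEvent]) -> str | None:
    # The clinical record is also a history, but none of its events say
    # that responsibility changed hands, so there is nothing to fold.
    # The record is complete, and the answer still isn't in it.
    return None
```

The bank's question is answerable because every event that changes the balance is recorded as such. The clinical question is unanswerable, not merely unanswered, because the state-changing events were never captured.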
Why this isn't a communication problem
The instinctive response is to call this a communication failure. And communication is part of it. But framing it that way leads to solutions that don't work: more templates, more mandatory notifications, more tick-boxes confirming that information was shared.
Communication solutions assume someone knows they're supposed to communicate.
There's a deeper issue. If you look at how clinical safety literature actually defines a handoff, it's specific: the transfer of professional responsibility and liability. Not information. Liability. But every digital tool we've built for handoffs — the EPRs, the referral systems, the discharge summaries — transfers data. We've built infrastructure for one thing while the safety problem lives somewhere else entirely.
Healthcare today relies heavily on implicit transitions. Implicit handoffs. Implicit monitoring windows. Implicit ownership. These are socially negotiated, not structurally enforced. They work when teams are small, organisations are unified, and time horizons are short.
They fail when care stretches across organisations, across time, across professional boundaries, across digital systems that were never designed to share accountability.
This is why failures feel so hard to pin down after the fact. Everyone can show they acted reasonably. Nobody can clearly demonstrate who was responsible when it mattered most.
The myth of the end
Healthcare systems are good at starting things. Referrals get logged. Appointments get booked. Admissions get recorded. The infrastructure for initiation is mature and well-understood.
They're much worse at ending things cleanly.
What looks like an end is usually a transition. From active care to passive monitoring. From clinician-led observation to patient-led vigilance. From explicit authority to ambient responsibility. These transitions happen constantly, and they're rarely modelled explicitly.
In financial services, we learned to treat the end of a transaction as seriously as the beginning. Settlement isn't just the absence of trading; it's a distinct state with its own rules, obligations, and failure modes. The idea that you could simply stop tracking something because the active phase was complete would be considered negligent.
Healthcare hasn't made this shift. The discharge is treated as an ending when it's actually a transformation. The patient's relationship with the system changes character, but it doesn't disappear. The obligations thin out and become harder to locate, but they don't cease to exist.
And in that ambiguity, risk accumulates.
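What would it mean to model these transitions explicitly? Here is one sketch; everything in it, from the state names to the fields, is illustrative rather than a description of any existing system:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

class CareState(Enum):
    ACTIVE_CARE = auto()         # clinician-led observation
    PASSIVE_MONITORING = auto()  # patient-led vigilance, bounded in time
    CLOSED = auto()              # obligations explicitly ended, not assumed away

@dataclass
class Obligation:
    holder: str       # who is watching: a GP practice, the patient, a team
    watch_for: str    # what should trigger escalation
    until: date       # time as a first-class dimension, not an afterthought
    escalate_to: str  # the path back into active care

def discharge(watchers: list[Obligation]) -> tuple[CareState, list[Obligation]]:
    # Discharge modelled as a transformation rather than an ending: the
    # episode may leave ACTIVE_CARE only by naming who now holds which
    # obligations, and for how long.
    if not watchers:
        raise ValueError("cannot leave active care without naming a watcher")
    return CareState.PASSIVE_MONITORING, watchers
```

The design choice is the constraint, not the data model: in this sketch it is structurally impossible to end active care without saying who is now watching, for what, and until when.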
Why digital transformation hasn't touched this
Over the past two decades, healthcare has invested heavily in shared records, interoperability standards, dashboards, pathways, messaging tools. The visibility problem has been substantially addressed. Compared to twenty years ago, it's vastly easier to see what has happened to a patient across multiple settings.
But visibility isn't responsibility.
Most digital health systems are very good at remembering the past, and very bad at knowing the present. They can show you what happened. Almost none can tell you: at this moment, this person held clinical responsibility, under these conditions, with these obligations still in force.
This isn't a missing feature. It's a category error. We built systems to move clinical information, and assumed accountability would move with it. But information and liability are different things. You can copy data infinitely; you can't copy responsibility. It has to be transferred — offered by one party, accepted by another, recorded as a state change. None of our infrastructure thinks this way.
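A minimal sketch of what transfer, as opposed to copying, could look like. Every name here is hypothetical; no real referral system exposes this API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class HandoffState(Enum):
    OFFERED = auto()   # the referrer has proposed the transfer
    ACCEPTED = auto()  # the receiver now holds responsibility
    DECLINED = auto()  # responsibility stays, visibly, where it was

@dataclass
class Handoff:
    patient_id: str
    from_party: str
    to_party: str
    state: HandoffState = HandoffState.OFFERED
    history: list[tuple[datetime, HandoffState]] = field(default_factory=list)

    def _record(self, new_state: HandoffState) -> None:
        # Every change of hands is a recorded state change, not an inference.
        self.state = new_state
        self.history.append((datetime.now(timezone.utc), new_state))

    def accept(self) -> None:
        # Responsibility moves only on explicit acceptance; until then it
        # remains with the offering party by construction, not by assumption.
        self._record(HandoffState.ACCEPTED)

    def decline(self) -> None:
        self._record(HandoffState.DECLINED)
```

The code is trivial; the invariant is the point. At no moment between offer and acceptance is the answer to "who holds this patient?" undefined, which is exactly the property the implicit handoff lacks.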
When I talk to Clinical Safety Officers about their incident investigations, they often describe spending weeks reconstructing the responsibility picture from fragmentary evidence. The EPR tells them what was done. It doesn't tell them what should have been done, by whom, and whether the conditions for that obligation were still active.
Where harm actually lives
The most dangerous moments in modern healthcare aren't the dramatic ones. They're quiet. After discharge. Between providers. During monitoring periods. When symptoms are ambiguous but persistent. When escalation criteria are understood socially but not enforced operationally.
These aren't edge cases. This is normal operations. And it's exactly where organisational boundaries stop being administrative conveniences and start being clinical risk surfaces.
The Francis Report, the Ockenden Review, the Kirkup investigations — read them closely and you see this pattern repeatedly. Not malice. Not incompetence. But a series of reasonable actions by reasonable people that somehow added up to harm, because the spaces between those actions weren't safe.
The boundaries between organisations are particularly dangerous. When a patient moves from one provider to another, they cross a line that is administratively significant but often clinically invisible. The referring organisation believes responsibility has transferred. The receiving organisation may not know they've received it. The patient certainly doesn't know the difference.
In a world of Integrated Care Systems, neighbourhood health teams, and virtual wards, these boundaries are multiplying. Every new collaboration is also a new risk surface. Every partnership that isn't explicitly modelled is an implicit assumption waiting to fail.
This isn't about blame
I want to be clear about something. The gap isn't caused by bad people. It isn't caused by uncaring organisations. It isn't caused by clinicians who need to be told to try harder.
It's structural.
It emerges because responsibility is implicit rather than explicit. Because time is treated as an afterthought rather than a first-class dimension of clinical risk. Because evidence is recorded but not contextualised. Because organisational boundaries are treated as administrative conveniences rather than clinical risk surfaces.
Telling people to be more careful won't fix structural problems. Better intentions don't compensate for missing infrastructure. The most conscientious clinician in the world can't maintain accountability that the system isn't designed to support.
In finance, we eventually learned that you can't rely on individual vigilance to manage systemic risk. You have to build the infrastructure that makes the right thing easier than the wrong thing. Healthcare is starting to have this conversation, but the infrastructure isn't there yet.
Naming the problem
If we keep talking about healthcare failure only in terms of individual decisions and individual errors, we'll keep missing where harm actually accumulates.
The real work isn't making better decisions in isolation. It's making the spaces between decisions safer, clearer, and more accountable. It's treating the transition as seriously as the intervention. It's building infrastructure that knows who is responsible right now, not just what happened in the past.
That's the gap. It's structural, it's pervasive, and it's largely invisible to systems that weren't designed to see it.
Until we learn to see it, we'll keep stepping into it. Even with the best intentions. Even with the best people. Even with ever-increasing investment in systems that solve the wrong problem.
The good news is that this is a solvable problem. Other sectors have solved it. The concepts exist. The patterns are known. Healthcare doesn't need to invent something new; it needs to recognise that a problem it has been treating as inevitable is actually a design choice.
And design choices can be changed.
Next in this series: Why handoffs aren't moments but risk surfaces — and why primary-care-to-primary-care transitions may be the most dangerous gaps of all.