There's a moment in every safety-critical industry where the conversation shifts.

It stops being about better training. Better checklists. Better culture. More vigilance.

It starts being about what happens when all of that fails anyway.

That shift is uncomfortable. It means accepting that competent, well-trained professionals will make errors — not occasionally, but predictably. It means acknowledging that vigilance has limits. That attention degrades. That the systems we've built assume a level of human consistency that doesn't survive contact with reality.

Most industries resist this shift for as long as they can.

Then something fails badly enough that they can't.


The hierarchy nobody talks about

Occupational safety has a framework called the Hierarchy of Controls. It ranks interventions by effectiveness — strongest to weakest.

At the top: elimination — remove the hazard entirely.

Then substitution — replace it with something less dangerous.

Then engineering controls — physical or system-level constraints that prevent harm regardless of human behaviour.

Only then, near the bottom: administrative controls — training, procedures, policies, warnings.

And finally, personal protective equipment — the last line when everything else has failed.

This hierarchy exists because decades of evidence show that administrative controls are structurally unreliable. Not because people ignore them. Because they depend on attention, memory, and consistency in environments that systematically degrade all three.

The further down the hierarchy you place your reliance, the more you're betting on human performance under conditions that systematically erode it.

What this means for DCB0129/DCB0160 compliance

In the context of DCB0129 (manufacturer) and DCB0160 (deploying organisation), reliance on administrative controls — training, warnings, alert pop-ups — creates a structurally weak Clinical Safety Case. Engineering controls (hard stops, forcing functions, system-level constraints) provide stronger evidence for your Hazard Log and reduce the assurance burden on the Clinical Safety Officer. If your safety case depends primarily on "the clinician will be trained to...", you are building on the weakest tier of the hierarchy.
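One way to make that visible at review time is for the Hazard Log to record which tier of the hierarchy each control sits on, and to flag hazards whose strongest mitigation is administrative. A minimal sketch, assuming a hypothetical TypeScript data model; the field names and the example hazard are illustrative, not mandated by the DCB standards:

```typescript
// Hypothetical encoding of the Hierarchy of Controls in a Hazard Log entry,
// so reliance on the weakest tiers is visible at safety-case review.

enum ControlTier {
  Elimination = 1,    // strongest
  Substitution = 2,
  Engineering = 3,
  Administrative = 4,
  PPE = 5,            // weakest
}

interface Control {
  description: string;
  tier: ControlTier;
}

interface HazardLogEntry {
  hazardId: string;
  description: string;
  controls: Control[];
}

// Flag entries whose strongest control is administrative or weaker.
function reliesOnVigilance(entry: HazardLogEntry): boolean {
  const strongest = Math.min(...entry.controls.map(c => c.tier));
  return strongest >= ControlTier.Administrative;
}

const tenfoldDoseHazard: HazardLogEntry = {
  hazardId: "HAZ-012",
  description: "Tenfold paediatric dosing error at prescribing",
  controls: [
    { description: "Prescriber training on dose calculation", tier: ControlTier.Administrative },
    { description: "Interruptive alert on out-of-range dose", tier: ControlTier.Administrative },
  ],
};

console.log(reliesOnVigilance(tenfoldDoseHazard)); // true: mitigated by vigilance alone
```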


Healthcare's inverted hierarchy

Healthcare IT is dominated by administrative controls dressed up as technology.

Alerts. Warnings. Pop-ups. Soft stops. Policies that require clinicians to check, verify, and document. E-learning modules. Competency frameworks. Sign-off workflows.

All of it assumes that if we inform people of the risk, they'll manage it correctly.

The hierarchy tells us this is the weakest possible approach.

Consider the pattern: an EPR identifies a potential problem — a drug interaction, an abnormal result, a contraindication. It displays a CDS (Clinical Decision Support) warning. The clinician, mid-task, clicks through it. The system logs that the warning was shown, technically satisfying the hazard log requirement. The workflow continues.

When harm occurs, the record shows that the clinician was "informed." The system did its job. The human failed.

But the human was always going to fail. Not because they're careless — because they're human. They're handling twelve things at once. They've seen that alert two hundred times this month. They're relying on pattern recognition to survive the shift.

The system was designed for a clinician who doesn't exist.


How aviation crossed the line

On 1 July 2002, two aircraft collided over Überlingen, Germany. Seventy-one people died.

The cause wasn't mechanical failure. It was a conflict between two sources of authority.

Air traffic control instructed one aircraft to descend. The onboard collision avoidance system — TCAS — instructed it to climb. The pilot followed the human voice. The other aircraft, following its own TCAS advisory, was already descending. Both aircraft descended into each other.

The investigation made a structural finding: when human instruction conflicts with machine-enforced safety logic, the machine must win.

Post-Überlingen, global aviation rules changed. Pilots are now required to follow TCAS resolution advisories even when they contradict air traffic control. The system assumes that in a time-critical conflict, the machine's situational awareness is more reliable than the human's.

This wasn't a failure of trust in pilots. It was an acknowledgment that the operating environment exceeds human cognitive limits in specific, predictable ways.

The industry chose to engineer around that limitation rather than train against it.


How rail learned the same lesson

In 1999, a train passed a signal at danger near Ladbroke Grove. Thirty-one people died.

The inquiry found that training alone could not solve the problem of signals passed at danger. The driver's task — monitoring multiple visual inputs, managing fatigue, processing incomplete information at speed — was inherently error-prone.

The solution wasn't better training.

It was the Train Protection and Warning System: trackside transmitters paired with on-train equipment that apply the emergency brakes automatically if a train passes a signal at danger, or approaches one too fast.

The system assumes the driver will eventually fail. It prevents that failure from becoming catastrophe.

TPWS doesn't ask the driver to be more vigilant. It removes the need for vigilance at that specific decision point.


The soft stop problem

Healthcare hasn't made this shift.

A child receives a tenfold medication overdose. The electronic prescribing system displayed a warning — a "soft stop." The clinician overrode it. The system recorded the override. The child was harmed.

The post-incident review found no individual negligence. The clinician was experienced and competent. The override was one of dozens that day. The alert looked like every other alert.

The system informed. It logged. It enabled the workflow to continue.

What it didn't do was intervene. It had no concept of a "hard stop" — a constraint that prevents progression until specific conditions are met. It couldn't distinguish between a routine override and a catastrophic error.

The technology performed exactly as designed. The design was wrong.
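The distinction is straightforward to express in software. A minimal sketch, in hypothetical TypeScript: a soft stop returns a dismissible warning and lets the workflow continue, while a hard stop refuses to progress until a defined condition is met. The thresholds and names are assumptions for illustration, not taken from any specific EPR.

```typescript
// Illustrative contrast between a soft stop (warn and continue) and a
// hard stop (block until a condition is met). Thresholds are hypothetical.

interface Prescription {
  drug: string;
  doseMg: number;
  typicalMaxDoseMg: number; // expected upper bound for this patient
}

type CheckResult =
  | { kind: "proceed" }
  | { kind: "soft-stop"; warning: string }   // clinician can click through
  | { kind: "hard-stop"; reason: string };   // workflow cannot progress

function checkDose(rx: Prescription): CheckResult {
  const ratio = rx.doseMg / rx.typicalMaxDoseMg;

  // Catastrophic and unambiguous: block outright at the point of entry.
  if (ratio >= 10) {
    return { kind: "hard-stop", reason: `${rx.drug}: dose is ${ratio.toFixed(0)}x the expected maximum` };
  }

  // Elevated but plausible: inform, log, and leave room for clinical judgement.
  if (ratio > 1) {
    return { kind: "soft-stop", warning: `${rx.drug}: dose exceeds the expected maximum` };
  }

  return { kind: "proceed" };
}

console.log(checkDose({ drug: "morphine", doseMg: 100, typicalMaxDoseMg: 10 }));
// { kind: "hard-stop", ... }: the tenfold error cannot propagate past this point.
```

The point is not the particular threshold. It is that the hard stop is a property of the system, not of the clinician's attention.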


Why we resist — and why resistance is failing

There's a reason healthcare resists engineering controls.

Clinicians are trained to exercise judgement. They work in conditions of genuine uncertainty. They deal with exceptions constantly. The idea of a machine blocking a clinical decision feels like a loss of professional authority.

And sometimes the resistance is justified. Medicine isn't manufacturing. Patients aren't standardised. Context matters in ways that are hard to encode.

But the resistance is increasingly hard to sustain.

Alert fatigue is now a recognised patient safety risk — not a theoretical concern but a documented cause of harm. Workarounds are endemic. The cognitive load on frontline staff is unsustainable. And the hybrid paper-digital environment creates failure modes that no amount of training will fix.

The question isn't whether healthcare needs engineering controls. The evidence is already clear.

The question is which decisions should be protected by systems that don't rely on human vigilance — and how we identify them.


Where engineering controls belong

Not every clinical decision should be constrained by a hard stop. That would be unworkable and clinically dangerous.

But some decision points have characteristics that make them candidates for engineering controls rather than administrative ones:

High severity, low ambiguity — the harm is catastrophic and the correct action is clear. A tenfold dosing error in paediatrics. A known lethal allergy. A contraindicated drug combination.

High frequency, low attention — the task is routine enough that vigilance naturally degrades. Discharge medication reconciliation. Handover acknowledgement. Result review.

Cross-boundary transitions — responsibility is transferring between parties and the risk of assumption is high. Referrals. Discharges. Escalations.

At these points, the system should not inform and hope. It should constrain and verify.
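One way to make that triage concrete is to treat the three patterns above as explicit criteria and screen decision points against them. The sketch below is a hypothetical illustration; the cut-offs and field names are assumptions, not a validated rubric.

```typescript
// Hypothetical triage: which decision points warrant engineering controls
// rather than training and alerts? Criteria mirror the three patterns above.

interface DecisionPoint {
  name: string;
  severityIfWrong: "low" | "moderate" | "catastrophic";
  correctActionAmbiguity: "low" | "high";   // is the right action clear in advance?
  occurrencesPerShift: number;              // proxy for vigilance decay
  crossesResponsibilityBoundary: boolean;   // referral, discharge, escalation
}

function warrantsEngineeringControl(d: DecisionPoint): boolean {
  const highSeverityLowAmbiguity =
    d.severityIfWrong === "catastrophic" && d.correctActionAmbiguity === "low";
  const highFrequencyLowAttention = d.occurrencesPerShift >= 20; // assumed cut-off
  return highSeverityLowAmbiguity || highFrequencyLowAttention || d.crossesResponsibilityBoundary;
}

console.log(warrantsEngineeringControl({
  name: "Paediatric opioid dose entry",
  severityIfWrong: "catastrophic",
  correctActionAmbiguity: "low",
  occurrencesPerShift: 5,
  crossesResponsibilityBoundary: false,
})); // true
```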


The CSO's dilemma

Clinical Safety Officers are often asked to sign off on risks mitigated purely by training and procedural controls. "The clinician will be trained to check." "The user will be alerted." "The policy requires verification."

This places an unfair burden on the CSO. They are being asked to assure a safety case whose effectiveness depends entirely on sustained human vigilance — the weakest tier of the hierarchy. When the inevitable failure occurs, the safety case becomes a liability document rather than a protection.

Moving to engineering controls protects the CSO by shifting assurance from human performance to system reliability. A hard stop that prevents a tenfold dosing error doesn't require the CSO to make a judgement call about whether training is "sufficient." The hazard is structurally mitigated. The safety case is stronger. The DCB0129 Clinical Safety Case Report documents a control that works regardless of the operating conditions.


What this means for MVRT

Minimum Viable Responsibility Transfer isn't just about making handovers explicit. It's about recognising which elements of handover are too critical to leave to administrative controls.

Explicit acceptance isn't a policy requirement — it's an engineering control. The system doesn't progress until the receiving party confirms.

Time-bounded escalation isn't a guideline — it's a constraint. Silence triggers action, not assumption.

Hard stops for critical risks aren't warnings — they're structural prevention. The error cannot propagate because the system won't allow it.
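A compressed sketch of how those three mechanisms could exist as system constraints rather than policy text. The state names, the thirty-minute window, and the escalate() hook are assumptions for illustration, not a specification of MVRT.

```typescript
// Illustrative sketch: responsibility transfer as a state machine with
// explicit acceptance, time-bounded escalation, and a hard stop on
// unacknowledged critical risks. Names and timings are hypothetical.

type TransferState = "proposed" | "accepted" | "escalated";

interface Handover {
  patientId: string;
  criticalRisks: string[];     // e.g. a known lethal allergy
  risksAcknowledged: boolean;
  state: TransferState;
  proposedAt: number;          // epoch milliseconds
}

const ACCEPTANCE_WINDOW_MS = 30 * 60 * 1000; // assumed 30-minute window

// Explicit acceptance: the transfer cannot progress until the receiving
// party confirms, and critical risks must be acknowledged first (hard stop).
function accept(h: Handover): Handover {
  if (h.criticalRisks.length > 0 && !h.risksAcknowledged) {
    throw new Error("Hard stop: critical risks must be acknowledged before acceptance");
  }
  return { ...h, state: "accepted" };
}

// Time-bounded escalation: silence triggers action, not assumption.
function checkEscalation(h: Handover, now: number, escalate: (h: Handover) => void): Handover {
  if (h.state === "proposed" && now - h.proposedAt > ACCEPTANCE_WINDOW_MS) {
    escalate(h);
    return { ...h, state: "escalated" };
  }
  return h;
}
```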

This is the shift the hierarchy of controls demands. Moving the most critical failure points out of the "training and vigilance" category and into the "engineered out" category.

Not because clinicians can't be trusted. Because the operating environment is too complex to rely on sustained human attention at every decision point.


The design question

The hardest ethical decisions in safety-critical systems aren't the ones humans make in the moment.

They're the ones made earlier — by the people who design the system. The decisions about which failure modes to engineer out, which to warn about, and which to leave to professional judgement.

Get that wrong, and you've built a system that blames humans for failures it was designed to permit.

Get it right, and you've built a system that protects both patients and clinicians — by reserving human judgement for the decisions that genuinely require it.

That's the infrastructure healthcare needs. Not systems that inform and log. Systems that intervene and prevent.


Next in this series: The Handover That Never Happened — why responsibility dissolves at organisational boundaries and disappears entirely at constitutional ones.

Julian Bradder

Founder, Inference Clinical

30 years in digital transformation, cloud infrastructure, and enterprise architecture. Deep expertise in clinical safety (DCB 0129/0160), FHIR interoperability, and building systems for regulated healthcare environments.