Key Takeaways

In 2025, researchers submitted Freedom of Information requests to 239 NHS organisations asking a simple question: how many of the digital health technologies you deploy have been assured against the mandatory clinical safety standards, DCB 0129 and DCB 0160?

The answers, published in the Journal of Medical Internet Research (JMIR), were alarming. Across these organisations, 14,747 active digital health technology deployments were identified. Of these, 70.1% had no documented clinical safety assurance. Not inadequate assurance. Not partial assurance. No assurance at all.

These are not experimental pilot projects. These are production systems - electronic health records, clinical decision support tools, referral management systems, e-prescribing platforms, diagnostic ordering systems - running in NHS organisations, processing patient data, influencing clinical decisions, and operating without the clinical safety cases that DCB 0129 and DCB 0160 have mandated since 2012.

This finding is significant in its own right. But it understates the problem - in two ways.

First, DCB 0129 and 0160 assess safety within a single organisation. They do not assess what happens when these systems operate across organisational boundaries. And most of them do.

Second, the JMIR study examined NHS organisations. But DCB 0129 applies to all manufacturers of health IT used in the provision of NHS-funded care, and DCB 0160 applies to all organisations deploying such systems - including private providers operating under NHS contracts, private hospitals treating NHS-funded patients, and digital health platforms processing NHS referrals. CQC's fundamental standards apply to all registered providers, not just NHS trusts. For private healthcare organisations operating entirely outside the NHS, the DCB standards may not apply directly, but CQC Regulation 12 (safe care and treatment) and Regulation 17 (good governance) still require clinical safety assurance for health IT systems. The compliance gap identified in the NHS almost certainly exists in the private sector as well - and at private sector boundaries, it is likely worse, because private organisations lack the standardised infrastructure (MESH, NHS Spine, DSPT) that at least partially governs NHS data flows.

This is the gap our Seven Flows Boundary Governance Audit is designed to address - across NHS, private, and cross-sector boundaries alike.

What DCB 0129 and 0160 actually cover

What is DCB 0129?

DCB 0129 is the NHS clinical safety standard for manufacturers of health IT systems. It requires manufacturers to establish a clinical risk management process, maintain a hazard log, and produce a clinical safety case demonstrating that clinical risks have been identified, assessed, and mitigated before the system is deployed in NHS-funded care.

What is DCB 0160?

DCB 0160 is the NHS clinical safety standard for organisations deploying health IT. It requires the deploying organisation to appoint a Clinical Safety Officer (CSO), maintain its own hazard log for the deployment context, and ensure that clinical risks arising from the deployment are identified and managed throughout the system's lifecycle.

Both standards work well within a single organisation. A Trust deploys an e-prescribing system. The Trust's CSO identifies that the system interacts with the pharmacy dispensing system and with the patient's allergy record. These interactions create hazards. The hazards are logged, risk-assessed, and mitigated. The clinical safety case documents this process.

The problem is that modern health IT rarely operates within a single organisation.

Where the boundary gap appears

Consider a teleradiology platform. A GP orders an X-ray. The image is acquired at a diagnostic centre, transmitted to a teleradiology provider for reporting, and the report is sent back to the GP. Three organisations. Two boundaries.

Under DCB 0129, the platform manufacturer maintains a clinical safety case. Under DCB 0160, each deploying organisation maintains its own safety case. But the boundaries - where the image crosses from diagnostic centre to platform, and where the report crosses from platform to GP - are not explicitly addressed by either standard.

Our Seven Flows methodology is designed to assess exactly this type of gap. Applied to the teleradiology boundary, the assessment would examine the Provenance flow - can the receiving organisation verify the source, authorship, and integrity of what it receives? For most diagnostic boundaries, the answer is partial: the author is identified but no structured metadata accompanies the image or report through the crossing. If an image is compressed during transmission and clinical information is degraded, the receiving clinician has no provenance trail to detect this. Under our scoring model, that is a Level 2 at best: a documented process exists but is untested and lacks hazard log connection.

The Clinical Intent flow - does the receiving organisation know precisely why the data was shared and what clinical action is expected? - presents a similar picture at diagnostic boundaries. The referral states the clinical question, but typically in free text rather than structured coding. When the report comes back, the GP may receive a narrative finding without structured action codes, leaving interpretation and urgency to individual judgement.

And Alert & Responsibility - our critical MVRT flow - exposes the most dangerous gap. A critical finding (an unexpected tumour on a routine chest X-ray) is transmitted back to the GP, but there is typically no mechanism to confirm the GP has seen it, has understood its significance, and has actioned it within a clinically safe timeframe. The diagnostic provider's responsibility ends at transmission. The GP's responsibility begins at review. Between these two points, the patient with a critical finding is clinically unowned.

No individual organisation's hazard log captures these boundary-specific risks. The diagnostic centre's hazard log addresses its deployment. The teleradiology platform's hazard log addresses its product. The GP practice's hazard log addresses its environment. The space between them is ungoverned.

This pattern repeats across private healthcare boundaries. A private hospital group operating multiple sites under a single brand may use different EPR systems at different sites, with patient records crossing between them during transfers or multi-site treatment pathways. Each site has its own CQC registration, its own clinical safety case, its own hazard log. The inter-site boundary - where data crosses between systems that may handle clinical coding, allergy alerts, and medication records differently - sits in the same governance gap as the teleradiology example. When a PE firm acquires multiple healthcare providers and integrates them, the number of these internal boundaries multiplies, and the governance challenge intensifies precisely because the corporate assumption is that "we're all one company now" - while the regulatory reality is that each entity retains its independent obligations.

Why this matters for Private Healthcare

The JMIR study examined NHS organisations, but DCB 0129 applies to all manufacturers of health IT used in NHS-funded care - including systems sold to private providers. DCB 0160 applies to all deploying organisations, including private hospitals treating NHS-funded patients.

Private healthcare groups face additional boundary risks: multiple EPR systems across sites, proprietary integration portals instead of MESH/Spine, and PE-driven consolidation that multiplies internal boundaries while assuming "one company" governance covers them. It doesn't - each CQC-registered entity retains independent obligations.

Our Private Healthcare Boundary Assessment and PE Investor Due Diligence are designed specifically for these cross-entity governance gaps.

The cascade effect the scoring model reveals

One of the most important aspects of assessing boundaries systematically is that structural dependencies between governance dimensions become visible. Our methodology applies cascading failure logic: if an upstream flow fails, dependent downstream flows are automatically capped - regardless of how well they appear to function in isolation.

In clinical safety terms, this cascade effect matters enormously.

Identity Verification Failure: verification doesn't propagate

An organisation verifies a patient's identity using PDS lookup at the point of data entry. When the data crosses to another organisation, the receiving system may re-verify - or it may assume verification was done upstream. If Identity scores below Level 2 at a boundary, Consent and Provenance are automatically capped - because you cannot meaningfully verify consent or data provenance for a patient whose identity you haven't independently confirmed.

Clinical Intent Lost in Translation: cascades to responsibility failure

A referral is generated with structured clinical information in one system. When it crosses to another organisation's system, the structured data may be rendered as free text, fields may map incorrectly, or clinical coding may translate between different terminologies. If Clinical Intent scores below Level 2, Alert & Responsibility and Service Routing are both capped - reflecting the structural reality that you cannot responsibly accept a patient or route them appropriately when you don't understand why they were sent to you.

Alert & Responsibility Routing Failure: alerts fail silently

A clinical alert is generated in one system and needs to reach a clinician in another organisation. If the alert routing crosses an organisational boundary, the mechanisms for delivery, acknowledgement, and escalation may differ between the two systems. An alert that is high-priority in the sending system may enter a general workflow queue in the receiving system. Our Alert & Responsibility assessment specifically tests for this: is there confirmed receipt? Is there a clinically timed escalation pathway? If not, this is a MVRT failure - and under our scoring model, a boundary that cannot demonstrate Minimum Viable Responsibility Transfer cannot score above Level 2 on this flow.

Outcome Data Feedback Failure: data doesn't flow back

This is perhaps the most consequential failure. When an organisation refers a patient, the outcome of care rarely flows back in a structured way. Without outcome data, the same boundary failures repeat indefinitely because no governance system captures them. The boundary is a one-way door. Our Outcome flow is designed to expose this - and for the majority of healthcare boundaries today, the evidence would support a score of 0 or 1.

The cascade model ensures that a scorecard tells the truth about structural dependency. An organisation might see apparently decent individual scores on Consent or Provenance - but when cascade adjustments are applied, the scorecard reveals that upstream Identity or Clinical Intent failures structurally undermine those scores. This is information that no internal governance review produces.
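The cascade rules described above can be sketched as a small scoring routine. This is an illustrative model only: the flow names come from this article, but the exact cap values, data shapes, and function names are assumptions for the sketch, not Inference Clinical's actual implementation.

```python
# Illustrative sketch of cascading-cap scoring. Scores are 0-4 maturity
# levels. Assumed rule: if an upstream flow scores below Level 2, its
# dependent flows are capped at the upstream score (the precise cap
# value is an assumption for this sketch).
CASCADE_RULES = {
    "Identity": ["Consent", "Provenance"],
    "Clinical Intent": ["Alert & Responsibility", "Service Routing"],
}

def apply_cascades(raw_scores: dict, mvrt_met: bool) -> dict:
    """Return cascade-adjusted scores for one boundary."""
    adjusted = dict(raw_scores)
    for upstream, dependents in CASCADE_RULES.items():
        if adjusted[upstream] < 2:  # upstream flow fails
            for flow in dependents:
                adjusted[flow] = min(adjusted[flow], adjusted[upstream])
    # A boundary that cannot demonstrate Minimum Viable Responsibility
    # Transfer cannot score above Level 2 on Alert & Responsibility.
    if not mvrt_met:
        adjusted["Alert & Responsibility"] = min(
            adjusted["Alert & Responsibility"], 2
        )
    return adjusted

# A boundary with weak Identity: Consent and Provenance look decent in
# isolation, but the cascade caps them.
scores = {
    "Identity": 1,
    "Consent": 3,
    "Provenance": 3,
    "Clinical Intent": 3,
    "Alert & Responsibility": 3,
    "Service Routing": 3,
    "Outcome": 1,
}
print(apply_cascades(scores, mvrt_met=False))
```

Running the sketch on this boundary drops Consent and Provenance to 1 and Alert & Responsibility to 2 - the structural dependency made visible, exactly as the scorecard is intended to do.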

The CSO's impossible position - and what the audit gives them

Clinical Safety Officers are the named individuals responsible for clinical safety assurance under DCB 0129 and 0160. The role carries personal liability. If a patient comes to harm as a result of a health IT failure that should have been identified in the clinical safety process, the CSO's professional accountability is directly engaged.

This places CSOs in an impossible position at organisational boundaries. Their statutory obligation is to ensure that clinical risks from health IT are identified and managed. But the risks at boundaries fall outside the scope of any single organisation's clinical safety case.

A CSO at an NHS Trust knows that their Trust sends discharge summaries to GP practices via MESH. They know the summary is generated by their EPR system, which has a DCB 0160 clinical safety case. But what happens after the summary leaves their system? Does the GP's system display it correctly? If the summary contains a critical medication change, is the GP's system configured to flag it? The Trust's CSO cannot answer these questions because the answers depend on another organisation's systems and configuration.

The CSO cannot ignore these risks - they are real and they affect patients. But the CSO has no existing mechanism to assess, log, or mitigate risks that manifest in another organisation's environment.

This position is arguably more acute in the private sector. An NHS Trust CSO at least operates within a standardised ecosystem - MESH for messaging, NHS Spine for identity, DSPT for data security baseline. A CSO at a private healthcare group may face boundaries where data crosses via email, proprietary portals, or API integrations with no standardised governance infrastructure at all. The boundary-specific risks are the same or greater, but the tools to manage them are fewer.

This is specifically what our audit is designed to address. The methodology requires stakeholder engagement on both sides of every boundary - the sending organisation's CSO and the receiving organisation's clinical and technical leads. Evidence is cross-referenced: if the sending organisation claims a process exists but the receiving organisation is unaware of it, the evidence is flagged as unreconciled and the lower assessment prevails.

The audit output is designed to give a CSO three things they currently lack. First, a boundary-specific risk assessment with statutory traceability - findings that can be added directly to their hazard log with references to DCB 0160, CQC Regulation 12, and UK GDPR Article 6 obligations. Second, a scored maturity assessment showing where each boundary sits on a 0-to-4 scale, with cascading failure adjustments that reveal structural dependencies. Third, evidence of cross-organisation reconciliation - confirmation that the governance at each boundary has been assessed from both sides, not just their own.

We are clear about what the audit does and does not do. It evaluates governance control maturity at boundaries. It does not certify clinical safety outcomes. It does not transfer the CSO's statutory responsibility. It provides evidence that assists CSOs in understanding the scope of their boundary-specific obligations - obligations that exist under current law but that no existing assessment methodology addresses.

What boundary-aware clinical safety looks like

Addressing the boundary gap does not require rewriting DCB 0129 and 0160. The standards are sound within their scope. What is required is an additional layer of clinical safety governance that sits at the boundary.

Boundary-specific hazard identification

For each organisational boundary, the clinical risks that arise specifically from the crossing must be identified. These are not the same as risks within either organisation's system. They are the risks of translation, transmission, identity propagation, alert routing, and responsibility transfer that manifest only at the point of crossing. Our methodology identifies these risks systematically across seven dimensions.

Cross-organisation hazard log reconciliation

Both organisations at a boundary should maintain boundary-specific entries in their hazard logs, cross-referencing each other. Currently, hazard logs are entirely internal documents. Boundary governance requires them to connect. Our audit output is structured to support this - findings are formatted so they can be adopted into both organisations' hazard logs with the same reference numbers and statutory traceability.

MVRT as a clinical safety requirement

The transfer of clinical responsibility at a boundary should be treated as a clinical safety event. The hazard log should include the risk that responsibility transfer is implicit, assumed, or unconfirmed. In our scoring model, Minimum Viable Responsibility Transfer is the non-negotiable threshold for any boundary to achieve "Managed" status on Alert & Responsibility.

Technology assessment at the boundary

The infrastructure at each boundary must be assessed for its ability to enforce governance requirements. Can it verify identity across organisations? Can it propagate consent? Can it deliver alerts with confirmed receipt? Can it enforce MVRT? Our technology assessment evaluates this - and where the infrastructure cannot enforce the governance, the remediation roadmap specifies what needs to change, with a funded business case using available cloud migration incentives to offset costs.

The regulatory trajectory

The JMIR study's finding that 70.1% of deployments lack assurance has attracted regulatory attention. Remediation will be required - organisations will need to establish clinical safety cases for their existing deployments.

But remediation that addresses only internal deployments will miss the boundary dimension. An organisation that creates clinical safety cases for all its internal systems but ignores the boundary risks is still exposed to the most dangerous category of clinical safety failure - the category that no individual organisation's governance captures.

The Data Use and Access Act 2025 strengthens the information standards framework. As interoperability mandates increase the volume and velocity of data crossing organisational boundaries, boundary-specific clinical safety risks will increase proportionally. An organisation that has assessed its boundary risks, logged them, and implemented mitigations will have a clinical safety case that reflects reality. An organisation that has not will have a safety case that describes a system where data stays within the walls - a system that doesn't exist.

For Clinical Safety Officers - NHS and private sector

If you hold a CSO role - whether in an NHS Trust, a private hospital group, a digital health platform, or a diagnostic network - the practical implication is concrete. Your hazard log almost certainly does not address the risks at your organisational boundaries. This is not a criticism of your practice - the standards as currently written do not explicitly require it. But the risks are real, they affect patients, and they fall within your professional accountability.

The first step is to map the boundaries: identify every point where your organisation's health IT systems interact with another organisation's systems. For each boundary, seven questions structure the assessment. Can both organisations verify who the patient is at the point of crossing? Has consent been propagated for the specific boundary, not just for internal processing? Is the data's provenance maintained through the crossing? Is the clinical intent communicated in a structured, actionable form? Is responsibility explicitly transferred with bilateral confirmation? Is routing based on clinical criteria and governance, not just capacity? Does the originating organisation learn the outcome?
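The seven questions above can be captured as a simple working checklist for the mapping exercise. This is a hypothetical aid for organising the assessment, not a prescribed format - the flow names are from this article, everything else is an assumption.

```python
# Hypothetical checklist for one organisational boundary, structured
# around the seven questions above. Any "no" (or unanswered question)
# is a candidate hazard-log entry.
SEVEN_FLOW_QUESTIONS = {
    "Identity": "Can both organisations verify who the patient is at the point of crossing?",
    "Consent": "Has consent been propagated for this specific boundary, not just internal processing?",
    "Provenance": "Is the data's provenance maintained through the crossing?",
    "Clinical Intent": "Is the clinical intent communicated in a structured, actionable form?",
    "Alert & Responsibility": "Is responsibility explicitly transferred with bilateral confirmation?",
    "Service Routing": "Is routing based on clinical criteria and governance, not just capacity?",
    "Outcome": "Does the originating organisation learn the outcome?",
}

def gaps_for_boundary(answers: dict) -> list:
    """Return the flows answered 'no' or not answered - candidate hazard-log entries."""
    return [flow for flow in SEVEN_FLOW_QUESTIONS if not answers.get(flow, False)]

# Example: a discharge-summary boundary where only identity verification
# and routing hold up under scrutiny.
answers = {"Identity": True, "Service Routing": True}
print(gaps_for_boundary(answers))
# ['Consent', 'Provenance', 'Clinical Intent', 'Alert & Responsibility', 'Outcome']
```

One such checklist per boundary, completed jointly with the organisation on the other side, gives the mapping exercise a consistent shape before any scoring is attempted.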

The answers will almost certainly reveal gaps. Those gaps belong in your hazard log. And addressing them will require engagement with the organisations on the other side of each boundary - because boundary governance, by definition, cannot be done unilaterally.

That is what our audit facilitates. Both sides of the boundary, assessed together, evidence reconciled, findings traceable to statute, and a scorecard that gives your board a clear picture of where boundary governance stands and where it structurally fails.


Inference Clinical's Seven Flows Boundary Governance Audit produces per-boundary scorecards with cascading failure logic, cross-organisation evidence reconciliation, and statutory traceability to DCB 0129/0160, CQC, and UK GDPR. To assess your boundary governance, take the free Boundary Risk Score or book a scoping call.

Julian Bradder

Founder & CEO, Inference Clinical

30 years in digital transformation, cloud infrastructure, and enterprise architecture. Deep expertise in clinical safety (DCB 0129/0160), FHIR interoperability, and building systems for regulated healthcare environments.