Clinical safety isn't just a regulatory hurdle — it's the foundation of safe, scalable digital health. Too often, suppliers push responsibility onto Trusts, leaving safety treated as a paperwork exercise. In reality, accountability is shared: suppliers ensure systems are safe by design, and Trusts ensure they are safe in use.
Investing in living safety processes protects patients, prevents costly programme failures, and builds confidence with regulators and the public.
Two Frameworks, Shared Responsibility
The NHS defines two core clinical safety standards:
- Data Coordination Board (DCB) 0129 (Manufacturer responsibility) — suppliers must identify, document, and mitigate hazards when developing health IT systems.
- DCB 0160 (Deployment responsibility) — Trusts must assess safety in their own environment, where workflows, integrations, and risks may differ.
In practice, suppliers sometimes try to hand the whole responsibility to Trusts, but neither standard allows that: manufacturers ensure their system is safe by design, and Trusts ensure it is safe in the context where it is actually used.
Becoming "Baseline Ready"
For digital systems to progress safely through governance, three essentials should be in place:
- Clinical Safety — A hazard log that is version-controlled, visible, and actively maintained.
- Data Protection — Alignment with UK GDPR and local information governance (IG) processes.
- Digital Technology Assessment Criteria (DTAC) Evidence — Documentation that the system meets baseline NHS digital standards.
These aren't abstract requirements — they are concrete checks that can be evidenced before a project moves forward.
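As a minimal sketch of what "concrete and checkable" can look like, the structure below models a single hazard log entry as version-controllable data. The field names and the simple scoring are illustrative assumptions, not a prescribed DCB 0129 template; Trusts apply their own risk matrix.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum


class Severity(IntEnum):
    MINOR = 1
    SIGNIFICANT = 2
    CONSIDERABLE = 3
    MAJOR = 4
    CATASTROPHIC = 5


class Likelihood(IntEnum):
    VERY_LOW = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    VERY_HIGH = 5


@dataclass
class HazardEntry:
    """One row of a hazard log, kept in version control alongside the system."""
    hazard_id: str                 # e.g. "HAZ-012"
    description: str               # what could go wrong, in clinical terms
    cause: str                     # how the hazard could arise
    effect: str                    # potential impact on patient care
    existing_controls: list[str]   # mitigations already in place
    severity: Severity
    likelihood: Likelihood
    owner: str                     # named Clinical Safety Officer or delegate
    status: str = "open"           # open / mitigated / closed
    last_reviewed: date = field(default_factory=date.today)

    @property
    def risk_score(self) -> int:
        # Illustrative severity x likelihood product; a real hazard log
        # applies the Trust's agreed risk matrix instead.
        return int(self.severity) * int(self.likelihood)
```

Because each entry is plain data, it can be diffed, reviewed alongside system changes, and checked automatically before a governance gateway.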
From Compliance to Practice
Compliance documents that gather dust help no one. To make safety real:
- Keep hazard logs live — treat them as versioned artefacts reviewed at each system change.
- Make safety visible — surface key risks and mitigations on dashboards accessible to both clinical and digital leads (a sketch of such a view follows this list).
- Embed safety champions — beyond Clinical Safety Officers (CSOs), frontline clinicians must be empowered to flag issues early.
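One hedged way to keep the log live and visible, reusing the hypothetical HazardEntry sketch above (the 90-day interval and risk threshold are local choices, not mandated values):

```python
from datetime import date, timedelta


def dashboard_view(hazards: list,
                   review_interval: timedelta = timedelta(days=90),
                   risk_threshold: int = 12) -> dict:
    """Group hazards into the two views a safety dashboard usually leads with.

    Expects HazardEntry objects from the earlier sketch (risk_score,
    status and last_reviewed attributes).
    """
    today = date.today()
    open_hazards = [h for h in hazards if h.status != "closed"]
    return {
        # Highest-risk open hazards first, so they lead the conversation.
        "high_risk": sorted(
            (h for h in open_hazards if h.risk_score >= risk_threshold),
            key=lambda h: h.risk_score,
            reverse=True,
        ),
        # Anything not reviewed within the agreed interval is overdue.
        "overdue_review": [
            h for h in open_hazards
            if today - h.last_reviewed > review_interval
        ],
    }
```

Run at each system change or review meeting, something like this turns "keep the hazard log live" from an aspiration into a routine, scriptable step.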
Operational Realities
Phased Rollout — Safer practice is incremental: pilot on a ward, expand to a department, then scale to the whole Trust. Attempting Trust-wide deployment from day one is both unsafe and unrealistic.
Integration Risks — Patient Administration Systems (PAS), Electronic Patient Records (EPR), and diagnostic systems all fail differently. Understanding how systems behave under failure conditions is as important as how they work in normal use (a short sketch below makes this concrete).
The Paper Exercise Trap — Safety files must reflect the actual system in use, not just the idealised design.
The Set-and-Forget Problem — Every update can introduce new hazards. Safety is never complete — it is continuous.
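To make the integration point above concrete: one way to understand failure behaviour is to test it deliberately. The sketch below uses a hypothetical PAS lookup (the client, exception, and field names are assumptions, not a real interface) to check that a timeout degrades safely and visibly rather than returning stale or empty data.

```python
class PASTimeoutError(Exception):
    """Raised when the Patient Administration System does not respond in time."""


def fetch_demographics(pas_client, nhs_number: str) -> dict:
    """Return demographics, or an explicit 'unavailable' marker on failure.

    The design choice that matters: failure is visible to the clinician,
    never silently swallowed or papered over with stale data.
    """
    try:
        return pas_client.lookup(nhs_number)
    except PASTimeoutError:
        return {"nhs_number": nhs_number, "status": "PAS_UNAVAILABLE"}


def test_pas_timeout_degrades_safely():
    class FailingPAS:
        def lookup(self, nhs_number: str) -> dict:
            raise PASTimeoutError("simulated outage")

    result = fetch_demographics(FailingPAS(), "9999999999")
    assert result["status"] == "PAS_UNAVAILABLE"
```

The same pattern applies to EPR and diagnostic interfaces: write down how each one fails, then prove the system's response matches what the hazard log says should happen.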
NHS Challenges to Acknowledge
Resource Burden — Quarterly reviews and workshops are valuable but demanding. Trusts must weigh safety needs against stretched clinical and IT capacity. Leaner models and prioritisation are often necessary.
Delivery Pressure — Boards want quick wins. Foundational safety takes time. This tension needs open discussion rather than wishful timelines.
Stakeholder Management — Research often uncovers complexity where commissioners want simplicity. Translating complexity into phased, fundable steps is part of the job.
What's Often Missing
Cost Framework — Safety is sometimes seen as overhead. In reality, proactive safety management avoids expensive failures, delays, and reputational damage. Framing safety as value protection, not bureaucracy, helps boards justify the investment.
Incident Management — When things go wrong, there must be clear escalation pathways, investigation processes, and shared learning. Safety management is not just prevention — it's also response and improvement.
Governance Alignment — Safety processes must link into existing Trust governance structures: risk registers, incident reporting, quality committees. Creating parallel processes undermines adoption.
Change Control Integration — IT change control is often rigid. Safety checks must integrate into these workflows without creating unnecessary bottlenecks.
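As one illustrative way to wire safety into existing change control rather than alongside it (the file path, line convention, and 90-day threshold are assumptions to adapt locally), a pipeline step can refuse to proceed when the hazard log has not been reviewed recently:

```python
import sys
from datetime import date, timedelta
from pathlib import Path

# Hypothetical convention: the hazard log records its last review date
# on a line such as "last_review: 2025-01-15". Adapt to local practice.
HAZARD_LOG = Path("docs/hazard_log.md")
MAX_AGE = timedelta(days=90)


def hazard_log_reviewed_recently() -> bool:
    """True if the hazard log carries a review date within the agreed window."""
    if not HAZARD_LOG.exists():
        return False  # a missing log is itself a blocking finding
    for line in HAZARD_LOG.read_text().splitlines():
        if line.startswith("last_review:"):
            reviewed = date.fromisoformat(line.split(":", 1)[1].strip())
            return date.today() - reviewed <= MAX_AGE
    return False  # no recorded review date counts as a failure


if __name__ == "__main__":
    if not hazard_log_reviewed_recently():
        print("Blocking change: hazard log review is missing or stale.")
        sys.exit(1)
    print("Hazard log review check passed.")
```

Run as a normal step in the existing pipeline, a check like this adds seconds rather than bottlenecks, and keeps the safety file tied to the change that is actually shipping.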
Building a Safety Culture
The most important element is cultural: everyone must feel responsible for patient safety. Technology, processes, and documents matter — but culture is what determines whether hazards are spotted early and acted on.
Clinical safety standards are not tick-boxes. They are living practices, shared responsibilities, and cultural commitments.
In our next article, we'll explore the National Institute for Health and Care Excellence (NICE) Evidence Standards Framework (ESF) — showing how clinical safety, technical standards, and evidence requirements come together in practice.
Continue the series
Next: Evidence That Counts — NICE ESF and building credible evidence for NHS adoption.
Read Part 3