Flow 03

Provenance Flow

Governing information lineage and trust across clinical handovers

The Provenance Flow ensures that information lineage, confidence, and currency are visible at the point of clinical decision — particularly when data has been transformed, aggregated, or inferred before reaching a clinician.

In distributed care, clinical information rarely arrives untouched. It may have been extracted from documents, summarised by algorithms, enriched from multiple sources, or generated by decision support systems. Without explicit provenance, clinicians act on outputs they cannot interrogate — and safety reviews cannot reconstruct what information was available, trusted, or assumed at the moment a decision was made.

Provenance is not merely metadata. It is the basis for clinical trust.

Governance responsibilities

The Provenance Flow establishes clear governance responsibilities whenever clinical information crosses organisational boundaries or passes through inference, transformation, or aggregation.

This includes:

  • responsibility for declaring source, method, and confidence at the point of generation
  • visibility of transformations, derivations, and algorithmic contributions to downstream consumers
  • explicit handling of uncertainty, staleness, or known limitations
  • responsibility for preserving provenance across inter-organisational transfer

Provenance governance ensures that clinicians can distinguish between observed facts, derived conclusions, and algorithmic suggestions — and that safety cases can trace decisions back to their informational basis.
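As an illustration only, the sketch below shows one possible shape for a provenance declaration that travels with a clinical data item. The names used here (ProvenanceTag, ClinicalDataItem, and their fields) are hypothetical; the framework does not prescribe a schema, and a real implementation might express the same concepts through a standard such as HL7 FHIR's Provenance resource.

```typescript
// Hypothetical sketch only: one possible shape for a provenance declaration
// attached to a clinical data item. Field names are illustrative, not a
// prescribed schema.

/** How the information came into being. */
type ProvenanceCategory = "observed" | "derived" | "algorithmic-inference";

interface ProvenanceTag {
  category: ProvenanceCategory;   // observed fact, derived conclusion, or algorithmic suggestion
  source: string;                 // originating organisation or system
  method: string;                 // how the value was produced, e.g. "document-extraction", "llm-summary"
  confidence?: number;            // declared confidence in [0, 1], where the producer can state one
  knownLimitations?: string[];    // explicit caveats, e.g. "scanned handwriting excluded"
  recordedAt: string;             // ISO 8601 timestamp of generation, so currency can be checked rather than assumed
  basedOn: string[];              // identifiers of the upstream items this was transformed or aggregated from
}

interface ClinicalDataItem {
  id: string;
  value: unknown;                 // the clinical content itself
  provenance: ProvenanceTag;      // lineage travels with the data across organisational boundaries
}
```

The design point is that the declaration travels with the item rather than living only in an upstream audit store, so downstream consumers and safety reviewers can inspect lineage at the point of use.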

Common failure modes

When provenance governance is implicit or absent, predictable failure modes emerge:

  • Clinicians trust outputs they cannot verify. Information appears authoritative but its basis is opaque.
  • Stale or superseded data is acted upon as current. Currency is assumed, not declared.
  • AI-generated content is indistinguishable from clinician-authored records. The boundary between human judgement and algorithmic inference dissolves.
  • Safety reviews cannot reconstruct the informational state at decision time. Post-incident analysis collapses into speculation.
  • Errors propagate silently across system boundaries. Upstream mistakes are inherited and amplified downstream.

These failure modes are particularly acute in AI-augmented pathways, where outputs may appear precise but carry hidden uncertainty, training bias, or contextual limitations.
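Continuing the same illustrative sketch, an AI-generated summary would be declared as algorithmic inference, with its confidence and limitations stated rather than implied, so that it cannot silently pass as a clinician-authored record. The identifiers and values below are invented for the example.

```typescript
// Illustrative only: an AI-generated summary declared as algorithmic inference.
// All identifiers and values are invented for the example.
const aiSummary: ClinicalDataItem = {
  id: "summary-0142",
  value: "Patient stable post-procedure; no anticoagulant changes noted.",
  provenance: {
    category: "algorithmic-inference",
    source: "discharge-summary-service",   // hypothetical producing system
    method: "llm-summary",
    confidence: 0.82,
    knownLimitations: ["summarised from scanned documents; handwritten sections excluded"],
    recordedAt: "2024-05-14T09:30:00Z",
    basedOn: ["doc-8871", "doc-8872"],
  },
};
```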

Clinical safety implications

Data Coordination Board (DCB) 0129 / 0160 framing

From a clinical safety perspective, the Provenance Flow mitigates systemic hazard classes that arise when clinical decisions are made on unreliable, misunderstood, or unverifiable information.

In DCB 0129 / 0160 terms, these hazards include:

  • Clinical decisions based on stale or superseded information. Actions taken on data that no longer reflects the patient's current state.
  • Algorithmic outputs treated as observed facts without visibility of confidence or derivation. Inferences are mistaken for measurements; suggestions are mistaken for diagnoses.
  • Inability to reconstruct the informational basis of a decision after harm. Safety cases fail because lineage was never recorded.

By making provenance explicit and inspectable at runtime, the Provenance Flow supports safety arguments for AI-augmented care — ensuring that algorithmic contributions are visible, bounded, and auditable rather than silently absorbed into the clinical record.
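One way to make that inspectability concrete, sketched under the same assumed schema with arbitrary example thresholds, is a gate that evaluates declared provenance before content is surfaced to a clinician and flags stale or algorithmically derived items instead of presenting them as plain fact.

```typescript
// Illustrative runtime gate: decide how, and whether, to present an item based
// on its declared provenance. Thresholds are arbitrary examples, not guidance.
const MAX_AGE_HOURS = 72;
const MIN_CONFIDENCE = 0.7;

type PresentationDecision =
  | { display: true; banner?: string }
  | { display: false; reason: string };

function assessForDisplay(item: ClinicalDataItem, now: Date = new Date()): PresentationDecision {
  const p = item.provenance;

  // Currency is checked against the declared timestamp, never assumed.
  const ageHours = (now.getTime() - new Date(p.recordedAt).getTime()) / 3_600_000;
  if (Number.isNaN(ageHours)) {
    return { display: false, reason: "Provenance timestamp missing or unreadable" };
  }
  if (ageHours > MAX_AGE_HOURS) {
    return { display: false, reason: `Information is ${Math.round(ageHours)} hours old; currency not assured` };
  }

  // Algorithmic outputs are never presented as observed facts.
  if (p.category === "algorithmic-inference") {
    if (p.confidence === undefined || p.confidence < MIN_CONFIDENCE) {
      return { display: false, reason: "Algorithmic output lacks sufficient declared confidence" };
    }
    return { display: true, banner: `AI-derived via ${p.method}; declared confidence ${p.confidence}` };
  }

  // Derived conclusions carry a visible marker of their lineage.
  if (p.category === "derived") {
    return { display: true, banner: `Derived from ${p.basedOn.length} upstream item(s)` };
  }

  return { display: true };
}
```

Whether a flagged item is hidden, demoted, or shown with a warning is a clinical safety decision for the deploying organisation; the point of the sketch is that the decision is made against declared provenance rather than against assumptions about it.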

Provenance failures often cascade into failures in the Clinical Intent and Alert flows: when the basis of information is unclear, downstream systems cannot correctly interpret its meaning, triggering inappropriate alerts or suppressing critical ones.

This is not a constraint on innovation. It is the governance substrate that makes innovation safe to deploy.

Future iterations will add worked examples and assurance artefacts as the framework is applied in practice.