Healthcare engineers spend a lot of time thinking about flow. Data should move from one point to another, safely, quickly, and with provenance intact. But what happens when the flow stops? What happens when the patient's device — our "cube" — goes dark, or the network simply isn't there?
That's been one of our challenges this past week: designing systems that remain trustworthy even when silence is the only signal.
The Cube–Edge–Central Trinity
Our architecture assumes three states:
- Cube: the patient's mobile device, the closest thing to the lived experience.
- Edge: the gateway, the bridge that sees intermittent flows and must decide what to hold, what to escalate, what to discard.
- Central: the persistent clinical record, the system of truth that expects consistency.
It sounds neat when written down, but in practice it means acknowledging that the cube isn't always available. Patients sleep, phones lock, radios switch off, batteries die. Unlike the neat, continuous flows of finance or e-commerce, health data comes in tides.
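To make the shape concrete, here is a minimal sketch of the three tiers. The names (CubeState, Cube, Edge, Central) are illustrative rather than our production model; the point it tries to capture is that the cube's availability is a first-class state, not something inferred from missing traffic.

```python
# A minimal, illustrative sketch of the three tiers (hypothetical names,
# not our production model). Key idea: the cube's availability is modelled
# explicitly rather than inferred from the absence of traffic.
from dataclasses import dataclass, field
from enum import Enum, auto


class CubeState(Enum):
    ONLINE = auto()    # device reachable, data flowing
    SLEEPING = auto()  # expected silence: phone locked, radio off, night time
    UNKNOWN = auto()   # unexplained silence: nothing heard, nothing scheduled


@dataclass
class Cube:
    """The patient's device: closest to the lived experience, often offline."""
    device_id: str
    state: CubeState = CubeState.UNKNOWN


@dataclass
class Edge:
    """Gateway: decides what to hold, escalate, or discard as data arrives in tides."""
    held_changes: dict[str, list[dict]] = field(default_factory=dict)


@dataclass
class Central:
    """Persistent clinical record: expects consistency, not constancy."""
    records: list[dict] = field(default_factory=list)
```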
Coping With Silence
The natural instinct is to treat silence as absence — nothing to do, nothing to see. But in healthcare, silence is itself a data point.
- Was something missed?
- Should we escalate?
- Or is this exactly what we expected?
We've been experimenting with queue systems, changeLists, and delayed handshakes — mechanisms that allow the edge and central layers to hold state while the cube slumbers. Think of it like leaving notes at the bedside: "When you wake, here's what you missed. Here's what you need to catch up on."
It's less about guaranteeing constant availability, and more about designing trust into discontinuity.
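As a rough sketch of that "notes at the bedside" pattern, and assuming hypothetical names (EdgeQueue, ChangeList) rather than our actual implementation, holding state for a sleeping cube might look something like this:

```python
# A hedged sketch of holding state at the edge while a cube sleeps.
# Names are illustrative; the shape is the point: append to a per-device
# changelist while the cube is silent, drain it on the delayed handshake.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ChangeList:
    """Everything a sleeping cube needs to catch up on, in arrival order."""
    entries: list[dict] = field(default_factory=list)

    def append(self, change: dict) -> None:
        change["held_at"] = datetime.now(timezone.utc).isoformat()
        self.entries.append(change)


@dataclass
class EdgeQueue:
    """Per-device held state at the edge layer."""
    changelists: dict[str, ChangeList] = field(default_factory=dict)

    def hold(self, device_id: str, change: dict) -> None:
        # The cube is silent: leave a note rather than dropping the change.
        self.changelists.setdefault(device_id, ChangeList()).append(change)

    def handshake(self, device_id: str) -> list[dict]:
        # Delayed handshake: the cube wakes, acknowledges, and drains its notes.
        return self.changelists.pop(device_id, ChangeList()).entries


# Usage: the edge holds a schedule update while "cube-042" sleeps,
# then hands it over when the device checks back in.
edge = EdgeQueue()
edge.hold("cube-042", {"type": "schedule_update", "detail": "dose moved to 08:00"})
print(edge.handshake("cube-042"))  # -> the missed change, stamped with held_at
print(edge.handshake("cube-042"))  # -> [] (nothing further pending)
```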
Safety in the Gaps
One of the hardest parts is ensuring that safety doesn't degrade during silence. Clinical safety isn't optional. If a device goes quiet during an escalation, that silence can be the difference between reassurance and risk.
This is where our philosophy of evidence over claims comes in. We don't just want to say, "the system is resilient." We want to show, through logs, gates, and repeatable behaviours, that the system has thought about the gap. That it recorded the absence. That it can distinguish between "nothing happened" and "something happened, but we didn't hear it."
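One way to make that distinction concrete, sketched here with hypothetical names (SilenceKind, classify_gap) and under the assumption that each cube has an agreed check-in interval:

```python
# An illustrative sketch: silence is classified and logged, not ignored.
# The names and thresholds are assumptions for this example only.
import logging
from datetime import datetime, timedelta, timezone
from enum import Enum, auto

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edge.silence")


class SilenceKind(Enum):
    EXPECTED_QUIET = auto()  # "nothing happened": still inside the agreed window
    MISSED_SIGNAL = auto()   # "something may have happened, but we didn't hear it"


def classify_gap(last_seen: datetime, expected_interval: timedelta) -> SilenceKind:
    """Classify a silent period and record the gap as evidence, not absence."""
    gap = datetime.now(timezone.utc) - last_seen
    kind = (SilenceKind.EXPECTED_QUIET if gap <= expected_interval
            else SilenceKind.MISSED_SIGNAL)
    # The log line is the evidence: the system looked at the gap and named it.
    log.info("silence: last_seen=%s gap=%s classified=%s",
             last_seen.isoformat(), gap, kind.name)
    return kind


# Usage: a cube that should check in every 4 hours has been quiet for 6.
quiet_since = datetime.now(timezone.utc) - timedelta(hours=6)
assert classify_gap(quiet_since, timedelta(hours=4)) is SilenceKind.MISSED_SIGNAL
```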
Chaos as Teacher
One way we've been testing this is by breaking things deliberately. Borrowing from chaos engineering, we simulate device dropouts, queue corruption, and edge failures. We ask: what happens when the cube doesn't come back? What happens when the edge wakes up to find the central has moved on without it?
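A single chaos-style check, sketched in miniature (the names and thresholds here are illustrative, not our real test suite), might drop a cube mid-escalation and assert that the edge surfaces the stale hold instead of waiting forever:

```python
# Illustrative chaos-style test: the cube never returns, twelve hours pass,
# and the silent device must be surfaced rather than silently retained.
from datetime import datetime, timedelta, timezone


def stale_holds(held: dict, max_hold: timedelta, now: datetime) -> list:
    """Return device ids whose oldest held change has outlived the hold budget."""
    return [device for device, changes in held.items()
            if now - min(c["held_at"] for c in changes) > max_hold]


def test_cube_never_returns():
    start = datetime.now(timezone.utc)
    # The edge took custody of an unacknowledged escalation for cube-007...
    held = {"cube-007": [{"type": "escalation", "held_at": start}]}

    # ...and the cube never handshakes again. Twelve hours pass.
    later = start + timedelta(hours=12)

    # Trust means the gap surfaces: the silent device is named, not forgotten.
    assert stale_holds(held, max_hold=timedelta(hours=2), now=later) == ["cube-007"]


test_cube_never_returns()
```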
These aren't comfortable scenarios, but they're necessary. Trust is only trust if it survives the bad days as well as the good.
The Abstract Lesson
What we've learned this week is deceptively simple: healthcare systems can't assume presence. They must be designed to cope with absence — to treat silence as information, not failure.
It's not glamorous engineering. It doesn't look good in a slide deck. But it's the difference between systems that collapse when the patient goes offline and systems that carry forward, gracefully, until the tide comes back in.
Closing Thought
So if you're reading this in a break between commits or a badly managed stand-up, here's the thought I'll leave you with:
In healthcare, the moments of silence matter as much as the signals.
Building for that truth is hard, but it's exactly the kind of engineering challenge worth losing sleep over.