
I Built a Healthcare App That Works Without the Internet — Part 2: The Sync Engine

Posted on: 2026-03-09 · #rust

In part one I covered the overall structure of Easy HMS — three runtimes, shared business logic, Cloudflare Workers + D1 in the cloud. This part is about the hardest problem: keeping everything in sync when clients go offline.

The specific scenario I kept thinking about while building this: a receptionist creates three patient records and books two appointments while the clinic’s internet is down. An hour later the connection returns. In the meantime, a nurse at a different clinic updated one of those same patients through the web app.

When the desktop reconnects, what happens? Does anything get lost? What if the same record was edited in two places? What if the request succeeds on the server but the response never makes it back to the client?

These are the questions the sync engine has to answer.


The queue: local first, always

The foundation is a sync_queue table in the local SQLite database. Every write — patient created, appointment updated, record deleted — is queued there in the same transaction as the actual data change.

Walk through what that looks like in practice:

User creates a patient — offline:

```sql
-- Both rows written in one transaction
BEGIN;
INSERT INTO patients (id, first_name, version, ...)
  VALUES ('abc-123', 'Jane', 1, ...);

INSERT INTO sync_queue
  (table_name, record_id, action, synced, created_at)
  VALUES ('patients', 'abc-123', 'INSERT', 0, datetime('now'));
COMMIT;
```

The write is atomic. If either insert fails, neither is committed. The UI returns immediately — no network call happens here.

The key property is atomicity. The data write and the queue entry either both happen or neither does. The queue row persists across app restarts. So even if the app crashes immediately after a write, the event is not lost — the next time the app opens, the worker will find it and push it.
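Crash recovery falls out of this naturally: on startup the worker just looks for rows that were never marked synced. A minimal Rust sketch (struct and field names are illustrative, not the actual schema):

```rust
// Hypothetical shape of a sync_queue row; names are illustrative.
#[derive(Debug)]
struct QueueEntry {
    table_name: String,
    record_id: String,
    action: String, // "INSERT" | "UPDATE" | "DELETE"
    synced: bool,
}

/// What the worker does on startup: find everything not yet pushed.
fn pending(queue: &[QueueEntry]) -> Vec<&QueueEntry> {
    queue.iter().filter(|e| !e.synced).collect()
}

fn main() {
    let queue = vec![
        QueueEntry {
            table_name: "patients".into(),
            record_id: "abc-123".into(),
            action: "INSERT".into(),
            synced: false, // app crashed before this was pushed
        },
        QueueEntry {
            table_name: "patients".into(),
            record_id: "def-456".into(),
            action: "UPDATE".into(),
            synced: true, // already acknowledged by the server
        },
    ];
    // The unsynced INSERT for abc-123 survives the restart.
    assert_eq!(pending(&queue).len(), 1);
    assert_eq!(pending(&queue)[0].record_id, "abc-123");
}
```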


Push: sending changes to the cloud

When the network is available, a background worker drains the queue in batches of up to 500 events and sends them to POST /api/v1/sync/push. The server processes each event through a fixed pipeline.

On the desktop side, the sync_queue table acts as an outbox: a background worker reads pending events, pushes them, receives per-event results, and marks successful events synced = 1. On the Cloudflare Worker side, each pushed event passes through the pipeline in order:

  • auth + rate limit
  • idempotency check
  • authorization
  • conflict detection
  • upsert + audit
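The client half of this is plain chunking. A std-only Rust sketch (BATCH_SIZE and the event type are placeholders):

```rust
const BATCH_SIZE: usize = 500;

/// Split the pending outbox into push batches of at most 500 events.
fn batches<T>(events: &[T]) -> impl Iterator<Item = &[T]> + '_ {
    events.chunks(BATCH_SIZE)
}

fn main() {
    // 1201 queued events become three POSTs: 500 + 500 + 201.
    let events: Vec<u32> = (0..1201).collect();
    let sizes: Vec<usize> = batches(&events).map(|b| b.len()).collect();
    assert_eq!(sizes, vec![500, 500, 201]);
}
```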

The two most important stages are the idempotency check and conflict detection: they’re what makes retries safe and stale writes impossible.

Idempotency means the same event can be sent as many times as needed. Each event carries a UUID idempotency key generated once per operation. The server stores (actor_user_id, idempotency_key) → result in a sync_idempotency table. On a retry, the server returns the cached result without touching the database again.
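A sketch of that cache in Rust, modelling the sync_idempotency table as an in-memory map (the key types and the string result are simplified assumptions):

```rust
use std::collections::HashMap;

type UserId = u64;
type IdemKey = String; // a UUID in the real system

/// Cached results keyed by (actor_user_id, idempotency_key),
/// standing in for the sync_idempotency table.
struct IdempotencyCache {
    results: HashMap<(UserId, IdemKey), String>,
}

impl IdempotencyCache {
    fn new() -> Self {
        Self { results: HashMap::new() }
    }

    /// On a retry, return the cached result without running the
    /// operation again; otherwise run it once and remember the result.
    fn apply<F: FnOnce() -> String>(&mut self, user: UserId, key: &str, op: F) -> String {
        if let Some(cached) = self.results.get(&(user, key.to_string())) {
            return cached.clone(); // retry: no database work
        }
        let result = op();
        self.results.insert((user, key.to_string()), result.clone());
        result
    }
}

fn main() {
    let mut cache = IdempotencyCache::new();
    let mut writes = 0;
    let r1 = cache.apply(1, "key-1", || { writes += 1; "ok".into() });
    // Same user, same key: the operation does not run a second time.
    let r2 = cache.apply(1, "key-1", || { writes += 1; "ok".into() });
    assert_eq!((r1.as_str(), r2.as_str(), writes), ("ok", "ok", 1));
}
```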

Conflict detection prevents an older version of a record from overwriting a newer one. There are two flavours depending on the table:

  • Version counter for patients and appointments — user-authored records where “how many times has this been edited” is meaningful.
  • updated_at timestamp for user profiles, addresses, and team records — system-managed objects where the timestamp is the natural signal.
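Both flavours reduce to the same comparison. A sketch, assuming the timestamp tables use the same "reject anything not strictly newer" rule as the version counter (the source only states the rule explicitly for versions):

```rust
/// Conflict check for version-countered tables (patients, appointments):
/// reject any write whose version is not strictly newer.
fn version_conflict(incoming: u64, existing: u64) -> bool {
    incoming <= existing
}

/// Assumed analogue for timestamp-tracked tables
/// (user profiles, addresses, team records).
fn timestamp_conflict(incoming_updated_at: i64, existing_updated_at: i64) -> bool {
    incoming_updated_at <= existing_updated_at
}

fn main() {
    assert!(version_conflict(4, 4));   // push of v4 when server holds v4: conflict
    assert!(version_conflict(3, 4));   // stale write: conflict
    assert!(!version_conflict(5, 4));  // re-applied on top: accepted
    assert!(timestamp_conflict(1000, 1200)); // older profile write rejected
}
```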

Pull: getting changes from other users

After pushing, the client asks the server for everything that changed since its last sync. It sends an opaque since cursor (an epoch timestamp) and gets back a grouped diff.

Suppose six events have happened since the client’s cursor (since = 0): Alice creates patient #1, Bob updates appointment #5, Alice updates patient #1, Carol adds a team member, Bob creates patient #2, and Carol updates an address. The first request, GET /sync/pull?since=0, fetches the oldest changes:

  • patient: Alice creates patient #1 (updated_at = 1000)
  • appt: Bob updates appointment #5 (updated_at = 1200)

and returns next_since = 1200. This is pull 1 of 3; the remaining events arrive on the next two pulls.

Each pull returns a next_since value; the client stores this as its checkpoint and uses it on the next request. If has_more is true, it pages immediately.
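The paging loop can be sketched like this, with a mock in-memory server standing in for GET /sync/pull (the page size of 2 and the record shape are assumptions for the example):

```rust
/// One page of a pull response (shape is illustrative).
struct PullPage {
    records: Vec<(String, i64)>, // (record id, updated_at)
    next_since: i64,
    has_more: bool,
}

/// Mock server: return up to two records newer than the cursor, oldest first.
fn pull(all: &[(String, i64)], since: i64) -> PullPage {
    let mut newer: Vec<(String, i64)> =
        all.iter().filter(|r| r.1 > since).cloned().collect();
    newer.sort_by_key(|r| r.1);
    let page: Vec<(String, i64)> = newer.iter().take(2).cloned().collect();
    // The cursor advances to the newest updated_at in this page.
    let next_since = page.last().map(|r| r.1).unwrap_or(since);
    let has_more = newer.len() > page.len();
    PullPage { records: page, next_since, has_more }
}

fn main() {
    let all: Vec<(String, i64)> = vec![
        ("patient-1".into(), 1000),
        ("appt-5".into(), 1200),
        ("patient-1".into(), 1400),
        ("team-member".into(), 1600),
        ("patient-2".into(), 1800),
        ("address".into(), 2000),
    ];
    // Client loop: store next_since as the checkpoint, page while has_more.
    let mut since = 0;
    let mut pulls = 0;
    let mut total = 0;
    loop {
        let resp = pull(&all, since);
        since = resp.next_since;
        total += resp.records.len();
        pulls += 1;
        if !resp.has_more {
            break;
        }
    }
    assert_eq!((pulls, total, since), (3, 6, 2000));
}
```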

Different entity types have different pull behaviour:

  • Patients and appointments are filtered by updated_at > since and paginated — a large clinic could have thousands of records.
  • Members and team members are returned as a full list on every pull — they change rarely and the list is small.
  • Organization is always included regardless of the cursor — the client always needs the current org state.

The client merges incoming records into its local DB with a version guard: it only upserts if the incoming version is greater than or equal to what’s already stored locally. This prevents a slow pull from overwriting a local write that happened between the pull request and the response arriving.
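A sketch of that merge guard (PatientRow is an illustrative subset of the real schema):

```rust
#[derive(Debug)]
struct PatientRow {
    id: String,
    first_name: String,
    version: u64,
}

/// Merge a pulled record into the local store: only upsert when the
/// incoming version is >= what is already stored locally.
fn merge(local: &mut Vec<PatientRow>, incoming: PatientRow) {
    if let Some(i) = local.iter().position(|r| r.id == incoming.id) {
        if incoming.version >= local[i].version {
            local[i] = incoming; // incoming is at least as new: upsert
        }
        // else: stale pull, keep the newer local write
    } else {
        local.push(incoming); // record not seen before: insert
    }
}

fn main() {
    let mut local = vec![PatientRow {
        id: "abc".into(),
        first_name: "Jane".into(),
        version: 5,
    }];
    // A slow pull arrives carrying version 4: the local version 5 wins.
    merge(&mut local, PatientRow { id: "abc".into(), first_name: "Janet".into(), version: 4 });
    assert_eq!(local[0].first_name, "Jane");
    // A genuinely newer version replaces it.
    merge(&mut local, PatientRow { id: "abc".into(), first_name: "Janet".into(), version: 6 });
    assert_eq!(local[0].first_name, "Janet");
}
```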


Conflict resolution

Most syncs are clean. The interesting case is when two clients edit the same record while disconnected from each other.

Alice edits a patient offline on her laptop (version 3 → 4). Meanwhile, Bob edits the same patient online (version 3 → 4, applied first). When Alice reconnects and pushes version 4, the server already holds version 4.
  • Client sends: version = 4 (Alice, offline)
  • Server holds: version = 4 (Bob, already applied)
  • Rule: reject if incoming_version ≤ existing_version
  • Resolution: Alice’s client gets a conflict reply. It pulls the latest state (Bob’s version), re-applies Alice’s changes on top as version 5, and retries.

The most important thing about this design is what it doesn’t do: it doesn’t automatically merge conflicting changes. A conflict is returned as an error. The client must pull the latest state and re-apply the user’s changes on top.

This is deliberate. In a healthcare context, silent automatic merges on clinical data are more dangerous than surfacing a conflict. A field-level merge that silently combines two versions of a patient’s medication list is not acceptable. Better to be explicit.
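The client-side recovery step amounts to a rebase: take the server’s latest state, re-apply the local edit on top, and push with the next version number. In practice the re-application is field-by-field and may involve the user; this sketch collapses it to a single field:

```rust
#[derive(Debug)]
struct Patient {
    version: u64,
    notes: String,
}

/// On a 'conflict' reply: pull the server's latest state, re-apply the
/// local edit on top of it, and retry with the next version number.
fn rebase(server_latest: &Patient, local_edit: &str) -> Patient {
    Patient {
        version: server_latest.version + 1,
        notes: local_edit.to_string(),
    }
}

fn main() {
    // Bob's v4 is already on the server; Alice's v4 push was rejected.
    let server = Patient { version: 4, notes: "Bob's note".into() };
    let retry = rebase(&server, "Alice's note");
    assert_eq!(retry.version, 5); // re-applied on top, pushed as v5
}
```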

The one exception is idempotent retries — those always succeed silently, because the server already has the answer cached.


The result

After a full sync cycle — local writes queued, pushed to the cloud, remote changes pulled back — every client converges on the same state. The receptionist’s offline records land in the cloud. The nurse’s update lands on the desktop. Conflicts are detected, not silently corrupted.

Part three covers the security model that makes all of this safe: the two-token architecture, how the server authorises sync events without trusting the payload, and the bootstrap pattern that lets a user create an organisation without being a member of it first.