I Built a Healthcare App That Works Without the Internet — Part 3: The Hard Parts
Part one covered the architecture. Part two covered the sync engine. This part covers the things that kept me up at night: how do you make a sync endpoint that accepts arbitrary data from clients without becoming a security hole?
The short answer: you don’t trust the payload.
The threat model
The sync push endpoint accepts batches of up to 500 data events from authenticated clients. Each event says: apply this INSERT/UPDATE/DELETE to this table for this record.
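To make the discussion concrete, here is a minimal sketch of what such a payload could look like. The field names and the validation helper are illustrative assumptions, not the app's actual wire format:

```typescript
// Illustrative shape of a sync push payload. Field names are
// assumptions for this sketch, not the app's actual wire format.
type SyncOp = "INSERT" | "UPDATE" | "DELETE";

interface SyncEvent {
  table: string;                  // e.g. "patient", "appointment"
  recordId: string;               // primary key of the affected row
  op: SyncOp;
  data?: Record<string, unknown>; // column values for INSERT/UPDATE
}

interface SyncPushRequest {
  events: SyncEvent[];
}

// Server-side guard for the batch limit of 500 events per push.
function validateBatchSize(req: SyncPushRequest): boolean {
  return req.events.length > 0 && req.events.length <= 500;
}
```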
The naive implementation trusts the payload's claimed organisation IDs and applies whatever arrives. That's a problem: a buggy client, or a malicious one, could claim membership in a different organisation and modify records it should never see.
So the server resolves everything from the database.
Authorization: the server decides scope
Every sync event goes through the same fixed authorization pipeline: the server authenticates the actor, resolves the record's organisation scope from the database, and checks the actor's membership in that organisation.
The critical property is the scope resolution: org scope is always inferred from the database, never from the payload. For a patient event, the server looks up patient.organization_id in D1. For an appointment, it follows appointment.team_id → team.organization_id. The payload's claimed org is ignored.
This means a client can’t grant itself access to another organisation’s records just by sending the right JSON. The server is always the authority.
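The core of that check can be sketched as follows. The lookup interface and function names are illustrative (the real code queries D1 directly); only the shape of the logic comes from the article:

```typescript
// Sketch of server-side org-scope resolution. The Db interface and
// function names are assumptions standing in for real D1 queries.
interface Db {
  getPatientOrg(patientId: string): string | null;
  getAppointmentTeam(appointmentId: string): string | null;
  getTeamOrg(teamId: string): string | null;
  isMember(userId: string, orgId: string): boolean;
}

// The payload's claimed org is deliberately never consulted:
// scope always comes from the database.
function resolveOrgScope(db: Db, table: string, recordId: string): string | null {
  switch (table) {
    case "patient":
      return db.getPatientOrg(recordId);
    case "appointment": {
      const teamId = db.getAppointmentTeam(recordId);
      return teamId ? db.getTeamOrg(teamId) : null;
    }
    default:
      return null; // unknown tables fail authorization
  }
}

function authorize(db: Db, actorId: string, table: string, recordId: string): boolean {
  const orgId = resolveOrgScope(db, table, recordId);
  return orgId !== null && db.isMember(actorId, orgId);
}
```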
The bootstrap problem
There’s one edge case that required special handling: creating a new organisation.
When a user creates their first org, they need to add themselves as an admin member. But they’re not a member yet — so the normal authorization check would reject the membership event.
The solution is the bootstrap scan. Before processing any event in a batch, the server scans the entire payload for member records where user_id == actor and role == admin. Those org IDs are added to a bootstrap_orgs set that bypasses the membership check — but only for this single request.
This lets a user bootstrap an org in one atomic push: create the org, add themselves as admin, and create the first team — all in a single batch.
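A minimal version of the bootstrap scan might look like this (event and field names are illustrative, following the convention used in the article):

```typescript
// Sketch of the bootstrap scan: before processing a batch, collect
// org IDs for which the actor is creating themselves an admin
// membership. These bypass the membership check for this request only.
interface BatchEvent {
  table: string;
  data: Record<string, unknown>;
}

function collectBootstrapOrgs(events: BatchEvent[], actorId: string): Set<string> {
  const orgs = new Set<string>();
  for (const e of events) {
    if (
      e.table === "member" &&
      e.data.user_id === actorId &&
      e.data.role === "admin" &&
      typeof e.data.organization_id === "string"
    ) {
      orgs.add(e.data.organization_id);
    }
  }
  return orgs;
}
```

Note the scope of the exception: the set is built per request and discarded afterwards, so it can never widen access beyond the single atomic push.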
Two tokens, two lifecycles
The desktop app stores two tokens, each with a distinct purpose: a session token that authenticates the UI, and a dedicated sync_token that authenticates the background sync worker.
The separation exists because of a specific failure mode: the background sync worker should keep running even if the user’s session expires.
If sync used the same token as the UI session, token rotation would interrupt any in-flight sync job. By using a separate sync_token — stored in the OS keyring, independent of the session lifecycle — sync can run continuously in the background. The auth token rotates; the sync worker doesn’t notice.
The server validates the two tokens independently. Both are issued at sign-in, but they live different lives.
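The invariant is easy to state in code. This is a toy illustration of the separation, not the app's actual token handling:

```typescript
// Toy illustration of the two-token split. Names are assumptions;
// the real sync_token lives in the OS keyring, not in memory.
interface TokenStore {
  sessionToken: string | null; // rotates with the UI session
  syncToken: string | null;    // long-lived, independent lifecycle
}

// Rotating the session token must never touch the sync token,
// so an in-flight background sync is never interrupted.
function rotateSession(store: TokenStore, newToken: string): TokenStore {
  return { ...store, sessionToken: newToken };
}
```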
The trade-offs
Every architectural decision in this project involved giving something up. Here’s an honest accounting:
The trade-off I feel most acutely is the lack of automatic field-level merge. The current version guard is an all-or-nothing check on the whole record. If Alice and Bob both edit the same patient offline (Alice updates the phone number, Bob updates the address), only one change can win; the other surfaces as a conflict.
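The whole-record guard boils down to something like this (a minimal illustration, not the app's actual conflict code):

```typescript
// Minimal sketch of an all-or-nothing version guard (illustrative).
interface VersionedRecord {
  version: number;
  fields: Record<string, unknown>;
}

// An incoming write applies only if the client saw the current
// version; otherwise the entire record is flagged as a conflict.
function applyGuarded(
  current: VersionedRecord,
  incoming: VersionedRecord,
): { status: "applied" | "conflict"; record: VersionedRecord } {
  if (incoming.version !== current.version) {
    return { status: "conflict", record: current };
  }
  return {
    status: "applied",
    record: { version: current.version + 1, fields: incoming.fields },
  };
}
```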
A smarter system would merge non-conflicting fields automatically. That’s doable. It would require tracking field-level versions or timestamps, and writing merge logic per entity type. I haven’t done it yet because the current behaviour is at least safe and predictable — conflicts surface explicitly rather than silently combining data in unexpected ways.
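One possible shape of that smarter system, purely as a sketch: track a per-field timestamp and take the most recent writer of each field. To be clear, none of this exists in the app; the names and the last-writer-wins policy are assumptions for illustration:

```typescript
// Hypothetical field-level merge using per-field timestamps
// (last-writer-wins per field). Not implemented in the app.
type Fields = Record<string, unknown>;
type FieldClock = Record<string, number>; // field -> last-modified (ms)

function mergeFields(
  base: Fields,
  ours: Fields,
  ourClock: FieldClock,
  theirs: Fields,
  theirClock: FieldClock,
): Fields {
  const merged: Fields = { ...base };
  const names = new Set([...Object.keys(ours), ...Object.keys(theirs)]);
  for (const name of names) {
    const ourT = ourClock[name] ?? 0;
    const theirT = theirClock[name] ?? 0;
    // Take whichever side touched the field more recently.
    merged[name] = ourT >= theirT ? ours[name] ?? theirs[name] : theirs[name] ?? ours[name];
  }
  return merged;
}
```

In the Alice-and-Bob example above, both edits would survive: Alice's phone number and Bob's address land in the merged record, and only a write to the *same* field would still need explicit conflict handling.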
What I’d change
Field-level merge for non-critical fields. Phone numbers, addresses, notes — these could be merged automatically without risk to clinical data integrity. A conflict on a medication code should still surface explicitly.
Server-side sync queue visibility. Right now there’s no way to see how much unsynced work is sitting on client devices across an organisation. That matters for support: if a clinic has been offline for a week, how many events are pending? The architecture has no answer.
Formal schema migration contract. Schema changes need to land on both D1 and local SQLite. Today that’s a manual coordination problem. A more explicit migration versioning system — where the client knows which schema version the server expects — would prevent silent drift.
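For the migration contract, the simplest possible handshake would have the client declare the schema version it was built against and have the server refuse to sync across a mismatch. A hypothetical sketch (nothing like this exists in the codebase; the constant and status names are invented):

```typescript
// Hypothetical schema-version handshake: the client reports its
// schema version; the server refuses to sync across a mismatch
// instead of drifting silently. Names and values are illustrative.
const SERVER_SCHEMA_VERSION = 12; // illustrative constant

type SchemaCheck = "ok" | "client_upgrade_required" | "server_behind";

function checkSchemaVersion(clientVersion: number): SchemaCheck {
  if (clientVersion === SERVER_SCHEMA_VERSION) return "ok";
  return clientVersion < SERVER_SCHEMA_VERSION
    ? "client_upgrade_required"
    : "server_behind";
}
```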
Wrapping up
The offline-first constraint was the right starting point. It forced every layer to be explicit about where data lives, when it moves, and what happens when things disagree.
The sync engine that came out of it is not especially clever — it’s a queue, a push loop, a pull cursor, and a handful of conflict rules. But it’s predictable, retryable, and auditable. For a healthcare application, those properties matter more than cleverness.
The full source is at github.com/kudakwashe-mupeni/easy-hms — the sync code lives in apps/api/src/worker_runtime/sync/ and apps/desktop/src/features/sync/.