I Built a Healthcare App That Works Without the Internet — Part 1: The Architecture

Posted on: 2026-02-23 · #rust

Easy HMS started from one hard rule: clinics must keep working when the internet is unstable. Registration, triage, and appointment lookup cannot fail because Wi-Fi drops.

That offline-first rule shaped every architectural decision in the system.

This is part one of a three-part series. It explains the architecture, the Rust monorepo shape, and why the API runs on Cloudflare Workers + D1.

Parts two and three cover sync and security in detail.


What the system does

Easy HMS supports multi-clinic healthcare groups. Each group has clinics, teams, and members with scoped roles. Core records are patients and appointments, with inventory and billing flows around them.

The desktop app is the primary client for daily operations. A web app targets the same product surface. Both synchronize through one shared cloud backend.

Example: a receptionist creates a patient while offline. The record is committed to local SQLite first. When connectivity returns, the change is pushed and merged without re-entry.


Three runtimes, one set of rules

The codebase is a Rust monorepo with three runtimes. Each runtime has its own SQLite-backed store. All business rules live in shared crates.

Shared Rust crates: easy-hms-core, shared, data-access, adapter-d1, adapter-sqlite-native, adapter-sqlite-wasm, and db-common.

The key layer is easy-hms-core. It defines traits for operations such as patient writes, appointment queries, and sync authorization.

Runtimes do not own domain logic. They own transport and persistence adapters that satisfy those traits.


The adapter pattern

The same repository traits have three adapter implementations: D1 in the cloud, rusqlite + SQLx on desktop, and SQLite WASM persisted in the browser's OPFS.

packages/core defines the repository traits: PatientRepository, AppointmentRepository, LocalSyncRepository, RemoteSyncRepository, and AuthRepository. Each runtime ships an adapter that implements them: apps/api, apps/desktop, and apps/web.

If patient validation changes, it changes once in easy-hms-core. Every runtime gets the same behavior because each adapter executes the same rule path.

The trade-off is adapter maintenance in three places. The upside is consistency across clients, which matters more in clinical workflows than minimizing adapter code.


Why Cloudflare Workers + D1?

The API started as axum + diesel on Postgres. Migrating to Cloudflare Workers + D1 was the largest architecture shift in the project.

The decision came from three practical constraints.

No server operations. Deploy with wrangler deploy, not VM patching and manual scaling. For a solo-maintained product, reducing operational overhead is critical.

D1 stays on SQLite semantics. Desktop already uses SQLite. Keeping cloud and client on the same SQL model preserves schema reuse, migration reuse, and query behavior.

Cost profile. Workers has a generous free tier, and clinic-scale traffic does not require dedicated compute.

D1’s main constraint is transaction flexibility compared with a traditional server database.

The sync pipeline handles this with explicit atomicity boundaries. For one event, the system writes the domain change and its idempotency marker in the same bounded unit before moving forward.


The shape of each runtime

| Runtime           | Language      | DB                         | Auth storage           |
| ----------------- | ------------- | -------------------------- | ---------------------- |
| Cloudflare Worker | Rust (wasm32) | D1 (Cloudflare SQLite)     | Cookie or Bearer token |
| Tauri Desktop     | Rust          | SQLite via rusqlite + SQLx | OS keyring             |
| React Browser     | TypeScript    | SQLite WASM in OPFS        | Bearer token           |

They all speak the same API contract. They all use the same sync protocol. The shared crates enforce that.


What this enables

Offline-first is mostly a data problem: commit locally, sync later, and keep user flows independent from network status.

This architecture is ideal when correctness under poor connectivity matters more than minimizing adapter complexity.

Part two breaks down the sync engine: queueing, push and pull flow, cursor tracking, and conflict handling when two users edit the same record.