FAQ
The common thread: the data is owned and maintained somewhere else, it's too large or too dynamic to hardcode, and your channels just need fast read access.
- **Reference data in external platforms** — Facility configs, site settings, and routing rules managed in an EHR or master data platform by another team. They edit it in their tools; your channels just need to look it up.
- **Large standard nomenclature / mapping tables** — ICD, LOINC, SNOMED, zip codes. Tables with tens or hundreds of thousands of rows, of which only a small fraction of keys are ever looked up in practice. Lazy loading keeps memory proportional to actual use.
- **Code translation tables** — HL7 table value mappings, local-to-standard code conversions, payer/insurance ID mappings maintained by an interface team in a shared database.
- **Message routing and tenant configuration** — Facility routing tables, multi-site deployments where each site has different transformation rules stored in a shared config database.
- **Enrichment lookups** — Zip-to-state/county/region, drug formulary, and charge master lookups from a pharmacy or billing system's database.
- **Provider/NPI lookups** — Resolving provider identifiers to names, specialties, or routing destinations from a provider directory.
- **Compliance and filtering** — Reportable condition lists and suppression/exclusion lists maintained in an external system that change periodically.
For small, static lookups that rarely change, manually building a map in channel code or using OIE's built-in features is perfectly fine. This plugin is for when the data lives elsewhere, is too large to hardcode, or changes independently of your channels.
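For a concrete sense of that baseline, here is a minimal sketch of the hardcoded-map approach in a transformer step. It assumes an HL7 v2 message parsed into `msg` as usual; the table contents and variable names are illustrative only:

```javascript
// Tiny, rarely-changing lookup hardcoded directly in channel code.
// Perfectly adequate when the table is small and owned by this channel.
var sexCodeMap = {
    'M': 'Male',
    'F': 'Female',
    'O': 'Other',
    'U': 'Unknown'
};

var sexCode = msg['PID']['PID.8']['PID.8.1'].toString();
channelMap.put('sexText', sexCodeMap[sexCode] || 'Unknown');
```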
The plugin is a caching layer, not a data store. Manual inserts would create data that exists only in memory — lost on restart, invisible to other systems, no audit trail. The external database is the source of truth.
Reference data already lives in an existing database maintained by another team. Duplicating it creates two sources of truth with no sync guarantee. Let the data owners manage it with their existing tools.
A write API would bypass governance, validation, and audit controls on the source database. The cache should always reflect the authoritative source.
Many tables are large but only a fraction of keys are used. Lazy loading keeps memory proportional to demand. Cold-start cost is spread across the first few messages rather than blocking server startup.
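The load-on-miss idea can be pictured in plain channel JavaScript. This is only a sketch of the pattern, not the plugin's API; the driver, connection string, table, and map names are assumptions, and the plugin handles pooling, expiry, and statistics internally:

```javascript
// Lazy loading: only keys that are actually requested end up in memory.
var key = msg['MSH']['MSH.4']['MSH.4.1'].toString();        // e.g. sending facility

var cache = globalChannelMap.get('facilityCache');
if (cache == null) {
    cache = new java.util.concurrent.ConcurrentHashMap();  // starts empty at channel deploy
    globalChannelMap.put('facilityCache', cache);
}

var value = cache.get(key);
if (value == null) {                                        // first miss: one database query
    var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
        'org.postgresql.Driver', 'jdbc:postgresql://dbhost/refdata', 'user', 'pass');
    try {
        var params = new java.util.ArrayList();
        params.add(key);
        var rs = dbConn.executeCachedQuery(
            'SELECT dest_channel FROM facility_routing WHERE facility_id = ?', params);
        if (rs.next()) {
            value = rs.getString(1);
            cache.put(key, value);                          // every later lookup is a memory hit
        }
    } finally {
        dbConn.close();
    }
}
```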
Each cache may point at a different database with different capacity. Per-cache pools allow independent tuning and isolate failures. See Connection Ballooning During Cold Cache for tuning guidance.
Many vendors provide API endpoints for their data, and for some use cases those APIs are absolutely the right choice — particularly when the API provides business logic, real-time data, or context that a simple key-value lookup can't capture.
But not every lookup warrants an API call. For relatively static reference data — code tables, facility configs, mapping tables — the data doesn't change often enough to justify the overhead of a network round-trip on every request. A local cache hit returns in microseconds with no network dependency. An API call, even on a fast local network, adds latency, serialization overhead, and a runtime dependency on the vendor's service being available. When a channel processes high volumes and needs several lookups per message, that cost adds up quickly.
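As a rough illustration with assumed numbers: a channel handling 50,000 messages a day with 5 lookups each makes 250,000 lookups; at roughly 3 ms per API round-trip that is about 750 seconds of cumulative wait per day, versus well under a second for the same lookups served from an in-memory cache.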
The cache plugin is a good fit when the data is simple, relatively stable, and just needs to be looked up fast — regardless of whether the vendor also offers an API for it.
Cache hits are served directly from memory in microseconds — no network round-trip, no database query, no connection overhead.
The Cache Inspector shows real performance numbers: hit rate, average DB load time per miss, and "Est. Without Cache" — the estimated total time if every lookup had gone to the database.
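To make that estimate concrete (figures assumed for illustration): 100,000 lookups at a 99% hit rate with an average DB load time of 20 ms per miss is about 1,000 misses and roughly 20 seconds of actual database wait, while sending all 100,000 lookups to the database at 20 ms each would take around 2,000 seconds (about 33 minutes), which is the kind of number "Est. Without Cache" reports.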
In practice, caches with high hit rates (90%+) reduce cumulative database wait time from minutes to seconds.