TL;DR
A compact overview of Pilar Runtime — what it is, how teams use it, and how features reach customers. For full details, follow the links in each section.
What is Pilar Runtime?
A thin host server that discovers, verifies, and serves independently deployed static assets (JS, CSS, JSON manifests) at runtime. Teams build and release features without coordinating with each other or rebuilding a shared app.
| | Pilar Embedded | Pilar Runtime |
|---|---|---|
| Composition | Build-time (Next.js monorepo) | Runtime (independent artifacts) |
| Deploy unit | Single app image, all features | Per-feature static asset image |
| Release coupling | All teams share one deploy | Each team ships independently |
Pilar Embedded is the default. When to consider Pilar Runtime.
Why This Is Worth It
Pilar Runtime is not a simple frontend platform, and that is intentional. The complexity is there for a reason:
- independent team release cadence
- safe feature delivery through SMUD-GitOps in hospital deployments
- host-managed runtime activation, verification, and rollback behavior
- a clean transition path to CDN-based delivery later
- healthcare-grade rollout, rollback, and partial-failure safety
Most of that complexity is platform-owned. Teams should mainly own feature code, manifest data, backend dependencies, and a small SMUD registration product. Team Pilar owns the host, artifact verification, delivery mechanics, signing, rollout semantics, and the eventual transition from SMUD/OCI delivery to CDN delivery.
Framework Position
The runtime should be understood as browser-asset based, not permanently tied to React. Today the platform is React-oriented in its shared dependencies, SDK, and examples, but the host fundamentally cares about serving valid JavaScript, CSS, and manifest-described behavior. React is the current default, not the only possible long-term option.
Feature types
| Type | Role | Example |
|---|---|---|
| Domain (`featureType: "domain"`) | Owns a URL route, orchestrates UI and data | Patient list, appointments |
| Facet (`featureType: "facet"`) | Reusable embedded UI, receives props | Summary card, allergy badge |
How a feature is built
```
examples/feature-my-feature/
  manifest.json         # Feature contract (routes, auth, shared deps, infra)
  src/
    main.jsx            # Entry point
    App.jsx             # Root component
    actions.server.js   # Server functions (RPC)
```

The manifest is the single source of truth. Key fields:
```json
{
  "id": "feature-patientlist",
  "version": "1.3.0",
  "featureType": "domain",
  "mount": { "pathPrefix": "/patientlist" },
  "serverFunctions": { "endpoint": "http://patientlistservice/rpc" },
  "infrastructure": { "services": ["patientlistservice"] }
}
```

Manifest contract | Building features
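As an illustration of the kind of shape check a consumer of this manifest could run, here is a minimal sketch. The rules are assumptions inferred from the example above, not the host's actual validation (which, per the Security section, also covers signatures and SRI):

```javascript
// Illustrative manifest sanity check. Field rules are assumptions based on
// the example manifest above, not the host's real validation logic.
function validateManifest(m) {
  const errors = [];
  if (typeof m.id !== "string" || !m.id.startsWith("feature-")) {
    errors.push("id must be a string starting with 'feature-'");
  }
  if (!["domain", "facet"].includes(m.featureType)) {
    errors.push("featureType must be 'domain' or 'facet'");
  }
  if (m.featureType === "domain" && !m.mount?.pathPrefix) {
    errors.push("domain features must declare mount.pathPrefix");
  }
  return errors; // empty array means the manifest passed these checks
}
```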
Server functions (RPC)
Two modes — choose based on complexity:
| | Module mode | Endpoint mode |
|---|---|---|
| Code location | Ships with feature artifact | Team's own backend |
| Good for | Simple reads, reshaping data | Business logic, writes, complex deps |
| Isolation | Shares host process | Full process isolation |
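To make module mode concrete, here is a hypothetical module-mode server function. The file name follows the `actions.server.js` convention shown earlier; the function name, argument shape, and data are invented for illustration:

```javascript
// actions.server.js (hypothetical): a module-mode server function.
// It runs inside the host process, so it should stay a simple read/reshape.
const WARD_ROSTER = [
  { id: 1, name: "Ola Nordmann", ward: "A" },
  { id: 2, name: "Kari Nordmann", ward: "B" },
];

// In a real feature this would be exported from the module and invoked
// over the host's RPC bridge; logic heavier than filter-and-reshape
// belongs in endpoint mode, behind the team's own backend.
async function listPatients({ ward }) {
  return WARD_ROSTER
    .filter((p) => p.ward === ward)
    .map((p) => ({ id: p.id, label: p.name }));
}
```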
How teams own and deliver features
Each team owns a domain product group containing both frontend and backend. The host knows nothing about individual features.
What each team owns:
| Responsibility | Feature team | Team Pilar |
|---|---|---|
| Feature code + manifest | Yes | |
| Domain values.yaml (backend + frontend tags) | Yes | |
| dependencyManager | Yes | |
| CI pipeline, signing, Dockerfiles | Yes | |
| Host, feature-loader sidecar, favn-feature-register chart | | Yes |
A team's feature registration product (standard SMUD product using Team Pilar's shared chart):
```yaml
# products/favn-feature-patientlist/production/values.yaml
valuesFile:
  enabled: true
  featureId: feature-patientlist
  image: "registry.dips.no/favn-feature-patientlist@sha256:3333..."
dependencyManager:
  enabled: true
  dependencies:
    - name: patientlistservice
      minVersion: "4.13.0"
    - name: favn-host
```

How features reach customers
A feature's build output is static files — JS bundles, CSS, and a signed JSON manifest. OCI images are the transport to customer clusters.
Each per-feature image is built FROM scratch and contains only static files — no OS, no runtime, no server. When a domain product group deploys, it creates a ConfigMap declaring "this feature exists at this image digest." A loader init container plus a long-running feature-loader sidecar in the host pod pull the referenced images and write assets to a per-pod emptyDir. The host detects new files and serves them — no pod restart needed and no PVC or RWX requirement.
OCI is the transport, not the architecture
The feature artifact format is transport-agnostic: signed manifests + hashed static files. OCI images are today's carrier because customer clusters already pull from a container registry. If a CDN or other channel becomes available, the same artifacts move there unchanged — only the delivery mechanism changes, not the assets or the trust model.
Release workflow
Upgrading: team or CI updates the image digest in domain values.yaml → SMUD deploys → ConfigMap updated → loader reconciles the new image → host reloads.
Rolling back: team reverts the image digest → SMUD redeploys → ConfigMap updated → loader reconciles the old image → host reloads. Same flow, older digest. No pod restart.
Automatic fallback: if a version fails runtime validation (bad signature, SRI mismatch), the host loads another signed rollback candidate from the same image when present — no redeployment needed for that immediate recovery path.
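The fallback rule can be sketched as follows. This is a simplified model, not the host's actual selection logic; the candidate shape and the "newest other valid version" policy are assumptions:

```javascript
// Hypothetical fallback selection: serve the requested version if it
// verifies; otherwise fall back to the newest other candidate that does.
function pickServableVersion(candidates, requested, verifies) {
  const wanted = candidates.find((c) => c.version === requested);
  if (wanted && verifies(wanted)) return wanted;
  const valid = candidates.filter((c) => c !== wanted && verifies(c));
  // Naive newest-first ordering; a real host would compare semver properly.
  valid.sort((a, b) => b.version.localeCompare(a.version, undefined, { numeric: true }));
  return valid[0] ?? null; // null: nothing servable, host must report failure
}
```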
Deployment via SMUD-GitOps
Features deploy as domain product groups — frontend assets and backend services together:
```yaml
# productGroups.yaml
patientlist-domain:
  - patientlistservice        # backend (running service)
  - patientlist-db            # database (running service)
  - favn-feature-patientlist  # frontend (static asset image — not a running service)
```

Backend dependencies are enforced per domain product group via dependencyManager — SMUD blocks upgrades if required backends aren't present. Customers only need backends for features they actually deploy.
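The gate dependencyManager applies can be pictured roughly like this. It is a simplified model: the version comparison and field names are assumptions, and real SMUD behavior may differ:

```javascript
// Simplified dependency gate: every required backend must be deployed at
// or above its minVersion. Real semver handling is richer than this.
function versionAtLeast(actual, min) {
  const a = actual.split(".").map(Number);
  const b = min.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) > (b[i] ?? 0);
  }
  return true; // equal versions satisfy the constraint
}

function upgradeAllowed(deployed, dependencies) {
  return dependencies.every(
    (d) => deployed[d.name] && versionAtLeast(deployed[d.name], d.minVersion)
  );
}
```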
Customers choose which features to deploy and which versions to run.
The host doesn't break if a feature or backend is missing — it serves whatever is present.
Deployment model | Full GitOps walkthrough
Transition to cloud CDN
The ConfigMap-based discovery makes cloud CDN migration straightforward. Today, the ConfigMap points to an OCI image. Tomorrow, it can point to a CDN URL:
```yaml
# Today: OCI image
valuesFile:
  featureId: feature-patientlist
  image: "favn-feature-patientlist@sha256:abc123..."

# Future: cloud CDN
valuesFile:
  featureId: feature-patientlist
  cdnBaseUrl: "https://your-cdn.azureedge.net/feature-patientlist/"
```

In both cases the sidecar writes the same artifact structure to the emptyDir — releases.json + versioned directories. Same format, same trust model, different transport.
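Since the exact contents of releases.json are not specified here, the following is a purely hypothetical illustration of the "index plus versioned directories" idea; every field name is assumed:

```javascript
// Hypothetical releases.json shape: an index mapping versions to their
// asset directories. The actual field names are not documented here.
const releases = {
  featureId: "feature-patientlist",
  current: "1.3.0",
  versions: [
    { version: "1.2.0", dir: "1.2.0/" },
    { version: "1.3.0", dir: "1.3.0/" },
  ],
};

// Resolve which directory the host should serve for the current version.
function currentDir(index) {
  return index.versions.find((v) => v.version === index.current)?.dir ?? null;
}
```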
Asset distribution — Cloud CDN
Security
Every feature artifact passes through a cryptographic trust chain before serving:
- Ed25519 signatures on manifests and release indexes
- SHA-256 SRI integrity on every JS/CSS asset
- Automatic fallback — if a version fails validation, the host loads another signed rollback candidate when present
- Transport-agnostic — verification works regardless of whether assets come from OCI images, emptyDir, or CDN
The trust chain is transport-agnostic by design: it protects assets whether they travel over a private cluster network or a public CDN. When features move to CDN delivery — where assets flow through untrusted public networks to hospital environments — the same verification applies with zero changes.
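The SRI check itself is standard: recompute the SHA-256 digest of the asset bytes and compare it to the expected `sha256-<base64>` value. A minimal Node sketch, assuming the standard SRI string format (the host's real implementation is not shown in this document):

```javascript
// Minimal SRI-style integrity check: hash the asset bytes with SHA-256 and
// compare against the expected "sha256-<base64>" value from the manifest.
import { createHash } from "node:crypto";

function verifySri(assetBytes, expected) {
  const digest = createHash("sha256").update(assetBytes).digest("base64");
  return expected === `sha256-${digest}`;
}
```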