Asset Distribution

How feature artifacts — static files (JS, CSS, signed JSON manifests) — reach the host across deployment environments.

Features are static assets

A built feature is a set of static files: JS bundles, CSS, and a signed JSON manifest. The host reads and serves them to the browser. The same files work regardless of how they are delivered.

Distribution models

Model | Transport | Best for
--- | --- | ---
Per-feature OCI images | Container registry (ACR), one image per feature | Independent team delivery, customer clusters, air-gapped environments
Cloud CDN | Azure CDN / Blob Storage | Centrally operated clusters with outbound internet access

Per-feature OCI images are the primary model. Each domain product group creates a ConfigMap declaring its feature image digest. The host pod uses the same loader image as an init container for the initial sync and as a long-running sidecar for watch/reconcile. The host detects completed writes in the per-pod emptyDir and serves the updated assets.

Distribution via OCI

Customer clusters operate in a pull model — they fetch container images from a shared Azure Container Registry (ACR) but may not have outbound internet access to a cloud CDN. OCI images are a natural fit because the container registry is an established delivery channel to every customer cluster.

Per-feature OCI images carry static assets through this channel. Each image is built FROM scratch — a versioned, registry-hosted bundle of files with no OS or runtime.

OCI is the transport, not the architecture

The artifact format (signed manifests + hashed static files) is transport-agnostic. OCI images are today's carrier because the container registry is the only path to customer clusters. When a CDN or other delivery channel becomes available, the same artifacts move there unchanged — only the transport changes, not the assets or the trust model.

Per-feature OCI images

Each team publishes their own OCI image containing only static assets. The image is part of the team's domain product group alongside their backend services.

Image structure

Each image contains one feature directory with releases.json and versioned subdirectories. In production, the image should carry a small rollback window of signed versions, not just the newest one. That gives the host something compatible to fall back to if the active version fails runtime validation; it is a best-effort fallback, not a guaranteed customer-specific "previous healthy" version.

The image contains only static files:

dockerfile
FROM scratch
COPY artifacts/feature-patientlist/ /artifacts/feature-patientlist/
# No CMD, no ENTRYPOINT — this image is never "run"

favn-feature-patientlist:1.2.0/
  artifacts/
    feature-patientlist/
      releases.json
      1.2.0/
        manifest.json
        feature-patientlist.CIw_rr7d.js
        feature-patientlist.DEtLqRSc.css
      1.1.9/
        manifest.json           (rollback candidate if validation fails)
        feature-patientlist.*.js
        feature-patientlist.*.css

How assets reach the host

The delivery mechanism is a ConfigMap + init-sync + feature-loader sidecar pattern:

  1. Domain product group deploys → Helm creates a ConfigMap with the feature image digest
  2. The feature-loader init container performs the first full sync into the per-pod emptyDir
  3. The host only reports ready after the loader writes the configured readiness sentinel
  4. The feature-loader sidecar watches for later ConfigMap changes and reconciles updates
  5. The host's existing file watcher (HOT_REFRESH_WATCH=1) detects completed writes and reloads
  6. Host verifies signatures and integrity, then serves the feature

No pod restart needed. No persistent storage needed.
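
Concretely, the pod template under this pattern might look roughly like the sketch below. The loader and host image names, the loader arguments, and the readiness sentinel path are illustrative assumptions; only HOT_REFRESH_WATCH and the per-pod emptyDir come from the steps above.

yaml
# Hypothetical pod template sketch: image names, args and sentinel path are assumptions
spec:
  volumes:
    - name: artifacts
      emptyDir: {}                                      # per-pod scratch space, no PVC or RWX access mode
  initContainers:
    - name: feature-loader-init
      image: registry.dips.no/feature-loader:latest     # assumed loader image
      args: ["sync", "--once"]                          # step 2: first full sync before the host starts
      volumeMounts:
        - name: artifacts
          mountPath: /artifacts
  containers:
    - name: feature-loader
      image: registry.dips.no/feature-loader:latest
      args: ["sync", "--watch"]                         # step 4: watch ConfigMaps and reconcile updates
      volumeMounts:
        - name: artifacts
          mountPath: /artifacts
    - name: host
      image: registry.dips.no/favn-host:latest          # assumed host image
      env:
        - name: HOT_REFRESH_WATCH
          value: "1"                                    # step 5: existing file watcher reloads on completed writes
      volumeMounts:
        - name: artifacts
          mountPath: /artifacts
          readOnly: true
      readinessProbe:
        exec:
          command: ["test", "-f", "/artifacts/.ready"]  # step 3: hypothetical readiness sentinel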

Each pod uses its own emptyDir volume — a temporary directory that exists for the lifetime of the pod. There is no PVC and no RWX requirement, which means:

  • Scaling replicas just works — each pod independently performs the same init sync and watches the same ConfigMaps
  • All pods converge to the same state (same ConfigMaps → same images → same assets)
  • No dependency on storage provisioners or volume access modes
  • Rolling updates remain eventually consistent rather than perfectly synchronized; readiness gating prevents a pod from serving before it has converged locally

What teams configure

Each feature has its own product in SMUD-GitOps using a shared Helm chart (favn-feature-register) owned by Team Pilar. Teams configure only a few values:

yaml
# products/favn-feature-patientlist/production/values.yaml
valuesFile:
  enabled: true
  featureId: feature-patientlist
  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."

That's the full extent of asset distribution config a feature team writes. The shared chart creates the ConfigMap.
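
For orientation, the ConfigMap the shared chart renders from those values might look roughly like this. The metadata name and label are assumptions about favn-feature-register; the data keys mirror the examples later on this page.

yaml
# Hypothetical rendering of the ConfigMap the shared chart creates
apiVersion: v1
kind: ConfigMap
metadata:
  name: favn-feature-patientlist            # assumed naming convention
  labels:
    favn.dips.no/feature-register: "true"   # assumed label the loader could watch for
data:
  featureId: feature-patientlist
  image: registry.dips.no/favn-feature-patientlist@sha256:1111...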

CI pipeline

The platform CI pipeline (owned by Team Pilar) builds per-feature OCI images. Feature teams do not maintain CI config — they merge code, and the pipeline handles the rest:

bash
# Platform CI pipeline (Team Pilar) — NOT run from developer machines

# 1. Build and sign the changed feature
pnpm run build --filter feature-patientlist
pnpm run publish:local -- --feature feature-patientlist

# 2. Build and push per-feature OCI image
docker build -t $REGISTRY/favn-feature-patientlist:$VERSION \
  -f Dockerfile.feature .
docker push $REGISTRY/favn-feature-patientlist:$VERSION

After the image is published, CI should resolve the pushed tag to a digest before the GitOps change is merged.
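
As a sketch of that step, assuming crane and yq are available in the pipeline image (neither is confirmed by this page) and using GitHub Actions syntax purely for illustration:

yaml
# Hypothetical pipeline step: resolve the pushed tag to a digest and pin it in the GitOps values
- name: Pin feature image to digest
  run: |
    DIGEST=$(crane digest "$REGISTRY/favn-feature-patientlist:$VERSION")
    yq -i ".valuesFile.image = \"$REGISTRY/favn-feature-patientlist@$DIGEST\"" \
      products/favn-feature-patientlist/production/values.yaml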

Migration path

Migrate incrementally from a single CDN image to per-feature images:

  1. Start with the single image as a base — the existing favn-cdn image continues to serve all features
  2. Extract one feature — build a per-feature image for a single team, create a favn-feature-* registration product in SMUD
  3. Remove the extracted feature from the single image — the feature-loader sidecar now owns that feature's delivery
  4. Repeat — extract features one at a time as teams are ready
  5. Remove the single CDN image — when all features have been extracted, the base image is no longer needed

During migration, both models coexist. The host discovers features from whatever is present in its artifacts directory.

Cloud CDN (target architecture)

The target model is Azure Blob Storage with Azure CDN in front:

CI pipeline:
  build features → cdn-release.js → upload to Azure Blob Storage → CDN invalidation

Host:
  FEATURE_REMOTE_INDEX_URL=https://your-cdn.azureedge.net/remote-index.json

This removes the OCI packaging step entirely. CI uploads artifacts directly to blob storage, and Azure CDN serves them globally with edge caching.

The update mechanics do not change: both OCI mode and CDN mode should stage downloads out of band, keep version directories immutable, and atomically swap only small metadata files such as releases.json. Replacing a populated feature directory in one filesystem rename is not a valid update strategy.

Migration from OCI to cloud CDN

The ConfigMap-based discovery makes migration straightforward. The ConfigMap can point to a CDN base URL instead of an OCI image:

yaml
# Today: OCI image (sidecar pulls and extracts)
data:
  featureId: feature-patientlist
  image: registry.dips.no/favn-feature-patientlist@sha256:abc123...

# Future: cloud CDN (sidecar fetches from CDN)
data:
  featureId: feature-patientlist
  cdnBaseUrl: https://your-cdn.azureedge.net/feature-patientlist/

In CDN mode, the loader fetches <cdnBaseUrl>/releases.json (signed), resolves the active version, and downloads the versioned assets to the same emptyDir. The artifact structure is identical — releases.json + versioned directories — just served over HTTPS instead of pulled from a registry. The host's trust model (signatures + SRI) is transport-agnostic.

Coexistence

Both models can run simultaneously across different deployments:

  • Your cluster: cloud CDN (opted in)
  • Customer A: cloud CDN (opted in)
  • Customer B: OCI images via feature-loader (not yet opted in)

Each deployment is independent. Customers opt in by switching their feature registration product's values from image to cdnBaseUrl when they are ready.
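
In values terms the opt-in could be as small as the following sketch; the shape follows the earlier example, but the exact keys are whatever the shared chart defines.

yaml
# products/favn-feature-patientlist/production/values.yaml after opting in (hypothetical)
valuesFile:
  enabled: true
  featureId: feature-patientlist
  cdnBaseUrl: "https://your-cdn.azureedge.net/feature-patientlist/"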

Security considerations

All distribution models inherit the full Pilar Runtime trust chain:

  • Ed25519 manifest signatures — verified regardless of transport
  • SHA-256 SRI integrity — verified regardless of origin
  • Release index signatures — verified on every load/refresh

The transport layer (OCI image, emptyDir, Azure CDN) is untrusted by design. A compromised transport cannot serve unsigned or tampered assets — the host will reject them.

Transport-agnostic verification

The trust chain works the same for every delivery mechanism: private registry, cluster-local volume, or public CDN. The host verifies signatures and SRI integrity on every asset regardless of origin, so a compromised transport cannot inject unsigned or tampered content no matter which channel is in use.

This means the same verification model applies whether assets travel over a private cluster network today or a public CDN to hospital environments tomorrow. Moving to CDN delivery requires no changes to the security model.

Operational prerequisites

OCI mode adds two cluster requirements that should be validated early in customer environments:

  • The host pod namespace needs egress to ACR
  • The loader process needs explicit registry credentials or workload identity support

Kubernetes imagePullSecrets only cover image pulls performed by the kubelet; they are not automatically available to a process running inside a container. The loader therefore needs to be explicitly wired to the same Docker config or to a federated (workload) identity inside the container process.
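
One way to satisfy the second requirement, sketched under the assumption that the loader honours the standard DOCKER_CONFIG convention; with Azure workload identity the secret mount would instead be replaced by a federated service account and the azure.workload.identity/use pod label.

yaml
# Hypothetical: explicit registry credentials for the loader sidecar
containers:
  - name: feature-loader
    env:
      - name: DOCKER_CONFIG              # assumes the loader reads standard Docker credential files
        value: /registry-creds
    volumeMounts:
      - name: registry-creds
        mountPath: /registry-creds
        readOnly: true
volumes:
  - name: registry-creds
    secret:
      secretName: acr-pull-secret        # kubernetes.io/dockerconfigjson secret
      items:
        - key: .dockerconfigjson
          path: config.json              # Docker clients expect $DOCKER_CONFIG/config.json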