Deployment Model

How feature teams deploy frontend and backend together through SMUD-GitOps, and how the platform delivers features to the host at runtime.

What teams do

Each feature team owns a domain product group in SMUD-GitOps. This groups their backend services together with a feature registration product — a small SMUD product that tells the host "my feature exists at this artifact reference."

The product group is an authoring convenience, not a transactional deployment unit. SMUD expands groups into independent product/stage assignments, so keep the backend product(s) and the favn-feature-* registration product in the same group and promote them together in normal workflows.

yaml
# environments/productGroups.yaml

patientlist-domain:
  - patientlistservice              # backend service (team's existing product)
  - patientlist-db                  # database (team's existing product)
  - favn-feature-patientlist        # feature registration product (new)

The feature registration product is a standard SMUD product. It uses a shared Helm chart (favn-feature-register) provided by Team Pilar. The team creates the product directory once, then only updates values.yaml when releasing:

products/favn-feature-patientlist/
  product.yaml
  development/
    app.yaml
    values.yaml
  production/
    app.yaml
    values.yaml

product.yaml (create once):

yaml
productName: favn-feature-patientlist
responsible: team-patientlist
chartName: favn-feature-register

The chartName tells SMUD to use Team Pilar's shared chart instead of looking for a chart matching the product name. All favn-feature-* products use the same chart.

app.yaml (create once, same in all stages):

yaml
helm:
  chartVersion: 1.0.0
useGoTemplate: true

values.yaml (the only file you update regularly):

yaml
valuesFile:
  enabled: true
  featureId: feature-patientlist
  image: "registry.dips.no/favn-feature-patientlist@sha256:8e1d..."
  dependencyManager:
    enabled: true
    dependencies:
      - name: patientlistservice
        minVersion: "4.12.0"
      - name: favn-host

That's it. When SMUD deploys this product, the platform takes over — the host discovers and serves the feature automatically.

Releasing a new version

Update the digest-pinned image reference in your feature registration values.yaml:

yaml
# products/favn-feature-patientlist/development/values.yaml (diff)
 valuesFile:
   enabled: true
   featureId: feature-patientlist
-  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."
+  image: "registry.dips.no/favn-feature-patientlist@sha256:2222..."

Team Pilar's CI should resolve tags to digests before the GitOps change is merged. Production values should not rely on mutable tags.

No other products change. The appointments team, forms team, and host are unaffected. Promote through stages using the standard SMUD workflow.

Rolling back

Revert the image digest:

yaml
# products/favn-feature-patientlist/production/values.yaml (rollback)
 valuesFile:
   enabled: true
   featureId: feature-patientlist
-  image: "registry.dips.no/favn-feature-patientlist@sha256:2222..."
+  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."

Same flow as a release, just with an older digest. No pod restart needed.

The host can also use a packaged rollback window: if a version fails runtime validation (bad signature, integrity mismatch), the host can load another signed version bundled in the same artifact set when present. Treat this as a fast safety net, not a guarantee that every customer always has a cluster-specific previous healthy version available.

Upgrading with backend dependencies

When a feature version requires a new backend version, update both products in one commit:

yaml
# products/patientlistservice/production/app.yaml
 helm:
-  chartVersion: 4.12.1
+  chartVersion: 4.13.0

# products/favn-feature-patientlist/production/values.yaml
 valuesFile:
   enabled: true
   featureId: feature-patientlist
-  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."
+  image: "registry.dips.no/favn-feature-patientlist@sha256:2222..."
   dependencyManager:
     enabled: true
     dependencies:
       - name: patientlistservice
-        minVersion: "4.12.0"
+        minVersion: "4.13.0"
       - name: favn-host

SMUD blocks the upgrade if the customer's cluster doesn't have patientlistservice >= 4.13.0.

Adding a new feature (first time setup)

  1. Merge your feature code with a version in manifest.json
  2. The shared CI pipeline builds the feature, signs it, pushes favn-feature-labresults:<version> to ACR, and resolves the published image to a digest
  3. Create the product directory products/favn-feature-labresults/ in the GitOps repo (product.yaml, stage folders, app.yaml, values.yaml — see structure above)
  4. Add favn-feature-labresults to your product group in productGroups.yaml
  5. Add your product group to the relevant environment files
  6. SMUD deploys → the loader performs initial sync, the host becomes ready, and the feature is served

Steps 3–5 are one-time setup. After that, releasing is just step 1 plus updating values.yaml; step 2 runs automatically in CI.
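
For steps 4–5, the GitOps change is small. A minimal sketch, assuming a new labresults-domain group and that dips-dev.yaml follows the same applications/productGroups structure shown under Customer deployments (the labresults-service backend name is illustrative):

yaml
# environments/productGroups.yaml (step 4)
labresults-domain:
  - labresults-service              # backend (illustrative name)
  - favn-feature-labresults         # feature registration product

# environments/dips-dev.yaml (step 5)
applications:
  - stage: development
    productGroups:
      - favn-runtime
      - iam
      - labresults-domain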

Features with no backend

Create a minimal product group with just the registration product:

yaml
# productGroups.yaml
helpwidget-domain:
  - favn-feature-helpwidget

No dependencyManager needed since there's no backend.
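
The registration product's values.yaml then carries only the feature identity and the image reference. An illustrative sketch (digest shortened):

yaml
# products/favn-feature-helpwidget/production/values.yaml (illustrative)
valuesFile:
  enabled: true
  featureId: feature-helpwidget
  image: "registry.dips.no/favn-feature-helpwidget@sha256:3f9a..."
  # no dependencyManager block: there is no backend for SMUD to enforce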

What each team owns

Responsibility                              | Feature team | Team Pilar (platform)
--------------------------------------------|--------------|----------------------
Feature code + manifest.json                | Yes          |
Feature registration product (values.yaml)  | Yes          |
dependencyManager declarations              | Yes          |
CI pipeline, signing, Dockerfiles           |              | Yes
Pilar Runtime host                          |              | Yes
favn-feature-register chart                 |              | Yes
feature-loader sidecar                      |              | Yes

Teams do not own or configure: CI pipelines, Dockerfiles, the host, the sidecar, signing keys, or the shared chart. Team Pilar handles all of that.

How it works (platform internals)

This section explains what happens after SMUD deploys your feature registration product. You don't need to understand this to use the system — it's here for those who want to know how the pieces fit together.

Why domain product groups

With Pilar Runtime, many features share a single host. If backend dependencies were declared on the host product, it would depend on everything — and every customer would need every backend installed even if they only use a few features.

Domain product groups solve this: each team's frontend registration and backend live together, with dependencyManager scoped to that domain. Customers only need backends for features they actually deploy.

Because SMUD product groups are expansion helpers rather than atomic deploy units, keep membership stable and avoid placing the same product in multiple groups with different stage expectations.

The host is a separate product group (favn-runtime) with zero feature knowledge:

yaml
# environments/productGroups.yaml
favn-runtime:
  - favn-host                       # knows nothing about features

patientlist-domain:
  - patientlistservice
  - patientlist-db
  - favn-feature-patientlist

appointments-domain:
  - appointment-service
  - appointment-bff
  - favn-feature-appointments

From product group to running feature

When SMUD deploys a feature registration product, the shared chart creates a ConfigMap in the cluster:

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: favn-feature-patientlist
  labels:
    pilar-runtime: feature
data:
  featureId: feature-patientlist
  image: registry.dips.no/favn-feature-patientlist@sha256:a1b2c3...

The host pod uses the same loader image in two roles:

  • An init container performs the initial full sync before the host becomes ready
  • A long-running feature-loader sidecar watches for changes and reconciles updates during pod lifetime

Both write into the same per-pod emptyDir, shared only between the init container, sidecar, and host container in that pod. The host's file watcher detects completed updates and starts serving the feature.

Key properties:

  • No persistent storage — each pod uses a per-pod emptyDir. No PVC, no RWX dependency, and no storage class requirements.
  • Cold-start safety — the init container completes the first sync, writes a readiness sentinel, and the host only reports ready after that sentinel exists.
  • Multiple replicas converge — each pod independently watches the same ConfigMaps and pulls the same digests, so all replicas converge on the same content.
  • No host restart on feature update — the sidecar writes new files, then the host reloads automatically.
  • Safe live updates — the loader stages new content out of band, validates it, writes immutable version directories, and atomically swaps only small metadata such as releases.json. It does not rely on replacing a populated feature directory in one rename (see the directory sketch after this list).
  • Periodic reconciliation — the sidecar re-lists all ConfigMaps every 5 minutes to catch missed events or watch reconnect gaps.
  • On pod restart — the init container and sidecar rebuild local state from ConfigMaps. No persistent disk state is required.
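
To make the safe-swap rule concrete, here is an illustrative layout of the shared artifacts directory. The per-feature directory and file names are assumptions; only the sentinel name and mount path come from the pod spec below. The shape follows the rules above: immutable version directories, a staging area that is never served, and a small releases.json that flips the active version.

/shared-artifacts/
  .runtime-ready                    # readiness sentinel, written after the first full sync
  feature-patientlist/
    releases.json                   # small metadata file, swapped atomically to change the active version
    1.2.0/                          # immutable version directory (previously active)
    1.3.0/                          # immutable version directory (newly published)
    .staging/                       # in-progress fetch and validation, never served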

Image digests

The CI pipeline resolves image tags to digests (@sha256:...) to ensure all pods pull identical bytes. Mutable tags like :1.3.0 can point to different content after a re-push — digests prevent this.
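
How the digest is resolved is up to Team Pilar's pipeline. A minimal sketch, assuming a YAML-based CI step with crane available; the step syntax and how the result flows into values.yaml are not specified here:

yaml
# Illustrative CI step; pipeline syntax and tooling are assumptions
- name: Resolve pushed tag to digest
  run: |
    # crane prints the sha256 digest for the tag that was just pushed
    DIGEST=$(crane digest registry.dips.no/favn-feature-patientlist:1.3.0)
    echo "registry.dips.no/favn-feature-patientlist@${DIGEST}"

The printed digest-pinned reference is what ends up in the feature registration values.yaml.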

The host pod

yaml
spec:
  serviceAccountName: favn-host
  initContainers:
    - name: feature-loader-init
      image: registry.dips.no/favn-feature-loader:1.0.0
      args: ["sync-once"]
      env:
        - name: ARTIFACTS_DIR
          value: /shared-artifacts
        - name: ARTIFACTS_READY_SENTINEL
          value: .runtime-ready
      volumeMounts:
        - name: feature-artifacts
          mountPath: /shared-artifacts
  containers:
    - name: host
      image: registry.dips.no/favn-host:3.5.0
      env:
        - name: ARTIFACTS_DIR
          value: /shared-artifacts
        - name: ARTIFACTS_READY_SENTINEL
          value: .runtime-ready
        - name: HOT_REFRESH_WATCH
          value: "1"
      volumeMounts:
        - name: feature-artifacts
          mountPath: /shared-artifacts
      readinessProbe:
        httpGet:
          path: /_ready
          port: 3000

    - name: feature-loader
      image: registry.dips.no/favn-feature-loader:1.0.0
      env:
        - name: ARTIFACTS_DIR
          value: /shared-artifacts
        - name: ARTIFACTS_READY_SENTINEL
          value: .runtime-ready
      volumeMounts:
        - name: feature-artifacts
          mountPath: /shared-artifacts

  volumes:
    - name: feature-artifacts
      emptyDir: {}

The shared container filesystem path is backed by a per-pod emptyDir, not a PVC. The init container populates it before the host becomes ready, and the sidecar keeps it current afterward.

The loader also relies on two cluster-level prerequisites that should be documented up front:

  • Pod-level egress from the host namespace to ACR
  • Explicit registry credentials or workload identity that the loader process itself can use

Kubernetes imagePullSecrets help kubelet pull container images, but they do not automatically authenticate a registry client running inside the sidecar.
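
One way to satisfy the second prerequisite is to mount a registry credential secret into the loader init container and sidecar. A minimal sketch; the secret name and the environment variable the loader reads are assumptions, and workload identity would replace the secret entirely:

yaml
# Illustrative additions to the host pod spec
containers:
  - name: feature-loader
    env:
      - name: REGISTRY_AUTH_FILE               # assumed: path the loader reads credentials from
        value: /registry-creds/.dockerconfigjson
    volumeMounts:
      - name: registry-creds
        mountPath: /registry-creds
        readOnly: true
volumes:
  - name: registry-creds
    secret:
      secretName: favn-loader-acr-credentials  # kubernetes.io/dockerconfigjson secret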

Lifecycle contract

This section defines the intended rollout and failure-handling contract for Pilar Runtime in OCI mode. It is the behavior the loader, host readiness, and operational runbooks should converge on.

Frontend upgrade

Trigger: the feature registration product changes valuesFile.image from one digest to another.

Expected sequence:

  1. SMUD updates the feature ConfigMap with the new digest
  2. Already-ready pods keep serving the currently active releases.json
  3. The sidecar fetches the new image into .staging/
  4. The sidecar validates signatures, manifests, and expected files before publishing anything live
  5. The sidecar copies immutable version directories into the live feature path
  6. The sidecar atomically replaces releases.json
  7. The host watcher reloads and begins serving the new active version

What remains active during convergence: the previously active frontend version stays active until step 6 completes successfully. The host must never serve a partially extracted version.

Readiness: existing ready pods stay ready during a steady-state frontend upgrade. A new pod is not ready until init-sync has completed and written the readiness sentinel.

Failure boundary: if fetch, extraction, validation, or publish fails, the sidecar leaves the existing live feature untouched. Desired state is newer, active state remains current.

Frontend downgrade

Trigger: the feature registration product changes valuesFile.image to an older digest.

Expected sequence: identical to frontend upgrade. A downgrade is just another digest reconciliation.

What remains active during convergence: the currently active frontend version stays active until the older artifact has been fetched, validated, and published safely.

Readiness: existing ready pods stay ready while reconciling a downgrade. Cold-start pods still require a full successful init-sync before becoming ready.

Failure boundary: if the older image cannot be fetched or validated, the pod keeps serving the current active version. Downgrade intent does not justify dropping the last known good state.

Backend upgrade

Trigger: the backend product version is increased, often in the same Git commit as a frontend image update and dependencyManager bump.

Expected sequence:

  1. SMUD applies the backend and feature registration product updates according to its normal product expansion rules
  2. Because product groups are not atomic deployment units, backend and frontend may converge at different times
  3. The currently active frontend remains served until the new frontend has been fetched and published safely
  4. The backend must remain backward compatible across the rollout window, or the frontend must tolerate both old and new backend behavior during that window

What remains active during convergence: the old frontend remains active until the new frontend is fully available locally. Other unrelated features remain unaffected regardless of backend rollout order.

Readiness: host readiness should not depend on a specific feature backend being upgraded successfully. Backend reachability is a feature health concern, not a pod readiness concern.

Failure boundary: if the backend upgrade succeeds but the new frontend never converges, the old frontend continues serving. If backend compatibility is broken during the rollout window, only that feature's RPC/API calls should fail; the host must not crash or go unready.

Backend downgrade

Trigger: the backend product version is rolled back, with or without a matching frontend rollback.

Expected sequence:

  1. SMUD applies the backend rollback according to normal product/stage rules
  2. The host continues serving the currently active frontend until a frontend rollback is explicitly reconciled
  3. If the rolled-back backend no longer satisfies the active frontend's expectations, only that feature's backend-dependent calls fail
  4. A coordinated rollback should revert the frontend digest as well when compatibility is not preserved

What remains active during convergence: the currently active frontend remains active until another frontend artifact is successfully published locally.

Readiness: host readiness still reflects local artifact convergence, not downstream backend compatibility. A backend rollback does not by itself make the host pod unready.

Failure boundary: when backend and frontend are no longer compatible, keep serving the current frontend shell but fail feature RPC/API calls explicitly. Do not drop unrelated features or the entire host pod out of service.

Failed fetch / failed verify

Trigger: the loader cannot pull the target image, cannot extract it, or the host/runtime validation rejects the new artifact.

Expected sequence:

  1. The sidecar attempts reconciliation against the desired digest
  2. The fetch or validation step fails before live metadata is swapped
  3. The sidecar records the failure and retries on watch/relist/backoff
  4. The host continues serving the currently active local version if one exists

What remains active during convergence: the last successfully published local version remains active. If the pod has never successfully converged that feature on local disk, there is no active version to serve for that feature.

Readiness: this is the critical cold-start vs steady-state distinction:

  • On cold start, readiness stays false until the initial required artifact set has been fetched and published successfully
  • On an already-ready pod, a failed refresh attempt does not make the pod unready if the existing active set is still valid and being served

Failure boundary: failed desired state must not overwrite valid active state. The safe fallback is "keep serving current" for existing pods and "stay unready" for new pods that never completed initial sync.

What remains active during convergence

These rules apply across all scenarios:

  • The host serves the last fully published and validated local artifact set
  • .staging/ content is never considered active
  • A new version becomes active only when live metadata has been swapped successfully
  • Different pods may converge at different times, but each pod must individually obey the same safe-swap rule
  • Convergence should be monotonic from the point of view of a single pod: old active state, then new active state, with no partial intermediate state exposed

What readiness should mean

Readiness should answer one narrow question: "Can this pod safely serve its currently active local feature set?"

Readiness should mean:

  • Init-sync has completed for the pod
  • The readiness sentinel has been written by the loader
  • The active artifact set on disk is internally valid enough for the host to serve
  • The host process is up and able to load from ARTIFACTS_DIR

Readiness should not mean:

  • Every feature backend is healthy
  • The pod has already converged to the newest desired digest for every feature
  • A background refresh attempt succeeded most recently

Operationally, this gives the correct behavior for hospital environments:

  • New pods do not enter service until they have a complete local artifact set
  • Existing pods stay in service while newer frontend artifacts are being fetched and validated
  • Feature-specific backend failures surface as feature health issues, not full-host readiness failures

Customer deployments

Customers choose which product groups to deploy. A customer who doesn't use appointments excludes appointments-domain from their environment — the host simply doesn't serve those features.

yaml
# Full environment (all features)
applications:
  - stage: production
    productGroups:
      - favn-runtime
      - iam
      - patientlist-domain
      - appointments-domain
      - forms-domain

# Minimal environment (subset)
applications:
  - stage: production
    productGroups:
      - favn-runtime
      - iam
      - patientlist-domain

Customers pin versions independently per feature through their stage-specific values. There is no "all-or-nothing" upgrade.

The host doesn't break if a feature's backend is missing. That feature's server functions return 503 (endpoint unreachable); other features are completely unaffected.

Example: full GitOps walkthrough

Repository structure

smud-gitops/
  environments/
    productGroups.yaml            # defines what belongs together
    dips-dev.yaml                 # DIPS internal development
    customer-ahus.yaml            # Akershus universitetssykehus
  products/
    favn-host/                    # Pilar Runtime host (Team Pilar)
      product.yaml
      production/
        app.yaml
        values.yaml
    favn-feature-patientlist/     # feature registration (team-patientlist)
      product.yaml
      production/
        app.yaml
        values.yaml
    patientlistservice/           # backend (team-patientlist)
      product.yaml
      production/
        app.yaml
        values.yaml

Upgrading a single feature

The patientlist team ships version 1.3.0:

  1. Team bumps version in manifest.json to "1.3.0" and merges
  2. Platform CI builds, signs, pushes favn-feature-patientlist:1.3.0 to ACR, and resolves the digest
  3. Team or CI bot updates image in products/favn-feature-patientlist/development/values.yaml
  4. SMUD deploys → ConfigMap updated → sidecar pulls new image → host reloads
  5. Team promotes through stages → customers get the update on their own schedule

Declaring backend dependencies in the manifest

Features should also declare backend dependencies in the manifest infrastructure field. This is metadata — the host does not enforce it. It exists for the Admin UI, health endpoints, and pnpm run doctor:

json
{
  "id": "feature-patientlist",
  "infrastructure": {
    "services": ["patientlistservice"],
    "productGroup": "patientlist-domain"
  },
  "serverFunctions": {
    "endpoint": "http://patientlistservice.your-namespace.svc.cluster.local/rpc",
    "exports": ["getPatients", "searchPatients"]
  }
}

Keep manifest and SMUD in sync

The manifest's infrastructure.services should mirror the feature registration's dependencyManager. The manifest is what developers and the Admin UI see; dependencyManager is what SMUD enforces.
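
For comparison, the matching dependencyManager excerpt from the feature registration values.yaml shown earlier; the two lists should name the same backend services:

yaml
# products/favn-feature-patientlist/<stage>/values.yaml (excerpt)
valuesFile:
  dependencyManager:
    enabled: true
    dependencies:
      - name: patientlistservice
        minVersion: "4.12.0"
      - name: favn-host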

Transition to cloud CDN

The ConfigMap-based discovery makes cloud CDN migration straightforward. Today, the ConfigMap points to an OCI image. Tomorrow, it can point to a CDN base URL:

yaml
# Today: OCI image (sidecar pulls and extracts)
data:
  featureId: feature-patientlist
  image: registry.dips.no/favn-feature-patientlist@sha256:abc123...

# Future: cloud CDN (sidecar fetches from CDN)
data:
  featureId: feature-patientlist
  cdnBaseUrl: https://your-cdn.azureedge.net/feature-patientlist/

In CDN mode, the sidecar fetches <cdnBaseUrl>/releases.json (signed), resolves the active version, and downloads the versioned assets. The host's trust model (signatures + SRI) is transport-agnostic.

The same readiness and staging rules still apply in CDN mode: populate the shared artifacts directory before readiness, keep version directories immutable, and atomically swap only the small metadata files that change active state.
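
From a feature team's point of view, the switch would likely be a values.yaml change only. A sketch, assuming the shared chart grows a cdnBaseUrl field mirroring the ConfigMap example above:

yaml
# Hypothetical CDN-mode values.yaml (field name assumed from the ConfigMap sketch)
valuesFile:
  enabled: true
  featureId: feature-patientlist
  cdnBaseUrl: https://your-cdn.azureedge.net/feature-patientlist/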

For the full migration path, see Asset Distribution — Cloud CDN.