Release Workflow

How feature versions are built, published, and promoted across environments.

Important

Feature releases happen in CI pipelines, not from developer machines. The pnpm commands shown here are the underlying primitives — CI scripts call them. Local use is for development and testing only.

Overview

A release touches three systems: the feature repo (code), ACR (image), and SMUD-GitOps (deployment). The feature team controls code and deployment timing. Team Pilar owns the CI pipeline that bridges them.

Releasing a new version (team perspective)

A team wanting to ship version 1.2.0 of their feature:

  1. Bump version — update version in manifest.json to "1.2.0"
  2. Merge — standard PR/merge workflow
  3. Shared CI builds the image — the platform pipeline detects the change, builds the feature, signs the manifest, pushes favn-feature-patientlist:1.2.0 to ACR, and resolves the digest
  4. Team or CI bot updates their feature registration values.yaml — set the new digest-pinned image reference
  5. SMUD deploys — ConfigMap updated → loader reconciles the new digest → host detects completed writes

The team's code responsibilities are steps 1 and 2. Step 4 is one line in a file they already maintain. Everything else is automated.

yaml
# products/favn-feature-patientlist/development/values.yaml (diff)
 valuesFile:
   enabled: true
   featureId: feature-patientlist
-  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."
+  image: "registry.dips.no/favn-feature-patientlist@sha256:2222..."

No other products change. The appointments team, forms team, and host are unaffected.
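The digest bump in step 4 is small enough for a CI bot to do mechanically. As an illustrative sketch (not the actual pipeline script — the file contents below just mirror the diff above, and `NEW_DIGEST` stands in for the value CI resolves after the push):

```shell
# Hypothetical CI-bot step: pin the newly resolved digest in the team's
# feature registration values.yaml.
FILE=values.yaml
NEW_DIGEST="sha256:2222..."

# Recreate the pre-bump file so the example is self-contained.
cat > "$FILE" <<'EOF'
valuesFile:
  enabled: true
  featureId: feature-patientlist
  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."
EOF

# Swap whatever digest is currently pinned for the new one.
sed -i "s|@sha256:[^\"]*|@${NEW_DIGEST}|" "$FILE"
grep image "$FILE"
```

In a real pipeline the bot would open a PR against SMUD-GitOps with this change rather than editing in place.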

CI pipeline

The platform CI pipeline (owned by Team Pilar) builds and publishes per-feature OCI images:

bash
# Platform CI pipeline — owned by Team Pilar, NOT run from developer machines

# 1. Build and sign the changed feature
pnpm run build --filter feature-patientlist
pnpm run publish:local -- --feature feature-patientlist

# 2. Build and push per-feature OCI image
docker build -t $REGISTRY/favn-feature-patientlist:$VERSION -f Dockerfile.feature .
docker push $REGISTRY/favn-feature-patientlist:$VERSION

The pipeline detects which features changed and builds only those. Feature teams do not configure or maintain any CI — they merge code, and the platform pipeline produces the image. Before the GitOps change is merged, CI should resolve the pushed tag to a digest and use that digest in values.yaml.
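The tag-to-digest resolution step reduces to parsing the registry's repo digest. In this sketch, `repo_digest` stands in for what CI would read back after the push (for example via `docker inspect --format '{{index .RepoDigests 0}}'`); the parsing itself is plain shell:

```shell
# Assumed CI output: the pushed image's repo digest, e.g. from
#   docker inspect --format '{{index .RepoDigests 0}}' "$IMAGE:$VERSION"
repo_digest='registry.dips.no/favn-feature-patientlist@sha256:2222...'

# Strip everything up to and including '@' to isolate the digest,
# then emit the digest-pinned reference that goes into values.yaml.
digest=${repo_digest#*@}
echo "image: \"registry.dips.no/favn-feature-patientlist@${digest}\""
```

Pinning the digest (rather than the tag) is what makes the GitOps change immutable: the tag can be re-pushed, the digest cannot.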

Version promotion via SMUD

The feature image digest lives in the team's domain product group values.yaml. Promotion happens through SMUD stages:

  1. Team or CI updates image digest → SMUD deploys to development stage
  2. ConfigMap updated → loader pulls image → host discovers new version → team validates
  3. SMUD promotes to internal-test
  4. After further validation, SMUD promotes to production

Each team promotes independently. No coordination with other feature teams. This is the same promotion model used for every other DIPS product.

Product groups help keep related products together, but they are still expanded into independent product/stage assignments in SMUD. Do not rely on product groups as atomic deploy transactions.

What the image contains

Each per-feature OCI image is a FROM scratch container with static files: JS bundles, CSS, and signed JSON manifests. It should carry a small rollback window of signed versions so the host has another compatible candidate if the active version fails validation.

favn-feature-patientlist:1.2.0/
  artifacts/
    feature-patientlist/
      releases.json         # activeVersion: "1.2.0"
      1.2.0/                # new version (active)
        manifest.json
        feature-patientlist.*.js
      1.1.9/                # rollback candidate if validation fails
        manifest.json
        feature-patientlist.*.js

The releases.json tracks the active version and the available rollback candidates with their manifest hashes. CI is responsible for including a suitable rollback window alongside the new version when building the image.
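A `releases.json` for the image above might look like the following. The field names are illustrative assumptions based on what the index is described as tracking (schema version, feature ID, sequence number, active version, manifest hashes, provenance), not a schema reference:

```json
{
  "schemaVersion": 1,
  "featureId": "feature-patientlist",
  "sequence": 42,
  "activeVersion": "1.2.0",
  "versions": [
    { "version": "1.2.0", "manifestSha256": "..." },
    { "version": "1.1.9", "manifestSha256": "..." }
  ],
  "provenance": { "builtAt": "...", "gitSha": "...", "actor": "ci-pipeline", "reason": "..." }
}
```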

Why ship a rollback window

If the active version fails at runtime — bad signature, integrity mismatch, schema error — the host can automatically fall back to another signed version in the same image. No redeployment is needed for that immediate recovery path.

This protects against:

  • Corrupted image layers
  • Broken builds that passed CI but fail runtime validation
  • Partial artifact uploads
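The fallback behavior can be pictured as an ordered scan over the signed versions in the image, sketched here in shell. `validate` is a stand-in for the real signature and integrity checks, and the version list mirrors the image layout above:

```shell
# Try the active version first, then each rollback candidate, and serve
# the first version that passes validation.
versions="1.2.0 1.1.9"

# Stand-in for signature + SRI validation; here we pretend 1.2.0 is broken.
validate() { [ "$1" = "1.1.9" ]; }

serving=""
for v in $versions; do
  if validate "$v"; then
    serving="$v"
    break
  fi
done
echo "serving $serving"   # falls back to 1.1.9
```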

The Admin UI at /_admin shows whether fallback was triggered and which version is actually serving.

This is a fast mitigation, not a replacement for explicit rollback through SMUD. The bundled rollback candidate is not guaranteed to be the exact cluster-specific last-known-good version for every customer.

Rollback

Two mechanisms, depending on urgency:

Rollback via SMUD (standard)

Revert the image digest in the team's domain values.yaml:

yaml
# products/favn-feature-patientlist/production/values.yaml (rollback)
 valuesFile:
   enabled: true
   featureId: feature-patientlist
-  image: "registry.dips.no/favn-feature-patientlist@sha256:2222..."
+  image: "registry.dips.no/favn-feature-patientlist@sha256:1111..."

SMUD redeploys → ConfigMap updated → feature-loader pulls old image → host reloads → previous version is served. Same flow as an upgrade, just with an older digest. No pod restart needed.

Automatic fallback (immediate)

If a feature version fails runtime validation (bad signature, SRI mismatch), the host automatically loads another signed rollback candidate from the same image when one is present. No redeployment is needed for this immediate recovery path.


The Admin UI shows which version is active and whether fallback was triggered.

Upgrading with backend dependencies

When a feature version requires a new backend version, the team coordinates both in one commit:

yaml
# products/patientlistservice/production/app.yaml (coordinated upgrade)
helm:
  chartVersion: 4.13.0              # new backend version

# products/favn-feature-patientlist/production/values.yaml
valuesFile:
  enabled: true
  featureId: feature-patientlist
  image: "registry.dips.no/favn-feature-patientlist@sha256:3333..."
  dependencyManager:
    enabled: true
    dependencies:
      - name: patientlistservice
        minVersion: "4.13.0"       # updated dependency
      - name: favn-host

SMUD checks dependencyManager before deploying. If the customer's cluster doesn't have patientlistservice >= 4.13.0, the upgrade is blocked — preventing a frontend/backend mismatch.
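The gate reduces to a version comparison. A minimal sketch using `sort -V`, with both versions hard-coded (in reality the installed version would be queried from the customer's cluster):

```shell
installed="4.12.2"   # assumed: what the customer's cluster is running
required="4.13.0"    # from dependencyManager.dependencies[].minVersion

# sort -V orders the two versions semantically; if the lowest of the pair
# is the required version, the installed one is new enough.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "dependency satisfied: deploying"
else
  echo "upgrade blocked: patientlistservice $installed < $required"
fi
```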

What Pilar Runtime channels mean in OCI mode

In OCI mode, version promotion is handled through SMUD stages rather than channels. The releases.json still provides:

  • Fallback ordering — host tries versions in order, skips broken ones
  • Provenance — who built it, when, git SHA, manifest hashes (audit trails)

Cloud CDN model (target)

When the cloud CDN model is adopted, features publish directly to Azure Blob Storage / CDN. The host polls remote-index.json and picks up new versions dynamically — no image rebuild, no SMUD deployment for feature changes.

In this model, Pilar Runtime's channel system becomes the feature-level deployment workflow:

CI pipeline (cloud CDN)

bash
# CI pipeline

# 1. Build and sign
pnpm run build
pnpm run publish -- --feature feature-dashboard --channel canary

# 2. After soak period (separate pipeline or manual trigger)
pnpm run release:promote -- --feature feature-dashboard --version 1.1.0 \
  --actor ci-pipeline --reason "Canary healthy, metrics green"

Channel promotion

Features move through channels independently of host deployments:

canary → stable (promote)
stable → previous stable (rollback)

The host picks up channel changes on its next poll cycle (FEATURE_REMOTE_POLL_MS). No redeployment needed.
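One way to picture the poll cycle: the host fetches the remote index and reloads when its monotonic sequence number advances. This shell sketch fakes the fetch with an inline JSON string; the `sequence` field matches the release-index description later in this page, but the reload logic is illustrative only:

```shell
# Sequence number the host saw on the previous poll.
last_seq=41

# Stand-in for: curl -fsS "$FEATURE_REMOTE_INDEX_URL"
new_index='{"featureId":"feature-dashboard","sequence":42,"activeVersion":"1.1.0"}'

# Extract the sequence field (sed keeps the sketch dependency-free).
new_seq=$(printf '%s' "$new_index" | sed -n 's/.*"sequence":\([0-9]*\).*/\1/p')

if [ "$new_seq" -gt "$last_seq" ]; then
  echo "channel changed: reloading feature-dashboard at 1.1.0"
fi
```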

When this unlocks

Cloud CDN channels become valuable when:

  • Customer clusters can reach the cloud CDN (requires customer agreement)
  • You want to update features without rebuilding OCI images
  • You want per-feature canary validation in production traffic

Until then, SMUD stage promotion handles everything.

For cold-start, convergence, and readiness semantics during upgrades and rollbacks, see Deployment Model — Lifecycle contract.

Release index

Regardless of distribution model, each feature has a signed releases.json that tracks:

  • Schema version, feature ID, monotonic sequence number
  • Active version designation
  • Version list with manifest SHA-256 hashes
  • Provenance metadata (build time, git SHA, actor, reason)

The host validates the release index signature on every load/refresh cycle.

Admin UI

Operators can inspect release state from the Admin UI at /_admin:

  • View all loaded features, versions, and active version
  • Inspect signature and integrity status per version
  • View fallback history (if fallback was triggered)
  • Hot-refresh manifests without restarting the host

In cloud CDN mode, the Admin UI also supports promote and rollback actions.

Low-level CLI commands

These commands manipulate releases.json directly. They are called by CI scripts — not run from developer machines in production workflows.

bash
# Release index commands (used by CI)
pnpm run release:canary -- --feature feature-my-feature --version 1.1.0
pnpm run release:promote -- --feature feature-my-feature --version 1.1.0 \
  --actor ci-pipeline --reason "Canary healthy"
pnpm run release:rollback -- --feature feature-my-feature --version 1.0.0 \
  --actor ci-pipeline --reason "Error budget exceeded" --ticket OPS-42

# Check release status
pnpm run release:status
pnpm run release:status -- --index-url https://your-cdn.example.com/remote-index.json

--actor, --reason, and --ticket are provenance metadata fields for audit.

Local development and testing

For local development, the interactive workflow handles everything:

bash
# Local dev (build + publish to local artifacts + start host)
pnpm run favn

# Or explicitly
pnpm run dev

Local development does not use the registration pattern. The host scans the local artifacts/ directory directly. The ConfigMap + sidecar pattern is only for cluster deployments.

To test remote CDN discovery locally:

bash
# Start local mock CDN
pnpm run mock:cdn

# Start host against mock remote index (separate terminal)
FEATURE_REMOTE_INDEX_URL=http://127.0.0.1:4100/remote-index.json pnpm run dev:host

# Verify remote discovery works
pnpm run verify:remote-cdn

Production checklist

Before deploying to production:

  • NODE_ENV=production
  • Trusted public keys configured (FEATURE_TRUST_PUBLIC_KEYS_JSON or FEATURE_TRUST_PUBLIC_KEYS_PATH)
  • FEATURE_ALLOW_INSECURE_DEV_KEYS is not enabled
  • ALLOW_UNAUTHENTICATED_ADMIN is not enabled
  • Auth mode configured (RPC token, DFS, or trusted-cluster)
  • pnpm run doctor, pnpm run lint:routes, and pnpm run validate:compat:strict pass
  • Feature manifests have infrastructure fields documenting backend dependencies
  • Arena Desktop arena fields are set for features that need navigation integration