
Architecture Comparison

The long-term goal for Pilar is one frontend architecture — not two, not five. Maintaining multiple composition models is expensive, and nobody wants that as a permanent state.

Pilar Embedded is the production platform today and works well for most teams. Pilar Runtime is an architectural exploration: does runtime composition solve the scaling and deployment edge cases that build-time composition can't cover? If it does, the learnings feed into the platform's evolution. If it doesn't, we've tested the hypothesis cheaply.

Two frameworks running in parallel is the current reality, not the desired end state. Pilar Runtime exists to answer questions that can only be answered by building and running real features in production.

| Approach | Model | Status |
|---|---|---|
| Pilar Embedded | One Next.js app, features as workspace packages | Production |
| Pilar Runtime | Thin host, features as signed runtime artifacts | Exploring |
| Many small apps | One app per team, each independently deployed | Considered and rejected |

Architecture at a glance

|  | Pilar Runtime | Pilar Embedded |
|---|---|---|
| Composition model | Runtime (CDN artifacts discovered on request) | Build-time (monorepo compiled into single Next.js app) |
| Host technology | Thin Hono server | Next.js 16 App Router |
| Feature unit | Independent ESM bundle + signed manifest (static assets) | Workspace package rendered as a page |
| Deployment unit | Domain product group (frontend + backend together) | Entire app (single Docker image) |
| Asset delivery | feature-loader sidecar pulls OCI images to emptyDir | Built into app image |
| Discovery | Host scans ARTIFACTS_DIR (file watcher) + optional remote CDN | /api/pages endpoint reads central app-registry |
| Feature visibility | SMUD stage promotion + runtime fallback | Runtime feature flags via GitOps |
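
The discovery row above can be sketched in a few lines. This is a hypothetical illustration of the Pilar Runtime model only — the directory layout, manifest file naming, and function names are assumptions, not the host's actual implementation (which also re-scans via a file watcher):

```typescript
import { mkdtempSync, writeFileSync, readdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Stand-in for ARTIFACTS_DIR: every manifest found here is a candidate feature.
const artifactsDir = mkdtempSync(join(tmpdir(), "artifacts-"));
writeFileSync(
  join(artifactsDir, "patientlist.manifest.json"),
  JSON.stringify({ name: "patientlist", version: "1.4.0" }),
);
writeFileSync(
  join(artifactsDir, "medications.manifest.json"),
  JSON.stringify({ name: "medications", version: "2.0.1" }),
);

// A plain scan; the real host would re-run this when the watcher fires.
function discoverFeatures(dir: string): string[] {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".manifest.json"))
    .map((f) => f.replace(".manifest.json", ""))
    .sort();
}

console.log(discoverFeatures(artifactsDir)); // [ 'medications', 'patientlist' ]
```

The key property is that nothing rebuilds: dropping a new manifest into the directory is the whole "deployment" from the host's point of view.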

Pilar Embedded

Pilar Embedded is the established platform and works well for most teams at DIPS today. Its strengths are real and significant:

  • Familiar developer experience — Standard Next.js patterns: file-based routing, server components, server actions. Developers write pages the way every Next.js tutorial teaches.
  • Full React Server Components and Server Actions — Async server components for data fetching, 'use server' for mutations, Suspense for streaming.
  • Build-time type safety across features — workspace:* dependencies mean features share TypeScript types with the shell and each other. A breaking change is caught at compile time.
  • Atomic consistency — Every deployment contains all features at the same version. No version skew between features, no stale shared dependency negotiations.
  • Mature ecosystem — Next.js, Turbo, SWC — well-documented, widely adopted, community-supported.
  • Lower platform complexity — The platform team maintains a monorepo build, not a signing/CDN infrastructure.
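
The build-time type safety point is worth making concrete. A minimal sketch, assuming a hypothetical shared workspace package (the package name, type, and field names are illustrative, not Pilar's actual contracts):

```typescript
// Assume a shared workspace package (e.g. "@pilar/patient-context")
// exports this contract, consumed via a workspace:* dependency:
interface PatientContext {
  patientId: string;
  encounterId: string;
}

// A feature page consuming the shared contract:
function renderBanner(ctx: PatientContext): string {
  return `Patient ${ctx.patientId} (encounter ${ctx.encounterId})`;
}

// If the shared package renamed or removed `encounterId`, this call site
// would fail `tsc` across the whole monorepo — the break surfaces at
// compile time, before any deploy, rather than at runtime in production.
console.log(renderBanner({ patientId: "p-1", encounterId: "e-9" }));
```

In Pilar Runtime, by contrast, the equivalent break would only surface when the independently built feature artifact meets the host at integration time.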

Where teams sometimes hit limits

None of these are failures of Pilar Embedded — they're inherent trade-offs of build-time composition that become more visible as team count and diversity grow:

  • Release coupling — A team that ships daily may be waiting on a shared deploy window. At 10 teams this is manageable; at 40 it creates queuing.
  • Build graph growth — Turbo caching helps significantly, but the monorepo still rebuilds and redeploys everything as a unit. Build times grow with feature count.
  • Framework uniformity — All features run inside Next.js App Router. Teams with different needs (non-React, legacy integration, specialized rendering) must work within that constraint.
  • Shared failure domain — A broken feature can affect the entire app. Pilar Embedded mitigates this with good testing practices, but there's no runtime isolation between features.

Pilar Runtime

Pilar Runtime is exploring whether runtime composition addresses the edge cases above. It's not a replacement for Pilar Embedded — it's a focused experiment to test specific hypotheses about independent deployment, runtime isolation, and distribution flexibility.

What Pilar Runtime is testing

  • Independent deployment per team — Publish one feature without rebuilding or redeploying anything else. The host discovers it at runtime.
  • Build times stay constant — Each feature builds only its own Vite bundle. The 41st feature doesn't slow down the 1st.
  • Runtime isolation — A bad feature is a failed manifest load or a contained client error. Other features continue working.
  • Framework flexibility — The contract is ESM entrypoints + a JSON manifest. React is the default, but the host doesn't mandate it.
  • Cryptographic supply chain — Signed manifests and SRI integrity on every asset. Built into the artifact format today to future-proof for cloud CDN delivery, where assets will flow over untrusted public networks to hospital environments.
  • Automatic fallback — If a feature version fails validation, the host loads the previous healthy version. No manual intervention.
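
Two of these hypotheses — SRI integrity on every asset and automatic fallback to the last healthy version — compose naturally. A minimal sketch, assuming a simplified release list; the field names and shapes are illustrative, not Pilar Runtime's actual manifest schema:

```typescript
import { createHash } from "node:crypto";

type Release = { version: string; content: Buffer; integrity: string };

// Standard SRI digest format: "sha384-" + base64 hash of the asset bytes.
const sri = (c: Buffer): string =>
  "sha384-" + createHash("sha384").update(c).digest("base64");

// Try the newest release first; fall back to the previous healthy
// version when integrity validation fails. No manual intervention.
function loadFeature(releases: Release[]): Release | undefined {
  return [...releases].reverse().find((r) => sri(r.content) === r.integrity);
}

const good = Buffer.from("export const mount = () => {};");
const tampered = Buffer.from("export const mount = () => { /* altered */ };");

const releases: Release[] = [
  { version: "1.0.0", content: good, integrity: sri(good) },
  // v1.1.0's bytes no longer match its declared integrity:
  { version: "1.1.0", content: tampered, integrity: sri(good) },
];

console.log(loadFeature(releases)?.version); // "1.0.0" — host falls back
```

The point of baking this into the artifact format now is the future delivery path: over a public CDN, integrity failures stop being a theoretical concern.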

What Pilar Runtime trades away (known costs of this approach)

  • No React Server Components — Features are client-rendered ESM bundles. No streaming HTML, no server-side rendering in the component tree. (See note on RSC below.)
  • No build-time type safety across features — Features are independent artifacts. A breaking change in a shared contract is caught at integration time, not compile time.
  • Higher platform complexity — Signing infrastructure, release index management, and admin UI are additional systems to maintain.
  • Import map coordination — Shared dependencies are negotiated at runtime. Version mismatches between features are a new failure mode.
  • Developer onboarding — Concepts like signed manifests, release indexes, and SRI integrity are unfamiliar to most React developers.
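
The import-map failure mode is easy to see in miniature. A hypothetical sketch — the package names, versions, and major-version-match rule are all illustrative assumptions, not Pilar Runtime's actual negotiation logic:

```typescript
// The host pins one version of each shared dependency in its import map:
const hostImportMap: Record<string, string> = {
  react: "18.3.1",
  "@dips/puls": "4.2.0",
};

// Each feature declares the major version it was built against:
const featureNeeds: Record<string, number> = {
  react: 18,
  "@dips/puls": 5,
};

// A mismatch is only discoverable at runtime, when the feature loads —
// the failure mode that build-time composition resolves at compile time.
const mismatches = Object.entries(featureNeeds).filter(
  ([dep, major]) => Number(hostImportMap[dep]?.split(".")[0]) !== major,
);

console.log(mismatches.map(([dep]) => dep)); // [ '@dips/puls' ]
```

Surfacing these mismatches early (in CI, or at manifest publish time) rather than in a running hospital environment is exactly the kind of tooling gap the exploration is meant to expose.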

Common ground

Both approaches share the same DIPS infrastructure: IAM authentication, Puls design system, SMUD-GitOps deployment, and Arena Desktop integration.

| Capability | Pilar Runtime | Pilar Embedded |
|---|---|---|
| Server-side data fetching | Server functions via RPC (*.server.js) | React Server Components + Server Actions |
| Feature scaffolding | pnpm run create-feature | pnpm create:embedded |
| Shared UI components | Host import map + declared shared deps | workspace:* packages |
| Feature flag gating | Manifest visibility + SMUD stage promotion | GitOps feature toggle + FeatureFlag wrapper |
| Health/diagnostics | pnpm run doctor + admin health endpoint | Turbo build + type checking |
| Auth model | DFS OIDC, trusted-cluster, static token | DFS OIDC via middleware |
| Design system | Puls tokens | Puls tokens |
| Arena Desktop | arena manifest field | App registry |

Choosing between them (today)

While Pilar Runtime is being explored, the decision framework is straightforward:

Pilar Embedded is the default. Use it unless you have a specific reason not to. It's production-proven, well-understood, and covers the majority of team needs.

Consider Pilar Runtime when a team hits a concrete limitation:

  • Release independence is blocked by the monorepo deploy train
  • The feature doesn't fit the Next.js App Router model
  • OCI-based distribution to customer clusters is a hard requirement
  • A high-churn feature where shared deploy coordination creates real friction

Both can run behind the same Arena Desktop Client — they serve different URLs within the same environment. A team can start in Pilar Embedded and move a feature to Pilar Runtime later (or vice versa) without affecting other teams.

The goal is to learn from teams that try Pilar Runtime: what works, what's painful, what should feed back into the platform regardless of which composition model wins long-term.

Why not many small apps?

The third option: give each team its own Next.js app in apps/, let Turbo cache builds per app, and deploy each as an independent service.

Pilar/
  apps/
    patientlist/        # Team A's app
    medications/        # Team B's app
    lab-results/        # Team C's app
    appointments/       # Team D's app

This is appealing because it's simple: each team owns a standard app, builds independently, deploys independently. No manifests, no signing, no host runtime.

Where it breaks down

N deployments × M customers. With 40 features, you'd need 40 Helm charts, 40 Docker images, 40 ingress rules, and 40 SMUD-GitOps product entries — per customer cluster. Both Pilar Embedded and Pilar Runtime keep this to 1–2 products.

No shared context. Patient selection in the patient list needs to propagate to labs, medications, and orders. With separate apps, every navigation is a full page load. Shared state requires coordination via URL parameters, cookies, or external stores.

No unified shell. Each app controls its own chrome — header, navigation bar, patient banner, theme. Keeping 40 apps visually consistent across releases is a governance challenge.

Arena Desktop integration multiplies. Arena Desktop would need to manage navigation across 40 different URLs instead of one host.

Infrastructure per customer. 40 pods (each running Node.js) instead of 1–2 pods. Hospital infrastructure is not cloud-elastic — resource budgets matter.

What small apps do well

  • Full SSR and React Server Components per feature
  • Complete process isolation (one app can't crash another)
  • Standard deployment model (no special tooling)
  • Teams can use any framework independently

Three-way comparison

|  | One big app (Embedded) | Many small apps | Pilar Runtime |
|---|---|---|---|
| Shared UX context | Built-in (same React tree) | Manual (URL params, cookies) | Built-in (host store, event bus) |
| Products in SMUD | 1 | N (one per feature) | 1 + N (host + per-feature images) |
| Build independence | No (monorepo rebuild) | Yes (Turbo cached) | Yes (per-feature artifact) |
| Shared shell | Built-in | None (each app owns chrome) | Built-in (host renders shell) |
| Infrastructure per customer | 1 pod | N pods | 1 pod (host + feature-loader sidecar) |
| SSR/RSC | Yes | Yes | No |
| Arena Desktop URLs | 1 | N | 1 |

The many-small-apps model works well in organizations with cloud-native infrastructure and few shared UX concerns. At DIPS — with customer-operated clusters, shared patient context, and Arena Desktop as the entry point — the operational multiplication makes it impractical.

A note on React Server Components

RSC is often cited as the primary advantage of a Next.js-based approach. In the DIPS deployment context — hospital environments with hardwired connections and low-latency local networks — the practical impact is narrower than in public-facing web applications:

  • Streaming HTML — latency to backend is ~1ms on a hospital LAN; the streaming advantage is less perceptible.
  • Reduced client JS — workstations typically have sufficient memory and processing power.
  • Server-side data fetching — Pilar Runtime's server functions solve the same data-loading problem with a different transport (JSON-RPC vs inline server component).

RSC is a genuine developer experience improvement and a valid reason to choose Pilar Embedded. It's not, however, a decisive architectural differentiator in this deployment context.

Where this is heading

Running two composition models is a means, not an end. The exploration has a few possible outcomes:

  1. Runtime composition proves its value — the learnings shape how the platform evolves. Maybe Pilar Embedded adopts runtime discovery for certain feature types. Maybe Pilar Runtime grows into the primary model.
  2. Build-time composition covers enough — the edge cases turn out to be manageable within the monorepo. Pilar Runtime's ideas (signed artifacts, fallback, independent release) get folded back into Embedded where useful.
  3. A hybrid emerges — some capabilities from each approach combine into a single platform that handles both tight integration and independent deployment.

In all cases, the goal is convergence. Pilar Runtime is designed to coexist with Pilar Embedded during this exploration — they share the same DIPS infrastructure and run behind the same Arena Desktop Client. But coexistence is the bridge, not the destination.