Local Development

Overview

Developer experience is the single most important factor in the long-term velocity of a micro frontends platform. With 3-5 teams working across a monorepo (shell, design system, shared libraries) and multiple polyrepos (individual MFEs), the local development story must be fast, reliable, and require minimal ceremony to get running.

Goals

  1. Fast feedback loops — sub-second HMR for in-MFE changes, seconds for cross-MFE integration changes.
  2. Realistic integration testing — the ability to run the full stack locally, including Workers, KV, D1, and Durable Objects, so that what works locally works in staging.
  3. Minimal setup friction — a new developer should go from git clone to a running dev environment in under five minutes.

Prerequisites

| Tool | Required Version |
| --- | --- |
| pnpm | 10.30.3 |
| Wrangler | v4 |
| Node.js | >= 20 LTS |

Three Modes of Local Development

[Diagram: the three local development modes — Standalone MFE (day-to-day feature work; Rsbuild dev server, Module Federation remote, mock shell context, sub-second HMR), Integrated Dev (shell + MFEs together; shell on :3000, MFEs on :3001, :3003, ...; Turborepo orchestration, full integration testing), and Cross-Repo Dev (library changes in a consumer; yalc publish/push, Verdaccio local registry, full npm lifecycle, pre-publish verification).]
| Mode | Use Case | Tools |
| --- | --- | --- |
| Standalone MFE dev | Day-to-day feature work within a single MFE | Rsbuild dev server, Module Federation, mock shell context |
| Cross-repo dev with yalc | Testing changes to the design system or shared libraries in a consuming MFE before publishing | yalc publish/push |
| Team integration with Verdaccio | Testing the full publish-and-consume cycle across teams before releasing to GitHub Packages | Verdaccio local registry |

Daily Development Workflow

Standalone MFE Development

The most common development mode. A developer works on a single MFE in isolation, with fast HMR and a mock shell context that simulates the host application.

Running pnpm dev starts the Rsbuild dev server with Module Federation configured. The MFE exposes its remote entry and can be loaded by the shell, but it also renders standalone with a mock shell context provider that supplies routing, auth state, and theme tokens.

Typical rsbuild.config.ts dev setup:

// packages/mfe-dashboard/rsbuild.config.ts
import { defineConfig } from '@rsbuild/core';           // ^1.7.3
import { pluginReact } from '@rsbuild/plugin-react';    // ^1.4.1
import { pluginModuleFederation } from '@module-federation/rsbuild-plugin';

export default defineConfig({
  plugins: [
    pluginReact(),
    // Module Federation is registered as a plugin — it must live inside the
    // plugins array, not at the top level of the config object.
    pluginModuleFederation({
      name: 'mfe_dashboard',
      filename: 'remoteEntry.js',
      exposes: {
        './DashboardApp': './src/bootstrap.tsx',
        './DashboardWidgets': './src/widgets/index.tsx',
      },
      shared: {
        react: { singleton: true, requiredVersion: '^19.2.4' },
        'react-dom': { singleton: true, requiredVersion: '^19.2.4' },
        'react-router': { singleton: true, requiredVersion: '^7.13.1' },
        '@org/design-system': { singleton: true },
      },
    }),
  ],
  server: {
    port: 3001,
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
  },
  dev: {
    hmr: true,
    liveReload: true,
  },
});

Note: react-router-dom has been consolidated into react-router as of v7. Use react-router for both shared config and imports.

Mock shell context for standalone rendering:

// packages/mfe-dashboard/src/dev-shell.tsx
import React from 'react';
import { BrowserRouter } from 'react-router';
import { ThemeProvider } from '@org/design-system';
import { ShellContext } from '@org/shell-contracts';
import { DashboardApp } from './bootstrap';

const mockShellContext = {
  user: {
    id: 'dev-user-001',
    email: 'developer@example.com',
    organizationId: 'org-dev-001',
    roles: ['admin'],
  },
  auth: {
    accessToken: 'dev-token-xxx',
    isAuthenticated: true,
    logout: () => console.log('[dev] logout called'),
  },
  navigation: {
    basePath: '/dashboard',
    navigate: (path: string) => console.log(`[dev] navigate to ${path}`),
  },
  featureFlags: {
    'dashboard-v2': true,
    'analytics-export': true,
  },
};

export function DevShell() {
  return (
    <BrowserRouter>
      <ThemeProvider defaultTheme="light">
        <ShellContext.Provider value={mockShellContext}>
          <DashboardApp />
        </ShellContext.Provider>
      </ThemeProvider>
    </BrowserRouter>
  );
}
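The `dev:standalone` script sets `STANDALONE=true` so the entry can decide whether to mount `DevShell`. A minimal sketch of that guard — the `isStandalone` helper and the entry wiring shown in comments are illustrative, not part of the shipped code:

```typescript
// Hypothetical helper for the MFE entry: mount the DevShell only when the dev
// server was started with STANDALONE=true (set by the dev:standalone script).
export function isStandalone(env: Record<string, string | undefined>): boolean {
  return env.STANDALONE === 'true';
}

// In src/index.tsx (sketch):
//   if (isStandalone(process.env)) {
//     import('react-dom/client').then(({ createRoot }) =>
//       createRoot(document.getElementById('root')!).render(<DevShell />)
//     );
//   }
// When loaded as a federation remote, the shell imports './bootstrap'
// directly, so this entry stays a no-op in integrated mode.
```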

How the design system is consumed during standalone development:

  • Option A (recommended for speed): The design system is consumed as a Module Federation remote from the staging CDN. This avoids running the design system dev server locally and guarantees alignment with the latest published version.
  • Option B (when modifying the design system): The design system dev server runs on localhost:3002, and the MFE's Module Federation config points to it. This enables HMR for design system changes reflected in the MFE.
# Option A: MFE with design system from staging CDN (default)
pnpm dev

# Option B: MFE with local design system
# Terminal 1 — design system
cd packages/design-system && pnpm dev  # starts on :3002

# Terminal 2 — MFE dashboard
MFE_DESIGN_SYSTEM_URL=http://localhost:3002 pnpm dev
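Inside the MFE's rsbuild config, the switch between Options A and B can be a one-line URL lookup. A sketch, assuming the `MFE_DESIGN_SYSTEM_URL` variable shown above; the staging hostname is illustrative:

```typescript
// Resolve the design system remote: local dev server when
// MFE_DESIGN_SYSTEM_URL is set (Option B), staging CDN otherwise (Option A).
const DESIGN_SYSTEM_STAGING = 'https://design-system.staging.example.com';

export function designSystemRemote(env: Record<string, string | undefined>): string {
  const baseUrl = env.MFE_DESIGN_SYSTEM_URL ?? DESIGN_SYSTEM_STAGING;
  return `design_system@${baseUrl}/remoteEntry.js`;
}

// Usage in rsbuild.config.ts:
//   remotes: { design_system: designSystemRemote(process.env) },
```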

package.json dev scripts:

// packages/mfe-dashboard/package.json
{
  "scripts": {
    "dev": "rsbuild dev",
    "dev:standalone": "STANDALONE=true rsbuild dev",
    "build": "rsbuild build",
    "preview": "rsbuild preview",
    "typecheck": "tsc --noEmit",
    "lint": "eslint src/ --ext .ts,.tsx",
    "test": "vitest run",
    "test:watch": "vitest"
  }
}

Integrated Development (Shell + MFEs)

For integration testing, the shell app runs locally and loads MFE remotes from either localhost dev servers or the staging CDN.

Shell dev server configuration:

// packages/shell/rsbuild.config.ts
import { defineConfig, loadEnv } from '@rsbuild/core';  // ^1.7.3
import { pluginReact } from '@rsbuild/plugin-react';    // ^1.4.1
import { pluginModuleFederation } from '@module-federation/rsbuild-plugin';

// loadEnv reads the .env files and injects matching variables into process.env
loadEnv({ prefixes: ['MFE_', 'RSBUILD_PUBLIC_'] });

const MFE_DEFAULTS = {
  dashboard: 'https://mfe-dashboard.staging.example.com',
  settings: 'https://mfe-settings.staging.example.com',
  analytics: 'https://mfe-analytics.staging.example.com',
};

function remotePath(name: string): string {
  const envKey = `MFE_${name.toUpperCase()}_URL`;
  const baseUrl = process.env[envKey] || MFE_DEFAULTS[name as keyof typeof MFE_DEFAULTS];
  return `${name}@${baseUrl}/remoteEntry.js`;
}

export default defineConfig({
  plugins: [
    pluginReact(),
    // As in the MFE configs, Module Federation is registered inside plugins.
    pluginModuleFederation({
      name: 'shell',
      remotes: {
        mfe_dashboard: remotePath('dashboard'),
        mfe_settings: remotePath('settings'),
        mfe_analytics: remotePath('analytics'),
      },
      shared: {
        react: { singleton: true, requiredVersion: '^19.2.4' },
        'react-dom': { singleton: true, requiredVersion: '^19.2.4' },
        'react-router': { singleton: true, requiredVersion: '^7.13.1' },
        '@org/design-system': { singleton: true },
      },
    }),
  ],
  server: {
    port: 3000,
  },
});

.env.local for the shell (loading MFEs from localhost):

# packages/shell/.env.local
# Override MFE remote URLs for local development.
# Comment out any line to fall back to the staging CDN.

MFE_DASHBOARD_URL=http://localhost:3001
MFE_SETTINGS_URL=http://localhost:3003
# MFE_ANALYTICS_URL=http://localhost:3004  # use staging for analytics

Starting integrated development:

# Terminal 1 — Shell
cd packages/shell && pnpm dev          # :3000

# Terminal 2 — Dashboard MFE
cd packages/mfe-dashboard && pnpm dev  # :3001

# Terminal 3 — Settings MFE (optional)
cd packages/mfe-settings && pnpm dev   # :3003

With Turborepo, this can be simplified:

# From monorepo root — starts shell + all MFEs concurrently
pnpm turbo dev --filter=shell --filter=mfe-dashboard --filter=mfe-settings
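For `turbo dev` to behave correctly, the `dev` task should be marked persistent and uncached in `turbo.json` (Turborepo 2.x uses the `tasks` key; 1.x called it `pipeline`). A minimal sketch:

```json
{
  "tasks": {
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```

Without `persistent: true`, Turborepo assumes the task exits and may schedule dependent tasks behind a dev server that never terminates.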

Cross-Repo Development with yalc

The intuitive approach for cross-repo development is pnpm link, which creates a symlink from the consuming project to the local source. However, this causes real problems in a React + Module Federation setup:

  1. Duplicate React instances. When pnpm link creates a symlink, the linked package resolves its own node_modules/react instead of the host's. React detects this and throws the infamous "Invalid hook call. Hooks can only be called inside the body of a function component." error. This happens because React hooks rely on a module-level singleton — two different react imports means two different dispatchers.

  2. Broken React context. Even if you work around duplicate React with aliases, context values from React.createContext() do not cross symlink boundaries. The provider and consumer reference different context objects.

  3. Module Federation shared scope conflicts. Module Federation's singleton: true configuration resolves duplicates at runtime, but symlinked packages bypass this resolution because they resolve at build time through the filesystem.

yalc avoids all of these issues by simulating a real npm publish locally. It packs the library into a tarball, stores it in a local store (~/.yalc), and installs it into the consuming project as if it came from a registry. The consuming project sees a normal dependency in node_modules/, not a symlink.

yalc Workflow

[Diagram: yalc workflow — in the monorepo, pnpm build then yalc publish into the ~/.yalc store; in the MFE repo, yalc add, pnpm install, and wire deps; then test with pnpm dev and iterate with change → rebuild → yalc push.]

Initial setup (one-time):

# Install yalc globally
pnpm add -g yalc

Step-by-step cross-repo development:

# Step 1: In the monorepo, build and publish the design system to yalc's local store
cd ~/dev/monorepo/packages/design-system
pnpm build
yalc publish
# Output: @org/design-system@1.4.0 published to local store.

# Step 2: In the polyrepo MFE, add the local version
cd ~/dev/mfe-dashboard
yalc add @org/design-system
# This:
#   - Creates a .yalc/ directory with the package contents
#   - Updates package.json to point to "file:.yalc/@org/design-system"
#   - Creates/updates yalc.lock

# Step 3: Install dependencies to wire everything up
pnpm install

# Step 4: Run the MFE dev server — it now uses the local design system
pnpm dev

Iterating on changes:

# After making changes to the design system:
cd ~/dev/monorepo/packages/design-system
pnpm build
yalc push
# yalc push automatically updates ALL projects that have added this package.
# The MFE dev server picks up the change (may require a page reload if
# HMR cannot handle the scope of the change).

Recommended scripts in the design system package.json:

// packages/design-system/package.json
{
  "scripts": {
    "dev": "rsbuild dev",
    "build": "rsbuild build",
    "yalc:publish": "pnpm build && yalc publish",
    "yalc:push": "pnpm build && yalc push",
    "yalc:watch": "chokidar 'src/**/*.{ts,tsx,css}' -c 'pnpm yalc:push' --debounce 500"
  },
  "devDependencies": {
    "chokidar-cli": "^3.0.0"
  }
}

Recommended scripts in the consuming MFE package.json:

// packages/mfe-dashboard/package.json (polyrepo)
{
  "scripts": {
    "yalc:add-ds": "yalc add @org/design-system && pnpm install",
    "yalc:remove-ds": "yalc remove @org/design-system && pnpm install",
    "yalc:check": "yalc check"
  }
}

Files generated by yalc (all gitignored):

mfe-dashboard/
├── .yalc/
│   └── @org/
│       └── design-system/      # local copy of the package
├── yalc.lock                   # tracks yalc-managed dependencies
└── .gitignore                  # must include .yalc/ and yalc.lock

Add to .gitignore:

# yalc local packages
.yalc/
yalc.lock

Cleanup

When finished with cross-repo development, restore the original dependency:

# Remove the yalc-linked package and restore the original version from the registry
cd ~/dev/mfe-dashboard
yalc remove @org/design-system
pnpm install

# Verify package.json no longer references file:.yalc/...
grep design-system package.json
# Should show: "@org/design-system": "^1.4.0"
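To catch a stray `file:.yalc/...` reference before it reaches a commit or CI, a small check script can scan `package.json`. A hypothetical guard (not part of yalc itself), suitable for a pre-commit hook:

```typescript
// check-yalc.ts — hypothetical pre-commit guard: fail if any dependency
// still points at a yalc file reference in package.json.
type DepMap = Record<string, string>;

export function findYalcDeps(pkg: {
  dependencies?: DepMap;
  devDependencies?: DepMap;
}): string[] {
  const all: DepMap = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.entries(all)
    .filter(([, version]) => version.startsWith('file:.yalc'))
    .map(([name]) => name);
}

// Usage (sketch):
//   const pkg = JSON.parse(readFileSync('package.json', 'utf8'));
//   const offenders = findYalcDeps(pkg);
//   if (offenders.length) { console.error(offenders); process.exit(1); }
```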

To clean up all yalc installations globally:

# Remove all yalc installations across all projects
yalc installations clean

# Remove all packages from the local yalc store
yalc installations show  # see what's stored
rm -rf ~/.yalc/packages  # nuclear option

Verdaccio for Team Integration Testing

Setup

Verdaccio is a lightweight, private npm proxy registry. It is used for team-wide integration testing of the full publish-and-consume lifecycle before packages are released to GitHub Packages.

Docker-based setup (recommended for teams):

# Start Verdaccio with persistent storage
docker run -d \
  --name verdaccio \
  -p 4873:4873 \
  -v verdaccio-storage:/verdaccio/storage \
  -v verdaccio-conf:/verdaccio/conf \
  verdaccio/verdaccio

# Verdaccio is now available at http://localhost:4873

Global install (simpler for individual use):

pnpm add -g verdaccio
verdaccio  # starts on http://localhost:4873

Configuration (.verdaccio/config.yaml):

# .verdaccio/config.yaml
storage: ./storage
plugins: ./plugins

web:
  title: "Platform Local Registry"
  logo: ""

auth:
  htpasswd:
    file: ./htpasswd
    max_users: 100

uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  github-packages:
    url: https://npm.pkg.github.com/
    auth:
      type: bearer
      token: "${GITHUB_TOKEN}"

packages:
  "@org/*":
    # Try local first, fall back to GitHub Packages
    access: "$all"
    publish: "$authenticated"
    proxy: github-packages

  "**":
    access: "$all"
    publish: "$authenticated"
    proxy: npmjs

server:
  keepAliveTimeout: 60

middlewares:
  audit:
    enabled: true

listen: 0.0.0.0:4873

log:
  type: stdout
  format: pretty
  level: warn

Workflow

Publishing to Verdaccio:

# Create a user (first time only)
npm adduser --registry http://localhost:4873

# Publish a pre-release version to Verdaccio
cd packages/design-system
pnpm build

# Use Changesets to version, then publish to Verdaccio instead of GitHub Packages
pnpm changeset version  # bumps versions based on changesets
pnpm publish --registry http://localhost:4873 --no-git-checks

# Or publish a specific pre-release tag
pnpm publish --registry http://localhost:4873 --tag canary --no-git-checks

Consuming from Verdaccio (other team members):

# In the consuming project, create or update .npmrc to point to Verdaccio
# .npmrc (project-level, gitignored for local testing)
@org:registry=http://localhost:4873

# Install the pre-release version
pnpm add @org/design-system@canary
# or
pnpm add @org/design-system@1.5.0-canary.1

# Run the project and verify the integration
pnpm dev

Using Verdaccio to test a full Changesets release cycle:

# 1. Create changesets as normal
pnpm changeset

# 2. Version packages
pnpm changeset version

# 3. Publish to Verdaccio (instead of GitHub Packages) to verify
pnpm changeset publish --registry http://localhost:4873

# 4. In consuming projects, install from Verdaccio and test
# 5. Once verified, publish for real to GitHub Packages
pnpm changeset publish

When to Use yalc vs Verdaccio

| Aspect | yalc | Verdaccio |
| --- | --- | --- |
| Scope | Single developer, local machine | Team-wide, network-accessible |
| Setup time | Seconds (pnpm add -g yalc) | Minutes (Docker or global install + config) |
| Simulates | npm pack + local install | Full npm registry publish/install cycle |
| Version resolution | File path reference in package.json | Standard semver resolution |
| Changesets integration | None | Full — can test changeset publish |
| CI/CD testing | Not applicable | Can run in CI for integration tests |
| Use when | Iterating on a library while testing in a consumer | Testing the full release pipeline before publishing |
| Speed | Instant updates with yalc push | Requires pnpm publish + pnpm install per iteration |
| Team visibility | Only on your machine | Shared across the team (if Verdaccio is on a shared server) |

Rule of thumb:

  • Use yalc for the inner development loop — you are actively changing a library and want to see the effect in a consumer immediately.
  • Use Verdaccio for the outer verification loop — you want to confirm that the publish artifact, version resolution, and install behavior work correctly before releasing.

Running Workers Locally

wrangler dev

[Diagram: Wrangler v4 local development stack — wrangler dev runs Miniflare as the local runtime, with local persistence in .wrangler/state/ for Durable Objects (WebSockets + storage), KV (key-value store), D1 (SQLite database), and R2 (object storage).]

Cloudflare's wrangler dev (v4) command starts a local Worker development server powered by Miniflare. It supports KV, R2, D1, and Durable Objects with local persistence so that state survives restarts.

Note: Wrangler v3 reached end-of-life in Q1 2026. All projects should use Wrangler v4. Run pnpm add -D wrangler@^4 to upgrade.

Typical wrangler.toml configuration for local development:

# workers/api-gateway/wrangler.toml
name = "api-gateway"
main = "src/index.ts"
compatibility_date = "2026-02-25"

[dev]
port = 8787
local_protocol = "http"
ip = "0.0.0.0"

# KV namespace bindings
[[kv_namespaces]]
binding = "SESSION_STORE"
id = "abc123"                         # production KV ID
preview_id = "dev-session-store"      # used in wrangler dev

# D1 database binding
[[d1_databases]]
binding = "DB"
database_name = "platform-db"
database_id = "def456"

# R2 bucket binding
[[r2_buckets]]
binding = "ASSETS"
bucket_name = "platform-assets"
preview_bucket_name = "platform-assets-dev"

# Service bindings to other Workers
[[services]]
binding = "AUTH_WORKER"
service = "auth-worker"

[[services]]
binding = "TENANT_WORKER"
service = "tenant-worker"

# Durable Object bindings
[durable_objects]
bindings = [
  { name = "COLLAB_SESSIONS", class_name = "CollabSession" }
]

[[migrations]]
tag = "v1"
new_classes = ["CollabSession"]
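The bindings above surface on the Worker's `env` object. A type-only sketch of a matching interface, with names mirroring the wrangler.toml above and binding types assumed to come from `@cloudflare/workers-types`:

```typescript
// workers/api-gateway/src/env.ts — type-only sketch mirroring wrangler.toml.
// Assumes @cloudflare/workers-types is installed for the binding types.
export interface Env {
  SESSION_STORE: KVNamespace;              // [[kv_namespaces]]
  DB: D1Database;                          // [[d1_databases]]
  ASSETS: R2Bucket;                        // [[r2_buckets]]
  AUTH_WORKER: Fetcher;                    // [[services]]
  TENANT_WORKER: Fetcher;                  // [[services]]
  COLLAB_SESSIONS: DurableObjectNamespace; // [durable_objects]
}
```

Keeping this interface next to the config makes a renamed or missing binding a type error rather than a runtime surprise.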

Starting the Worker locally:

# Start with local persistence (state stored in .wrangler/state/)
cd workers/api-gateway
wrangler dev --persist-to .wrangler/state

# The Worker is now available at http://localhost:8787

Running multiple Workers with service bindings:

When Workers reference each other through service bindings, you need to run each Worker in a separate terminal. Wrangler v4 automatically discovers other locally running Workers for service binding resolution. Remote service bindings are now GA and no longer require any experimental flags.

# Terminal 1 — Auth Worker
cd workers/auth-worker
wrangler dev --port 8788 --persist-to .wrangler/state

# Terminal 2 — Tenant Worker
cd workers/tenant-worker
wrangler dev --port 8789 --persist-to .wrangler/state

# Terminal 3 — API Gateway (calls Auth and Tenant Workers via service bindings)
cd workers/api-gateway
wrangler dev --port 8787 --persist-to .wrangler/state

# The API Gateway will route service binding calls to the locally running Workers.

Seeding local D1 for development:

# Apply migrations to local D1
cd workers/api-gateway
wrangler d1 execute platform-db --local --file=./migrations/0001_init.sql
wrangler d1 execute platform-db --local --file=./seeds/dev-data.sql

# Verify
wrangler d1 execute platform-db --local --command="SELECT * FROM tenants LIMIT 5"

Miniflare for Durable Objects

wrangler dev uses Miniflare under the hood, which provides full local support for Durable Objects including WebSocket connections.

Local Durable Object testing:

// workers/collab-worker/src/collab-session.ts
import { DurableObject } from 'cloudflare:workers';

export class CollabSession extends DurableObject {
  private connections: Set<WebSocket> = new Set();

  async fetch(request: Request): Promise<Response> {
    if (request.headers.get('Upgrade') === 'websocket') {
      const pair = new WebSocketPair();
      const [client, server] = Object.values(pair);

      this.ctx.acceptWebSocket(server);
      this.connections.add(server);

      return new Response(null, { status: 101, webSocket: client });
    }

    // REST API for Durable Object state
    const url = new URL(request.url);
    if (url.pathname === '/state') {
      const state = await this.ctx.storage.get('document');
      return Response.json({ state });
    }

    return new Response('Not found', { status: 404 });
  }

  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    // Broadcast to all connected clients
    const data = typeof message === 'string' ? message : new TextDecoder().decode(message);
    for (const conn of this.connections) {
      if (conn !== ws && conn.readyState === WebSocket.OPEN) {
        conn.send(data);
      }
    }

    // Persist state
    const parsed = JSON.parse(data as string);
    if (parsed.type === 'update') {
      await this.ctx.storage.put('document', parsed.payload);
    }
  }

  async webSocketClose(ws: WebSocket) {
    this.connections.delete(ws);
  }
}

Testing WebSockets locally:

# Start the collab Worker
cd workers/collab-worker
wrangler dev --port 8790 --persist-to .wrangler/state

# Test with wscat (install: pnpm add -g wscat)
wscat -c ws://localhost:8790/session/doc-123

# Or test from the browser console:
# const ws = new WebSocket('ws://localhost:8790/session/doc-123');
# ws.onmessage = (e) => console.log('received:', e.data);
# ws.send(JSON.stringify({ type: 'update', payload: { text: 'hello' } }));

Integration test setup with Miniflare (via unstable_dev):

// workers/collab-worker/test/collab-session.test.ts
import { unstable_dev, type UnstableDevWorker } from 'wrangler';
import { describe, it, expect, beforeAll, afterAll } from 'vitest';

describe('CollabSession Durable Object', () => {
  let worker: UnstableDevWorker;

  beforeAll(async () => {
    worker = await unstable_dev('src/index.ts', {
      vars: {
        ENVIRONMENT: 'test',
      },
    });
  });

  afterAll(async () => {
    await worker.stop();
  });

  it('should accept WebSocket connections', async () => {
    const resp = await worker.fetch('/session/test-doc', {
      headers: { Upgrade: 'websocket' },
    });
    expect(resp.status).toBe(101);
  });

  it('should persist document state', async () => {
    // Send an update via WebSocket, then verify via REST
    const resp = await worker.fetch('/session/test-doc/state');
    const data = await resp.json();
    expect(data).toHaveProperty('state');
  });
});

Known limitations of local Durable Object development:

  • No hibernation simulation. The Hibernatable WebSockets API works locally, but the actual hibernation behavior (eviction from memory, wake-on-message) is not simulated. The Durable Object stays in memory for the lifetime of the wrangler dev process.
  • No global uniqueness. Locally, all Durable Object instances run in the same isolate. In production, they are globally unique and may run in different data centers.
  • Alarm scheduling. Durable Object alarms work locally but timing may differ from production.

Environment Variables and Secrets

.dev.vars

Cloudflare Workers use .dev.vars for local secrets, analogous to .env for Node.js applications. This file is loaded by wrangler dev and its contents are injected into the Worker's environment bindings.

Security: Never commit .dev.vars to version control. Rotate any secrets in .dev.vars immediately if they are accidentally committed. Use wrangler secret put <KEY> to manage production secrets separately. Treat all values in .dev.vars as test/development-only credentials and avoid reusing production secrets for local development.

Example .dev.vars file:

# workers/api-gateway/.dev.vars
# This file is NOT committed to git.

# WorkOS authentication
WORKOS_API_KEY=sk_test_abc123def456
WORKOS_CLIENT_ID=client_01ABC
WORKOS_WEBHOOK_SECRET=whsec_test_xyz789

# Database
DATABASE_URL=postgresql://localhost:5432/platform_dev

# External services
STRIPE_SECRET_KEY=sk_test_stripe_key_here
SENDGRID_API_KEY=SG.test_key_here

# Internal
JWT_SIGNING_SECRET=dev-signing-secret-not-for-production
ENCRYPTION_KEY=dev-encryption-key-32-bytes-long!!

# Environment indicator
ENVIRONMENT=development

Add to .gitignore:

# Cloudflare Workers local secrets
.dev.vars

# Wrangler local state (KV, D1, R2 data)
.wrangler/

Frontend Environment Variables

Rsbuild uses the RSBUILD_PUBLIC_ prefix for environment variables that should be available in client-side code. These are statically replaced at build time.

.env (committed, shared defaults):

# packages/shell/.env
# Default values — overridden by .env.local and .env.production

RSBUILD_PUBLIC_APP_NAME=Platform
RSBUILD_PUBLIC_API_GATEWAY_URL=https://api.staging.example.com
RSBUILD_PUBLIC_WORKOS_CLIENT_ID=client_01ABC
RSBUILD_PUBLIC_WORKOS_REDIRECT_URI=http://localhost:3000/auth/callback

.env.local (not committed, local overrides):

# packages/shell/.env.local
# Local overrides — NOT committed to git.

# Point API gateway to local Worker
RSBUILD_PUBLIC_API_GATEWAY_URL=http://localhost:8787

# MFE remote URLs (local dev servers)
MFE_DASHBOARD_URL=http://localhost:3001
MFE_SETTINGS_URL=http://localhost:3003
MFE_ANALYTICS_URL=http://localhost:3004

# WorkOS local development
RSBUILD_PUBLIC_WORKOS_CLIENT_ID=client_01ABC_DEV
RSBUILD_PUBLIC_WORKOS_REDIRECT_URI=http://localhost:3000/auth/callback

# Feature flags for local development
RSBUILD_PUBLIC_ENABLE_DEV_TOOLS=true
RSBUILD_PUBLIC_MOCK_AUTH=false

Accessing environment variables in code:

// packages/shell/src/config.ts
export const config = {
  appName: process.env.RSBUILD_PUBLIC_APP_NAME!,
  apiGatewayUrl: process.env.RSBUILD_PUBLIC_API_GATEWAY_URL!,
  workos: {
    clientId: process.env.RSBUILD_PUBLIC_WORKOS_CLIENT_ID!,
    redirectUri: process.env.RSBUILD_PUBLIC_WORKOS_REDIRECT_URI!,
  },
  enableDevTools: process.env.RSBUILD_PUBLIC_ENABLE_DEV_TOOLS === 'true',
} as const;
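Because these variables are statically inlined at build time, a missing one silently becomes `undefined` at runtime. A small fail-fast guard (the `requireEnv` helper is a hypothetical addition) produces a readable error at startup instead:

```typescript
// Hypothetical fail-fast helper: throw at startup if a statically inlined
// environment variable was missing at build time.
export function requireEnv(name: string, value: string | undefined): string {
  if (value === undefined || value === '') {
    throw new Error(`Missing environment variable: ${name} (check .env / .env.local)`);
  }
  return value;
}

// Usage in config.ts:
//   apiGatewayUrl: requireEnv('RSBUILD_PUBLIC_API_GATEWAY_URL',
//                             process.env.RSBUILD_PUBLIC_API_GATEWAY_URL),
```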

Environment variable loading order in Rsbuild:

  1. .env — shared defaults (committed)
  2. .env.local — local overrides (gitignored)
  3. .env.development — development-specific (committed)
  4. .env.development.local — local development overrides (gitignored)
  5. Shell environment variables — highest priority
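The precedence above is "last layer wins." A tiny sketch of the merge semantics — illustrative only, since Rsbuild implements this internally:

```typescript
// Later layers override earlier ones:
// .env < .env.local < .env.development < .env.development.local < shell env.
type EnvLayer = Record<string, string>;

export function mergeEnvLayers(layers: EnvLayer[]): EnvLayer {
  // Object spread applies layers left to right, so later entries win.
  return layers.reduce<EnvLayer>((acc, layer) => ({ ...acc, ...layer }), {});
}
```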

Makefile

The following Makefile provides a unified interface for all common development tasks. Place it at the monorepo root.

Prerequisite: The dev-workers and dev-all targets use concurrently to run multiple processes in parallel. Install it as a root devDependency:

pnpm add -D concurrently -w

# Makefile — Monorepo root
# Usage: make <target>

.PHONY: help dev dev-shell dev-workers dev-all \
        yalc-publish yalc-push yalc-watch yalc-clean \
        test test-integration e2e \
        build typecheck lint format \
        verdaccio-start verdaccio-publish \
        db-migrate db-seed clean

# ─── Development ──────────────────────────────────────────────────

dev: ## Start a specific MFE dev server (usage: make dev MFE=dashboard)
	@if [ -z "$(MFE)" ]; then \
		echo "Usage: make dev MFE=<name>"; \
		echo "Available: dashboard, settings, analytics"; \
		exit 1; \
	fi
	pnpm --filter mfe-$(MFE) dev

dev-shell: ## Start shell with local MFE remotes
	pnpm --filter shell dev

dev-design-system: ## Start design system dev server (for cross-MFE testing)
	pnpm --filter @org/design-system dev

dev-workers: ## Start all Workers locally (parallel)
	@echo "Starting Workers..."
	npx concurrently \
		-n "api-gw,auth,tenant" \
		-c "blue,green,yellow" \
		"cd workers/api-gateway && wrangler dev --port 8787 --persist-to .wrangler/state" \
		"cd workers/auth-worker && wrangler dev --port 8788 --persist-to .wrangler/state" \
		"cd workers/tenant-worker && wrangler dev --port 8789 --persist-to .wrangler/state"

dev-all: ## Start everything: shell + MFEs + Workers
	npx concurrently \
		-n "shell,dashboard,settings,api-gw,auth,tenant" \
		-c "cyan,blue,magenta,green,yellow,red" \
		"pnpm --filter shell dev" \
		"pnpm --filter mfe-dashboard dev" \
		"pnpm --filter mfe-settings dev" \
		"cd workers/api-gateway && wrangler dev --port 8787 --persist-to .wrangler/state" \
		"cd workers/auth-worker && wrangler dev --port 8788 --persist-to .wrangler/state" \
		"cd workers/tenant-worker && wrangler dev --port 8789 --persist-to .wrangler/state"

# ─── Cross-Repo (yalc) ───────────────────────────────────────────

yalc-publish: ## Build and publish design system to yalc local store
	pnpm --filter @org/design-system build
	cd packages/design-system && yalc publish
	@echo "Published to yalc. Run 'yalc add @org/design-system' in consuming repos."

yalc-push: ## Build and push design system updates to all yalc consumers
	pnpm --filter @org/design-system build
	cd packages/design-system && yalc push
	@echo "Pushed to all yalc consumers."

yalc-watch: ## Watch design system sources and auto-push to yalc on change
	cd packages/design-system && pnpm yalc:watch

yalc-clean: ## Remove all yalc installations
	yalc installations clean
	@echo "All yalc installations cleaned."

# ─── Verdaccio ────────────────────────────────────────────────────

verdaccio-start: ## Start Verdaccio local registry via Docker
	docker run -d \
		--name verdaccio \
		-p 4873:4873 \
		-v verdaccio-storage:/verdaccio/storage \
		verdaccio/verdaccio
	@echo "Verdaccio running at http://localhost:4873"

verdaccio-publish: ## Publish all packages to local Verdaccio
	pnpm turbo build --filter='./packages/*'
	pnpm --filter './packages/*' publish --registry http://localhost:4873 --no-git-checks
	@echo "Published all packages to Verdaccio."

# ─── Testing ──────────────────────────────────────────────────────

test: ## Run unit tests across all packages
	pnpm turbo test

test-integration: ## Run integration tests (requires Workers running locally)
	pnpm turbo test:integration

e2e: ## Run Playwright E2E tests
	pnpm --filter e2e-tests exec playwright test

e2e-ui: ## Run Playwright E2E tests with UI mode
	pnpm --filter e2e-tests exec playwright test --ui

# ─── Build & Quality ─────────────────────────────────────────────

build: ## Production build of all packages
	pnpm turbo build

typecheck: ## TypeScript type checking across all packages
	pnpm turbo typecheck

lint: ## Run ESLint + Prettier check
	pnpm turbo lint

format: ## Auto-fix formatting with Prettier
	pnpm prettier --write "**/*.{ts,tsx,json,md,css}"

# ─── Database ─────────────────────────────────────────────────────

db-migrate: ## Run D1 migrations locally
	cd workers/api-gateway && \
	for f in migrations/*.sql; do \
		echo "Applying $$f..."; \
		wrangler d1 execute platform-db --local --file=$$f; \
	done

db-seed: ## Seed local D1 with development data
	cd workers/api-gateway && \
	wrangler d1 execute platform-db --local --file=seeds/dev-data.sql
	@echo "Local database seeded."

# ─── Utilities ────────────────────────────────────────────────────

clean: ## Clean all build artifacts, node_modules caches, and wrangler state
	pnpm turbo clean
	rm -rf .turbo
	find . -name '.wrangler' -type d -prune -exec rm -rf {} +
	@echo "Cleaned."

help: ## Show this help message
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

.DEFAULT_GOAL := help

Troubleshooting Common Issues

[Diagram: full local development environment — the browser loads the Shell on :3000 (Module Federation host), which loads the Dashboard MFE (:3001), Settings MFE (:3003), and Analytics MFE (:3004); Workers run locally (API Gateway :8787, Auth Worker :8788, Tenant Worker :8789); the yalc store lives at ~/.yalc and Verdaccio runs on :4873.]

Port Conflicts When Running Multiple Dev Servers

Symptom: Error: listen EADDRINUSE :::3001 when starting a dev server.

Solution:

# Find what is using the port
lsof -i :3001

# Kill the process
kill -9 <PID>

# Or use a different port
PORT=3005 pnpm dev

Assign fixed, well-known ports to each service to avoid conflicts:

| Service | Port |
| --- | --- |
| Shell | 3000 |
| MFE Dashboard | 3001 |
| MFE Design System | 3002 |
| MFE Settings | 3003 |
| MFE Analytics | 3004 |
| API Gateway Worker | 8787 |
| Auth Worker | 8788 |
| Tenant Worker | 8789 |
| Collab Worker | 8790 |
| Verdaccio | 4873 |

Module Federation HMR Not Working Across Remotes

Symptom: Changes in a remote MFE do not reflect in the shell via HMR; only a full page reload picks them up.

Explanation: Module Federation v2 supports HMR within a remote (changes to files inside the remote's boundary hot-reload correctly), but changes to the remote entry manifest or exposed module boundaries require a full reload of the host. This is a known limitation.

Workarounds:

// In the shell's rsbuild.config.ts, enable live reload as a fallback
export default defineConfig({
  dev: {
    hmr: true,
    liveReload: true, // falls back to full reload when HMR fails
  },
});

For faster iteration, develop the MFE in standalone mode (with the mock shell context) where HMR works fully, and use integrated mode only for final verification.

CORS Errors When Loading MF Remotes from Different Localhost Ports

Symptom: Access to script at 'http://localhost:3001/remoteEntry.js' from origin 'http://localhost:3000' has been blocked by CORS policy.

Solution: Ensure each MFE dev server sets the Access-Control-Allow-Origin header:

// rsbuild.config.ts for each MFE
export default defineConfig({
  server: {
    port: 3001,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type',
    },
  },
});

KV/D1 State Not Persisting Between wrangler dev Restarts

Symptom: Local KV or D1 data disappears after restarting wrangler dev.

Cause: By default, wrangler dev uses in-memory storage. State is only persisted when you explicitly pass --persist-to.

Solution:

# Always use --persist-to for stateful development
wrangler dev --persist-to .wrangler/state

# Verify state files exist
ls .wrangler/state/
# Should show: v4/ directory with kv/, d1/, r2/ subdirectories

Add --persist-to .wrangler/state to all Worker dev scripts in the Makefile and package.json.

Design System Types Not Updating After yalc push

Symptom: TypeScript still sees old type definitions for the design system after running yalc push, even though the runtime code has updated.

Cause: TypeScript caches resolved module types. The TypeScript language server does not watch .yalc/ for changes.

Solution:

# Option 1: Restart the TypeScript language server
# In VS Code: Cmd+Shift+P → "TypeScript: Restart TS Server"

# Option 2: Clear the TypeScript build cache
rm -rf node_modules/.cache
rm -rf tsconfig.tsbuildinfo

# Option 3: Touch the tsconfig to force re-resolution
touch tsconfig.json

For a more robust workflow, add the --changed flag to the design system's push script:

// packages/design-system/package.json
{
  "scripts": {
    "yalc:push": "pnpm build && yalc push --changed"
  }
}

The --changed flag ensures yalc only pushes if the build output actually changed, reducing unnecessary invalidation noise.

wrangler dev Cannot Resolve Service Bindings

Symptom: Calls to env.AUTH_WORKER.fetch() fail with "Service not found" when running locally.

Solution: All Workers involved in service bindings must be running locally at the same time. Wrangler discovers other local Workers automatically, but the order of startup matters — the Worker making the service binding call must start after the target Worker is ready.

# Start dependencies first, then the gateway
# Terminal 1
cd workers/auth-worker && wrangler dev --port 8788

# Terminal 2 (after auth-worker is ready)
cd workers/api-gateway && wrangler dev --port 8787
