Cloudflare Workers & Durable Objects for Micro Frontends Architecture
Table of Contents
- Durable Objects for WebSockets
- Workers as API Gateway
- Workers for Serving Micro Frontends
- Backend-for-Frontend Pattern on Workers
- Local Development with Wrangler
- Overall Architecture Recommendation
- Gotchas & Pitfalls
1. Durable Objects for WebSockets
How Durable Objects Handle WebSocket Connections
Durable Objects (DOs) provide single-threaded, stateful coordination points ideal for WebSocket servers. There are two APIs available:
Hibernatable WebSocket API (Recommended)
This is the primary pattern for production use. It allows Durable Objects to sleep while maintaining WebSocket connections, dramatically reducing costs for applications with many idle connections. During hibernation, billable duration charges do not accrue, but the WebSocket connection stays open. When a message arrives, the runtime automatically recreates the DO, runs the constructor, and delivers the message to the appropriate handler.
import { DurableObject } from "cloudflare:workers";
interface Env {
ROOMS: DurableObjectNamespace<ChatRoom>;
}
export class ChatRoom extends DurableObject {
sessions: Map<WebSocket, { id: string; username: string }>;
constructor(ctx: DurableObjectState, env: Env) {
super(ctx, env);
// Re-hydrate sessions from hibernation
this.sessions = new Map();
this.ctx.getWebSockets().forEach((ws) => {
const meta = ws.deserializeAttachment();
this.sessions.set(ws, { ...meta });
});
}
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
if (request.headers.get("Upgrade") !== "websocket") {
return new Response("Expected WebSocket", { status: 426 });
}
const username = url.searchParams.get("username");
if (!username) {
return new Response("Missing username", { status: 400 });
}
const webSocketPair = new WebSocketPair();
const [client, server] = Object.values(webSocketPair);
// acceptWebSocket() makes this connection "hibernatable"
// Unlike ws.accept(), this tells the runtime the DO can sleep
this.ctx.acceptWebSocket(server);
const sessionData = { id: crypto.randomUUID(), username };
server.serializeAttachment(sessionData); // Persists across hibernation (max 2048 bytes)
this.sessions.set(server, sessionData);
// Notify other clients
this.broadcast({ type: "join", username }, sessionData.id);
return new Response(null, { status: 101, webSocket: client });
}
async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
if (typeof message !== "string") return;
const session = this.sessions.get(ws);
if (!session) return;
const parsed = JSON.parse(message);
switch (parsed.type) {
case "chat":
this.broadcast({
type: "chat",
username: session.username,
text: parsed.text,
timestamp: Date.now(),
});
break;
case "get-participants": {
// Block scope keeps the const declaration local to this case
const participants = Array.from(this.sessions.values())
.map(s => s.username);
ws.send(JSON.stringify({ type: "participants", participants }));
break;
}
}
}
async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean) {
const session = this.sessions.get(ws);
if (session) {
this.broadcast({ type: "leave", username: session.username }, session.id);
}
this.sessions.delete(ws);
ws.close(code, "Closing"); // Always reciprocate close frames
}
async webSocketError(ws: WebSocket, error: unknown) {
const session = this.sessions.get(ws);
this.sessions.delete(ws);
ws.close(1011, "Unexpected error");
}
private broadcast(message: object, excludeId?: string) {
const payload = JSON.stringify(message);
this.ctx.getWebSockets().forEach((ws) => {
const { id } = ws.deserializeAttachment();
if (id !== excludeId) {
ws.send(payload);
}
});
}
}
// Entry Worker routes to the correct DO
export default {
async fetch(request: Request, env: Env) {
const url = new URL(request.url);
const roomName = url.pathname.split("/")[2]; // e.g., /ws/room-name
// Deterministic routing: same room name always hits same DO
const id = env.ROOMS.idFromName(roomName);
const stub = env.ROOMS.get(id);
return stub.fetch(request);
},
};
Key pattern: Per-connection state with serializeAttachment/deserializeAttachment
This is critical for hibernation. You can store up to 2,048 bytes per connection that persists across hibernation cycles. Store user IDs, session tokens, and metadata here.
Patterns for Real-Time Communication
Chat Rooms / Channels
- One DO per room/channel (the "atom of coordination" pattern)
- Use idFromName(roomName) for deterministic routing
- Broadcast messages to all connected WebSockets via this.ctx.getWebSockets()
Live Updates (Dashboards, Notifications)
- One DO per user or per entity being observed
- Clients subscribe by connecting; the DO pushes updates
- Combine with alarms for polling external data sources
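A hedged sketch of the alarms-plus-push combination above. The polling URL, 10-second interval, and payload shape are illustrative assumptions, and the Cloudflare runtime types are reduced to minimal interfaces to keep the example self-contained:

```typescript
// Minimal stand-ins for the Durable Object runtime types used here
interface WsLike { send(data: string): void }
interface StateLike {
  getWebSockets(): WsLike[];
  storage: { setAlarm(time: number): Promise<void> | void };
}

export class LiveDashboard {
  constructor(private ctx: StateLike) {}

  // Alarm handler: poll an external source, push to subscribers, re-arm
  async alarm(): Promise<void> {
    const res = await fetch("https://example.com/metrics"); // assumed endpoint
    const data = await res.json();
    this.push(JSON.stringify({ type: "update", data }));
    // Keep polling every 10s while anyone is still connected
    if (this.ctx.getWebSockets().length > 0) {
      await this.ctx.storage.setAlarm(Date.now() + 10_000);
    }
  }

  // Fan a payload out to every connected client
  push(payload: string): void {
    for (const ws of this.ctx.getWebSockets()) ws.send(payload);
  }
}
```

In a real DO, alarm() is the method the runtime invokes when the time set via ctx.storage.setAlarm() arrives.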
Collaborative Editing
- One DO per document
- Store document state in SQLite-backed storage
- Use operational transforms or CRDTs within the DO
- The single-threaded nature of DOs provides natural serialization of edits
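A deliberately tiny sketch of that serialization property, assuming a simple insert/delete op format (a production editor would use an OT or CRDT library):

```typescript
// Minimal edit operations; the shape is an assumption for illustration
type Op =
  | { kind: "insert"; pos: number; text: string }
  | { kind: "delete"; pos: number; len: number };

// Apply one op to the document string
export function applyOp(doc: string, op: Op): string {
  if (op.kind === "insert") {
    return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
  }
  return doc.slice(0, op.pos) + doc.slice(op.pos + op.len);
}
```

Because the DO processes webSocketMessage calls one at a time, applying applyOp per incoming op yields a consistent document without any locking.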
Parent-Child Hierarchies for Scale
When a single DO becomes a bottleneck, shard into child DOs. For example, a game server DO managing matches can spawn per-match DOs.
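One possible shape for that sharding, assuming a MATCHES Durable Object namespace binding and deterministic child names derived from the parent (the namespace type is reduced to a minimal interface):

```typescript
// Minimal stand-in for a DurableObjectNamespace binding
interface NamespaceLike {
  idFromName(name: string): unknown;
  get(id: unknown): { fetch(req: Request): Promise<Response> };
}

// Deterministic child naming: the same lobby + match always maps to the
// same child DO, so no registry of children is needed
export function childName(lobbyId: string, matchId: string): string {
  return `${lobbyId}:match:${matchId}`;
}

// The parent (or the entry Worker) forwards match traffic to the child
export async function routeToMatch(
  matches: NamespaceLike,
  lobbyId: string,
  matchId: string,
  req: Request,
): Promise<Response> {
  const id = matches.idFromName(childName(lobbyId, matchId));
  return matches.get(id).fetch(req);
}
```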
Connection Limits, Pricing, and Scaling
Pricing (Workers Paid Plan, $5/month minimum):
| Resource | Included | Overage |
|---|---|---|
| Requests (includes WS msgs at 20:1 ratio) | 1M/month | $0.15/million |
| Duration (GB-s) | 400,000/month | $12.50/million GB-s |
| SQLite row reads | 25B/month | $0.001/million |
| SQLite row writes | 50M/month | $1.00/million |
| Storage | 5 GB-month | $0.20/GB-month |
WebSocket-specific billing:
- Each WebSocket connection counts as 1 request
- Incoming WebSocket messages use a 20:1 ratio: 100 incoming messages = 5 billable requests
- No charge for outgoing WebSocket messages
- With Hibernation API: no duration charges while hibernating (only charged while event handlers actively run)
Limits:
- Soft limit: ~1,000 requests/second per DO instance
- No hard cap on WebSocket connections per DO, but practical limits depend on workload
- WebSocket message size: 1 MiB maximum (for messages received by the DO)
- CPU per request: 30 seconds (configurable up to 5 minutes)
- Per-connection attachment: 2,048 bytes max
- Per-object storage: 10 GB (SQLite-backed)
Scaling characteristics:
- Simple operations: ~1,000 req/sec per DO
- Moderate processing: ~500-750 req/sec
- Complex operations: ~200-500 req/sec
- Formula: Required DOs = Total req/sec ÷ Per-DO capacity (round up)
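The sizing formula as a small helper, with a worked example:

```typescript
// Number of DO shards needed for a target throughput; always round up
export function requiredDOs(totalRps: number, perDoRps: number): number {
  return Math.ceil(totalRps / perDoRps);
}

// e.g. 10,000 req/sec of moderate-complexity work at ~500 req/sec per DO
// requires 20 shards
const shards = requiredDOs(10_000, 500);
```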
Structuring a Durable Object for Rooms/Channels
Recommended pattern: SQLite-backed DO with hibernation
export class ChatRoom extends DurableObject {
sql: SqlStorage;
constructor(ctx: DurableObjectState, env: Env) {
super(ctx, env);
this.sql = ctx.storage.sql;
// Run migrations once, blocking concurrent requests
ctx.blockConcurrencyWhile(async () => {
this.sql.exec(`
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT NOT NULL,
content TEXT NOT NULL,
created_at INTEGER NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_messages_created
ON messages(created_at);
`);
});
// Hibernated WebSocket connections are re-attached automatically;
// iterate this.ctx.getWebSockets() here if in-memory session state
// needs re-hydrating from their attachments
}
async fetch(request: Request) {
const url = new URL(request.url);
// HTTP endpoints for the room
if (url.pathname.endsWith("/history")) {
const rows = this.sql.exec(
"SELECT * FROM messages ORDER BY created_at DESC LIMIT 50"
).toArray();
return Response.json(rows);
}
// WebSocket upgrade
if (request.headers.get("Upgrade") === "websocket") {
return this.handleWebSocketUpgrade(request);
}
return new Response("Not found", { status: 404 });
}
private handleWebSocketUpgrade(request: Request): Response {
const url = new URL(request.url);
const pair = new WebSocketPair();
const [client, server] = Object.values(pair);
// Tags allow filtering later, e.g. this.ctx.getWebSockets("all")
this.ctx.acceptWebSocket(server, ["all"]);
// Attach session metadata so webSocketMessage can read it after hibernation
const username = url.searchParams.get("username") ?? "anonymous";
server.serializeAttachment({ id: crypto.randomUUID(), username });
return new Response(null, { status: 101, webSocket: client });
}
async webSocketMessage(ws: WebSocket, message: string) {
const data = JSON.parse(message);
const session = ws.deserializeAttachment();
// Persist to SQLite
this.sql.exec(
"INSERT INTO messages (username, content, created_at) VALUES (?, ?, ?)",
session.username, data.text, Date.now()
);
// Broadcast to all connected clients
const outgoing = JSON.stringify({
type: "message",
username: session.username,
text: data.text,
timestamp: Date.now(),
});
for (const client of this.ctx.getWebSockets()) {
client.send(outgoing);
}
}
}
wrangler.toml configuration:
name = "chat-service"
main = "src/index.ts"
compatibility_date = "2024-12-01"
[[durable_objects.bindings]]
name = "ROOMS"
class_name = "ChatRoom"
[[migrations]]
tag = "v1"
new_sqlite_classes = ["ChatRoom"]
Message Batching Best Practice
For high-frequency data (sensor readings, game state), batch messages to reduce context switches:
async webSocketMessage(ws: WebSocket, message: string) {
// Messages may arrive batched from client
const batch = JSON.parse(message);
if (Array.isArray(batch)) {
// Process batch atomically
for (const msg of batch) {
this.processMessage(ws, msg);
}
// Single broadcast with aggregated state
this.broadcastState();
}
}
Recommendation: batch every 50-100ms or every 50-100 messages on the client side, whichever threshold is hit first.
2. Workers as API Gateway
Implementing the API Gateway Pattern
A gateway Worker serves as the single entry point for all API requests, handling routing, authentication, rate limiting, and request fanout to backend services.
import { WorkerEntrypoint } from "cloudflare:workers";
interface Env {
// Service bindings to backend Workers
AUTH_SERVICE: Service<AuthService>;
USER_SERVICE: Service<UserService>;
MFE_CATALOG: Service<MfeCatalogService>;
NOTIFICATION_SERVICE: Service<NotificationService>;
// KV for rate limiting / config
GATEWAY_CONFIG: KVNamespace;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
// --- CORS handling ---
if (request.method === "OPTIONS") {
return handleCORS(request);
}
// --- Rate limiting (lightweight, per-IP) ---
const clientIP = request.headers.get("CF-Connecting-IP") ?? "unknown";
const rateLimitOk = await checkRateLimit(env.GATEWAY_CONFIG, clientIP);
if (!rateLimitOk) {
return new Response("Too Many Requests", { status: 429 });
}
// --- Authentication (except public routes) ---
const publicPaths = ["/api/auth/login", "/api/auth/register", "/api/health"];
let authContext = null;
if (!publicPaths.some(p => url.pathname.startsWith(p))) {
const token = request.headers.get("Authorization")?.replace("Bearer ", "");
if (!token) {
return new Response("Unauthorized", { status: 401 });
}
// Validate via Auth service (RPC call, no HTTP overhead)
authContext = await env.AUTH_SERVICE.validateToken(token);
if (!authContext.valid) {
return new Response("Invalid token", { status: 401 });
}
}
// --- Route to backend services ---
try {
if (url.pathname.startsWith("/api/auth")) {
return await env.AUTH_SERVICE.fetch(stripPrefix(request, "/api/auth"));
}
if (url.pathname.startsWith("/api/users")) {
// Inject auth context into the request for downstream
const enrichedRequest = addAuthHeader(request, authContext);
return await env.USER_SERVICE.fetch(stripPrefix(enrichedRequest, "/api/users"));
}
if (url.pathname.startsWith("/api/mfe")) {
return await env.MFE_CATALOG.fetch(stripPrefix(request, "/api/mfe"));
}
if (url.pathname.startsWith("/api/notifications")) {
return await env.NOTIFICATION_SERVICE.fetch(
stripPrefix(request, "/api/notifications")
);
}
return new Response("Not Found", { status: 404 });
} catch (err) {
console.error("Gateway error:", err);
return new Response("Internal Server Error", { status: 500 });
}
},
};
function stripPrefix(request: Request, prefix: string): Request {
const url = new URL(request.url);
url.pathname = url.pathname.slice(prefix.length) || "/";
return new Request(url.toString(), request);
}
function addAuthHeader(request: Request, authContext: any): Request {
const headers = new Headers(request.headers);
headers.set("X-User-Id", authContext.userId);
headers.set("X-User-Role", authContext.role);
// Spreading a Request does not copy its method/body; pass the original
// request as the init source so they are preserved
return new Request(request, { headers });
}
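The gateway above calls handleCORS and checkRateLimit without defining them; one possible sketch follows. The allowed methods/headers and the 60-requests-per-minute fixed window are assumptions, and because KV is eventually consistent this is best-effort limiting, not exact (use a Durable Object where exact counting matters):

```typescript
// Preflight response for CORS; origins/headers here are assumptions
function handleCORS(request: Request): Response {
  return new Response(null, {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": request.headers.get("Origin") ?? "*",
      "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
      "Access-Control-Allow-Headers": "Authorization, Content-Type",
      "Access-Control-Max-Age": "86400",
    },
  });
}

// Minimal stand-in for the KVNamespace binding methods used below
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl: number }): Promise<void>;
}

// Fixed-window counter keyed by IP and minute
async function checkRateLimit(kv: KVLike, ip: string, limit = 60): Promise<boolean> {
  const key = `rl:${ip}:${Math.floor(Date.now() / 60_000)}`;
  const count = parseInt((await kv.get(key)) ?? "0", 10);
  if (count >= limit) return false;
  await kv.put(key, String(count + 1), { expirationTtl: 120 });
  return true;
}
```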
Service Bindings: Worker-to-Worker Communication
There are two patterns for inter-Worker communication:
1. RPC via WorkerEntrypoint (Recommended)
This feels like calling a local function. No HTTP serialization overhead.
// auth-service/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export default class AuthService extends WorkerEntrypoint<Env> {
async validateToken(token: string): Promise<{ valid: boolean; userId?: string; role?: string }> {
// Validate JWT, check revocation list, etc.
try {
const payload = await verifyJWT(token, this.env.JWT_SECRET);
return { valid: true, userId: payload.sub, role: payload.role };
} catch {
return { valid: false };
}
}
async createSession(email: string, password: string): Promise<{ token: string }> {
// Authenticate and return JWT
const user = await this.env.DB.prepare(
"SELECT * FROM users WHERE email = ?"
).bind(email).first();
if (!user || !await verifyPassword(password, user.password_hash)) {
throw new Error("Invalid credentials");
}
const token = await signJWT({ sub: user.id, role: user.role }, this.env.JWT_SECRET);
return { token };
}
// Named entrypoint for admin operations
// Bind separately: entrypoint = "AdminAuth"
}
export class AdminAuth extends WorkerEntrypoint<Env> {
async revokeAllSessions(userId: string): Promise<void> {
await this.env.DB.prepare(
"DELETE FROM sessions WHERE user_id = ?"
).bind(userId).run();
}
}
2. HTTP via fetch() (For forwarding full requests)
// Forward entire request to downstream Worker
const response = await env.USER_SERVICE.fetch(request);
wrangler.toml for the Gateway Worker:
name = "api-gateway"
main = "src/index.ts"
compatibility_date = "2024-12-01"
# Service bindings to other Workers
[[services]]
binding = "AUTH_SERVICE"
service = "auth-service"
[[services]]
binding = "USER_SERVICE"
service = "user-service"
[[services]]
binding = "MFE_CATALOG"
service = "mfe-catalog-service"
[[services]]
binding = "NOTIFICATION_SERVICE"
service = "notification-service"
# Named entrypoint binding
[[services]]
binding = "ADMIN_AUTH"
service = "auth-service"
entrypoint = "AdminAuth"
# KV for gateway config
[[kv_namespaces]]
binding = "GATEWAY_CONFIG"
id = "abc123"
Authentication/Authorization at the Gateway Level
Pattern: Centralized auth check, distributed authorization
- The gateway validates the JWT (via RPC to AuthService) on every request
- The gateway injects X-User-Id and X-User-Role headers into downstream requests
- Each downstream Worker performs its own authorization (e.g., "can this user access this resource?")
// In the gateway: after auth validation
const headers = new Headers(request.headers);
headers.set("X-User-Id", authContext.userId);
headers.set("X-User-Role", authContext.role);
headers.set("X-Request-Id", crypto.randomUUID()); // For distributed tracing
// Passing the original request as the init source preserves method and body
const enriched = new Request(request, { headers });
Key consideration: Service bindings are internal and cannot be accessed from the public internet. Downstream Workers trust the gateway's injected headers because only the gateway can reach them via service bindings.
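On the downstream side, a service behind the gateway can therefore trust those headers and focus on resource-level checks. A minimal sketch (the owner/admin rule is an assumption for illustration):

```typescript
// Resource-level authorization in a downstream Worker: the gateway has
// already authenticated the caller, so only the injected identity headers
// and this service's own rules are consulted
function authorize(request: Request, resourceOwnerId: string): boolean {
  const userId = request.headers.get("X-User-Id");
  const role = request.headers.get("X-User-Role");
  // Admins may access anything; otherwise only the resource owner may
  return role === "admin" || userId === resourceOwnerId;
}
```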
3. Workers for Serving Micro Frontends
Serving Static Assets
Cloudflare offers three primary approaches for serving MFE bundles:
Option A: Workers Static Assets (Recommended for MFEs)
Each MFE is its own Worker with static assets bundled in. The router Worker forwards requests via service bindings.
# mfe-dashboard/wrangler.toml
name = "mfe-dashboard"
main = "src/index.ts"
compatibility_date = "2024-12-01"
[assets]
directory = "./dist" # Built frontend assets
binding = "ASSETS" # Access assets programmatically
// mfe-dashboard/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export default class DashboardMFE extends WorkerEntrypoint<Env> {
async fetch(request: Request): Promise<Response> {
// Serve static assets, with SSR fallback for routes
return this.env.ASSETS.fetch(request);
}
// RPC method for the router to call
async getAsset(path: string): Promise<Response> {
return this.env.ASSETS.fetch(new Request(`https://assets.local${path}`));
}
}
Option B: R2 for Large/Versioned Bundles
Store multiple versions of each MFE in R2, using a structured key scheme:
r2-bucket/
mfe-dashboard/
v1.2.0/
index.html
assets/
main.abc123.js
styles.def456.css
v1.3.0/
index.html
assets/
main.xyz789.js
styles.uvw012.css
mfe-settings/
v2.0.0/
...
// asset-server/src/index.ts
export default {
async fetch(request: Request, env: Env) {
const url = new URL(request.url);
const mfeName = url.pathname.split("/")[1];
// Look up active version from KV
const activeVersion = await env.MFE_CONFIG.get(`active:${mfeName}`);
if (!activeVersion) {
return new Response("MFE not found", { status: 404 });
}
// Construct R2 key
const assetPath = url.pathname.split("/").slice(2).join("/") || "index.html";
const r2Key = `${mfeName}/${activeVersion}/${assetPath}`;
const object = await env.ASSETS_BUCKET.get(r2Key);
if (!object) {
return new Response("Asset not found", { status: 404 });
}
const headers = new Headers();
headers.set("Content-Type", getContentType(assetPath));
headers.set("ETag", object.httpEtag);
// Fingerprinted assets get long cache; HTML gets short cache
if (assetPath.match(/\.[a-f0-9]{8,}\.(js|css|woff2?)$/)) {
headers.set("Cache-Control", "public, max-age=31536000, immutable");
} else {
headers.set("Cache-Control", "public, max-age=60, s-maxage=300");
}
return new Response(object.body, { headers });
},
};
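The snippet above calls getContentType without defining it; a minimal sketch:

```typescript
// Extension-to-MIME lookup covering the asset types served above
const MIME: Record<string, string> = {
  html: "text/html; charset=utf-8",
  js: "text/javascript",
  css: "text/css",
  json: "application/json",
  svg: "image/svg+xml",
  woff2: "font/woff2",
  png: "image/png",
};

function getContentType(path: string): string {
  const ext = path.split(".").pop() ?? "";
  return MIME[ext] ?? "application/octet-stream";
}
```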
Option C: Cloudflare Pages (Simplest)
Each MFE is a Pages project. The router Worker fetches from each Pages domain. This is simpler but gives less control over version selection.
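A hedged sketch of that router, assuming each MFE lives at a *.pages.dev domain (the domain map here is hard-coded for illustration; a real router would likely load it from KV):

```typescript
// Assumed Pages origins, one per MFE
const MFE_ORIGINS: Record<string, string> = {
  dashboard: "https://mfe-dashboard.pages.dev",
  settings: "https://mfe-settings.pages.dev",
};

// Map /<mfe>/<rest...> to the corresponding Pages URL, or null if unknown
export function resolveMfeUrl(pathname: string): string | null {
  const [, mfe, ...rest] = pathname.split("/");
  const origin = MFE_ORIGINS[mfe];
  return origin ? `${origin}/${rest.join("/")}` : null;
}

export default {
  async fetch(request: Request): Promise<Response> {
    const target = resolveMfeUrl(new URL(request.url).pathname);
    if (!target) return new Response("Unknown MFE", { status: 404 });
    // Forward method, headers, and body to the Pages origin
    return fetch(target, request);
  },
};
```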
Version Management System
Use KV (for speed and global distribution) or D1 (for relational queries and audit trails) to store which version of each MFE is active.
KV-based version configuration:
// Data structure in KV
// Key: "mfe-config"
// Value:
{
"dashboard": {
"activeVersion": "v1.3.0",
"availableVersions": ["v1.2.0", "v1.3.0", "v1.4.0-beta"],
"updatedAt": "2025-06-15T10:30:00Z",
"updatedBy": "admin@example.com"
},
"settings": {
"activeVersion": "v2.0.0",
"availableVersions": ["v1.9.0", "v2.0.0"],
"updatedAt": "2025-06-14T08:00:00Z",
"updatedBy": "admin@example.com"
},
"navbar": {
"activeVersion": "v3.1.0",
"availableVersions": ["v3.0.0", "v3.1.0"],
"updatedAt": "2025-06-10T14:00:00Z",
"updatedBy": "admin@example.com"
}
}
Admin API Worker for version management:
// mfe-admin/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
interface MfeConfig {
activeVersion: string;
availableVersions: string[];
updatedAt: string;
updatedBy: string;
}
interface AllConfigs {
[mfeName: string]: MfeConfig;
}
export default class MfeAdminService extends WorkerEntrypoint<Env> {
// Get all MFE configurations
async getConfigs(): Promise<AllConfigs> {
const config = await this.env.MFE_CONFIG.get("mfe-config", "json");
return config as AllConfigs ?? {};
}
// Get active version for a specific MFE
async getActiveVersion(mfeName: string): Promise<string | null> {
const configs = await this.getConfigs();
return configs[mfeName]?.activeVersion ?? null;
}
// Set the active version (admin operation)
async setActiveVersion(mfeName: string, version: string, adminEmail: string): Promise<void> {
const configs = await this.getConfigs();
if (!configs[mfeName]) {
throw new Error(`MFE '${mfeName}' not found`);
}
if (!configs[mfeName].availableVersions.includes(version)) {
throw new Error(`Version '${version}' not available for '${mfeName}'`);
}
configs[mfeName].activeVersion = version;
configs[mfeName].updatedAt = new Date().toISOString();
configs[mfeName].updatedBy = adminEmail;
await this.env.MFE_CONFIG.put("mfe-config", JSON.stringify(configs));
// Purge CDN cache for this MFE
await this.purgeCache(mfeName);
}
// Register a new version after deployment
async registerVersion(mfeName: string, version: string): Promise<void> {
const configs = await this.getConfigs();
if (!configs[mfeName]) {
configs[mfeName] = {
activeVersion: version,
availableVersions: [version],
updatedAt: new Date().toISOString(),
updatedBy: "system",
};
} else {
if (!configs[mfeName].availableVersions.includes(version)) {
configs[mfeName].availableVersions.push(version);
}
}
await this.env.MFE_CONFIG.put("mfe-config", JSON.stringify(configs));
}
private async purgeCache(mfeName: string) {
// Use Cloudflare API to purge cache by prefix
// or use Cache-Tag based purging
await fetch(
`https://api.cloudflare.com/client/v4/zones/${this.env.ZONE_ID}/purge_cache`,
{
method: "POST",
headers: {
Authorization: `Bearer ${this.env.CF_API_TOKEN}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ prefixes: [`/${mfeName}/`] }),
}
);
}
// HTTP handler for admin UI
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
if (request.method === "GET" && url.pathname === "/configs") {
return Response.json(await this.getConfigs());
}
if (request.method === "POST" && url.pathname === "/set-version") {
const body = await request.json() as any;
await this.setActiveVersion(body.mfeName, body.version, body.adminEmail);
return Response.json({ success: true });
}
return new Response("Not found", { status: 404 });
}
}
D1-based alternative (for audit trails):
CREATE TABLE mfe_versions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
mfe_name TEXT NOT NULL,
version TEXT NOT NULL,
is_active BOOLEAN DEFAULT FALSE,
deployed_at TEXT NOT NULL,
activated_at TEXT,
activated_by TEXT,
UNIQUE(mfe_name, version)
);
CREATE TABLE version_audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
mfe_name TEXT NOT NULL,
old_version TEXT,
new_version TEXT NOT NULL,
changed_by TEXT NOT NULL,
changed_at TEXT NOT NULL,
reason TEXT
);
D1 is better when you need:
- Audit trails of who changed what and when
- Querying version history
- Relational lookups (e.g., "which MFEs were updated in the last 24 hours?")
- Transactional guarantees when updating multiple MFEs atomically
KV is better when you need:
- Fastest possible reads (globally replicated, eventually consistent)
- Simple key-value lookups for the router Worker
- Lower cost at scale
Recommended hybrid approach: Use D1 as the source of truth in the admin service, and sync active versions to KV for the router Worker to read at sub-millisecond latency.
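A sketch of that sync step, run by the admin service whenever an active version changes, using the mfe_versions table from the D1 schema above (the bindings are reduced to minimal interfaces, and the KV key name is an assumption):

```typescript
// Minimal stand-ins for the D1 and KV binding methods used below
interface D1Like {
  prepare(sql: string): { all(): Promise<{ results: Record<string, unknown>[] }> };
}
interface KVLike {
  put(key: string, value: string): Promise<void>;
}

// Read active versions from D1 (source of truth) and publish a single
// KV key the router Worker reads on its hot path
async function syncActiveVersionsToKV(db: D1Like, kv: KVLike): Promise<void> {
  const { results } = await db
    .prepare("SELECT mfe_name, version FROM mfe_versions WHERE is_active = TRUE")
    .all();
  const active: Record<string, string> = {};
  for (const row of results) {
    active[row.mfe_name as string] = row.version as string;
  }
  await kv.put("active-versions", JSON.stringify(active));
}
```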
CDN Caching Strategies
// In the router or asset-serving Worker
function getCacheHeaders(assetPath: string, mfeVersion: string): Headers {
const headers = new Headers();
// Fingerprinted assets (main.abc123.js) - immutable, cache forever
if (assetPath.match(/\.[a-f0-9]{8,}\.(js|css|woff2?|png|jpg|svg)$/)) {
headers.set("Cache-Control", "public, max-age=31536000, immutable");
}
// HTML files - short cache, revalidate often
else if (assetPath.endsWith(".html") || assetPath === "/") {
headers.set("Cache-Control", "public, max-age=0, must-revalidate");
headers.set("CDN-Cache-Control", "max-age=60"); // Cloudflare edge caches 60s
}
// Manifests, service workers - no cache
else if (assetPath.match(/manifest\.json|service-worker\.js/)) {
headers.set("Cache-Control", "no-cache, no-store");
}
// Everything else
else {
headers.set("Cache-Control", "public, max-age=3600, s-maxage=86400");
}
// Cache-Tag for targeted purging when version changes
headers.set("Cache-Tag", `mfe-${mfeVersion}`);
return headers;
}
Cache purging on version switch: When the admin changes an active version, purge the CDN cache for that MFE using the Cloudflare API (purge by prefix or Cache-Tag).
4. Backend-for-Frontend Pattern on Workers
Architecture Overview
BFF Worker Per Micro Frontend
Each MFE has a dedicated BFF Worker that:
- Aggregates data from multiple backend services
- Transforms data into the shape the MFE needs
- Handles MFE-specific business logic
- Keeps the MFE thin (no complex API orchestration on the client)
// bff-dashboard/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
interface Env {
AUTH_SERVICE: Service<AuthService>;
USER_SERVICE: Service<UserService>;
ANALYTICS_DB: D1Database;
CACHE: KVNamespace;
}
export default class DashboardBFF extends WorkerEntrypoint<Env> {
// Called by the dashboard MFE via API gateway
async getDashboardData(userId: string): Promise<DashboardData> {
// Parallel fetch from multiple sources
const [user, recentActivity, stats] = await Promise.all([
this.env.USER_SERVICE.getUser(userId),
this.getRecentActivity(userId),
this.getStats(userId),
]);
// Transform into the exact shape the dashboard MFE expects
return {
user: {
name: user.name,
avatar: user.avatarUrl,
plan: user.subscription.planName,
},
activity: recentActivity.map(a => ({
id: a.id,
description: a.description,
timeAgo: formatTimeAgo(a.timestamp),
})),
stats: {
totalProjects: stats.projects,
activeCollaborators: stats.collaborators,
storageUsed: formatBytes(stats.storageBytes),
},
};
}
private async getRecentActivity(userId: string) {
// Check KV cache first
const cached = await this.env.CACHE.get(`activity:${userId}`, "json");
if (cached) return cached;
const result = await this.env.ANALYTICS_DB.prepare(
"SELECT * FROM activity WHERE user_id = ? ORDER BY timestamp DESC LIMIT 20"
).bind(userId).all();
// Cache for 5 minutes
await this.env.CACHE.put(
`activity:${userId}`,
JSON.stringify(result.results),
{ expirationTtl: 300 }
);
return result.results;
}
private async getStats(userId: string) {
return this.env.ANALYTICS_DB.prepare(
"SELECT COUNT(DISTINCT project_id) as projects, COUNT(DISTINCT collaborator_id) as collaborators, SUM(storage_bytes) as storageBytes FROM user_stats WHERE user_id = ?"
).bind(userId).first();
}
// HTTP handler fallback
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
const userId = request.headers.get("X-User-Id");
if (!userId) return new Response("Unauthorized", { status: 401 });
if (url.pathname === "/dashboard-data") {
const data = await this.getDashboardData(userId);
return Response.json(data);
}
return new Response("Not found", { status: 404 });
}
}
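formatTimeAgo and formatBytes above are left undefined; one possible sketch of these BFF-owned presentation helpers (keeping them in the BFF means every MFE renders them consistently):

```typescript
// Coarse relative-time label for activity feeds
function formatTimeAgo(timestamp: number): string {
  const s = Math.floor((Date.now() - timestamp) / 1000);
  if (s < 60) return "just now";
  if (s < 3600) return `${Math.floor(s / 60)}m ago`;
  if (s < 86400) return `${Math.floor(s / 3600)}h ago`;
  return `${Math.floor(s / 86400)}d ago`;
}

// Human-readable byte count, one decimal place
function formatBytes(bytes: number): string {
  const units = ["B", "KB", "MB", "GB", "TB"];
  let i = 0;
  let n = bytes;
  while (n >= 1024 && i < units.length - 1) { n /= 1024; i++; }
  return `${n.toFixed(1)} ${units[i]}`;
}
```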
Shared Services as Separate Workers
// shared/auth-service/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export default class AuthService extends WorkerEntrypoint<Env> {
async validateToken(token: string) {
return verifyJWT(token, this.env.JWT_SECRET);
}
async getUserPermissions(userId: string): Promise<string[]> {
const result = await this.env.DB.prepare(
"SELECT permission FROM user_permissions WHERE user_id = ?"
).bind(userId).all();
return result.results.map(r => r.permission as string);
}
async hasPermission(userId: string, permission: string): Promise<boolean> {
const perms = await this.getUserPermissions(userId);
return perms.includes(permission);
}
}
// shared/user-service/src/index.ts
export default class UserService extends WorkerEntrypoint<Env> {
async getUser(userId: string): Promise<User> {
return this.env.DB.prepare(
"SELECT * FROM users WHERE id = ?"
).bind(userId).first();
}
async updateUser(userId: string, updates: Partial<User>): Promise<User> {
// ... update logic
}
}
Service Bindings Configuration for BFF Pattern
# bff-dashboard/wrangler.toml
name = "bff-dashboard"
main = "src/index.ts"
compatibility_date = "2024-12-01"
[[services]]
binding = "AUTH_SERVICE"
service = "auth-service"
[[services]]
binding = "USER_SERVICE"
service = "user-service"
[[d1_databases]]
binding = "ANALYTICS_DB"
database_name = "analytics"
database_id = "xxx"
[[kv_namespaces]]
binding = "CACHE"
id = "yyy"
Environment Management (dev/staging/prod)
Per-environment configuration:
# bff-dashboard/wrangler.toml
name = "bff-dashboard"
main = "src/index.ts"
compatibility_date = "2024-12-01"
# Shared (inheritable) config
[vars]
APP_NAME = "dashboard-bff"
# --- Development ---
[env.dev]
vars = { ENVIRONMENT = "development", LOG_LEVEL = "debug" }
[[env.dev.services]]
binding = "AUTH_SERVICE"
service = "auth-service-dev"
[[env.dev.services]]
binding = "USER_SERVICE"
service = "user-service-dev"
[[env.dev.d1_databases]]
binding = "ANALYTICS_DB"
database_name = "analytics-dev"
database_id = "dev-xxx"
[[env.dev.kv_namespaces]]
binding = "CACHE"
id = "dev-yyy"
# --- Staging ---
[env.staging]
vars = { ENVIRONMENT = "staging", LOG_LEVEL = "info" }
[[env.staging.services]]
binding = "AUTH_SERVICE"
service = "auth-service-staging"
[[env.staging.services]]
binding = "USER_SERVICE"
service = "user-service-staging"
[[env.staging.d1_databases]]
binding = "ANALYTICS_DB"
database_name = "analytics-staging"
database_id = "staging-xxx"
[[env.staging.kv_namespaces]]
binding = "CACHE"
id = "staging-yyy"
# --- Production ---
[env.production]
vars = { ENVIRONMENT = "production", LOG_LEVEL = "warn" }
[[env.production.services]]
binding = "AUTH_SERVICE"
service = "auth-service"
[[env.production.services]]
binding = "USER_SERVICE"
service = "user-service"
[[env.production.d1_databases]]
binding = "ANALYTICS_DB"
database_name = "analytics"
database_id = "prod-xxx"
[[env.production.kv_namespaces]]
binding = "CACHE"

id = "prod-yyy"
Deployment commands:
# Deploy to dev
npx wrangler deploy --env dev
# Deploy to staging
npx wrangler deploy --env staging
# Deploy to production
npx wrangler deploy --env production
Secrets per environment:
# Set secrets per environment
npx wrangler secret put JWT_SECRET --env production
npx wrangler secret put JWT_SECRET --env staging
Local development secrets:
# .dev.vars (for default environment)
JWT_SECRET=local-dev-secret
# .dev.vars.staging (for staging environment locally)
JWT_SECRET=staging-secret
Important: Bindings and environment variables are non-inheritable in Wrangler. You must redeclare them in each environment block. The environment name becomes part of the deployed Worker name (e.g., bff-dashboard-staging), which is what other Workers reference in their service bindings.
5. Local Development with Wrangler
Running Multiple Workers Locally
Method 1: Multi-config single command (Recommended)
Pass multiple configuration files to a single wrangler dev invocation:
npx wrangler dev \
--config ./gateway/wrangler.toml \
--config ./bff-dashboard/wrangler.toml \
--config ./auth-service/wrangler.toml \
--config ./user-service/wrangler.toml
The first config is the primary Worker exposed over HTTP. The remaining Workers are only accessible via service bindings from the primary Worker.
Method 2: Separate terminal sessions (also works)
Since September 2025, Workers running in separate wrangler dev sessions can communicate with each other via a dev registry. This means you can:
# Terminal 1
cd gateway && npx wrangler dev
# Terminal 2
cd auth-service && npx wrangler dev
# Terminal 3
cd bff-dashboard && npx wrangler dev
Service bindings automatically resolve across separate dev commands. The dev registry handles discovery.
Method 3: Remote bindings for services you do not own
For services maintained by other teams, use remote bindings to hit deployed versions:
# wrangler.toml (local development overrides)
[[services]]
binding = "AUTH_SERVICE"
service = "auth-service"
remote = true # Hit the deployed auth-service instead of local
Local Durable Object Testing
Durable Objects work locally with wrangler dev out of the box. Miniflare (embedded in Wrangler) simulates DOs using the same workerd runtime used in production.
npx wrangler dev
This automatically:
- Loads DO bindings from your wrangler.toml
- Creates local SQLite databases for DO storage
- Handles WebSocket connections locally
- Simulates hibernation behavior
Important limitation: Durable Object bindings cannot be set to remote: true. You must either:
- Run DOs locally (default behavior)
- Deploy the DO Worker and use a remote service binding from your local Worker to communicate with it
Adding local test data:
# Seed local KV data
npx wrangler kv key put --binding MFE_CONFIG "mfe-config" '{"dashboard":{"activeVersion":"v1.0.0"}}' --local
# Seed local D1 data
npx wrangler d1 execute analytics-dev --local --command "INSERT INTO ..."
Testing with Vitest
Cloudflare provides @cloudflare/vitest-pool-workers for isolated testing:
// __tests__/chat-room.test.ts
import { env, runInDurableObject, runDurableObjectAlarm } from "cloudflare:test";
import { describe, it, expect } from "vitest";
describe("ChatRoom", () => {
it("should accept WebSocket connections", async () => {
const id = env.ROOMS.idFromName("test-room");
const stub = env.ROOMS.get(id);
const response = await stub.fetch("http://localhost/ws?username=alice", {
headers: { Upgrade: "websocket" },
});
expect(response.status).toBe(101);
expect(response.webSocket).toBeDefined();
});
it("should broadcast messages", async () => {
const id = env.ROOMS.idFromName("test-room");
const stub = env.ROOMS.get(id);
// Connect a client first; with isolated storage each test starts fresh
await stub.fetch("http://localhost/ws?username=alice", {
headers: { Upgrade: "websocket" },
});
await runInDurableObject(stub, async (instance) => {
// Access the DO instance directly for testing
const sessions = instance.ctx.getWebSockets();
expect(sessions.length).toBeGreaterThan(0);
});
});
});
How wrangler dev Works Under the Hood
- Wrangler reads your `wrangler.toml` configuration
- It bundles your Worker code using esbuild
- It starts a local workerd process (the same runtime as production) via Miniflare
- Bindings (KV, D1, DOs, R2) are simulated locally using SQLite
- File watching detects changes and hot-reloads the Worker
- A dev registry (filesystem-based) enables cross-session service binding resolution
Key flags:
# Default: fully local
npx wrangler dev
# Use remote resources (not recommended for DOs)
npx wrangler dev --remote
# Specify port
npx wrangler dev --port 8787
# Specify environment
npx wrangler dev --env staging
# With Inspector (Chrome DevTools debugging)
npx wrangler dev --inspector-port 9229
6. Overall Architecture Recommendation
Recommended Project Structure
cloudflare-micro-frontends/
├── packages/
│ ├── gateway/ # API Gateway Worker
│ │ ├── src/index.ts
│ │ └── wrangler.toml
│ ├── router/ # MFE Router Worker (frontend)
│ │ ├── src/index.ts
│ │ └── wrangler.toml
│ ├── mfe-dashboard/ # Dashboard MFE (static + optional SSR)
│ │ ├── src/
│ │ ├── dist/ # Built assets
│ │ └── wrangler.toml
│ ├── mfe-settings/ # Settings MFE
│ │ ├── src/
│ │ ├── dist/
│ │ └── wrangler.toml
│ ├── mfe-navbar/ # Navbar MFE (shared shell)
│ │ ├── src/
│ │ ├── dist/
│ │ └── wrangler.toml
│ ├── bff-dashboard/ # BFF for Dashboard
│ │ ├── src/index.ts
│ │ └── wrangler.toml
│ ├── bff-settings/ # BFF for Settings
│ │ ├── src/index.ts
│ │ └── wrangler.toml
│ ├── services/
│ │ ├── auth/ # Shared Auth Service
│ │ │ ├── src/index.ts
│ │ │ └── wrangler.toml
│ │ ├── user/ # Shared User Service
│ │ │ ├── src/index.ts
│ │ │ └── wrangler.toml
│ │ └── realtime/ # Durable Objects for WebSockets
│ │ ├── src/index.ts
│ │ └── wrangler.toml
│ ├── mfe-admin/ # Admin panel for version management
│ │ ├── src/index.ts
│ │ └── wrangler.toml
│ └── shared/ # Shared TypeScript types/utilities
│ ├── types/
│ └── utils/
├── package.json # Monorepo root (npm workspaces or turborepo)
├── turbo.json # If using Turborepo
└── tsconfig.base.json
Router Worker Configuration (Cloudflare's MFE Pattern)
# router/wrangler.toml
name = "mfe-router"
main = "src/index.ts"
compatibility_date = "2024-12-01"
# Service bindings to each MFE
[[services]]
binding = "DASHBOARD"
service = "mfe-dashboard"
[[services]]
binding = "SETTINGS"
service = "mfe-settings"
[[services]]
binding = "NAVBAR"
service = "mfe-navbar"
[vars]
ROUTES = '''
[
{ "path": "/dashboard", "binding": "DASHBOARD", "preload": true },
{ "path": "/settings", "binding": "SETTINGS" },
{ "path": "/", "binding": "NAVBAR" }
]
'''
# Optional: smooth transitions between MFEs
SMOOTH_TRANSITIONS = "true"
The Cloudflare MFE router automatically:
- Rewrites asset paths (e.g.,
/assets/main.jsbecomes/dashboard/assets/main.js) - Handles CSS
url()rewrites - Supports View Transitions API for smooth navigation
- Injects Speculation Rules for Chromium browser prefetching
- Strips mount prefixes before forwarding to MFE Workers
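The routing step itself can be sketched as follows — a hypothetical illustration of longest-prefix matching plus prefix stripping, not the actual Cloudflare router implementation. Only the `ROUTES` shape matches the config above; everything else is assumed:

```typescript
interface Route {
  path: string;
  binding: string;
  preload?: boolean;
}

interface ServiceBinding {
  fetch(request: Request): Promise<Response>;
}

// env holds the ROUTES JSON string plus one service binding per MFE
type RouterEnv = { ROUTES: string } & Record<string, ServiceBinding | string>;

// Longest-prefix match so /dashboard/x hits DASHBOARD, not the "/" fallback
function matchRoute(routes: Route[], pathname: string): Route | undefined {
  return [...routes]
    .sort((a, b) => b.path.length - a.path.length)
    .find((r) => r.path === "/" || pathname === r.path || pathname.startsWith(r.path + "/"));
}

const router = {
  async fetch(request: Request, env: RouterEnv): Promise<Response> {
    const routes: Route[] = JSON.parse(env.ROUTES);
    const url = new URL(request.url);
    const route = matchRoute(routes, url.pathname);
    if (!route) return new Response("No MFE matched", { status: 404 });

    // Strip the mount prefix before forwarding to the MFE Worker
    const stripped =
      route.path === "/" ? url.pathname : url.pathname.slice(route.path.length) || "/";
    const target = env[route.binding] as ServiceBinding;
    return target.fetch(
      new Request(new URL(stripped + url.search, url.origin), {
        method: request.method,
        headers: request.headers,
      }),
    );
  },
};

export default router;
```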
7. Gotchas & Pitfalls
Durable Objects
- **Deploying new code disconnects ALL WebSockets.** Every code deployment restarts all DO instances, terminating existing connections. Plan for client-side reconnection logic with exponential backoff.
- **DOs do not know their own name/ID.** Use an explicit `init()` method or pass the identity via the request URL when first creating the DO.
- **In-memory state is lost on eviction.** Class properties are wiped when a DO is evicted from memory due to inactivity. Use SQLite storage or `serializeAttachment()` for anything that must survive.
- **`blockConcurrencyWhile()` kills throughput.** Only use it for initialization (schema migrations). It limits throughput to approximately 200 req/sec if each call takes 5 ms.
- **Alarms can fire more than once.** Make alarm handlers idempotent. Check state before performing actions.
- **No shutdown hooks.** You cannot reliably run cleanup logic when a DO is being evicted. Persist state incrementally as you process, not in a final cleanup step.
- **A single global singleton is an anti-pattern.** Never funnel all traffic through one DO instance. Find natural sharding boundaries (per user, per room, per document).
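The deployment-disconnect point deserves a sketch: a client-side reconnect loop with exponential backoff and full jitter. The constants and the `WebSocket` usage here are illustrative assumptions, not prescribed values:

```typescript
// Exponential backoff with full jitter: delay grows 2x per attempt, capped
// at maxMs, and is randomized in [0, cap) to avoid thundering-herd
// reconnects when a deployment drops every client at once.
function backoffDelay(attempt: number, baseMs = 500, maxMs = 30_000): number {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * cap;
}

function connectWithRetry(url: string, onMessage: (data: string) => void): void {
  let attempt = 0;
  const open = () => {
    const ws = new WebSocket(url);
    ws.addEventListener("open", () => {
      attempt = 0; // healthy connection: reset the backoff schedule
    });
    ws.addEventListener("message", (e: MessageEvent) => onMessage(String(e.data)));
    ws.addEventListener("close", () => {
      // Server restarted (e.g. a new deployment): retry with growing delay
      setTimeout(open, backoffDelay(attempt++));
    });
  };
  open();
}
```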
Service Bindings & Workers
- **Service bindings are account-scoped.** Both Workers must be in the same Cloudflare account. Cross-account service bindings are not supported.
- **A new `WorkerEntrypoint` instance is created per invocation.** They are stateless. Do not store state in instance properties expecting it to persist across calls.
- **Always `await` RPC calls.** Forgetting to await swallows errors silently, leading to hard-to-debug issues.
- **Environment bindings are non-inheritable.** In Wrangler environments, you must redeclare KV, D1, R2, and service bindings in every `[env.X]` block. They do not inherit from the top level.
- **Worker name changes with environment.** A Worker named `my-worker` with env `staging` deploys as `my-worker-staging`. All service bindings referencing it must use the full name.
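The `await` point is easy to demonstrate with plain promises — the `authStub` below is a stand-in for an RPC service binding, not a real `WorkerEntrypoint`:

```typescript
// Stand-in for an RPC binding whose method rejects
const authStub = {
  async verify(_token: string): Promise<boolean> {
    throw new Error("invalid token");
  },
};

async function handlerForgotAwait(): Promise<string> {
  // BUG: missing await — the rejection is dropped and the handler happily
  // returns "ok". (The no-op catch only keeps this demo from crashing
  // Node with an unhandled rejection.)
  authStub.verify("bad").catch(() => {});
  return "ok";
}

async function handlerWithAwait(): Promise<string> {
  await authStub.verify("bad"); // rejection propagates to the caller
  return "ok";
}
```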
Caching & Assets
- **KV is eventually consistent.** After updating a KV value, it may take up to 60 seconds for the new value to propagate globally. For version switches, purge the CDN cache immediately and accept this propagation delay, or use a Cache API read-through pattern.
- **Workers Static Assets do not support custom `Cache-Control` via `_headers` for SSR responses.** If your MFE uses SSR, set cache headers in your Worker code directly, not in the `_headers` file.
- **R2 is not a CDN by default.** Reads from R2 go to the nearest storage region. Put Cloudflare Cache in front of R2 for edge caching, or use Workers to cache responses.
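The read-through pattern mentioned above can be sketched generically. Here a `Map` stands in for the Workers Cache API (`caches.default`) and `kvGet` for a KV read, so the shape is illustrative rather than Workers-specific:

```typescript
interface CachedEntry {
  value: string;
  expires: number; // epoch ms
}

// Serve from the edge cache while fresh; fall through to KV (which may lag
// up to ~60 s globally) and re-populate with a short TTL so version
// switches converge quickly.
async function readThrough(
  key: string,
  edgeCache: Map<string, CachedEntry>,
  kvGet: (k: string) => Promise<string | null>,
  ttlMs = 30_000,
): Promise<string | null> {
  const hit = edgeCache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;
  const fresh = await kvGet(key);
  if (fresh !== null) {
    edgeCache.set(key, { value: fresh, expires: Date.now() + ttlMs });
  }
  return fresh;
}
```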
Local Development
- **Durable Object bindings cannot be remote.** You must run DOs locally or access them through a deployed Worker via a remote service binding. There is no `remote = true` option for DO bindings.
- **Multi-config `wrangler dev` exposes only the first Worker over HTTP.** Other Workers in the same command are reachable only through service bindings. If you need to test them directly via HTTP, run them in separate terminal sessions.
- **Storage state is reset between `wrangler dev` sessions.** Local KV, D1, and DO storage is ephemeral unless you use `--persist` (enabled by default in recent Wrangler versions) or seed data explicitly.
- **The cross-session dev registry is filesystem-based.** If sessions run in different filesystem contexts (e.g., separate Docker containers), cross-session service binding discovery may not work.
Micro Frontend Specific
- **Path rewriting is automatic but not magic.** The Cloudflare MFE router rewrites known asset prefixes (`/assets/`, `/static/`, `/build/`, `/_astro/`). If your framework uses non-standard paths, configure `ASSET_PREFIXES` explicitly, or asset loading will break.
- **Each MFE deployment is independent, but the router binding is static.** Adding a new MFE requires updating the router's `wrangler.toml` with a new service binding and redeploying the router. Removing an MFE also requires a router update.
- **Versioned deployments do not track storage state.** KV, R2, D1, and DO data are not versioned with your Worker code. A rollback to a previous Worker version does not roll back database schema changes or stored data.
Sources
- Use WebSockets - Durable Objects Best Practices
- Build a WebSocket Server with Hibernation
- Rules of Durable Objects
- Durable Objects Pricing
- Durable Objects Limits
- Service Bindings - Runtime APIs
- Service Bindings - RPC (WorkerEntrypoint)
- Service Bindings GA Blog Post
- Microfrontends on Workers
- Cloudflare Workers and Micro-Frontends: Made for One Another
- Building Vertical Microfrontends on Cloudflare
- Build Microfrontend Applications on Workers (Jan 2026)
- Workers Static Assets
- Workers Versions & Deployments
- Build a Distributed Configuration Store with KV
- Wrangler Environments
- Development & Testing
- Improved Multi-Worker wrangler dev (Sep 2025)
- Miniflare
- How the Cache Works with Workers
- Run Multiple Cloudflare Workers Locally