Cloudflare Infrastructure
Table of Contents
- Overview
- Workers Architecture
- Storage Services
- Service Bindings and RPC
- CDN Caching Strategy
- Environment Management
- Cost Considerations and Limits
- Backup Strategy
- References
Overview
The entire backend infrastructure for the micro frontends platform runs on Cloudflare Workers, an edge compute platform built on V8 isolates. Unlike traditional serverless platforms (AWS Lambda, Google Cloud Functions) that rely on containers or microVMs, Cloudflare Workers execute within V8 isolates — the same JavaScript engine that powers Chrome.
Why Cloudflare Workers
V8 Isolate Model:
- No cold starts. V8 isolates spin up in under 5 milliseconds, effectively providing 0ms startup time from the caller's perspective. There is no container to provision, no runtime to bootstrap.
- Global deployment by default. Every Worker is deployed to Cloudflare's network of 300+ data centers worldwide. There is no region selection — code runs at the edge closest to the user.
- Sub-millisecond overhead. Isolates share a single OS process, making them orders of magnitude lighter than containers. Thousands of isolates can run within a single process.
- Security through isolation. Each request runs in its own isolate with a fresh global scope. There is no shared state between requests unless explicitly managed through Durable Objects.
Workers in This Platform:
| Worker | Purpose |
|---|---|
| API Gateway | Central entry point: CORS, auth, rate limiting, routing |
| BFF Workers | Domain-specific backends (one per micro frontend or logical domain) |
| Version Config Service | Manages which MFE versions are deployed per environment |
| Asset Serving Worker | Serves static MFE bundles from R2 with caching logic |
All inter-Worker communication uses service bindings, which provide zero-latency, zero-cost function calls between Workers without going through the public internet.
Note on Cloudflare Pages: as of April 2025, Cloudflare recommends building new projects on Workers rather than Pages, and Pages is no longer receiving new features. For static asset hosting, use Workers Static Assets instead, which integrates directly with Workers and supports the same deployment model used in this platform. All new projects should use Workers Static Assets; existing Pages projects should plan migration.
Workers Architecture
API Gateway Worker
The API Gateway Worker is the central entry point for all API requests from the frontend. It sits at api.example.com and handles cross-cutting concerns before routing requests to the appropriate BFF Worker.
Responsibilities:
- CORS handling — Validates `Origin` headers, sets appropriate `Access-Control-*` response headers
- Rate limiting — Uses KV (or Durable Objects for precise counting) to track request rates per IP or API key
- JWT validation — Verifies access tokens using the `jose` library against JWKS endpoints (e.g., Auth0, Clerk)
- Request routing — Maps URL paths to downstream BFF Workers via service bindings
// src/gateway/index.ts
import { jwtVerify, createRemoteJWKSet } from "jose";
export interface Env {
// Service bindings to BFF Workers
BFF_DASHBOARD: Service<BffDashboardWorker>;
BFF_SETTINGS: Service<BffSettingsWorker>;
BFF_ANALYTICS: Service<BffAnalyticsWorker>;
// KV for rate limiting
RATE_LIMIT_KV: KVNamespace;
// Environment variables
ALLOWED_ORIGINS: string; // comma-separated
JWKS_URL: string;
RATE_LIMIT_MAX: string; // requests per window
RATE_LIMIT_WINDOW_SECONDS: string;
}
// JWKS set cached at module scope (persists across requests within the same isolate).
// createRemoteJWKSet handles caching internally and respects HTTP cache headers from the
// JWKS endpoint. For key rotation scenarios, ensure your identity provider sets appropriate
// Cache-Control headers (e.g., max-age=600 for a 10-minute TTL). If a token fails
// verification with a "kid" not found in the cached JWKS, jose will automatically
// re-fetch the JWKS endpoint. For forced invalidation, set jwks = null.
let jwks: ReturnType<typeof createRemoteJWKSet> | null = null;
// TTL-based invalidation: recreate the JWKS set periodically to handle key rotation
let jwksCreatedAt: number = 0;
const JWKS_TTL_MS = 10 * 60 * 1000; // 10 minutes
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// --- CORS Preflight ---
if (request.method === "OPTIONS") {
return handleCorsPreflightRequest(request, env);
}
// --- CORS Origin Validation ---
const origin = request.headers.get("Origin");
const allowedOrigins = env.ALLOWED_ORIGINS.split(",").map((o) => o.trim());
if (origin && !allowedOrigins.includes(origin)) {
return new Response("Forbidden", { status: 403 });
}
// --- Rate Limiting ---
const clientIp = request.headers.get("CF-Connecting-IP") ?? "unknown";
const rateLimitResult = await checkRateLimit(env, clientIp);
if (!rateLimitResult.allowed) {
return new Response("Too Many Requests", {
status: 429,
headers: {
"Retry-After": String(rateLimitResult.retryAfter),
"X-RateLimit-Limit": env.RATE_LIMIT_MAX,
"X-RateLimit-Remaining": "0",
},
});
}
// --- JWT Validation ---
const authHeader = request.headers.get("Authorization");
if (!authHeader?.startsWith("Bearer ")) {
return new Response("Unauthorized", { status: 401 });
}
const token = authHeader.slice(7);
let jwtPayload: Record<string, unknown>;
try {
if (!jwks || Date.now() - jwksCreatedAt > JWKS_TTL_MS) {
jwks = createRemoteJWKSet(new URL(env.JWKS_URL));
jwksCreatedAt = Date.now();
}
const { payload } = await jwtVerify(token, jwks, {
issuer: env.JWKS_URL.replace("/.well-known/jwks.json", ""),
audience: "mfe-platform-api",
});
jwtPayload = payload as Record<string, unknown>;
} catch (error) {
return new Response("Invalid token", { status: 401 });
}
// --- Route to BFF Workers ---
const url = new URL(request.url);
const response = await routeRequest(url.pathname, request, env, jwtPayload);
// --- Attach CORS Headers ---
const corsHeaders = new Headers(response.headers);
if (origin && allowedOrigins.includes(origin)) {
corsHeaders.set("Access-Control-Allow-Origin", origin);
corsHeaders.set("Access-Control-Allow-Credentials", "true");
}
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: corsHeaders,
});
},
} satisfies ExportedHandler<Env>;
// --- Routing Logic ---
interface RouteDefinition {
prefix: string;
handler: (
request: Request,
env: Env,
jwtPayload: Record<string, unknown>,
subpath: string,
) => Promise<Response>;
}
const routes: RouteDefinition[] = [
{
prefix: "/api/dashboard",
handler: async (request, env, jwt, subpath) => {
const userId = jwt.sub as string;
try {
if (subpath === "/metrics") {
const data = await env.BFF_DASHBOARD.getMetrics(userId);
return Response.json(data);
}
if (subpath === "/recent-activity") {
const data = await env.BFF_DASHBOARD.getRecentActivity(userId, 20);
return Response.json(data);
}
} catch (error) {
console.error("BFF_DASHBOARD service binding error:", error);
return Response.json(
{ error: "Service temporarily unavailable" },
{ status: 503 },
);
}
return new Response("Not Found", { status: 404 });
},
},
{
prefix: "/api/settings",
handler: async (request, env, jwt, subpath) => {
const userId = jwt.sub as string;
try {
if (subpath === "/profile" && request.method === "GET") {
const data = await env.BFF_SETTINGS.getProfile(userId);
return Response.json(data);
}
if (subpath === "/profile" && request.method === "PUT") {
const body = await request.json();
const data = await env.BFF_SETTINGS.updateProfile(userId, body);
return Response.json(data);
}
} catch (error) {
console.error("BFF_SETTINGS service binding error:", error);
return Response.json(
{ error: "Service temporarily unavailable" },
{ status: 503 },
);
}
return new Response("Not Found", { status: 404 });
},
},
{
prefix: "/api/analytics",
handler: async (request, env, jwt, subpath) => {
const userId = jwt.sub as string;
try {
if (subpath === "/events" && request.method === "POST") {
const body = await request.json();
await env.BFF_ANALYTICS.trackEvent(userId, body);
return new Response(null, { status: 204 });
}
} catch (error) {
console.error("BFF_ANALYTICS service binding error:", error);
return Response.json(
{ error: "Service temporarily unavailable" },
{ status: 503 },
);
}
return new Response("Not Found", { status: 404 });
},
},
];
async function routeRequest(
pathname: string,
request: Request,
env: Env,
jwtPayload: Record<string, unknown>,
): Promise<Response> {
for (const route of routes) {
if (pathname.startsWith(route.prefix)) {
const subpath = pathname.slice(route.prefix.length) || "/";
return route.handler(request, env, jwtPayload, subpath);
}
}
return new Response("Not Found", { status: 404 });
}
// --- Rate Limiting using KV ---
async function checkRateLimit(
env: Env,
clientIp: string,
): Promise<{ allowed: boolean; retryAfter: number }> {
const windowSeconds = parseInt(env.RATE_LIMIT_WINDOW_SECONDS, 10) || 60;
const maxRequests = parseInt(env.RATE_LIMIT_MAX, 10) || 100;
const key = `rate-limit:${clientIp}`;
const current = await env.RATE_LIMIT_KV.get(key, "json") as {
count: number;
windowStart: number;
} | null;
const now = Math.floor(Date.now() / 1000);
if (!current || now - current.windowStart > windowSeconds) {
// New window
await env.RATE_LIMIT_KV.put(
key,
JSON.stringify({ count: 1, windowStart: now }),
{ expirationTtl: windowSeconds * 2 },
);
return { allowed: true, retryAfter: 0 };
}
if (current.count >= maxRequests) {
const retryAfter = windowSeconds - (now - current.windowStart);
return { allowed: false, retryAfter };
}
await env.RATE_LIMIT_KV.put(
key,
JSON.stringify({ count: current.count + 1, windowStart: current.windowStart }),
{ expirationTtl: windowSeconds * 2 },
);
return { allowed: true, retryAfter: 0 };
}
// --- CORS Preflight ---
function handleCorsPreflightRequest(request: Request, env: Env): Response {
const origin = request.headers.get("Origin") ?? "";
const allowedOrigins = env.ALLOWED_ORIGINS.split(",").map((o) => o.trim());
if (!allowedOrigins.includes(origin)) {
return new Response(null, { status: 403 });
}
return new Response(null, {
status: 204,
headers: {
"Access-Control-Allow-Origin": origin,
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type, Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Max-Age": "86400",
},
});
}
Note on rate limiting precision: KV-based rate limiting is eventually consistent across edge locations, making it approximate rather than exact. For strict rate limiting (e.g., payment APIs), use Durable Objects instead, which provide strongly consistent counters scoped to a single location.
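For cases that need exact counting, the Durable Object approach mentioned above can be sketched as a class that owns a single fixed-window counter. This is a hedged sketch, not the platform's actual code: the class name `RateLimiterCounter` is illustrative, and the `SketchStorage`/`SketchState` interfaces are simplified stand-ins for Cloudflare's `DurableObjectStorage`/`DurableObjectState` types.

```typescript
// Minimal structural types so the sketch is self-contained; real Workers code
// would use DurableObjectState from @cloudflare/workers-types instead.
interface SketchStorage {
  get<T>(key: string): Promise<T | undefined>;
  put(key: string, value: unknown): Promise<void>;
}
interface SketchState {
  storage: SketchStorage;
}

interface WindowRecord {
  count: number;
  start: number; // window start, ms since epoch
}

// One Durable Object instance per client (e.g. idFromName(clientIp)) gives a
// strongly consistent fixed-window counter: all requests for that client are
// serialized through this single instance, so no increment is lost to the
// read-modify-write races that KV-based counting allows.
export class RateLimiterCounter {
  constructor(private state: SketchState) {}

  async check(
    limit: number,
    windowSeconds: number,
    now: number = Date.now(),
  ): Promise<{ allowed: boolean; retryAfter: number }> {
    const windowMs = windowSeconds * 1000;
    const rec = await this.state.storage.get<WindowRecord>("window");

    // Start a fresh window if none exists or the current one has expired.
    if (!rec || now - rec.start >= windowMs) {
      await this.state.storage.put("window", { count: 1, start: now });
      return { allowed: true, retryAfter: 0 };
    }

    if (rec.count >= limit) {
      const retryAfter = Math.ceil((rec.start + windowMs - now) / 1000);
      return { allowed: false, retryAfter };
    }

    await this.state.storage.put("window", { count: rec.count + 1, start: rec.start });
    return { allowed: true, retryAfter: 0 };
  }
}
```

In a real Worker the gateway would reach this instance via a Durable Object namespace binding (`env.RATE_LIMITER.idFromName(clientIp)`) and an HTTP or RPC call; the point of the sketch is that the counter lives in exactly one place.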
BFF Workers (Backend for Frontend)
Each micro frontend (or logical domain) has its own dedicated BFF Worker. This pattern provides:
- Separation of concerns — Each BFF encapsulates the data fetching, transformation, and aggregation logic specific to its MFE.
- Independent deployability — BFF Workers can be updated independently without affecting other parts of the system.
- Optimized data shapes — Each BFF returns exactly the data its MFE needs, avoiding over-fetching.
- Typed RPC interfaces — Using `WorkerEntrypoint`, BFF Workers expose type-safe methods that the API Gateway calls directly.
BFF Worker Example: Dashboard
// src/bff-dashboard/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export interface DashboardMetrics {
totalUsers: number;
activeToday: number;
revenue: { amount: number; currency: string; percentChange: number };
systemHealth: { status: "healthy" | "degraded" | "down"; uptime: number };
}
export interface ActivityItem {
id: string;
type: "login" | "purchase" | "support_ticket" | "deployment";
description: string;
timestamp: string;
userId: string;
metadata: Record<string, unknown>;
}
interface Env {
// External API configuration
METRICS_API_URL: string;
METRICS_API_KEY: string;
ACTIVITY_API_URL: string;
ACTIVITY_API_KEY: string;
// Cache
CACHE_KV: KVNamespace;
}
export default class BffDashboardWorker extends WorkerEntrypoint<Env> {
/**
* Fetches aggregated dashboard metrics for a user.
* Combines data from multiple upstream APIs and caches the result.
*/
async getMetrics(userId: string): Promise<DashboardMetrics> {
// Check cache first (30-second TTL for dashboard metrics)
const cacheKey = `dashboard-metrics:${userId}`;
const cached = await this.env.CACHE_KV.get(cacheKey, "json") as DashboardMetrics | null;
if (cached) {
return cached;
}
// Fetch from multiple upstream APIs in parallel
const [usersResponse, revenueResponse, healthResponse] = await Promise.all([
fetch(`${this.env.METRICS_API_URL}/users/stats`, {
headers: { Authorization: `Bearer ${this.env.METRICS_API_KEY}` },
}),
fetch(`${this.env.METRICS_API_URL}/revenue/summary`, {
headers: { Authorization: `Bearer ${this.env.METRICS_API_KEY}` },
}),
fetch(`${this.env.METRICS_API_URL}/system/health`, {
headers: { Authorization: `Bearer ${this.env.METRICS_API_KEY}` },
}),
]);
const [usersData, revenueData, healthData] = await Promise.all([
usersResponse.json() as Promise<{ total: number; active_today: number }>,
revenueResponse.json() as Promise<{
amount: number;
currency: string;
percent_change: number;
}>,
healthResponse.json() as Promise<{ status: string; uptime_seconds: number }>,
]);
// Transform and aggregate into the shape the frontend expects
const metrics: DashboardMetrics = {
totalUsers: usersData.total,
activeToday: usersData.active_today,
revenue: {
amount: revenueData.amount,
currency: revenueData.currency,
percentChange: revenueData.percent_change,
},
systemHealth: {
status: healthData.status as DashboardMetrics["systemHealth"]["status"],
uptime: healthData.uptime_seconds,
},
};
// Cache the result
this.ctx.waitUntil(
this.env.CACHE_KV.put(cacheKey, JSON.stringify(metrics), { expirationTtl: 30 }),
);
return metrics;
}
/**
* Fetches recent activity items for a user.
*/
async getRecentActivity(userId: string, limit: number): Promise<ActivityItem[]> {
const response = await fetch(
`${this.env.ACTIVITY_API_URL}/activity?userId=${userId}&limit=${limit}`,
{
headers: { Authorization: `Bearer ${this.env.ACTIVITY_API_KEY}` },
},
);
const rawData = (await response.json()) as Array<{
id: string;
event_type: string;
description: string;
created_at: string;
user_id: string;
meta: Record<string, unknown>;
}>;
// Transform snake_case API response to camelCase frontend shape
return rawData.map((item) => ({
id: item.id,
type: item.event_type as ActivityItem["type"],
description: item.description,
timestamp: item.created_at,
userId: item.user_id,
metadata: item.meta,
}));
}
/**
* Standard fetch handler for HTTP-based access (if needed).
* Service binding RPC calls bypass this entirely.
*/
async fetch(request: Request): Promise<Response> {
return new Response("This Worker is designed for RPC access via service bindings.", {
status: 400,
});
}
}
wrangler.toml for the BFF Dashboard Worker:
# workers/bff-dashboard/wrangler.toml
name = "bff-dashboard"
main = "src/index.ts"
compatibility_date = "2026-02-25"
kv_namespaces = [
{ binding = "CACHE_KV", id = "abc123def456" }
]
[vars]
METRICS_API_URL = "https://internal-metrics.example.com/v1"
ACTIVITY_API_URL = "https://internal-activity.example.com/v1"
Important: Secrets like `METRICS_API_KEY` and `ACTIVITY_API_KEY` are not stored in `wrangler.toml`. They are set via `wrangler secret put METRICS_API_KEY` and injected at runtime through the `Env` interface.
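As concrete commands, this looks like the sketch below (run from the Worker's directory; `wrangler` prompts for each value interactively, so secrets never land in shell history or source control):

```sh
# Store secrets outside wrangler.toml; values are entered at the prompt
npx wrangler secret put METRICS_API_KEY
npx wrangler secret put ACTIVITY_API_KEY

# List configured secret names (values are never displayed)
npx wrangler secret list
```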
Version Config Service
The Version Config Service is responsible for managing which version of each micro frontend is currently deployed in each environment. It acts as the source of truth that the shell application reads at runtime to determine which remote entry manifests to load.
Design:
- KV is the read layer — globally replicated, low-latency reads at every edge location (~250ms global p99, ~50ms p95). Sub-millisecond latency (<5ms) applies only to the internal KV Storage Protocol (KVSP), not external access.
- D1 is the write layer — provides a relational store for audit trails, history, and rollback metadata.
- Writes go to both D1 (durable, queryable) and KV (fast, edge-distributed).
// src/version-config/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export interface VersionConfigMap {
[mfeName: string]: {
version: string;
manifestUrl: string;
integrity?: string; // SRI hash for the manifest
updatedAt: string;
};
}
export interface VersionChangeRecord {
id: number;
mfeName: string;
version: string;
previousVersion: string | null;
manifestUrl: string;
environment: string;
changedBy: string;
reason: string;
timestamp: string;
}
interface Env {
VERSION_CONFIG_KV: KVNamespace;
VERSION_DB: D1Database;
ENVIRONMENT: string; // "production" | "staging" | "dev"
CDN_BASE_URL: string; // e.g., "https://cdn.example.com"
}
export default class VersionConfigWorker extends WorkerEntrypoint<Env> {
/**
* RPC method: Get the current version config for the active environment.
* Called by the shell application at startup and on navigation.
*/
async getConfig(): Promise<VersionConfigMap> {
const kvKey = `version-config:${this.env.ENVIRONMENT}`;
const config = await this.env.VERSION_CONFIG_KV.get(kvKey, "json") as VersionConfigMap | null;
if (config) {
return config;
}
// Fallback: rebuild from D1 if KV is empty (initial deploy or KV purge)
return this.rebuildConfigFromD1();
}
/**
* RPC method: Update the version for a specific MFE.
* Writes to D1 for persistence, then updates KV for fast reads.
*/
async updateVersion(
mfeName: string,
version: string,
changedBy: string,
reason: string,
): Promise<{ success: boolean; config: VersionConfigMap }> {
const kvKey = `version-config:${this.env.ENVIRONMENT}`;
const manifestUrl = `${this.env.CDN_BASE_URL}/${mfeName}/${version}/mf-manifest.json`;
// Get current config
const currentConfig =
(await this.env.VERSION_CONFIG_KV.get(kvKey, "json") as VersionConfigMap) || {};
const previousVersion = currentConfig[mfeName]?.version ?? null;
// Write audit record to D1
await this.env.VERSION_DB.prepare(
`INSERT INTO version_changes (mfe_name, version, previous_version, manifest_url, environment, changed_by, reason, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, datetime('now'))`,
)
.bind(mfeName, version, previousVersion, manifestUrl, this.env.ENVIRONMENT, changedBy, reason)
.run();
// Update the config map
const updatedConfig: VersionConfigMap = {
...currentConfig,
[mfeName]: {
version,
manifestUrl,
updatedAt: new Date().toISOString(),
},
};
// Write to KV (globally distributed, eventually consistent; 60s min cacheTtl, RYOW at same PoP)
await this.env.VERSION_CONFIG_KV.put(kvKey, JSON.stringify(updatedConfig));
return { success: true, config: updatedConfig };
}
/**
* RPC method: Rollback an MFE to a previous version.
*/
async rollback(mfeName: string, targetVersion: string, changedBy: string): Promise<{
success: boolean;
config: VersionConfigMap;
}> {
// Verify the target version exists in the history
const record = await this.env.VERSION_DB.prepare(
`SELECT version, manifest_url FROM version_changes
WHERE mfe_name = ? AND version = ? AND environment = ?
ORDER BY timestamp DESC LIMIT 1`,
)
.bind(mfeName, targetVersion, this.env.ENVIRONMENT)
.first<{ version: string; manifest_url: string }>();
if (!record) {
throw new Error(`Version ${targetVersion} not found in history for ${mfeName}`);
}
return this.updateVersion(mfeName, targetVersion, changedBy, `Rollback to ${targetVersion}`);
}
/**
* RPC method: Get version change history for an MFE.
*/
async getHistory(mfeName: string, limit: number = 50): Promise<VersionChangeRecord[]> {
const { results } = await this.env.VERSION_DB.prepare(
`SELECT id, mfe_name, version, previous_version, manifest_url, environment, changed_by, reason, timestamp
FROM version_changes
WHERE mfe_name = ? AND environment = ?
ORDER BY timestamp DESC
LIMIT ?`,
)
.bind(mfeName, this.env.ENVIRONMENT, limit)
.all<VersionChangeRecord>();
return results ?? [];
}
/**
* HTTP handler for REST API access (used by Admin UI and CI/CD).
*/
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
// GET /config — return current version map
if (url.pathname === "/config" && request.method === "GET") {
const config = await this.getConfig();
return Response.json(config, {
headers: { "Cache-Control": "public, max-age=10, s-maxage=30" },
});
}
// PUT /config/:mfeName — update version for an MFE
const updateMatch = url.pathname.match(/^\/config\/([a-z0-9-]+)$/);
if (updateMatch && request.method === "PUT") {
const mfeName = updateMatch[1];
const body = (await request.json()) as {
version: string;
changedBy: string;
reason?: string;
};
const result = await this.updateVersion(
mfeName,
body.version,
body.changedBy,
body.reason ?? "Manual update",
);
return Response.json(result);
}
// GET /config/:mfeName/history — get version history
const historyMatch = url.pathname.match(/^\/config\/([a-z0-9-]+)\/history$/);
if (historyMatch && request.method === "GET") {
const mfeName = historyMatch[1];
const limit = parseInt(url.searchParams.get("limit") ?? "50", 10);
const history = await this.getHistory(mfeName, limit);
return Response.json(history);
}
// POST /config/:mfeName/rollback — rollback to a specific version
const rollbackMatch = url.pathname.match(/^\/config\/([a-z0-9-]+)\/rollback$/);
if (rollbackMatch && request.method === "POST") {
const mfeName = rollbackMatch[1];
const body = (await request.json()) as { targetVersion: string; changedBy: string };
const result = await this.rollback(mfeName, body.targetVersion, body.changedBy);
return Response.json(result);
}
return new Response("Not Found", { status: 404 });
}
/**
* Rebuild the version config from D1.
* Used as fallback when KV is empty.
*/
private async rebuildConfigFromD1(): Promise<VersionConfigMap> {
const { results } = await this.env.VERSION_DB.prepare(
`SELECT DISTINCT mfe_name, version, manifest_url, timestamp
FROM version_changes
WHERE environment = ?
AND timestamp = (
SELECT MAX(timestamp) FROM version_changes vc2
WHERE vc2.mfe_name = version_changes.mfe_name
AND vc2.environment = version_changes.environment
)`,
)
.bind(this.env.ENVIRONMENT)
.all<{ mfe_name: string; version: string; manifest_url: string; timestamp: string }>();
const config: VersionConfigMap = {};
for (const row of results ?? []) {
config[row.mfe_name] = {
version: row.version,
manifestUrl: row.manifest_url,
updatedAt: row.timestamp,
};
}
// Repopulate KV
const kvKey = `version-config:${this.env.ENVIRONMENT}`;
await this.env.VERSION_CONFIG_KV.put(kvKey, JSON.stringify(config));
return config;
}
}
Storage Services
R2 (Object Storage)
R2 is Cloudflare's S3-compatible object storage. It stores all versioned MFE bundles, manifests, and static assets. R2 has zero egress fees, making it ideal for serving frontend assets at scale.
Bucket Structure:
mfe-assets-production/
├── shell/
│ └── 1.0.0/
│ ├── mf-manifest.json
│ ├── index.html
│ ├── shell-abc123.js
│ └── shell-abc123.css
├── dashboard/
│ ├── 2.3.1/
│ │ ├── mf-manifest.json
│ │ ├── remoteEntry-def456.js
│ │ ├── chunk-Dashboard-789abc.js
│ │ ├── chunk-Sidebar-012def.js
│ │ └── styles-345ghi.css
│ └── 2.3.0/
│ ├── mf-manifest.json
│ ├── remoteEntry-aaa111.js
│ └── ...
├── settings/
│ └── 1.5.0/
│ ├── mf-manifest.json
│ ├── remoteEntry-bbb222.js
│ └── ...
└── analytics/
└── 3.0.0/
├── mf-manifest.json
├── remoteEntry-ccc333.js
└── ...
Key naming convention: `/{mfe-name}/{version}/{filename}`
Every filename (except mf-manifest.json) includes a content hash (fingerprint) in the filename, which enables aggressive caching (see CDN Caching Strategy).
Upload Script (used in CI/CD):
// scripts/upload-mfe-bundle.ts
// Run via: npx tsx scripts/upload-mfe-bundle.ts --mfe dashboard --version 2.3.1
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readdir, readFile } from "node:fs/promises";
import { join, extname } from "node:path";
import { parseArgs } from "node:util";
const { values } = parseArgs({
options: {
mfe: { type: "string" },
version: { type: "string" },
"dist-path": { type: "string", default: "dist" },
environment: { type: "string", default: "production" },
},
});
const { mfe, version, environment } = values;
const distPath = values["dist-path"]!;
if (!mfe || !version) {
console.error("Usage: --mfe <name> --version <version>");
process.exit(1);
}
const CONTENT_TYPE_MAP: Record<string, string> = {
".js": "application/javascript",
".css": "text/css",
".json": "application/json",
".html": "text/html",
".svg": "image/svg+xml",
".png": "image/png",
".woff2": "font/woff2",
};
const BUCKET_NAME = `mfe-assets-${environment}`;
// R2 exposes an S3-compatible API
const s3Client = new S3Client({
region: "auto",
endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
},
});
async function uploadDirectory(dirPath: string, prefix: string): Promise<void> {
const entries = await readdir(dirPath, { withFileTypes: true });
for (const entry of entries) {
const fullPath = join(dirPath, entry.name);
if (entry.isDirectory()) {
await uploadDirectory(fullPath, `${prefix}/${entry.name}`);
continue;
}
const fileContent = await readFile(fullPath);
const ext = extname(entry.name);
const contentType = CONTENT_TYPE_MAP[ext] ?? "application/octet-stream";
const key = `${prefix}/${entry.name}`;
// Determine cache control based on filename
const isFingerprinted = /[.-][a-f0-9]{8,}\./.test(entry.name);
const cacheControl = isFingerprinted
? "public, max-age=31536000, immutable"
: "public, max-age=60, s-maxage=300";
await s3Client.send(
new PutObjectCommand({
Bucket: BUCKET_NAME,
Key: key,
Body: fileContent,
ContentType: contentType,
CacheControl: cacheControl,
}),
);
console.log(`Uploaded: ${key} (${contentType}, ${cacheControl})`);
}
}
async function main(): Promise<void> {
const prefix = `${mfe}/${version}`;
console.log(`Uploading ${distPath} to R2: ${BUCKET_NAME}/${prefix}/`);
await uploadDirectory(distPath, prefix);
console.log(`\nUpload complete. Manifest URL:`);
console.log(` https://cdn.example.com/${prefix}/mf-manifest.json`);
}
main().catch((error) => {
console.error("Upload failed:", error);
process.exit(1);
});
Custom Domain for R2:
R2 buckets can be served directly through a custom domain (e.g., cdn.example.com) using Cloudflare's managed public access feature, or through a Worker that adds custom cache headers and access control.
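A hedged sketch of the Worker-fronted option: a handler that reads from an R2 binding and applies the same fingerprint-based cache policy as the upload script. The `R2ObjectLike`/`R2BucketLike` interfaces below are simplified stand-ins for Cloudflare's real `R2Object`/`R2Bucket` types, and `serveAsset` is an illustrative name, not part of the platform's actual code.

```typescript
// Simplified stand-ins for Cloudflare's R2Object / R2Bucket types so the
// sketch is self-contained; real code would use @cloudflare/workers-types.
interface R2ObjectLike {
  body: ReadableStream | null;
  httpEtag: string;
  writeHttpMetadata(headers: Headers): void;
}
interface R2BucketLike {
  get(key: string): Promise<R2ObjectLike | null>;
}

export async function serveAsset(
  request: Request,
  bucket: R2BucketLike,
): Promise<Response> {
  // Map e.g. /shell/1.0.0/shell-abc123def4.js to the R2 object key
  const key = new URL(request.url).pathname.slice(1);
  const object = await bucket.get(key);
  if (!object) {
    return new Response("Not Found", { status: 404 });
  }

  const headers = new Headers();
  object.writeHttpMetadata(headers); // restores Content-Type etc. stored at upload
  headers.set("ETag", object.httpEtag);

  // Same convention as the upload script: fingerprinted files are immutable,
  // everything else (e.g. mf-manifest.json) gets a short TTL.
  const isFingerprinted = /[.-][a-f0-9]{8,}\./.test(key);
  headers.set(
    "Cache-Control",
    isFingerprinted
      ? "public, max-age=31536000, immutable"
      : "public, max-age=60, s-maxage=300",
  );

  return new Response(object.body, { headers });
}
```

In a deployed Worker, `bucket` would be an R2 bucket binding on `Env`, and the default export's `fetch` handler would delegate to this function.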
KV (Key-Value Store)
KV is a globally distributed, eventually consistent key-value store. In this platform, its primary use is storing the version configuration so that every edge location can resolve MFE versions with low latency (~250ms global p99, ~50ms p95 for external reads). Note that sub-millisecond latency (<5ms) figures cited in some Cloudflare documentation refer to the internal KV Storage Protocol (KVSP), not external Worker API access.
Key Structure:
| Key | Value | Purpose |
|---|---|---|
| `version-config:production` | JSON object mapping MFE names to manifest URLs | Production version map |
| `version-config:staging` | Same structure | Staging version map |
| `version-config:dev` | Same structure | Development version map |
Example stored value for version-config:production:
{
"shell": {
"version": "1.0.0",
"manifestUrl": "https://cdn.example.com/shell/1.0.0/mf-manifest.json",
"updatedAt": "2024-09-15T10:30:00Z"
},
"dashboard": {
"version": "2.3.1",
"manifestUrl": "https://cdn.example.com/dashboard/2.3.1/mf-manifest.json",
"updatedAt": "2024-09-15T14:22:00Z"
},
"settings": {
"version": "1.5.0",
"manifestUrl": "https://cdn.example.com/settings/1.5.0/mf-manifest.json",
"updatedAt": "2024-09-14T09:00:00Z"
},
"analytics": {
"version": "3.0.0",
"manifestUrl": "https://cdn.example.com/analytics/3.0.0/mf-manifest.json",
"updatedAt": "2024-09-15T16:45:00Z"
}
}
KV Namespace Binding:
# In wrangler.toml
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "a1b2c3d4e5f6", preview_id = "f6e5d4c3b2a1" }
]
Read/Write Patterns:
// Reading version config (low-latency at the edge, ~50ms p95)
const config = await env.VERSION_CONFIG_KV.get("version-config:production", "json");
// Writing version config (60s min cacheTtl; RYOW consistent at the same PoP)
await env.VERSION_CONFIG_KV.put(
"version-config:production",
JSON.stringify(updatedConfig),
);
// Reading with metadata (for cache invalidation tracking)
const { value, metadata } = await env.VERSION_CONFIG_KV.getWithMetadata(
"version-config:production",
"json",
);
Consistency Model:
KV is eventually consistent with a minimum cacheTtl of 60 seconds. Reads from the same PoP (Point of Presence) that performed a write benefit from Read-Your-Own-Writes (RYOW) consistency. This means:
- After a version update, other edge locations may serve the old version for up to 60 seconds (the minimum `cacheTtl`).
- Reads at the PoP that performed the write will see the new value immediately (RYOW consistency).
- This is acceptable for version configuration because MFE version changes are infrequent and a 60-second window is operationally insignificant.
- For scenarios requiring immediate consistency (e.g., emergency rollbacks), the shell can be configured to query the Version Config Service directly via its HTTP endpoint, bypassing the KV cache.
D1 (SQLite Database)
D1 is Cloudflare's serverless SQLite database (GA since April 2024, 1 TB account storage limit). It provides a relational store for data that needs to be queried, joined, or audited — capabilities that KV does not offer. In this platform, D1 stores the audit trail of all version changes.
Schema:
-- migrations/0001_create_version_changes.sql
CREATE TABLE IF NOT EXISTS version_changes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
mfe_name TEXT NOT NULL,
version TEXT NOT NULL,
previous_version TEXT,
manifest_url TEXT NOT NULL,
environment TEXT NOT NULL CHECK (environment IN ('dev', 'staging', 'production')),
changed_by TEXT NOT NULL,
reason TEXT NOT NULL DEFAULT 'Manual update',
timestamp TEXT NOT NULL DEFAULT (datetime('now')),
-- For quick lookups
UNIQUE(mfe_name, version, environment, timestamp)
);
-- Index for common query patterns
CREATE INDEX IF NOT EXISTS idx_version_changes_mfe_env
ON version_changes(mfe_name, environment, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_version_changes_env_timestamp
ON version_changes(environment, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_version_changes_changed_by
ON version_changes(changed_by, timestamp DESC);
D1 Binding in wrangler.toml:
[[d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Common Query Patterns:
// Get the latest version for each MFE in an environment
// (relies on SQLite's bare-column behavior with MAX(); see rebuildConfigFromD1 for a portable correlated-subquery variant)
const { results } = await env.VERSION_DB.prepare(
`SELECT mfe_name, version, manifest_url, timestamp
FROM version_changes
WHERE environment = ?
GROUP BY mfe_name
HAVING timestamp = MAX(timestamp)
ORDER BY mfe_name`,
)
.bind("production")
.all();
// Get full change history for a specific MFE
const { results: history } = await env.VERSION_DB.prepare(
`SELECT id, version, previous_version, changed_by, reason, timestamp
FROM version_changes
WHERE mfe_name = ? AND environment = ?
ORDER BY timestamp DESC
LIMIT ?`,
)
.bind("dashboard", "production", 25)
.all();
// Get all changes made by a specific user
const { results: userChanges } = await env.VERSION_DB.prepare(
`SELECT mfe_name, version, previous_version, environment, reason, timestamp
FROM version_changes
WHERE changed_by = ?
ORDER BY timestamp DESC`,
)
.bind("jane.doe@example.com")
.all();
// Count deployments per MFE in the last 30 days
const { results: deploymentCounts } = await env.VERSION_DB.prepare(
`SELECT mfe_name, COUNT(*) as deploy_count
FROM version_changes
WHERE environment = ?
AND timestamp >= datetime('now', '-30 days')
GROUP BY mfe_name
ORDER BY deploy_count DESC`,
)
.bind("production")
.all();
D1 Migration Rollback Procedures:
D1 does not support automatic migration rollbacks. To handle migration failures safely:
- Always write paired up/down migrations. For each `XXXX_up.sql`, maintain a corresponding `XXXX_down.sql` that reverses the schema change.
- Test migrations against a local D1 instance first using `wrangler d1 execute --local`.
- Use transactions for data migrations. Wrap DML (data manipulation) statements in `BEGIN`/`COMMIT` so failures leave the database unchanged.
- Take a point-in-time backup before applying migrations using `wrangler d1 export <database-name>` to dump the current state.
- If a migration fails mid-apply, manually run the corresponding down migration: `wrangler d1 execute <database-name> --file=migrations/XXXX_down.sql`.
- For production, use D1's Time Travel feature (30 days of history on paid plans, 7 on free) to restore to a point before the failed migration.
# Pre-migration backup
wrangler d1 export mfe-version-db-production --output=backup-$(date +%Y%m%d%H%M%S).sql
# Apply migration
wrangler d1 execute mfe-version-db-production --file=migrations/0002_add_column.sql
# If migration fails, roll back manually
wrangler d1 execute mfe-version-db-production --file=migrations/0002_down_remove_column.sql
# Or restore from Time Travel (paid plans)
wrangler d1 time-travel restore mfe-version-db-production --timestamp=<before-migration-timestamp>
Service Bindings and RPC
Service bindings are the backbone of inter-Worker communication in this platform. They enable Workers to call each other directly — without going through the public internet, without DNS lookups, without TLS handshakes, and without any HTTP serialization overhead.
How Service Bindings Work
When Worker A has a service binding to Worker B:
- The call is routed within Cloudflare's internal network.
- There is zero additional latency — the call is effectively a function invocation within the same data center.
- There is no additional cost — service binding calls do not count as separate Worker invocations for billing.
- The communication is type-safe when using `WorkerEntrypoint`: TypeScript provides full autocomplete and type checking for RPC methods.

`ctx.exports` auto-bindings: Workers can use `ctx.exports` to automatically discover and bind to named exports from other Workers without explicit `[[services]]` configuration in `wrangler.toml`. This simplifies multi-Worker setups by reducing boilerplate configuration.

Remote bindings are now GA: remote development bindings, previously experimental, are generally available. You can use `wrangler dev --remote` to test against production bindings without deploying, and configure remote bindings in `wrangler.toml` for preview environments without any experimental flags.
wrangler.toml Configuration
The API Gateway Worker binds to all BFF Workers and the Version Config Service:
# workers/api-gateway/wrangler.toml
name = "api-gateway"
main = "src/index.ts"
compatibility_date = "2026-02-25"
compatibility_flags = ["nodejs_compat"]
# Service bindings to BFF Workers
[[services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard"
[[services]]
binding = "BFF_SETTINGS"
service = "bff-settings"
[[services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics"
[[services]]
binding = "VERSION_CONFIG"
service = "version-config"
# KV for rate limiting
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "rate-limit-kv-id" }
]
[vars]
ALLOWED_ORIGINS = "https://app.example.com,https://staging.example.com"
JWKS_URL = "https://auth.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "100"
RATE_LIMIT_WINDOW_SECONDS = "60"
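The `RATE_LIMIT_KV` binding and the `RATE_LIMIT_MAX` / `RATE_LIMIT_WINDOW_SECONDS` vars above suggest a fixed-window counter. A hedged sketch of that approach — the `KVLike` interface and function names are assumptions for illustration, not the gateway's actual implementation:

```typescript
// Minimal structural type covering the KV operations this sketch needs;
// the real Workers KVNamespace binding satisfies it.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Derive a key that is stable for the duration of one window.
function rateLimitKey(clientId: string, nowMs: number, windowSeconds: number): string {
  const windowStart = Math.floor(nowMs / 1000 / windowSeconds) * windowSeconds;
  return `rl:${clientId}:${windowStart}`;
}

// Fixed-window check: read the counter, reject if at the limit, else bump it.
// KV is eventually consistent, so the count is approximate; for strict
// quotas a Durable Object would be the better fit.
async function isRateLimited(
  kv: KVLike,
  clientId: string,
  max: number,
  windowSeconds: number,
): Promise<boolean> {
  const key = rateLimitKey(clientId, Date.now(), windowSeconds);
  const count = parseInt((await kv.get(key)) ?? "0", 10);
  if (count >= max) return true;
  await kv.put(key, String(count + 1), { expirationTtl: windowSeconds * 2 });
  return false;
}
```

The `expirationTtl` of two windows lets stale counters expire on their own instead of requiring explicit cleanup.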
Type-Safe RPC Pattern
The key to type-safe service bindings is the WorkerEntrypoint base class and the Service<T> type in the Env interface.
Defining the RPC interface (BFF Worker):
// workers/bff-dashboard/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export default class BffDashboardWorker extends WorkerEntrypoint<Env> {
// Each public method becomes an RPC endpoint
async getMetrics(userId: string): Promise<DashboardMetrics> {
// ... implementation
}
async getRecentActivity(userId: string, limit: number): Promise<ActivityItem[]> {
// ... implementation
}
}
Consuming the RPC interface (API Gateway):
// workers/api-gateway/src/index.ts
import type BffDashboardWorker from "../../bff-dashboard/src/index";
interface Env {
BFF_DASHBOARD: Service<BffDashboardWorker>;
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
try {
// Direct RPC call — fully typed, zero HTTP overhead
const metrics = await env.BFF_DASHBOARD.getMetrics("user-123");
// ^^^^^^^^^^ TypeScript knows this method exists
// and enforces the parameter types
return Response.json(metrics);
} catch (error) {
// Service binding calls can throw if the target Worker fails or is unreachable
console.error("Service binding RPC error:", error);
return Response.json(
{ error: "Downstream service unavailable" },
{ status: 503 },
);
}
},
} satisfies ExportedHandler<Env>;
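In practice the gateway must pick a binding from the request path before making the RPC call. One way to sketch that dispatch — the path-segment names and the `resolveBinding` helper are illustrative assumptions, not the platform's routing table:

```typescript
// Map the first path segment of an incoming URL to the service-binding name
// declared in wrangler.toml. Unknown segments yield null (handled as a 404).
const BINDING_BY_SEGMENT: Record<string, string> = {
  dashboard: "BFF_DASHBOARD",
  settings: "BFF_SETTINGS",
  analytics: "BFF_ANALYTICS",
};

function resolveBinding(pathname: string): string | null {
  const segment = pathname.split("/").filter(Boolean)[0] ?? "";
  return BINDING_BY_SEGMENT[segment] ?? null;
}

// In the gateway's fetch handler, this pure lookup would precede the RPC call:
//   const binding = resolveBinding(new URL(request.url).pathname);
//   if (!binding) return new Response("Not Found", { status: 404 });
```

Keeping the lookup pure (no `env` access) makes the routing table testable in isolation.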
RPC with Complex Types
Service bindings support serialization of complex types through the Structured Clone algorithm. This means you can pass and return:
- Primitives (strings, numbers, booleans)
- Plain objects and arrays
- `Date`, `Map`, `Set`, `RegExp`
- `ArrayBuffer`, `Uint8Array`, and other typed arrays
- Nested combinations of the above

Objects that cannot be passed: functions, class instances with methods, DOM nodes, `ReadableStream` (use `fetch()` for streaming).
// Example: passing complex data structures via RPC
interface AnalyticsEvent {
type: string;
properties: Map<string, string | number | boolean>;
timestamp: Date;
tags: Set<string>;
}
// In BFF Analytics Worker
export default class BffAnalyticsWorker extends WorkerEntrypoint<Env> {
async trackEvent(userId: string, event: AnalyticsEvent): Promise<void> {
// event.properties is a Map, event.timestamp is a Date, event.tags is a Set
// All are properly deserialized through the Structured Clone algorithm
}
}
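The same Structured Clone algorithm is exposed in modern runtimes as the global `structuredClone()`, which makes the serialization rules easy to verify locally. A sketch (this exercises the algorithm itself, not the actual RPC transport):

```typescript
// An event matching the AnalyticsEvent shape above.
const event = {
  type: "page_view",
  properties: new Map<string, string | number | boolean>([
    ["path", "/home"],
    ["durationMs", 1200],
  ]),
  timestamp: new Date("2026-02-25T00:00:00Z"),
  tags: new Set(["web", "beta"]),
};

// Rich types survive the clone with their identity intact...
const cloned = structuredClone(event);
console.log(cloned.properties instanceof Map); // true
console.log(cloned.tags.has("beta"));          // true

// ...while functions do not: structuredClone throws a DataCloneError.
let threw = false;
try {
  structuredClone({ callback: () => {} });
} catch {
  threw = true;
}
console.log(threw); // true
```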
CDN Caching Strategy
Caching is critical for performance and cost efficiency. The platform uses a layered caching strategy with different TTLs based on content mutability.
Cache Rules by Asset Type
| Asset Type | Cache-Control Header | Rationale |
|---|---|---|
| Fingerprinted JS/CSS chunks | `public, max-age=31536000, immutable` | Content-hashed filenames mean the URL changes when content changes. Safe to cache forever. |
| `mf-manifest.json` | `public, max-age=60, s-maxage=300` | Short browser TTL (60s) with longer edge TTL (300s). Allows version updates to propagate within minutes. |
| `index.html` (shell) | `public, max-age=0, must-revalidate` + ETag | Must always be revalidated to pick up new MFE versions. ETag avoids re-downloading unchanged content. |
| Source maps | private, max-age=0 | Only served to authenticated debugging sessions, never cached publicly. |
Asset-Serving Worker with Cache Logic
// src/asset-server/index.ts
interface Env {
MFE_ASSETS: R2Bucket;
ENVIRONMENT: string;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
const path = url.pathname.slice(1); // Remove leading "/"
if (!path) {
return new Response("Not Found", { status: 404 });
}
// Check the Cloudflare Cache API first
const cacheKey = new Request(url.toString(), request);
const cache = caches.default;
let cachedResponse = await cache.match(cacheKey);
if (cachedResponse) {
return cachedResponse;
}
// Fetch from R2
const object = await env.MFE_ASSETS.get(path);
if (!object) {
return new Response("Not Found", { status: 404 });
}
// Determine cache headers based on file type
const headers = new Headers();
headers.set("Content-Type", getContentType(path));
headers.set("ETag", object.httpEtag);
headers.set("Access-Control-Allow-Origin", "*");
if (isFingerprinted(path)) {
// Fingerprinted assets: cache forever
headers.set("Cache-Control", "public, max-age=31536000, immutable");
} else if (path.endsWith("mf-manifest.json")) {
// Manifests: short TTL for version flexibility
headers.set("Cache-Control", "public, max-age=60, s-maxage=300");
} else if (path.endsWith(".html")) {
// HTML: always revalidate
headers.set("Cache-Control", "public, max-age=0, must-revalidate");
} else if (path.endsWith(".map")) {
// Source maps: never cache publicly
headers.set("Cache-Control", "private, max-age=0");
} else {
// Default: moderate caching
headers.set("Cache-Control", "public, max-age=3600, s-maxage=86400");
}
// Handle conditional requests (If-None-Match)
const ifNoneMatch = request.headers.get("If-None-Match");
if (ifNoneMatch && ifNoneMatch === object.httpEtag) {
return new Response(null, { status: 304, headers });
}
const response = new Response(object.body, { headers });
// Store in Cloudflare Cache API for subsequent requests at this edge
// Only cache non-private responses
if (!headers.get("Cache-Control")?.includes("private")) {
ctx.waitUntil(cache.put(cacheKey, response.clone()));
}
return response;
},
} satisfies ExportedHandler<Env>;
function getContentType(path: string): string {
const ext = path.split(".").pop()?.toLowerCase();
const contentTypes: Record<string, string> = {
js: "application/javascript",
mjs: "application/javascript",
css: "text/css",
html: "text/html",
json: "application/json",
svg: "image/svg+xml",
png: "image/png",
jpg: "image/jpeg",
jpeg: "image/jpeg",
webp: "image/webp",
woff: "font/woff",
woff2: "font/woff2",
map: "application/json",
};
return contentTypes[ext ?? ""] ?? "application/octet-stream";
}
function isFingerprinted(path: string): boolean {
// Match patterns like: chunk-Dashboard-789abc.js, styles-a1b2c3d4.css
return /[.-][a-f0-9]{6,16}\.(js|css|mjs|woff2?)$/.test(path);
}
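The fingerprint heuristic above is worth sanity-checking against representative paths. This uses the same regular expression as `isFingerprinted`; the sample filenames are illustrative:

```typescript
// Same heuristic as isFingerprinted above: a 6-16 character hex hash preceded
// by "." or "-" immediately before a cacheable extension.
const FINGERPRINT_RE = /[.-][a-f0-9]{6,16}\.(js|css|mjs|woff2?)$/;

const samples: Array<[string, boolean]> = [
  ["dashboard/2.3.1/chunk-Dashboard-789abc.js", true], // hex hash → immutable
  ["dashboard/2.3.1/styles-a1b2c3d4.css", true],       // hex hash → immutable
  ["dashboard/2.3.1/mf-manifest.json", false],         // manifest → short TTL
  ["index.html", false],                               // shell → revalidate
  ["vendor-latest.js", false],                         // "latest" is not hex
];

for (const [path, expected] of samples) {
  console.log(path, FINGERPRINT_RE.test(path) === expected ? "ok" : "MISMATCH");
}
```

Note the pattern intentionally rejects non-hex suffixes like `-latest`, so only build-generated hashes get the year-long `immutable` policy.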
Cache Purging on Version Update
When a new MFE version is deployed, the Version Config Service can trigger targeted cache purging:
// Purge specific URLs after a version update
async function purgeVersionCache(
mfeName: string,
oldVersion: string,
zoneId: string,
apiToken: string,
): Promise<void> {
// Purge the old manifest URL so edges fetch the new one
const urlsToPurge = [
`https://cdn.example.com/${mfeName}/${oldVersion}/mf-manifest.json`,
// Purge the version config endpoint
`https://api.example.com/config`,
];
await fetch(`https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`, {
method: "POST",
headers: {
Authorization: `Bearer ${apiToken}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ files: urlsToPurge }),
});
}
Environment Management
The platform uses three environments — dev, staging, and production — each with isolated resources (KV namespaces, D1 databases, R2 buckets).
Multi-Environment wrangler.toml
# workers/api-gateway/wrangler.toml
name = "api-gateway"
main = "src/index.ts"
compatibility_date = "2026-02-25"
compatibility_flags = ["nodejs_compat"]
# ─── Default (dev) ───────────────────────────────────────────────
[vars]
ENVIRONMENT = "dev"
ALLOWED_ORIGINS = "http://localhost:3000,https://dev.example.com"
JWKS_URL = "https://auth-dev.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "1000"
RATE_LIMIT_WINDOW_SECONDS = "60"
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "dev-rate-limit-kv-id", preview_id = "dev-rate-limit-preview-id" }
]
[[services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard"
[[services]]
binding = "BFF_SETTINGS"
service = "bff-settings"
[[services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics"
[[services]]
binding = "VERSION_CONFIG"
service = "version-config"
# ─── Staging ─────────────────────────────────────────────────────
[env.staging]
name = "api-gateway-staging"
[env.staging.vars]
ENVIRONMENT = "staging"
ALLOWED_ORIGINS = "https://staging.example.com"
JWKS_URL = "https://auth-staging.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "500"
RATE_LIMIT_WINDOW_SECONDS = "60"
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "staging-rate-limit-kv-id" }
]
[[env.staging.services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard-staging"
[[env.staging.services]]
binding = "BFF_SETTINGS"
service = "bff-settings-staging"
[[env.staging.services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics-staging"
[[env.staging.services]]
binding = "VERSION_CONFIG"
service = "version-config-staging"
# ─── Production ──────────────────────────────────────────────────
[env.production]
name = "api-gateway-production"
routes = [
{ pattern = "api.example.com/*", zone_name = "example.com" }
]
[env.production.vars]
ENVIRONMENT = "production"
ALLOWED_ORIGINS = "https://app.example.com"
JWKS_URL = "https://auth.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "100"
RATE_LIMIT_WINDOW_SECONDS = "60"
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "prod-rate-limit-kv-id" }
]
[[env.production.services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard-production"
[[env.production.services]]
binding = "BFF_SETTINGS"
service = "bff-settings-production"
[[env.production.services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics-production"
[[env.production.services]]
binding = "VERSION_CONFIG"
service = "version-config-production"
Version Config Service: Multi-Environment wrangler.toml
# workers/version-config/wrangler.toml
name = "version-config"
main = "src/index.ts"
compatibility_date = "2026-02-25"
[vars]
ENVIRONMENT = "dev"
CDN_BASE_URL = "https://cdn-dev.example.com"
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "dev-version-kv-id", preview_id = "dev-version-preview-id" }
]
[[d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db-dev"
database_id = "dev-d1-database-id"
# ─── Staging ─────────────────────────────────────────────────────
[env.staging]
name = "version-config-staging"
[env.staging.vars]
ENVIRONMENT = "staging"
CDN_BASE_URL = "https://cdn-staging.example.com"
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "staging-version-kv-id" }
]
[[env.staging.d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db-staging"
database_id = "staging-d1-database-id"
# ─── Production ──────────────────────────────────────────────────
[env.production]
name = "version-config-production"
[env.production.vars]
ENVIRONMENT = "production"
CDN_BASE_URL = "https://cdn.example.com"
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "prod-version-kv-id" }
]
[[env.production.d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db-production"
database_id = "prod-d1-database-id"
Secrets Management
Secrets are set per environment using the Wrangler CLI (v4; note that Wrangler v3 reaches EOL in Q1 2026). They are encrypted at rest and injected into the Worker's Env at runtime.
# Set secrets for each environment
wrangler secret put METRICS_API_KEY # dev (default)
wrangler secret put METRICS_API_KEY --env staging # staging
wrangler secret put METRICS_API_KEY --env production # production
wrangler secret put ACTIVITY_API_KEY --env production
wrangler secret put CACHE_PURGE_API_TOKEN --env production
# List secrets for an environment
wrangler secret list --env production
Deployment Commands
# Deploy to dev (default)
wrangler deploy
# Deploy to staging
wrangler deploy --env staging
# Deploy to production
wrangler deploy --env production
# Deploy all Workers (in CI/CD pipeline)
for worker in api-gateway bff-dashboard bff-settings bff-analytics version-config asset-server; do
(cd "workers/$worker" && wrangler deploy --env production)
done
Cost Considerations and Limits
Cloudflare Workers Platform Limits
| Resource | Free Plan | Paid Plan ($5/month) | Enterprise |
|---|---|---|---|
| Workers Requests | 100,000/day | 10 million/month included, $0.50/million after | Custom |
| Workers CPU Time | 10ms per invocation | Configurable up to 5 minutes via the `limits.cpu_ms` setting (default 30 seconds) | Custom |
| Workers Size | 1 MB after compression | 10 MB after compression | Custom |
| Service Bindings | Free (no additional request cost) | Free (no additional request cost) | Free |
KV Limits
| Operation | Free Plan | Paid Plan |
|---|---|---|
| Reads | 100,000/day | $0.50 per million reads |
| Writes | 1,000/day | $5.00 per million writes |
| Deletes | 1,000/day | $5.00 per million deletes |
| Lists | 1,000/day | $5.00 per million lists |
| Storage | 1 GB | $0.50 per GB-month |
| Value Size | 25 MiB max | 25 MiB max |
| Key Size | 512 bytes max | 512 bytes max |
R2 Limits
| Resource | Free Tier | Paid (beyond free) |
|---|---|---|
| Storage | 10 GB/month | $0.015 per GB-month |
| Class A ops (PUT, POST, LIST) | 1 million/month | $4.50 per million |
| Class B ops (GET, HEAD) | 10 million/month | $0.36 per million |
| Egress | Free (unlimited) | Free (unlimited) |
D1 Limits
| Resource | Free Plan | Paid Plan |
|---|---|---|
| Rows read | 5 million/day | $0.001 per million rows |
| Rows written | 100,000/day | $1.00 per million rows |
| Storage | 5 GB | $0.75 per GB-month |
| Databases | 50,000 per account | 50,000 per account |
| Max DB size | 2 GB | 10 GB |
Durable Objects Limits
| Resource | Pricing |
|---|---|
| Requests | $0.15 per million requests |
| Duration | $12.50 per million GB-seconds |
| Storage (reads) | $0.20 per million reads |
| Storage (writes) | $1.00 per million writes |
| Storage (deletes) | $1.00 per million deletes |
| Stored data | $0.20 per GB-month |
| WebSocket message size | 32 MiB max per message |
DO SQLite billing: Durable Objects SQLite storage billing has been active since January 2026. Usage of the SQLite API within Durable Objects now incurs storage read/write charges as listed above.
Cost Estimate for This Platform
For a moderately sized micro frontends platform (50,000 daily active users, 5 MFEs):
| Service | Estimated Monthly Usage | Estimated Monthly Cost |
|---|---|---|
| Workers Paid Plan | Base plan | $5.00 |
| Workers Requests | ~15M requests (API + assets) | ~$2.50 |
| KV Reads | ~2M reads (version config) | ~$1.00 |
| KV Writes | ~500 writes (version updates) | ~$0.00 |
| R2 Storage | ~5 GB (versioned bundles) | ~$0.08 |
| R2 Class B (GET) | ~10M reads (asset serving) | Free tier |
| D1 Rows Read | ~100K reads (admin queries) | Free tier |
| D1 Rows Written | ~500 writes (version changes) | Free tier |
| Total | | ~$8.58/month |
Key cost insight: Service bindings are free. Every call from the API Gateway to a BFF Worker incurs zero additional cost. This makes the "one BFF per MFE" pattern economically viable.
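The per-service lines in the estimate can be sanity-checked with quick arithmetic against the plan numbers listed in the tables above. A sketch (the helper names are illustrative):

```typescript
// Paid plan: 10M Workers requests included per month, $0.50 per additional million.
function workersRequestCost(monthlyRequests: number): number {
  const includedRequests = 10_000_000;
  const perMillionOverage = 0.5;
  const overage = Math.max(0, monthlyRequests - includedRequests);
  return (overage / 1_000_000) * perMillionOverage;
}

// KV reads at $0.50 per million, as in the KV limits table.
function kvReadCost(monthlyReads: number): number {
  return (monthlyReads / 1_000_000) * 0.5;
}

console.log(workersRequestCost(15_000_000)); // 2.5 → the ~$2.50 requests line
console.log(kvReadCost(2_000_000));          // 1   → the ~$1.00 KV reads line
```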
Backup Strategy
R2 Object Versioning
Enable R2 bucket versioning to protect against accidental overwrites or deletions of MFE bundles:
# Enable versioning on the production assets bucket
wrangler r2 bucket update mfe-assets-production --versioning enabled
With versioning enabled, every PUT or DELETE creates a new version rather than overwriting the object. Previous versions can be listed and restored:
// List object versions
const versions = await env.MFE_ASSETS.list({
prefix: "dashboard/2.3.1/",
include: ["httpMetadata", "customMetadata"],
});
// Get a specific version by ID
const previousVersion = await env.MFE_ASSETS.get("dashboard/2.3.1/mf-manifest.json", {
version: "version-id-here",
});
Recommended versioning policy:
- Keep versions for at least 30 days before expiring old versions via a lifecycle rule.
- Use lifecycle rules to limit storage costs: `wrangler r2 bucket lifecycle set mfe-assets-production --expire-versions-after 30d`.
D1 Backup Strategy
D1 provides multiple mechanisms for data protection:
- Automated backups via Time Travel (paid plans): D1 retains a point-in-time recovery window (default 30 days). Restore to any point using:

  wrangler d1 time-travel restore mfe-version-db-production --timestamp="2026-02-20T10:00:00Z"

- Manual exports for offline backups:

  # Export full database as SQL dump
  wrangler d1 export mfe-version-db-production --output=backup-$(date +%Y%m%d).sql
  # Schedule regular exports in CI/CD (e.g., daily cron)

- Cross-region redundancy: D1 automatically replicates data to read replicas. For additional durability, export backups to R2:

  # Export and upload to R2
  wrangler d1 export mfe-version-db-production --output=backup.sql
  wrangler r2 object put mfe-backups/d1/backup-$(date +%Y%m%d).sql --file=backup.sql
References
- Cloudflare Workers Documentation
- Cloudflare Workers Runtime APIs
- Service Bindings - RPC
- WorkerEntrypoint Reference
- Cloudflare R2 Documentation
- Cloudflare KV Documentation
- Cloudflare D1 Documentation
- Cloudflare Durable Objects Documentation
- Wrangler v4 Configuration (wrangler.toml) — Wrangler v3 reaches EOL in Q1 2026
- Cloudflare Workers Pricing
- jose Library (JWT/JWK)
- Module Federation v2 Documentation