Cloudflare Infrastructure
Table of contents
- Overview
- Worker architecture
- Storage services
- Service Bindings and RPC
- CDN caching strategy
- Environment management
- Cost considerations and limits
- Backup strategy
- References
Overview
All backend infrastructure for the micro frontend platform runs on Cloudflare Workers, an edge compute platform built on V8 isolates. Unlike traditional serverless platforms (AWS Lambda, Google Cloud Functions), which rely on containers or microVMs, Cloudflare Workers run inside isolates of V8, the same JavaScript engine that powers Chrome.
Why Cloudflare Workers
The V8 isolate model:
- No cold starts. V8 isolates start in under 5 milliseconds, which is an effective 0 ms startup time from the client's perspective. There is no container to provision and no runtime to initialize.
- Global deployment by default. Every Worker is deployed across Cloudflare's network of more than 300 data centers worldwide. There is no region selection; the code runs at the edge location closest to the user.
- Sub-millisecond overhead. Isolates share a single OS process, which makes them orders of magnitude lighter than containers. Thousands of isolates can run inside a single process.
- Security through isolation. Each Worker runs in its own isolate with its own global scope, fully separated from other scripts in the same process. Note that an isolate may be reused across requests, so module-scope state can persist between requests within one isolate (the JWKS cache below relies on this); state that must be shared reliably across requests belongs in Durable Objects.
Workers in this platform:
| Worker | Purpose |
|---|---|
| API Gateway | Central entry point: CORS, auth, rate limiting, routing |
| BFF Workers | Domain-specific backends (one per micro frontend or logical domain) |
| Version Config Service | Manages which MFE versions are deployed per environment |
| Asset Serving Worker | Serves static MFE bundles from R2 with caching logic |
All Worker-to-Worker communication uses service bindings, which provide zero-latency, zero-cost function calls between Workers without traversing the public internet.
Note on the Cloudflare Pages deprecation: Cloudflare Pages was deprecated in April 2025. For static asset hosting, use Workers Static Assets instead, which integrates directly with Workers and supports the same deployment model this platform uses. All new projects should use Workers Static Assets; existing Pages projects should plan their migration.
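For reference, a minimal Workers Static Assets configuration is a short `wrangler.toml` fragment like the following sketch (the project name and output directory here are assumptions):

```toml
# wrangler.toml: serving a built bundle with Workers Static Assets
name = "shell-static"
compatibility_date = "2026-02-25"

[assets]
directory = "./dist"
```

With this in place, `wrangler deploy` uploads the directory contents and serves them from the Workers runtime, with no separate Pages project needed.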
Worker architecture
API Gateway Worker
The API Gateway Worker is the central entry point for all API requests from the frontend. It lives at api.example.com and handles cross-cutting concerns before routing requests to the appropriate BFF Worker.
Responsibilities:
- CORS handling: validates `Origin` headers and sets the corresponding `Access-Control-*` response headers.
- Rate limiting: uses KV (or Durable Objects for precise counting) to track request rates per IP or API key.
- JWT validation: verifies access tokens with the `jose` library against JWKS endpoints (e.g., Auth0, Clerk).
- Request routing: maps URL paths to downstream BFF Workers through service bindings.
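The service bindings and KV namespace that the gateway code below expects would be declared in the gateway's own wrangler.toml. A sketch, assuming the BFF Workers are deployed under the names bff-dashboard, bff-settings, and bff-analytics (names and the namespace id placeholder are assumptions):

```toml
# workers/api-gateway/wrangler.toml (sketch)
name = "api-gateway"
main = "src/gateway/index.ts"
compatibility_date = "2026-02-25"

# Zero-latency service bindings to the BFF Workers
[[services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard"

[[services]]
binding = "BFF_SETTINGS"
service = "bff-settings"

[[services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics"

kv_namespaces = [
  { binding = "RATE_LIMIT_KV", id = "<rate-limit-namespace-id>" }
]

[vars]
ALLOWED_ORIGINS = "https://app.example.com"
JWKS_URL = "https://auth.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "100"
RATE_LIMIT_WINDOW_SECONDS = "60"
```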
// src/gateway/index.ts
import { jwtVerify, createRemoteJWKSet } from "jose";
export interface Env {
// Service bindings to BFF Workers
BFF_DASHBOARD: Service<BffDashboardWorker>;
BFF_SETTINGS: Service<BffSettingsWorker>;
BFF_ANALYTICS: Service<BffAnalyticsWorker>;
// KV for rate limiting
RATE_LIMIT_KV: KVNamespace;
// Environment variables
ALLOWED_ORIGINS: string; // comma-separated
JWKS_URL: string;
RATE_LIMIT_MAX: string; // requests per window
RATE_LIMIT_WINDOW_SECONDS: string;
}
// JWKS set cached at module scope (persists across requests within the same isolate).
// createRemoteJWKSet handles caching internally and respects HTTP cache headers from the
// JWKS endpoint. For key rotation scenarios, ensure your identity provider sets appropriate
// Cache-Control headers (e.g., max-age=600 for a 10-minute TTL). If a token fails
// verification with a "kid" not found in the cached JWKS, jose will automatically
// re-fetch the JWKS endpoint. For forced invalidation, set jwks = null.
let jwks: ReturnType<typeof createRemoteJWKSet> | null = null;
// TTL-based invalidation: recreate the JWKS set periodically to handle key rotation
let jwksCreatedAt: number = 0;
const JWKS_TTL_MS = 10 * 60 * 1000; // 10 minutes
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// --- CORS Preflight ---
if (request.method === "OPTIONS") {
return handleCorsPreflightRequest(request, env);
}
// --- CORS Origin Validation ---
const origin = request.headers.get("Origin");
const allowedOrigins = env.ALLOWED_ORIGINS.split(",").map((o) => o.trim());
if (origin && !allowedOrigins.includes(origin)) {
return new Response("Forbidden", { status: 403 });
}
// --- Rate Limiting ---
const clientIp = request.headers.get("CF-Connecting-IP") ?? "unknown";
const rateLimitResult = await checkRateLimit(env, clientIp);
if (!rateLimitResult.allowed) {
return new Response("Too Many Requests", {
status: 429,
headers: {
"Retry-After": String(rateLimitResult.retryAfter),
"X-RateLimit-Limit": env.RATE_LIMIT_MAX,
"X-RateLimit-Remaining": "0",
},
});
}
// --- JWT Validation ---
const authHeader = request.headers.get("Authorization");
if (!authHeader?.startsWith("Bearer ")) {
return new Response("Unauthorized", { status: 401 });
}
const token = authHeader.slice(7);
let jwtPayload: Record<string, unknown>;
try {
if (!jwks || Date.now() - jwksCreatedAt > JWKS_TTL_MS) {
jwks = createRemoteJWKSet(new URL(env.JWKS_URL));
jwksCreatedAt = Date.now();
}
const { payload } = await jwtVerify(token, jwks, {
issuer: env.JWKS_URL.replace("/.well-known/jwks.json", ""),
audience: "mfe-platform-api",
});
jwtPayload = payload as Record<string, unknown>;
} catch (error) {
return new Response("Invalid token", { status: 401 });
}
// --- Route to BFF Workers ---
const url = new URL(request.url);
const response = await routeRequest(url.pathname, request, env, jwtPayload);
// --- Attach CORS Headers ---
const corsHeaders = new Headers(response.headers);
if (origin && allowedOrigins.includes(origin)) {
corsHeaders.set("Access-Control-Allow-Origin", origin);
corsHeaders.set("Access-Control-Allow-Credentials", "true");
}
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers: corsHeaders,
});
},
} satisfies ExportedHandler<Env>;
// --- Routing Logic ---
interface RouteDefinition {
prefix: string;
handler: (
request: Request,
env: Env,
jwtPayload: Record<string, unknown>,
subpath: string,
) => Promise<Response>;
}
const routes: RouteDefinition[] = [
{
prefix: "/api/dashboard",
handler: async (request, env, jwt, subpath) => {
const userId = jwt.sub as string;
try {
if (subpath === "/metrics") {
const data = await env.BFF_DASHBOARD.getMetrics(userId);
return Response.json(data);
}
if (subpath === "/recent-activity") {
const data = await env.BFF_DASHBOARD.getRecentActivity(userId, 20);
return Response.json(data);
}
} catch (error) {
console.error("BFF_DASHBOARD service binding error:", error);
return Response.json(
{ error: "Service temporarily unavailable" },
{ status: 503 },
);
}
return new Response("Not Found", { status: 404 });
},
},
{
prefix: "/api/settings",
handler: async (request, env, jwt, subpath) => {
const userId = jwt.sub as string;
try {
if (subpath === "/profile" && request.method === "GET") {
const data = await env.BFF_SETTINGS.getProfile(userId);
return Response.json(data);
}
if (subpath === "/profile" && request.method === "PUT") {
const body = await request.json();
const data = await env.BFF_SETTINGS.updateProfile(userId, body);
return Response.json(data);
}
} catch (error) {
console.error("BFF_SETTINGS service binding error:", error);
return Response.json(
{ error: "Service temporarily unavailable" },
{ status: 503 },
);
}
return new Response("Not Found", { status: 404 });
},
},
{
prefix: "/api/analytics",
handler: async (request, env, jwt, subpath) => {
const userId = jwt.sub as string;
try {
if (subpath === "/events" && request.method === "POST") {
const body = await request.json();
await env.BFF_ANALYTICS.trackEvent(userId, body);
return new Response(null, { status: 204 });
}
} catch (error) {
console.error("BFF_ANALYTICS service binding error:", error);
return Response.json(
{ error: "Service temporarily unavailable" },
{ status: 503 },
);
}
return new Response("Not Found", { status: 404 });
},
},
];
async function routeRequest(
pathname: string,
request: Request,
env: Env,
jwtPayload: Record<string, unknown>,
): Promise<Response> {
for (const route of routes) {
if (pathname.startsWith(route.prefix)) {
const subpath = pathname.slice(route.prefix.length) || "/";
return route.handler(request, env, jwtPayload, subpath);
}
}
return new Response("Not Found", { status: 404 });
}
// --- Rate Limiting using KV ---
async function checkRateLimit(
env: Env,
clientIp: string,
): Promise<{ allowed: boolean; retryAfter: number }> {
const windowSeconds = parseInt(env.RATE_LIMIT_WINDOW_SECONDS, 10) || 60;
const maxRequests = parseInt(env.RATE_LIMIT_MAX, 10) || 100;
const key = `rate-limit:${clientIp}`;
const current = await env.RATE_LIMIT_KV.get(key, "json") as {
count: number;
windowStart: number;
} | null;
const now = Math.floor(Date.now() / 1000);
if (!current || now - current.windowStart > windowSeconds) {
// New window
await env.RATE_LIMIT_KV.put(
key,
JSON.stringify({ count: 1, windowStart: now }),
{ expirationTtl: windowSeconds * 2 },
);
return { allowed: true, retryAfter: 0 };
}
if (current.count >= maxRequests) {
const retryAfter = windowSeconds - (now - current.windowStart);
return { allowed: false, retryAfter };
}
await env.RATE_LIMIT_KV.put(
key,
JSON.stringify({ count: current.count + 1, windowStart: current.windowStart }),
{ expirationTtl: windowSeconds * 2 },
);
return { allowed: true, retryAfter: 0 };
}
// --- CORS Preflight ---
function handleCorsPreflightRequest(request: Request, env: Env): Response {
const origin = request.headers.get("Origin") ?? "";
const allowedOrigins = env.ALLOWED_ORIGINS.split(",").map((o) => o.trim());
if (!allowedOrigins.includes(origin)) {
return new Response(null, { status: 403 });
}
return new Response(null, {
status: 204,
headers: {
"Access-Control-Allow-Origin": origin,
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type, Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Max-Age": "86400",
},
});
}
Note on rate-limiting precision: KV-based rate limiting is eventually consistent across edge locations, which makes it approximate rather than exact. For strict rate limiting (e.g., payment APIs), use Durable Objects, which provide strongly consistent counters in a single location.
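To make the Durable Objects alternative concrete, here is the strongly consistent fixed-window counting logic such an object would run. This is an illustrative sketch, not a Cloudflare API: inside a real Durable Object, `count` and `windowStart` would be read from and written to `this.ctx.storage` so they survive eviction, and each client IP or API key would map to its own object instance.

```typescript
// Fixed-window counter: the core logic a rate-limiting Durable Object would run.
// All hits for a given key land on the same object instance, so counts are exact.
class FixedWindowCounter {
  private count = 0;
  private windowStart = 0;

  constructor(
    private readonly maxRequests: number,
    private readonly windowSeconds: number,
  ) {}

  // Register one request at `nowSeconds`; report whether it is allowed.
  hit(nowSeconds: number): { allowed: boolean; retryAfter: number } {
    // Start a fresh window once the current one has elapsed
    if (nowSeconds - this.windowStart >= this.windowSeconds) {
      this.windowStart = nowSeconds;
      this.count = 0;
    }
    if (this.count >= this.maxRequests) {
      return {
        allowed: false,
        retryAfter: this.windowSeconds - (nowSeconds - this.windowStart),
      };
    }
    this.count += 1;
    return { allowed: true, retryAfter: 0 };
  }
}
```

The gateway would reach this logic through a Durable Object binding instead of the `RATE_LIMIT_KV` namespace; the KV path remains a reasonable default for non-critical endpoints.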
BFF Workers (Backend for Frontend)
Each micro frontend (or logical domain) has its own dedicated BFF Worker. This pattern provides:
- Separation of concerns: each BFF encapsulates the data fetching, transformation, and aggregation logic specific to its MFE.
- Independent deployability: BFF Workers can be updated on their own without affecting other parts of the system.
- Optimized data shapes: each BFF returns exactly the data its MFE needs, avoiding over-fetching.
- Typed RPC interfaces: through `WorkerEntrypoint`, BFF Workers expose type-safe methods that the API Gateway invokes directly.
Example BFF Worker: Dashboard
// src/bff-dashboard/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export interface DashboardMetrics {
totalUsers: number;
activeToday: number;
revenue: { amount: number; currency: string; percentChange: number };
systemHealth: { status: "healthy" | "degraded" | "down"; uptime: number };
}
export interface ActivityItem {
id: string;
type: "login" | "purchase" | "support_ticket" | "deployment";
description: string;
timestamp: string;
userId: string;
metadata: Record<string, unknown>;
}
interface Env {
// External API configuration
METRICS_API_URL: string;
METRICS_API_KEY: string;
ACTIVITY_API_URL: string;
ACTIVITY_API_KEY: string;
// Cache
CACHE_KV: KVNamespace;
}
export default class BffDashboardWorker extends WorkerEntrypoint<Env> {
/**
* Fetches aggregated dashboard metrics for a user.
* Combines data from multiple upstream APIs and caches the result.
*/
async getMetrics(userId: string): Promise<DashboardMetrics> {
// Check cache first (60-second TTL; KV's minimum expirationTtl is 60 seconds)
const cacheKey = `dashboard-metrics:${userId}`;
const cached = await this.env.CACHE_KV.get(cacheKey, "json") as DashboardMetrics | null;
if (cached) {
return cached;
}
// Fetch from multiple upstream APIs in parallel
const [usersResponse, revenueResponse, healthResponse] = await Promise.all([
fetch(`${this.env.METRICS_API_URL}/users/stats`, {
headers: { Authorization: `Bearer ${this.env.METRICS_API_KEY}` },
}),
fetch(`${this.env.METRICS_API_URL}/revenue/summary`, {
headers: { Authorization: `Bearer ${this.env.METRICS_API_KEY}` },
}),
fetch(`${this.env.METRICS_API_URL}/system/health`, {
headers: { Authorization: `Bearer ${this.env.METRICS_API_KEY}` },
}),
]);
const [usersData, revenueData, healthData] = await Promise.all([
usersResponse.json() as Promise<{ total: number; active_today: number }>,
revenueResponse.json() as Promise<{
amount: number;
currency: string;
percent_change: number;
}>,
healthResponse.json() as Promise<{ status: string; uptime_seconds: number }>,
]);
// Transform and aggregate into the shape the frontend expects
const metrics: DashboardMetrics = {
totalUsers: usersData.total,
activeToday: usersData.active_today,
revenue: {
amount: revenueData.amount,
currency: revenueData.currency,
percentChange: revenueData.percent_change,
},
systemHealth: {
status: healthData.status as DashboardMetrics["systemHealth"]["status"],
uptime: healthData.uptime_seconds,
},
};
// Cache the result (expirationTtl must be >= 60, the KV minimum)
this.ctx.waitUntil(
this.env.CACHE_KV.put(cacheKey, JSON.stringify(metrics), { expirationTtl: 60 }),
);
return metrics;
}
/**
* Fetches recent activity items for a user.
*/
async getRecentActivity(userId: string, limit: number): Promise<ActivityItem[]> {
const response = await fetch(
`${this.env.ACTIVITY_API_URL}/activity?userId=${userId}&limit=${limit}`,
{
headers: { Authorization: `Bearer ${this.env.ACTIVITY_API_KEY}` },
},
);
const rawData = (await response.json()) as Array<{
id: string;
event_type: string;
description: string;
created_at: string;
user_id: string;
meta: Record<string, unknown>;
}>;
// Transform snake_case API response to camelCase frontend shape
return rawData.map((item) => ({
id: item.id,
type: item.event_type as ActivityItem["type"],
description: item.description,
timestamp: item.created_at,
userId: item.user_id,
metadata: item.meta,
}));
}
/**
* Standard fetch handler for HTTP-based access (if needed).
* Service binding RPC calls bypass this entirely.
*/
async fetch(request: Request): Promise<Response> {
return new Response("This Worker is designed for RPC access via service bindings.", {
status: 400,
});
}
}
wrangler.toml for the BFF Dashboard Worker:
# workers/bff-dashboard/wrangler.toml
name = "bff-dashboard"
main = "src/index.ts"
compatibility_date = "2026-02-25"
kv_namespaces = [
{ binding = "CACHE_KV", id = "abc123def456" }
]
[vars]
METRICS_API_URL = "https://internal-metrics.example.com/v1"
ACTIVITY_API_URL = "https://internal-activity.example.com/v1"
Important: secrets such as `METRICS_API_KEY` and `ACTIVITY_API_KEY` are not stored in `wrangler.toml`. They are set with `wrangler secret put METRICS_API_KEY` and injected at runtime through the `Env` interface.
Version Config Service
The Version Config Service manages which version of each micro frontend is deployed to each environment. It acts as the source of truth that the shell application reads at runtime to determine which remote entry manifests to load.
Design:
- KV is the read layer: globally replicated, low-latency reads at every edge location (~250ms p99 globally, ~50ms p95). The sub-millisecond (<5ms) figure applies only to the internal KV Storage Protocol (KVSP), not to external access.
- D1 is the write layer: a relational store for audit trails, history, and rollback metadata.
- Writes go to both D1 (durable, queryable) and KV (fast, distributed at the edge).
// src/version-config/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export interface VersionConfigMap {
[mfeName: string]: {
version: string;
manifestUrl: string;
integrity?: string; // SRI hash for the manifest
updatedAt: string;
};
}
export interface VersionChangeRecord {
id: number;
mfeName: string;
version: string;
previousVersion: string | null;
manifestUrl: string;
environment: string;
changedBy: string;
reason: string;
timestamp: string;
}
interface Env {
VERSION_CONFIG_KV: KVNamespace;
VERSION_DB: D1Database;
ENVIRONMENT: string; // "production" | "staging" | "dev"
CDN_BASE_URL: string; // e.g., "https://cdn.example.com"
}
export default class VersionConfigWorker extends WorkerEntrypoint<Env> {
/**
* RPC method: Get the current version config for the active environment.
* Called by the shell application at startup and on navigation.
*/
async getConfig(): Promise<VersionConfigMap> {
const kvKey = `version-config:${this.env.ENVIRONMENT}`;
const config = await this.env.VERSION_CONFIG_KV.get(kvKey, "json") as VersionConfigMap | null;
if (config) {
return config;
}
// Fallback: rebuild from D1 if KV is empty (initial deploy or KV purge)
return this.rebuildConfigFromD1();
}
/**
* RPC method: Update the version for a specific MFE.
* Writes to D1 for persistence, then updates KV for fast reads.
*/
async updateVersion(
mfeName: string,
version: string,
changedBy: string,
reason: string,
): Promise<{ success: boolean; config: VersionConfigMap }> {
const kvKey = `version-config:${this.env.ENVIRONMENT}`;
const manifestUrl = `${this.env.CDN_BASE_URL}/${mfeName}/${version}/mf-manifest.json`;
// Get current config
const currentConfig =
(await this.env.VERSION_CONFIG_KV.get(kvKey, "json") as VersionConfigMap) || {};
const previousVersion = currentConfig[mfeName]?.version ?? null;
// Write audit record to D1
await this.env.VERSION_DB.prepare(
`INSERT INTO version_changes (mfe_name, version, previous_version, manifest_url, environment, changed_by, reason, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, datetime('now'))`,
)
.bind(mfeName, version, previousVersion, manifestUrl, this.env.ENVIRONMENT, changedBy, reason)
.run();
// Update the config map
const updatedConfig: VersionConfigMap = {
...currentConfig,
[mfeName]: {
version,
manifestUrl,
updatedAt: new Date().toISOString(),
},
};
// Write to KV (globally distributed, eventually consistent; 60s minimum cacheTtl, RYOW at the same PoP)
await this.env.VERSION_CONFIG_KV.put(kvKey, JSON.stringify(updatedConfig));
return { success: true, config: updatedConfig };
}
/**
* RPC method: Rollback an MFE to a previous version.
*/
async rollback(mfeName: string, targetVersion: string, changedBy: string): Promise<{
success: boolean;
config: VersionConfigMap;
}> {
// Verify the target version exists in the history
const record = await this.env.VERSION_DB.prepare(
`SELECT version, manifest_url FROM version_changes
WHERE mfe_name = ? AND version = ? AND environment = ?
ORDER BY timestamp DESC LIMIT 1`,
)
.bind(mfeName, targetVersion, this.env.ENVIRONMENT)
.first<{ version: string; manifest_url: string }>();
if (!record) {
throw new Error(`Version ${targetVersion} not found in history for ${mfeName}`);
}
return this.updateVersion(mfeName, targetVersion, changedBy, `Rollback to ${targetVersion}`);
}
/**
* RPC method: Get version change history for an MFE.
*/
async getHistory(mfeName: string, limit: number = 50): Promise<VersionChangeRecord[]> {
const { results } = await this.env.VERSION_DB.prepare(
`SELECT id, mfe_name, version, previous_version, manifest_url, environment, changed_by, reason, timestamp
FROM version_changes
WHERE mfe_name = ? AND environment = ?
ORDER BY timestamp DESC
LIMIT ?`,
)
.bind(mfeName, this.env.ENVIRONMENT, limit)
.all<VersionChangeRecord>();
return results ?? [];
}
/**
* HTTP handler for REST API access (used by Admin UI and CI/CD).
*/
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
// GET /config — return current version map
if (url.pathname === "/config" && request.method === "GET") {
const config = await this.getConfig();
return Response.json(config, {
headers: { "Cache-Control": "public, max-age=10, s-maxage=30" },
});
}
// PUT /config/:mfeName — update version for an MFE
const updateMatch = url.pathname.match(/^\/config\/([a-z0-9-]+)$/);
if (updateMatch && request.method === "PUT") {
const mfeName = updateMatch[1];
const body = (await request.json()) as {
version: string;
changedBy: string;
reason?: string;
};
const result = await this.updateVersion(
mfeName,
body.version,
body.changedBy,
body.reason ?? "Manual update",
);
return Response.json(result);
}
// GET /config/:mfeName/history — get version history
const historyMatch = url.pathname.match(/^\/config\/([a-z0-9-]+)\/history$/);
if (historyMatch && request.method === "GET") {
const mfeName = historyMatch[1];
const limit = parseInt(url.searchParams.get("limit") ?? "50", 10);
const history = await this.getHistory(mfeName, limit);
return Response.json(history);
}
// POST /config/:mfeName/rollback — rollback to a specific version
const rollbackMatch = url.pathname.match(/^\/config\/([a-z0-9-]+)\/rollback$/);
if (rollbackMatch && request.method === "POST") {
const mfeName = rollbackMatch[1];
const body = (await request.json()) as { targetVersion: string; changedBy: string };
const result = await this.rollback(mfeName, body.targetVersion, body.changedBy);
return Response.json(result);
}
return new Response("Not Found", { status: 404 });
}
/**
* Rebuild the version config from D1.
* Used as fallback when KV is empty.
*/
private async rebuildConfigFromD1(): Promise<VersionConfigMap> {
const { results } = await this.env.VERSION_DB.prepare(
`SELECT DISTINCT mfe_name, version, manifest_url, timestamp
FROM version_changes
WHERE environment = ?
AND timestamp = (
SELECT MAX(timestamp) FROM version_changes vc2
WHERE vc2.mfe_name = version_changes.mfe_name
AND vc2.environment = version_changes.environment
)`,
)
.bind(this.env.ENVIRONMENT)
.all<{ mfe_name: string; version: string; manifest_url: string; timestamp: string }>();
const config: VersionConfigMap = {};
for (const row of results ?? []) {
config[row.mfe_name] = {
version: row.version,
manifestUrl: row.manifest_url,
updatedAt: row.timestamp,
};
}
// Repopulate KV
const kvKey = `version-config:${this.env.ENVIRONMENT}`;
await this.env.VERSION_CONFIG_KV.put(kvKey, JSON.stringify(config));
return config;
}
}
Storage services
R2 (Object Storage)
R2 is Cloudflare's S3-compatible object storage. It stores all versioned MFE bundles, manifests, and static assets. R2 has no egress fees, which makes it ideal for serving frontend assets at scale.
Bucket layout:
mfe-assets-production/
├── shell/
│ └── 1.0.0/
│ ├── mf-manifest.json
│ ├── index.html
│ ├── shell-abc123.js
│ └── shell-abc123.css
├── dashboard/
│ ├── 2.3.1/
│ │ ├── mf-manifest.json
│ │ ├── remoteEntry-def456.js
│ │ ├── chunk-Dashboard-789abc.js
│ │ ├── chunk-Sidebar-012def.js
│ │ └── styles-345ghi.css
│ └── 2.3.0/
│ ├── mf-manifest.json
│ ├── remoteEntry-aaa111.js
│ └── ...
├── settings/
│ └── 1.5.0/
│ ├── mf-manifest.json
│ ├── remoteEntry-bbb222.js
│ └── ...
└── analytics/
└── 3.0.0/
├── mf-manifest.json
├── remoteEntry-ccc333.js
└── ...
Key naming convention: /{mfe-name}/{version}/(unknown)
Every file name (except mf-manifest.json) includes a content hash (fingerprint), which enables aggressive caching (see CDN caching strategy).
Upload script (used in CI/CD):
// scripts/upload-mfe-bundle.ts
// Run via: npx tsx scripts/upload-mfe-bundle.ts --mfe dashboard --version 2.3.1
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readdir, readFile } from "node:fs/promises";
import { join, extname } from "node:path";
import { parseArgs } from "node:util";
const { values } = parseArgs({
options: {
mfe: { type: "string" },
version: { type: "string" },
"dist-path": { type: "string", default: "dist" },
environment: { type: "string", default: "production" },
},
});
const { mfe, version, environment } = values;
const distPath = values["dist-path"]!;
if (!mfe || !version) {
console.error("Usage: --mfe <name> --version <version>");
process.exit(1);
}
const CONTENT_TYPE_MAP: Record<string, string> = {
".js": "application/javascript",
".css": "text/css",
".json": "application/json",
".html": "text/html",
".svg": "image/svg+xml",
".png": "image/png",
".woff2": "font/woff2",
};
const BUCKET_NAME = `mfe-assets-${environment}`;
// R2 exposes an S3-compatible API
const s3Client = new S3Client({
region: "auto",
endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
},
});
async function uploadDirectory(dirPath: string, prefix: string): Promise<void> {
const entries = await readdir(dirPath, { withFileTypes: true });
for (const entry of entries) {
const fullPath = join(dirPath, entry.name);
if (entry.isDirectory()) {
await uploadDirectory(fullPath, `${prefix}/${entry.name}`);
continue;
}
const fileContent = await readFile(fullPath);
const ext = extname(entry.name);
const contentType = CONTENT_TYPE_MAP[ext] ?? "application/octet-stream";
const key = `${prefix}/${entry.name}`;
// Determine cache control based on filename
const isFingerprinted = /[.-][a-f0-9]{8,}\./.test(entry.name);
const cacheControl = isFingerprinted
? "public, max-age=31536000, immutable"
: "public, max-age=60, s-maxage=300";
await s3Client.send(
new PutObjectCommand({
Bucket: BUCKET_NAME,
Key: key,
Body: fileContent,
ContentType: contentType,
CacheControl: cacheControl,
}),
);
console.log(`Uploaded: ${key} (${contentType}, ${cacheControl})`);
}
}
async function main(): Promise<void> {
const prefix = `${mfe}/${version}`;
console.log(`Uploading ${distPath} to R2: ${BUCKET_NAME}/${prefix}/`);
await uploadDirectory(distPath, prefix);
console.log(`\nUpload complete. Manifest URL:`);
console.log(` https://cdn.example.com/${prefix}/mf-manifest.json`);
}
main().catch((error) => {
console.error("Upload failed:", error);
process.exit(1);
});
Custom domain for R2:
R2 buckets can be served directly through a custom domain (e.g., cdn.example.com) using Cloudflare's managed public access feature, or through a Worker that adds custom cache headers and access control.
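A hedged sketch of the Worker-fronted variant, which is also what the Asset Serving Worker in the overview table does: read the object from an R2 binding and attach Cache-Control based on whether the file name is content-hashed. The `ASSETS_BUCKET` binding name and the minimal `R2ObjectLike` type are assumptions for illustration; a real Worker would use the binding types from @cloudflare/workers-types.

```typescript
// Structural stand-in for the R2 object shape this sketch needs
interface R2ObjectLike {
  body: ReadableStream | null;
  httpEtag: string;
}

interface Env {
  ASSETS_BUCKET: { get(key: string): Promise<R2ObjectLike | null> };
}

// Fingerprinted file names (8+ hex chars before the extension) are immutable;
// everything else (e.g., mf-manifest.json) gets a short TTL.
function cacheControlFor(key: string): string {
  return /[.-][a-f0-9]{8,}\./.test(key)
    ? "public, max-age=31536000, immutable"
    : "public, max-age=60, s-maxage=300";
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // "/dashboard/2.3.1/x.js" -> R2 key "dashboard/2.3.1/x.js"
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.ASSETS_BUCKET.get(key);
    if (!object) {
      return new Response("Not Found", { status: 404 });
    }
    return new Response(object.body, {
      headers: {
        ETag: object.httpEtag,
        "Cache-Control": cacheControlFor(key),
      },
    });
  },
};
```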
KV (Key-Value Store)
KV is a globally distributed, eventually consistent key-value store. On this platform its primary use is storing the version configuration so that every edge location can resolve MFE versions with low latency (~250ms p99 globally, ~50ms p95 for external reads). The sub-millisecond (<5ms) latency quoted in some Cloudflare documentation refers to the internal KV Storage Protocol (KVSP), not to external access from the Workers API.
Key structure:
| Key | Value | Purpose |
|---|---|---|
| version-config:production | JSON object mapping MFE names to manifest URLs | Production version map |
| version-config:staging | Same structure | Staging version map |
| version-config:dev | Same structure | Development version map |
Example stored value for version-config:production:
{
"shell": {
"version": "1.0.0",
"manifestUrl": "https://cdn.example.com/shell/1.0.0/mf-manifest.json",
"updatedAt": "2024-09-15T10:30:00Z"
},
"dashboard": {
"version": "2.3.1",
"manifestUrl": "https://cdn.example.com/dashboard/2.3.1/mf-manifest.json",
"updatedAt": "2024-09-15T14:22:00Z"
},
"settings": {
"version": "1.5.0",
"manifestUrl": "https://cdn.example.com/settings/1.5.0/mf-manifest.json",
"updatedAt": "2024-09-14T09:00:00Z"
},
"analytics": {
"version": "3.0.0",
"manifestUrl": "https://cdn.example.com/analytics/3.0.0/mf-manifest.json",
"updatedAt": "2024-09-15T16:45:00Z"
}
}
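At runtime the shell turns this map into the list of remotes to load. A minimal sketch, assuming a `toRemotes` helper of our own (not part of any library); the `VersionConfigMap` shape matches the JSON above, and the shell skips its own entry:

```typescript
// Shape of the value stored under version-config:<environment>
interface VersionConfigMap {
  [mfeName: string]: { version: string; manifestUrl: string; updatedAt: string };
}

// What a Module Federation style loader needs per remote
interface RemoteDefinition {
  name: string;
  entry: string; // manifest URL fetched at runtime
}

// Map the stored config to remote definitions, excluding the shell itself
function toRemotes(config: VersionConfigMap): RemoteDefinition[] {
  return Object.entries(config)
    .filter(([name]) => name !== "shell")
    .map(([name, { manifestUrl }]) => ({ name, entry: manifestUrl }));
}
```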
KV Namespace Binding:
# In wrangler.toml
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "a1b2c3d4e5f6", preview_id = "f6e5d4c3b2a1" }
]
Read/write patterns:
// Reading version config (low-latency at the edge, ~50ms p95)
const config = await env.VERSION_CONFIG_KV.get("version-config:production", "json");
// Writing version config (60s minimum cacheTtl; RYOW consistent at the same PoP)
await env.VERSION_CONFIG_KV.put(
"version-config:production",
JSON.stringify(updatedConfig),
);
// Reading with metadata (for cache invalidation tracking)
const { value, metadata } = await env.VERSION_CONFIG_KV.getWithMetadata(
"version-config:production",
"json",
);
Consistency model:
KV is eventually consistent, with a minimum cacheTtl of 60 seconds. Reads from the same PoP (Point of Presence) that performed a write get Read-Your-Own-Writes (RYOW) consistency. This means:
- After a version update, other edge locations may keep serving the old version for at least 60 seconds (the minimum `cacheTtl`).
- Reads at the PoP that performed the write see the new value immediately (RYOW consistency).
- This is acceptable for version configuration because MFE version changes are infrequent and a propagation window of about a minute is operationally insignificant.
- For scenarios that require immediate consistency (e.g., emergency rollbacks), the shell can be configured to query the Version Config Service directly through its HTTP endpoint, bypassing the KV cache.
D1 (SQLite database)
D1 is Cloudflare's serverless SQLite database (GA since April 2024, with a 1 TB per-account storage limit). It provides a relational store for data that needs to be queried, joined, or audited, capabilities KV does not offer. On this platform, D1 stores the audit trail of all version changes.
Schema:
-- migrations/0001_create_version_changes.sql
CREATE TABLE IF NOT EXISTS version_changes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
mfe_name TEXT NOT NULL,
version TEXT NOT NULL,
previous_version TEXT,
manifest_url TEXT NOT NULL,
environment TEXT NOT NULL CHECK (environment IN ('dev', 'staging', 'production')),
changed_by TEXT NOT NULL,
reason TEXT NOT NULL DEFAULT 'Manual update',
timestamp TEXT NOT NULL DEFAULT (datetime('now')),
-- For quick lookups
UNIQUE(mfe_name, version, environment, timestamp)
);
-- Index for common query patterns
CREATE INDEX IF NOT EXISTS idx_version_changes_mfe_env
ON version_changes(mfe_name, environment, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_version_changes_env_timestamp
ON version_changes(environment, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_version_changes_changed_by
ON version_changes(changed_by, timestamp DESC);
D1 binding in wrangler.toml:
[[d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Common query patterns:
// Get the latest version for each MFE in an environment.
// SQLite's bare-column rule resolves the non-aggregated columns from the row
// that supplied MAX(timestamp) within each group.
const { results } = await env.VERSION_DB.prepare(
`SELECT mfe_name, version, manifest_url, MAX(timestamp) AS timestamp
FROM version_changes
WHERE environment = ?
GROUP BY mfe_name
ORDER BY mfe_name`,
)
.bind("production")
.all();
// Get full change history for a specific MFE
const { results: history } = await env.VERSION_DB.prepare(
`SELECT id, version, previous_version, changed_by, reason, timestamp
FROM version_changes
WHERE mfe_name = ? AND environment = ?
ORDER BY timestamp DESC
LIMIT ?`,
)
.bind("dashboard", "production", 25)
.all();
// Get all changes made by a specific user
const { results: userChanges } = await env.VERSION_DB.prepare(
`SELECT mfe_name, version, previous_version, environment, reason, timestamp
FROM version_changes
WHERE changed_by = ?
ORDER BY timestamp DESC`,
)
.bind("jane.doe@example.com")
.all();
// Count deployments per MFE in the last 30 days
const { results: deploymentCounts } = await env.VERSION_DB.prepare(
`SELECT mfe_name, COUNT(*) as deploy_count
FROM version_changes
WHERE environment = ?
AND timestamp >= datetime('now', '-30 days')
GROUP BY mfe_name
ORDER BY deploy_count DESC`,
)
.bind("production")
.all();
D1 migration rollback procedures:
D1 does not support automatic migration rollbacks. To handle migration failures safely:
- Always write up/down migrations in pairs. For every `XXXX_up.sql`, keep a corresponding `XXXX_down.sql` that reverses the schema change.
- Test migrations against a local D1 instance first using `wrangler d1 execute --local`.
- Use transactions for data migrations. Wrap DML (data manipulation) statements in `BEGIN`/`COMMIT` so that failures leave the database unchanged.
- Take a point-in-time backup before applying migrations, using `wrangler d1 export <database-name>` to dump the current state.
- If a migration fails partway through, manually run the corresponding down migration: `wrangler d1 execute <database-name> --file=migrations/XXXX_down.sql`.
- For production, use D1's Time Travel feature (available on paid plans) to restore to a point before the failed migration.
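As a hypothetical example of such an up/down pair (the `rollback_of` column is invented for illustration), matching the `0002` file names used in the commands below:

```sql
-- migrations/0002_add_column.sql (up)
ALTER TABLE version_changes ADD COLUMN rollback_of INTEGER;

-- migrations/0002_down_remove_column.sql (down)
-- D1's SQLite build supports ALTER TABLE ... DROP COLUMN
ALTER TABLE version_changes DROP COLUMN rollback_of;
```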
# Pre-migration backup
wrangler d1 export mfe-version-db-production --output=backup-$(date +%Y%m%d%H%M%S).sql
# Apply migration
wrangler d1 execute mfe-version-db-production --file=migrations/0002_add_column.sql
# If migration fails, roll back manually
wrangler d1 execute mfe-version-db-production --file=migrations/0002_down_remove_column.sql
# Or restore from Time Travel (paid plans)
wrangler d1 time-travel restore mfe-version-db-production --timestamp=<before-migration-timestamp>
Service Bindings and RPC
Service bindings are the backbone of Worker-to-Worker communication in this platform. They let Workers call each other directly: no public internet, no DNS resolution, no TLS handshakes, and no HTTP serialization overhead.
How Service Bindings Work
When Worker A has a service binding to Worker B:
- The call is routed inside Cloudflare's internal network.
- There is zero added latency; the call is effectively a function invocation within the same data center.
- There is no extra cost; service binding calls do not count as separate Worker invocations for billing.
- Communication is type-safe when using `WorkerEntrypoint`; TypeScript provides full autocompletion and type checking for RPC methods.
Auto-bindings with `ctx.exports`: Workers can use `ctx.exports` to discover and bind automatically to the named exports of other Workers, without explicit `[[services]]` configuration in `wrangler.toml`. This simplifies multi-Worker setups by cutting repetitive configuration.
Remote bindings are now GA. Remote development bindings (previously experimental) are generally available: `wrangler dev --remote` tests against production bindings without deploying, and remote bindings can be configured in `wrangler.toml` for preview environments without experimental flags.
wrangler.toml Configuration
The API Gateway Worker binds to every BFF Worker and to the Version Config Service:
# workers/api-gateway/wrangler.toml
name = "api-gateway"
main = "src/index.ts"
compatibility_date = "2026-02-25"
compatibility_flags = ["nodejs_compat"]
# Service bindings to BFF Workers
[[services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard"
[[services]]
binding = "BFF_SETTINGS"
service = "bff-settings"
[[services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics"
[[services]]
binding = "VERSION_CONFIG"
service = "version-config"
# KV for rate limiting
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "rate-limit-kv-id" }
]
[vars]
ALLOWED_ORIGINS = "https://app.example.com,https://staging.example.com"
JWKS_URL = "https://auth.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "100"
RATE_LIMIT_WINDOW_SECONDS = "60"
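A minimal sketch of how the `RATE_LIMIT_KV` binding and the two rate-limit vars above could drive a fixed-window limiter. The helper and key scheme are illustrative, not the gateway's actual code, and KV's eventual consistency makes the counts approximate; an in-memory stand-in replaces the real binding so the snippet runs anywhere:

```typescript
// Minimal KV-like interface so the sketch is self-contained; in the Worker,
// the real RATE_LIMIT_KV binding satisfies this shape.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

class MemoryKV implements KVLike {
  private store = new Map<string, string>();
  async get(key: string): Promise<string | null> {
    return this.store.get(key) ?? null;
  }
  async put(key: string, value: string): Promise<void> {
    this.store.set(key, value);
  }
}

// Fixed-window limiter: the bucket key rotates every window, so old
// counters fall out of use (real KV would also expire them via the TTL).
async function isRateLimited(
  kv: KVLike,
  clientId: string,
  max: number,
  windowSeconds: number,
): Promise<boolean> {
  const bucket = Math.floor(Date.now() / 1000 / windowSeconds);
  const key = `rl:${clientId}:${bucket}`;
  const current = Number((await kv.get(key)) ?? "0");
  if (current >= max) return true;
  await kv.put(key, String(current + 1), { expirationTtl: windowSeconds * 2 });
  return false;
}
```

In the gateway itself, `max` and `windowSeconds` would come from `Number(env.RATE_LIMIT_MAX)` and `Number(env.RATE_LIMIT_WINDOW_SECONDS)`.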
Type-Safe RPC Pattern
The key to type-safe service bindings is the `WorkerEntrypoint` base class and the `Service<T>` type in the Env interface.
Defining the RPC interface (BFF Worker):
// workers/bff-dashboard/src/index.ts
import { WorkerEntrypoint } from "cloudflare:workers";
export default class BffDashboardWorker extends WorkerEntrypoint<Env> {
// Each public method becomes an RPC endpoint
async getMetrics(userId: string): Promise<DashboardMetrics> {
// ... implementation
}
async getRecentActivity(userId: string, limit: number): Promise<ActivityItem[]> {
// ... implementation
}
}
Consuming the RPC interface (API Gateway):
// workers/api-gateway/src/index.ts
import type BffDashboardWorker from "../../bff-dashboard/src/index";
interface Env {
BFF_DASHBOARD: Service<BffDashboardWorker>;
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
try {
// Direct RPC call — fully typed, zero HTTP overhead
const metrics = await env.BFF_DASHBOARD.getMetrics("user-123");
// ^^^^^^^^^^ TypeScript knows this method exists
// and enforces the parameter types
return Response.json(metrics);
} catch (error) {
// Service binding calls can throw if the target Worker fails or is unreachable
console.error("Service binding RPC error:", error);
return Response.json(
{ error: "Downstream service unavailable" },
{ status: 503 },
);
}
},
} satisfies ExportedHandler<Env>;
RPC with Complex Types
Service bindings serialize complex types through the Structured Clone algorithm. This means you can pass and return:
- Primitives (strings, numbers, booleans)
- Plain objects and arrays
- `Date`, `Map`, `Set`, `RegExp`
- `ArrayBuffer`, `Uint8Array`, and other typed arrays
- Nested combinations of the above
Types that cannot be passed: functions, class instances with methods, DOM nodes, and `ReadableStream` (use `fetch()` for streaming).
// Example: passing complex data structures via RPC
interface AnalyticsEvent {
type: string;
properties: Map<string, string | number | boolean>;
timestamp: Date;
tags: Set<string>;
}
// In BFF Analytics Worker
export default class BffAnalyticsWorker extends WorkerEntrypoint<Env> {
async trackEvent(userId: string, event: AnalyticsEvent): Promise<void> {
// event.properties is a Map, event.timestamp is a Date, event.tags is a Set
// All are properly deserialized through the Structured Clone algorithm
}
}
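Because the serialization is Structured Clone, the behavior can be sanity-checked outside the Workers runtime with the standard `structuredClone` global (available in modern Node and browsers). This self-contained sketch shows `Map`, `Set`, and `Date` surviving the round trip:

```typescript
interface AnalyticsEvent {
  type: string;
  properties: Map<string, string | number | boolean>;
  timestamp: Date;
  tags: Set<string>;
}

const event: AnalyticsEvent = {
  type: "page_view",
  properties: new Map<string, string | number | boolean>([["path", "/dashboard"]]),
  timestamp: new Date("2026-02-25T12:00:00Z"),
  tags: new Set(["beta"]),
};

// structuredClone uses the same algorithm service-binding RPC relies on
const cloned = structuredClone(event);
// cloned.properties is still a Map, cloned.tags a Set, cloned.timestamp a Date
```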
CDN Caching Strategy
Caching is central to both performance and cost efficiency. The platform uses a layered caching strategy with different TTLs depending on content mutability.
Cache Rules by Asset Type
| Asset type | Cache-Control header | Rationale |
|---|---|---|
| Fingerprinted JS/CSS chunks | `public, max-age=31536000, immutable` | Content-hashed filenames mean the URL changes whenever the content changes. Safe to cache indefinitely. |
| `mf-manifest.json` | `public, max-age=60, s-maxage=300` | Short browser TTL (60s) with a longer edge TTL (300s). Lets version updates propagate within minutes. |
| `index.html` (shell) | `public, max-age=0, must-revalidate` + ETag | Must always revalidate to pick up new MFE versions. The ETag avoids re-downloading unchanged content. |
| Source maps | `private, max-age=0` | Served only to authenticated debugging sessions, never cached publicly. |
Asset-Serving Worker with Cache Logic
// src/asset-server/index.ts
interface Env {
MFE_ASSETS: R2Bucket;
ENVIRONMENT: string;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
const path = url.pathname.slice(1); // Remove leading "/"
if (!path) {
return new Response("Not Found", { status: 404 });
}
// Check the Cloudflare Cache API first
const cacheKey = new Request(url.toString(), request);
const cache = caches.default;
let cachedResponse = await cache.match(cacheKey);
if (cachedResponse) {
return cachedResponse;
}
// Fetch from R2
const object = await env.MFE_ASSETS.get(path);
if (!object) {
return new Response("Not Found", { status: 404 });
}
// Determine cache headers based on file type
const headers = new Headers();
headers.set("Content-Type", getContentType(path));
headers.set("ETag", object.httpEtag);
headers.set("Access-Control-Allow-Origin", "*");
if (isFingerprinted(path)) {
// Fingerprinted assets: cache forever
headers.set("Cache-Control", "public, max-age=31536000, immutable");
} else if (path.endsWith("mf-manifest.json")) {
// Manifests: short TTL for version flexibility
headers.set("Cache-Control", "public, max-age=60, s-maxage=300");
} else if (path.endsWith(".html")) {
// HTML: always revalidate
headers.set("Cache-Control", "public, max-age=0, must-revalidate");
} else if (path.endsWith(".map")) {
// Source maps: never cache publicly
headers.set("Cache-Control", "private, max-age=0");
} else {
// Default: moderate caching
headers.set("Cache-Control", "public, max-age=3600, s-maxage=86400");
}
// Handle conditional requests (If-None-Match)
const ifNoneMatch = request.headers.get("If-None-Match");
if (ifNoneMatch && ifNoneMatch === object.httpEtag) {
return new Response(null, { status: 304, headers });
}
const response = new Response(object.body, { headers });
// Store in Cloudflare Cache API for subsequent requests at this edge
// Only cache non-private responses
if (!headers.get("Cache-Control")?.includes("private")) {
ctx.waitUntil(cache.put(cacheKey, response.clone()));
}
return response;
},
} satisfies ExportedHandler<Env>;
function getContentType(path: string): string {
const ext = path.split(".").pop()?.toLowerCase();
const contentTypes: Record<string, string> = {
js: "application/javascript",
mjs: "application/javascript",
css: "text/css",
html: "text/html",
json: "application/json",
svg: "image/svg+xml",
png: "image/png",
jpg: "image/jpeg",
jpeg: "image/jpeg",
webp: "image/webp",
woff: "font/woff",
woff2: "font/woff2",
map: "application/json",
};
return contentTypes[ext ?? ""] ?? "application/octet-stream";
}
function isFingerprinted(path: string): boolean {
// Match patterns like: chunk-Dashboard-789abc.js, styles-a1b2c3d4.css
return /[.-][a-f0-9]{6,16}\.(js|css|mjs|woff2?)$/.test(path);
}
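A couple of spot checks of that heuristic (same regex, inlined here so the snippet stands alone):

```typescript
// Same pattern as isFingerprinted above: a 6-16 char hex hash before the extension
const fingerprinted = (p: string) =>
  /[.-][a-f0-9]{6,16}\.(js|css|mjs|woff2?)$/.test(p);

fingerprinted("chunk-Dashboard-789abc.js"); // true: "-789abc" is a hex hash
fingerprinted("styles-a1b2c3d4.css");       // true
fingerprinted("mf-manifest.json");          // false: .json is never fingerprint-cached
```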
Cache Purging on Version Updates
When a new MFE version is deployed, the Version Config Service can trigger a targeted cache purge:
// Purge specific URLs after a version update
async function purgeVersionCache(
mfeName: string,
oldVersion: string,
zoneId: string,
apiToken: string,
): Promise<void> {
// Purge the old manifest URL so edges fetch the new one
const urlsToPurge = [
`https://cdn.example.com/${mfeName}/${oldVersion}/mf-manifest.json`,
// Purge the version config endpoint
`https://api.example.com/config`,
];
await fetch(`https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`, {
method: "POST",
headers: {
Authorization: `Bearer ${apiToken}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ files: urlsToPurge }),
});
}
Environment Management
The platform uses three environments (dev, staging, and production), each with isolated resources: KV namespaces, D1 databases, and R2 buckets.
Multi-Environment wrangler.toml
# workers/api-gateway/wrangler.toml
name = "api-gateway"
main = "src/index.ts"
compatibility_date = "2026-02-25"
compatibility_flags = ["nodejs_compat"]
# ─── Default (dev) ───────────────────────────────────────────────
[vars]
ENVIRONMENT = "dev"
ALLOWED_ORIGINS = "http://localhost:3000,https://dev.example.com"
JWKS_URL = "https://auth-dev.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "1000"
RATE_LIMIT_WINDOW_SECONDS = "60"
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "dev-rate-limit-kv-id", preview_id = "dev-rate-limit-preview-id" }
]
[[services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard"
[[services]]
binding = "BFF_SETTINGS"
service = "bff-settings"
[[services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics"
[[services]]
binding = "VERSION_CONFIG"
service = "version-config"
# ─── Staging ─────────────────────────────────────────────────────
[env.staging]
name = "api-gateway-staging"
[env.staging.vars]
ENVIRONMENT = "staging"
ALLOWED_ORIGINS = "https://staging.example.com"
JWKS_URL = "https://auth-staging.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "500"
RATE_LIMIT_WINDOW_SECONDS = "60"
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "staging-rate-limit-kv-id" }
]
[[env.staging.services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard-staging"
[[env.staging.services]]
binding = "BFF_SETTINGS"
service = "bff-settings-staging"
[[env.staging.services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics-staging"
[[env.staging.services]]
binding = "VERSION_CONFIG"
service = "version-config-staging"
# ─── Production ──────────────────────────────────────────────────
[env.production]
name = "api-gateway-production"
routes = [
{ pattern = "api.example.com/*", zone_name = "example.com" }
]
[env.production.vars]
ENVIRONMENT = "production"
ALLOWED_ORIGINS = "https://app.example.com"
JWKS_URL = "https://auth.example.com/.well-known/jwks.json"
RATE_LIMIT_MAX = "100"
RATE_LIMIT_WINDOW_SECONDS = "60"
kv_namespaces = [
{ binding = "RATE_LIMIT_KV", id = "prod-rate-limit-kv-id" }
]
[[env.production.services]]
binding = "BFF_DASHBOARD"
service = "bff-dashboard-production"
[[env.production.services]]
binding = "BFF_SETTINGS"
service = "bff-settings-production"
[[env.production.services]]
binding = "BFF_ANALYTICS"
service = "bff-analytics-production"
[[env.production.services]]
binding = "VERSION_CONFIG"
service = "version-config-production"
Version Config Service: Multi-Environment wrangler.toml
# workers/version-config/wrangler.toml
name = "version-config"
main = "src/index.ts"
compatibility_date = "2026-02-25"
[vars]
ENVIRONMENT = "dev"
CDN_BASE_URL = "https://cdn-dev.example.com"
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "dev-version-kv-id", preview_id = "dev-version-preview-id" }
]
[[d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db-dev"
database_id = "dev-d1-database-id"
# ─── Staging ─────────────────────────────────────────────────────
[env.staging]
name = "version-config-staging"
[env.staging.vars]
ENVIRONMENT = "staging"
CDN_BASE_URL = "https://cdn-staging.example.com"
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "staging-version-kv-id" }
]
[[env.staging.d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db-staging"
database_id = "staging-d1-database-id"
# ─── Production ──────────────────────────────────────────────────
[env.production]
name = "version-config-production"
[env.production.vars]
ENVIRONMENT = "production"
CDN_BASE_URL = "https://cdn.example.com"
kv_namespaces = [
{ binding = "VERSION_CONFIG_KV", id = "prod-version-kv-id" }
]
[[env.production.d1_databases]]
binding = "VERSION_DB"
database_name = "mfe-version-db-production"
database_id = "prod-d1-database-id"
Secret Management
Secrets are configured per environment through the Wrangler CLI (v4; note that Wrangler v3 reaches end of life in Q1 2026). They are encrypted at rest and injected into the Worker's `Env` at runtime.
# Set secrets for each environment
wrangler secret put METRICS_API_KEY # dev (default)
wrangler secret put METRICS_API_KEY --env staging # staging
wrangler secret put METRICS_API_KEY --env production # production
wrangler secret put ACTIVITY_API_KEY --env production
wrangler secret put CACHE_PURGE_API_TOKEN --env production
# List secrets for an environment
wrangler secret list --env production
Deployment Commands
# Deploy to dev (default)
wrangler deploy
# Deploy to staging
wrangler deploy --env staging
# Deploy to production
wrangler deploy --env production
# Deploy all Workers (in CI/CD pipeline)
for worker in api-gateway bff-dashboard bff-settings bff-analytics version-config asset-server; do
(cd "workers/$worker" && wrangler deploy --env production)
done
Cost Considerations and Limits
Cloudflare Workers Platform Limits
| Resource | Free plan | Paid plan ($5/month) | Enterprise |
|---|---|---|---|
| Workers requests | 100,000/day | 10 million/month included, $0.50/million after | Custom |
| Workers CPU time | 10ms per invocation | Configurable up to 5 minutes via the `cpu_ms` setting (default 30 seconds) | Custom |
| Worker size | 1 MB compressed | 10 MB compressed | Custom |
| Service bindings | Free (no per-request cost) | Free (no per-request cost) | Free |
KV Limits
| Operation | Free plan | Paid plan |
|---|---|---|
| Reads | 100,000/day | $0.50 per million reads |
| Writes | 1,000/day | $5.00 per million writes |
| Deletes | 1,000/day | $5.00 per million deletes |
| Lists | 1,000/day | $5.00 per million lists |
| Storage | 1 GB | $0.50 per GB-month |
| Value size | 25 MiB max | 25 MiB max |
| Key size | 512 bytes max | 512 bytes max |
R2 Limits
| Resource | Free tier | Paid (beyond free) |
|---|---|---|
| Storage | 10 GB/month | $0.015 per GB-month |
| Class A ops (PUT, POST, LIST) | 1 million/month | $4.50 per million |
| Class B ops (GET, HEAD) | 10 million/month | $0.36 per million |
| Egress | Free (unlimited) | Free (unlimited) |
D1 Limits
| Resource | Free plan | Paid plan |
|---|---|---|
| Rows read | 5 million/day | $0.001 per million rows |
| Rows written | 100,000/day | $1.00 per million rows |
| Storage | 5 GB | $0.75 per GB-month |
| Databases | 50,000 per account | 50,000 per account |
| Max DB size | 2 GB (free), 10 GB (paid) | 10 GB |
Durable Objects Limits
| Resource | Price |
|---|---|
| Requests | $0.15 per million requests |
| Duration | $12.50 per million GB-seconds |
| Storage (reads) | $0.20 per million reads |
| Storage (writes) | $1.00 per million writes |
| Storage (deletes) | $1.00 per million deletes |
| Stored data | $0.20 per GB-month |
| WebSocket message size | 1 MiB max per message |
DO SQLite billing: Durable Objects SQLite storage billing has been active since January 2026. SQLite API usage inside Durable Objects now incurs the storage read/write charges listed above.
Cost Estimate for This Platform
For a moderately sized micro-frontend platform (50,000 daily active users, 5 MFEs):
| Service | Estimated monthly usage | Estimated monthly cost |
|---|---|---|
| Workers paid plan | Base plan | $5.00 |
| Workers requests | ~15M requests (API + assets) | ~$2.50 |
| KV reads | ~2M reads (version config) | ~$1.00 |
| KV writes | ~500 writes (version updates) | ~$0.00 |
| R2 storage | ~5 GB (versioned bundles) | ~$0.08 |
| R2 Class B (GET) | ~10M reads (asset serving) | Free tier |
| D1 rows read | ~100K reads (admin queries) | Free tier |
| D1 rows written | ~500 writes (version changes) | Free tier |
| Total | | ~$8.58/month |
Key cost insight: service bindings are free. A call from the API Gateway to a BFF Worker adds no extra cost, which makes the "one BFF per MFE" pattern economically viable.
Backup Strategy
R2 Object Versioning
Enable R2 bucket versioning to protect against accidental overwrites or deletions of MFE bundles:
# Enable versioning on the production assets bucket
wrangler r2 bucket update mfe-assets-production --versioning enabled
With versioning enabled, every PUT or DELETE creates a new version instead of overwriting the object. Previous versions can be listed and restored:
// List object versions
const versions = await env.MFE_ASSETS.list({
prefix: "dashboard/2.3.1/",
include: ["httpMetadata", "customMetadata"],
});
// Get a specific version by ID
const previousVersion = await env.MFE_ASSETS.get("dashboard/2.3.1/mf-manifest.json", {
version: "version-id-here",
});
Recommended versioning policy:
- Keep versions for at least 30 days before expiring older ones via a lifecycle rule.
- Use lifecycle rules to cap storage costs: `wrangler r2 bucket lifecycle set mfe-assets-production --expire-versions-after 30d`.
D1 Backup Strategy
D1 provides several data-protection mechanisms:
- Automated backups via Time Travel (paid plans): D1 keeps a point-in-time recovery window (30 days by default). Restore to any point with:

  ```sh
  wrangler d1 time-travel restore mfe-version-db-production --timestamp="2026-02-20T10:00:00Z"
  ```

- Manual exports for offline backups:

  ```sh
  # Export full database as SQL dump
  wrangler d1 export mfe-version-db-production --output=backup-$(date +%Y%m%d).sql
  # Schedule regular exports in CI/CD (e.g., daily cron)
  ```

- Cross-region redundancy: D1 automatically maintains read replicas. For extra durability, export backups to R2:

  ```sh
  # Export and upload to R2
  wrangler d1 export mfe-version-db-production --output=backup.sql
  wrangler r2 object put mfe-backups/d1/backup-$(date +%Y%m%d).sql --file=backup.sql
  ```
References
- Cloudflare Workers Documentation
- Cloudflare Workers Runtime APIs
- Service Bindings - RPC
- WorkerEntrypoint Reference
- Cloudflare R2 Documentation
- Cloudflare KV Documentation
- Cloudflare D1 Documentation
- Cloudflare Durable Objects Documentation
- Wrangler v4 Configuration (wrangler.toml) — Wrangler v3 reaches EOL in Q1 2026
- Cloudflare Workers Pricing
- jose Library (JWT/JWK)
- Module Federation v2 Documentation