Authentication
Table of Contents
- Overview
- AuthKit Integration in Shell App
- Auth Flow: Step by Step
- API Gateway JWT Validation
- Session Management
- Organization-Based Multi-Tenancy
- Auth Context for MFEs
- Security Considerations
- References
Overview
The platform uses WorkOS AuthKit as its authentication provider. AuthKit provides a hosted, embeddable authentication UI backed by WorkOS's identity infrastructure, eliminating the need to build and maintain login flows, password reset logic, or SSO integrations from scratch.
Why WorkOS
WorkOS was selected over alternatives (Auth0, Clerk, Firebase Auth) for the following reasons:
- Enterprise SSO out of the box: Native support for SAML 2.0 and OIDC federation. When a customer requires SSO with their identity provider (Okta, Azure AD, Google Workspace), it is a configuration change in the WorkOS dashboard rather than a code change.
- Directory Sync (SCIM): Automatic user provisioning and deprovisioning from customer identity providers. When an employee is removed from a customer's Okta directory, their access is revoked automatically.
- Generous free tier: 1 million monthly active users at no cost. This removes authentication cost as a scaling concern during early growth.
- Edge-compatible JWT validation: WorkOS issues standard JWTs with a publicly available JWKS endpoint. Tokens can be validated entirely at the edge using the Web Crypto API, with no round-trip to WorkOS servers required after the initial key fetch.
- Multi-organization support: First-class support for users belonging to multiple organizations, which maps directly to our multi-tenant architecture.
Authentication Model
The authentication flow follows the AuthKit hosted OAuth pattern:
AuthKit Hosted Login --> Authorization Code --> BFF Worker Token Exchange --> JWT --> HttpOnly Cookie Session
The browser never handles raw tokens. All token exchange and storage is managed server-side by BFF (Backend for Frontend) Workers running on Cloudflare. The client receives only an opaque, encrypted session cookie.
AuthKit Integration in Shell App
AuthKitProvider Setup
The shell application is the single point of authentication initialization. A single AuthKitProvider wraps the entire application tree, including all dynamically loaded micro frontends. This ensures that every MFE inherits the same authentication context without needing its own provider.
// apps/shell/src/App.tsx
// package.json: "@workos-inc/authkit-react": "^1.3.0"
import { AuthKitProvider } from '@workos-inc/authkit-react';
import { RouterProvider } from 'react-router-dom';
import { router } from './router';
function App() {
return (
<AuthKitProvider
clientId={import.meta.env.RSBUILD_PUBLIC_WORKOS_CLIENT_ID}
apiHostname="auth.example.com" // Custom domain pointing to WorkOS
redirectUri={import.meta.env.RSBUILD_PUBLIC_AUTH_REDIRECT_URI}
onRedirectCallback={(state) => {
// Navigate to the route the user originally requested. ProtectedRoute
// stashes it in sessionStorage before redirecting to login, so fall
// back to that value when the OAuth state carries no returnTo.
const returnTo = state?.returnTo ?? sessionStorage.getItem('auth:returnTo') ?? '/';
sessionStorage.removeItem('auth:returnTo');
window.history.replaceState({}, '', returnTo);
}}
>
<RouterProvider router={router} />
</AuthKitProvider>
);
}
export default App;
Configuration notes:
| Environment Variable | Example Value | Purpose |
|---|---|---|
| RSBUILD_PUBLIC_WORKOS_CLIENT_ID | client_01H... | WorkOS project client ID |
| RSBUILD_PUBLIC_AUTH_REDIRECT_URI | https://app.example.com/auth/callback | OAuth callback URL registered in WorkOS dashboard |
The apiHostname is set to a custom domain (auth.example.com) that CNAMEs to WorkOS. This keeps the authentication flow on the example.com domain, which is important for cookie sharing and user trust.
useAuth Hook
The shell app accesses authentication state via the useAuth() hook provided by the AuthKit React SDK. This hook exposes the following:
| Property / Method | Type | Description |
|---|---|---|
| user | `User \| null` | The authenticated user object, or `null` if not authenticated |
| isLoading | `boolean` | `true` while the initial auth state is being resolved |
| getAccessToken() | `() => Promise<string>` | Returns a fresh access token (handles refresh transparently) |
| signIn() | `() => void` | Redirects to AuthKit hosted login |
| signOut() | `() => Promise<void>` | Ends the session and redirects to login |
The shell creates a lightweight auth context that it passes down to MFEs:
// apps/shell/src/contexts/AuthContext.tsx
import { createContext, useContext, useMemo } from 'react';
// package.json: "@workos-inc/authkit-react": "^1.3.0"
import { useAuth } from '@workos-inc/authkit-react';
export interface AuthContextValue {
user: {
id: string;
email: string;
firstName: string | null;
lastName: string | null;
profilePictureUrl: string | null;
organizationId: string | null;
} | null;
isAuthenticated: boolean;
isLoading: boolean;
signIn: () => void;
signOut: () => Promise<void>;
}
const AuthContext = createContext<AuthContextValue | null>(null);
export function ShellAuthProvider({ children }: { children: React.ReactNode }) {
const { user, isLoading, signIn, signOut } = useAuth();
const value = useMemo<AuthContextValue>(
() => ({
user: user
? {
id: user.id,
email: user.email,
firstName: user.firstName,
lastName: user.lastName,
profilePictureUrl: user.profilePictureUrl,
organizationId: user.organizationId,
}
: null,
isAuthenticated: !!user,
isLoading,
signIn,
signOut,
}),
[user, isLoading, signIn, signOut]
);
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
}
export function useShellAuth(): AuthContextValue {
const ctx = useContext(AuthContext);
if (!ctx) {
throw new Error('useShellAuth must be used within ShellAuthProvider');
}
return ctx;
}
Protected Routes
The shell wraps MFE routes with an authentication guard that redirects unauthenticated users to the AuthKit login page. Routes that should be publicly accessible (marketing pages, login callback) are excluded from the guard.
// apps/shell/src/components/ProtectedRoute.tsx
import { Navigate, useLocation } from 'react-router-dom';
import { useShellAuth } from '../contexts/AuthContext';
interface ProtectedRouteProps {
children: React.ReactNode;
requiredPermissions?: string[];
}
export function ProtectedRoute({
children,
requiredPermissions = [],
}: ProtectedRouteProps) {
const { isAuthenticated, isLoading, user, signIn } = useShellAuth();
const location = useLocation();
// Render a loading skeleton while auth state is resolving to prevent a flash of redirect
if (isLoading) {
return <LoadingSkeleton />;
}
// Not authenticated: redirect to AuthKit login
if (!isAuthenticated) {
// Store the current path so we can redirect back after login
sessionStorage.setItem('auth:returnTo', location.pathname + location.search);
signIn();
return null;
}
// Check permissions if required
if (requiredPermissions.length > 0) {
const userPermissions = (user as any)?.permissions ?? [];
const hasAll = requiredPermissions.every((p) => userPermissions.includes(p));
if (!hasAll) {
return <Navigate to="/unauthorized" replace />;
}
}
return <>{children}</>;
}
function LoadingSkeleton() {
return (
<div className="flex items-center justify-center h-screen">
<div className="animate-pulse space-y-4 w-full max-w-md">
<div className="h-4 bg-neutral-200 rounded w-3/4" />
<div className="h-4 bg-neutral-200 rounded w-1/2" />
<div className="h-4 bg-neutral-200 rounded w-5/6" />
</div>
</div>
);
}
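The permission gate inside `ProtectedRoute` reduces to a pure predicate, which makes it easy to unit-test outside React. A minimal sketch (the helper name `hasAllPermissions` is illustrative, not an actual export of the shell):

```typescript
// Pure predicate mirroring the ProtectedRoute permission check:
// every required permission must appear in the user's granted set.
function hasAllPermissions(required: string[], granted: string[]): boolean {
  // An empty requirement list always passes, matching the guard's
  // behavior when `requiredPermissions` is omitted.
  return required.every((p) => granted.includes(p));
}
```

Extracting the predicate keeps the component itself free of set logic and lets the same rule be reused by navigation menus that hide links the user cannot access.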
Usage in the router configuration:
// apps/shell/src/router.tsx
import { createBrowserRouter } from 'react-router-dom';
import { ProtectedRoute } from './components/ProtectedRoute';
import { AppLayout } from './layouts/AppLayout';
import { AuthCallback } from './pages/AuthCallback';
import { lazy } from 'react';
const DashboardMFE = lazy(() => import('dashboard/App'));
const SettingsMFE = lazy(() => import('settings/App'));
const BillingMFE = lazy(() => import('billing/App'));
export const router = createBrowserRouter([
// Public route: OAuth callback
{ path: '/auth/callback', element: <AuthCallback /> },
// Protected routes: all MFEs
{
path: '/',
element: (
<ProtectedRoute>
<AppLayout />
</ProtectedRoute>
),
children: [
{ index: true, element: <DashboardMFE /> },
{ path: 'settings/*', element: <SettingsMFE /> },
{
path: 'billing/*',
element: (
<ProtectedRoute requiredPermissions={['billing:read']}>
<BillingMFE />
</ProtectedRoute>
),
},
],
},
]);
Auth Flow: Step by Step
The following diagram shows the complete authentication flow from an unauthenticated user visiting the app to a fully authenticated session:
Browser Shell App BFF Worker WorkOS AuthKit
| | | |
| 1. GET app.example.com | |
|--------------------->| | |
| | | |
| 2. No session cookie detected | |
| Shell calls signIn() | |
| | | |
| 3. Redirect to AuthKit hosted login | |
|---------------------------------------------------------------> |
| | | |
| 4. User authenticates (email/pass, SSO, Google, etc.) |
| | | |
| 5. AuthKit redirects back with authorization code |
|<--------------------------------------------------------------- |
| Location: app.example.com/auth/callback?code=auth_code_xxx |
| | | |
| 6. Shell sends code to BFF Worker | |
|--------------------->|--------------------->| |
| | POST /auth/token | |
| | { code: "xxx" } | |
| | | |
| | 7. BFF exchanges code for tokens |
| | |--------------------->|
| | | POST /sso/token |
| | | { code, client_id, |
| | | client_secret } |
| | |<---------------------|
| | | { access_token, |
| | | refresh_token } |
| | | |
| 8. BFF sets HttpOnly, Secure, SameSite=Strict cookie |
|<-----------------------------------------------------------------|
| Set-Cookie: session=<encrypted>; HttpOnly; Secure; SameSite=Strict
| | | |
| 9. Subsequent requests include cookie automatically |
|--------------------->|--------------------->| |
| Cookie: session=<encrypted> | |
| | | |
| 10. API Gateway validates JWT from cookie on every request |
| | | |
Key security decisions in this flow:
- Code exchange happens server-side (step 7): The authorization code is exchanged for tokens by the BFF Worker, never by client-side JavaScript. The `client_secret` never leaves the Worker.
- Tokens never reach the browser (step 8): The browser receives only an encrypted session cookie. The actual JWT and refresh token are either encrypted into the cookie value or stored in Worker KV with the cookie containing only a session ID.
- Cookie-based transport (step 9): Using cookies rather than Authorization headers means authentication is automatic. Every request to `*.example.com` includes the cookie without any client-side code needing to manage headers.
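When debugging this flow it can be useful to inspect a JWT's claims without verifying it. A hedged sketch (decode only, for logging; the gateway must always use signature verification, and the function name is illustrative):

```typescript
// Decode a JWT payload WITHOUT verifying the signature. Debugging aid
// only; never use this in place of full JWKS-based verification.
function decodeJwtPayload(token: string): Record<string, any> {
  const segment = token.split('.')[1];
  if (!segment) throw new Error('Malformed JWT: missing payload segment');
  // JWT segments are base64url-encoded; map back to standard base64.
  const b64 = segment.replace(/-/g, '+').replace(/_/g, '/');
  const json = Buffer.from(b64, 'base64').toString('utf8');
  return JSON.parse(json);
}
```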
API Gateway JWT Validation
Using jose Library
Cloudflare Workers run on the V8 JavaScript engine without access to Node.js built-in modules such as crypto. The jose library (version ^6.1.3) is specifically designed for this constraint: it uses the Web Crypto API exclusively, making it fully compatible with Cloudflare Workers, Deno, and browser environments.
jose v6 migration notes:
- `createRemoteJWKSet` uses native `fetch` instead of `node:http` for key retrieval, making it fully compatible with edge runtimes without polyfills.
- Key objects are `CryptoKey` instances (Web Crypto API) rather than Node.js `KeyObject`. This is a breaking change from v4: any code that type-checks for `KeyObject` must be updated.
- `importJWK` returns `CryptoKey` directly, eliminating the need for `as KeyLike` casts in most cases.
The API Gateway Worker validates the JWT extracted from the session cookie on every inbound request:
// workers/api-gateway/src/auth/jwt.ts
// package.json: "jose": "^6.1.3"
import { jwtVerify, createRemoteJWKSet, type JWTPayload } from 'jose';
// createRemoteJWKSet with error handling for network failures.
// In jose v6, this uses native fetch internally. Network errors (DNS
// resolution failures, timeouts, connection resets) will surface as
// fetch-level exceptions that must be caught at the call site.
let JWKS: ReturnType<typeof createRemoteJWKSet> | undefined;
function getJWKS(clientId: string): ReturnType<typeof createRemoteJWKSet> {
// Workers have no module-scope access to env bindings, so the client ID
// is passed in by the caller rather than read from a global.
if (!JWKS) {
JWKS = createRemoteJWKSet(
new URL(`https://api.workos.com/sso/jwks/${clientId}`),
{
cacheMaxAge: 600_000, // Cache JWKS for 10 minutes
cooldownDuration: 30_000, // Wait 30s before re-fetching after a failure
}
);
}
return JWKS;
}
export interface WorkOSJWTPayload extends JWTPayload {
sub: string; // User ID
org_id: string; // Organization ID
role: string; // User role within the organization
permissions: string[]; // Granted permissions
}
export async function validateToken(
token: string,
clientId: string
): Promise<WorkOSJWTPayload> {
try {
const { payload } = await jwtVerify(token, getJWKS(clientId), {
issuer: 'https://api.workos.com',
audience: clientId,
});
// Validate required claims are present
if (!payload.sub || !payload.org_id) {
throw new Error('JWT missing required claims: sub, org_id');
}
return payload as WorkOSJWTPayload;
} catch (error) {
// Distinguish network errors from validation errors so callers
// can decide whether to retry or reject immediately.
if (error instanceof TypeError && error.message.includes('fetch')) {
// Network-level failure: DNS resolution, timeout, connection reset.
// The JWKS endpoint is unreachable. Log and rethrow with context.
console.error('JWKS fetch failed (network error):', error.message);
throw new JWKSNetworkError(
`Unable to reach JWKS endpoint: ${error.message}`,
{ cause: error }
);
}
throw error; // Re-throw validation errors (expired, bad signature, etc.)
}
}
/**
* Custom error class for JWKS network failures. Callers can check
* `instanceof JWKSNetworkError` to distinguish transient network issues
* from permanent validation failures.
*/
export class JWKSNetworkError extends Error {
constructor(message: string, options?: ErrorOptions) {
super(message, options);
this.name = 'JWKSNetworkError';
}
}
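Call sites can branch on the error class to pick a retry strategy. A self-contained sketch of that pattern (the class is re-declared here so the example runs standalone, and `isRetryable` is an illustrative helper, not part of the gateway code):

```typescript
// Minimal re-declaration of the error class for a standalone demo.
class JWKSNetworkError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'JWKSNetworkError';
  }
}

// Transient network failures may be retried with backoff; validation
// failures (expired token, bad signature) must be rejected immediately.
function isRetryable(error: unknown): boolean {
  return error instanceof JWKSNetworkError;
}
```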
The middleware that applies this validation to every API request:
// workers/api-gateway/src/middleware/auth.ts
import { validateToken, type WorkOSJWTPayload } from '../auth/jwt';
import { decryptSessionCookie } from '../auth/session';
export interface AuthenticatedRequest extends Request {
auth: WorkOSJWTPayload;
}
export async function authMiddleware(
request: Request,
env: Env
): Promise<AuthenticatedRequest | Response> {
// Extract session cookie
const cookieHeader = request.headers.get('Cookie') ?? '';
const sessionCookie = parseCookie(cookieHeader, 'session');
if (!sessionCookie) {
return new Response(JSON.stringify({ error: 'Authentication required' }), {
status: 401,
headers: { 'Content-Type': 'application/json' },
});
}
try {
// Decrypt the session cookie to extract the JWT
const token = await decryptSessionCookie(sessionCookie, env.SESSION_SECRET);
// Validate the JWT against WorkOS JWKS
const payload = await validateToken(token, env.WORKOS_CLIENT_ID);
// Attach auth context to the request for downstream handlers
const authenticatedRequest = request as AuthenticatedRequest;
authenticatedRequest.auth = payload;
return authenticatedRequest;
} catch (error) {
console.error('Auth validation failed:', error);
return new Response(JSON.stringify({ error: 'Invalid or expired token' }), {
status: 401,
headers: { 'Content-Type': 'application/json' },
});
}
}
function parseCookie(cookieHeader: string, name: string): string | null {
const match = cookieHeader.match(new RegExp(`(?:^|;\\s*)${name}=([^;]*)`));
return match ? decodeURIComponent(match[1]) : null;
}
JWKS Caching in KV
By default, createRemoteJWKSet fetches the JWKS from WorkOS on the first validation call and caches it in memory. However, Cloudflare Workers have no persistent in-memory cache across isolates. Each request may run in a different isolate, causing repeated JWKS fetches and adding 100-300ms of latency per cold validation.
The solution is to cache the JWKS keys in Cloudflare KV with a short TTL:
// workers/api-gateway/src/auth/jwks-cache.ts
// package.json: "jose": "^6.1.3"
import {
importJWK,
jwtVerify,
type JWK,
type JWTPayload,
} from 'jose';
const JWKS_CACHE_KEY = 'workos:jwks';
const JWKS_CACHE_TTL_SECONDS = 300; // 5 minutes
interface CachedJWKS {
keys: JWK[];
cachedAt: number;
}
export async function validateTokenWithCache(
token: string,
env: Env
): Promise<JWTPayload> {
// Step 1: Try to validate with cached JWKS
const cached = await getCachedJWKS(env);
if (cached) {
try {
return await validateWithKeys(token, cached.keys, env);
} catch (error) {
// Cached key might be rotated. Fall through to re-fetch.
console.warn('Cached JWKS validation failed, re-fetching:', error);
}
}
// Step 2: Fetch fresh JWKS from WorkOS
const freshKeys = await fetchAndCacheJWKS(env);
return await validateWithKeys(token, freshKeys, env);
}
async function getCachedJWKS(env: Env): Promise<CachedJWKS | null> {
const raw = await env.AUTH_KV.get(JWKS_CACHE_KEY, 'json');
if (!raw) return null;
const cached = raw as CachedJWKS;
const ageSeconds = (Date.now() - cached.cachedAt) / 1000;
// Return null if cache is expired (even though KV has its own TTL,
// we check here for extra safety)
if (ageSeconds > JWKS_CACHE_TTL_SECONDS) return null;
return cached;
}
async function fetchAndCacheJWKS(env: Env): Promise<JWK[]> {
let response: Response;
try {
response = await fetch(
`https://api.workos.com/sso/jwks/${env.WORKOS_CLIENT_ID}`,
{ signal: AbortSignal.timeout(5_000) } // 5-second timeout
);
} catch (error) {
// Network-level failure: DNS resolution, timeout, connection reset.
// Attempt to return stale cached keys as a fallback.
const staleCache = await env.AUTH_KV.get(JWKS_CACHE_KEY, 'json');
if (staleCache) {
console.warn(
'JWKS fetch failed, falling back to stale cached keys:',
(error as Error).message
);
return (staleCache as CachedJWKS).keys;
}
throw new Error(
`JWKS endpoint unreachable and no cached keys available: ${(error as Error).message}`
);
}
if (!response.ok) {
// HTTP error (4xx/5xx). Fall back to stale cache if available.
const staleCache = await env.AUTH_KV.get(JWKS_CACHE_KEY, 'json');
if (staleCache) {
console.warn(
`JWKS fetch returned ${response.status}, falling back to stale cached keys`
);
return (staleCache as CachedJWKS).keys;
}
throw new Error(`Failed to fetch JWKS: ${response.status}`);
}
const jwks = (await response.json()) as { keys: JWK[] };
// Cache in KV with TTL
const cacheEntry: CachedJWKS = {
keys: jwks.keys,
cachedAt: Date.now(),
};
await env.AUTH_KV.put(JWKS_CACHE_KEY, JSON.stringify(cacheEntry), {
expirationTtl: JWKS_CACHE_TTL_SECONDS + 60, // KV TTL slightly longer than logical TTL
});
return jwks.keys;
}
async function validateWithKeys(
token: string,
keys: JWK[],
env: Env
): Promise<JWTPayload> {
// Extract the key ID from the JWT header to find the matching key.
// JWT segments are base64url-encoded, so map the URL-safe characters
// back to standard base64 before calling atob().
const headerSegment = token.split('.')[0];
const headerB64 = headerSegment.replace(/-/g, '+').replace(/_/g, '/');
const header = JSON.parse(atob(headerB64));
const kid = header.kid;
const matchingKey = keys.find((k) => k.kid === kid);
if (!matchingKey) {
throw new Error(`No matching key found for kid: ${kid}`);
}
// In jose v6, importJWK returns CryptoKey (Web Crypto API) instead of KeyObject
const publicKey: CryptoKey = (await importJWK(matchingKey, 'RS256')) as CryptoKey;
const { payload } = await jwtVerify(token, publicKey, {
issuer: 'https://api.workos.com',
audience: env.WORKOS_CLIENT_ID,
});
return payload;
}
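The key-selection step inside `validateWithKeys` can be isolated as a pure function, which makes the rotation behavior easy to test: a miss throws, and the caller treats that as a signal to re-fetch the JWKS. A sketch (the `PublicJwk` shape and helper name are illustrative):

```typescript
interface PublicJwk {
  kid?: string;
  kty: string;
}

// Select the JWKS entry whose `kid` matches the JWT header's `kid`.
// Throwing on a miss forces the caller to re-fetch the key set, which
// is how key rotation is detected.
function selectKeyByKid(keys: PublicJwk[], kid: string): PublicJwk {
  const match = keys.find((k) => k.kid === kid);
  if (!match) {
    throw new Error(`No matching key found for kid: ${kid}`);
  }
  return match;
}
```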
Performance impact of JWKS caching:
| Scenario | Latency |
|---|---|
| No cache (cold start, first request) | 150-300ms (network fetch to WorkOS) |
| Cached in KV (typical request) | 1-2ms (KV read + crypto verification) |
| Cache miss due to key rotation | 150-300ms (one-time re-fetch, then cached) |
After the first cache population, JWT validation adds less than 2ms to request handling. Key rotation is handled gracefully: if the cached key does not match the token's kid, a fresh JWKS fetch is triggered and the cache is updated.
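The double-TTL check in `getCachedJWKS` (logical TTL on top of KV's own expiry) boils down to a small predicate; a sketch, with the helper name chosen for illustration:

```typescript
// True when a cache entry written at `cachedAtMs` is still within the
// logical TTL at time `nowMs`. Mirrors getCachedJWKS, which re-checks
// age even though KV enforces its own (slightly longer) TTL.
function isCacheFresh(
  cachedAtMs: number,
  nowMs: number,
  ttlSeconds: number
): boolean {
  return (nowMs - cachedAtMs) / 1000 <= ttlSeconds;
}
```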
Session Management
BFF-Managed Sessions
A core security principle of this architecture is that the browser never directly handles authentication tokens. No access tokens, refresh tokens, or JWTs are stored in localStorage, sessionStorage, or accessible JavaScript variables. All token lifecycle management happens in the BFF (Backend for Frontend) Worker.
// workers/bff/src/auth/session.ts
const SESSION_COOKIE_NAME = 'session';
// IMPORTANT: Session cookie Max-Age must be aligned with the refresh token
// lifetime, NOT the access token lifetime. Access tokens are short-lived
// (15-30 min) and are refreshed transparently by the BFF. The cookie must
// survive long enough for the refresh token to be used. WorkOS refresh
// tokens default to 30 days, so we set the cookie to 30 days to match.
// If the cookie expires before the refresh token, the user is logged out
// prematurely. If the cookie outlives the refresh token, the BFF will
// detect the expired refresh token and return 401, triggering re-login.
const SESSION_MAX_AGE = 60 * 60 * 24 * 30; // 30 days (aligned with refresh token lifetime)
interface SessionData {
accessToken: string;
refreshToken: string;
expiresAt: number; // Unix timestamp (seconds)
userId: string;
organizationId: string;
}
/**
* Encrypts session data and returns a Set-Cookie header value.
* Uses AES-GCM with a 256-bit key derived from the SESSION_SECRET.
*/
export async function createSessionCookie(
session: SessionData,
secret: string
): Promise<string> {
const plaintext = JSON.stringify(session);
const iv = crypto.getRandomValues(new Uint8Array(12));
const key = await deriveKey(secret);
const ciphertext = await crypto.subtle.encrypt(
{ name: 'AES-GCM', iv },
key,
new TextEncoder().encode(plaintext)
);
// Combine IV + ciphertext and base64url-encode
const combined = new Uint8Array(iv.length + ciphertext.byteLength);
combined.set(iv);
combined.set(new Uint8Array(ciphertext), iv.length);
const encoded = btoa(String.fromCharCode(...combined))
.replace(/\+/g, '-')
.replace(/\//g, '_')
.replace(/=+$/, '');
return [
`${SESSION_COOKIE_NAME}=${encoded}`,
`HttpOnly`,
`Secure`,
`SameSite=Strict`,
`Path=/`,
`Domain=.example.com`,
`Max-Age=${SESSION_MAX_AGE}`,
].join('; ');
}
/**
* Decrypts a session cookie value and returns the session data.
*/
export async function decryptSession(
cookieValue: string,
secret: string
): Promise<SessionData> {
// Base64url decode: map the URL-safe characters back and restore the
// '=' padding that was stripped during encoding.
const b64 = cookieValue.replace(/-/g, '+').replace(/_/g, '/');
const padded = b64 + '='.repeat((4 - (b64.length % 4)) % 4);
const binary = atob(padded);
const combined = new Uint8Array(binary.length);
for (let i = 0; i < binary.length; i++) {
combined[i] = binary.charCodeAt(i);
}
const iv = combined.slice(0, 12);
const ciphertext = combined.slice(12);
const key = await deriveKey(secret);
const plaintext = await crypto.subtle.decrypt(
{ name: 'AES-GCM', iv },
key,
ciphertext
);
return JSON.parse(new TextDecoder().decode(plaintext));
}
async function deriveKey(secret: string): Promise<CryptoKey> {
const keyMaterial = await crypto.subtle.importKey(
'raw',
new TextEncoder().encode(secret),
'HKDF',
false,
['deriveKey']
);
return crypto.subtle.deriveKey(
{
name: 'HKDF',
hash: 'SHA-256',
salt: new TextEncoder().encode('authkit-session'),
info: new TextEncoder().encode('session-encryption'),
},
keyMaterial,
{ name: 'AES-GCM', length: 256 },
false,
['encrypt', 'decrypt']
);
}
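The cookie value uses base64url (RFC 4648 §5) so the encrypted bytes survive cookie syntax, which forbids characters like `;` and `=` in values. The encode/decode pair used above can be sketched standalone:

```typescript
// base64url-encode raw bytes: standard base64 with '+' -> '-',
// '/' -> '_', and padding stripped, matching createSessionCookie.
function toBase64Url(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

// Inverse of toBase64Url, matching decryptSession's decoding step.
function fromBase64Url(encoded: string): Uint8Array {
  const b64 = encoded.replace(/-/g, '+').replace(/_/g, '/');
  const binary = atob(b64);
  const out = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) out[i] = binary.charCodeAt(i);
  return out;
}
```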
The token exchange endpoint on the BFF Worker:
// workers/bff/src/routes/auth.ts
import { createSessionCookie } from '../auth/session';
export async function handleTokenExchange(
request: Request,
env: Env
): Promise<Response> {
const { code } = (await request.json()) as { code: string };
if (!code) {
return new Response(JSON.stringify({ error: 'Missing authorization code' }), {
status: 400,
headers: { 'Content-Type': 'application/json' },
});
}
// Exchange authorization code for tokens (server-side only)
const tokenResponse = await fetch('https://api.workos.com/sso/token', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
client_id: env.WORKOS_CLIENT_ID,
client_secret: env.WORKOS_CLIENT_SECRET, // Never exposed to browser
grant_type: 'authorization_code',
code,
}),
});
if (!tokenResponse.ok) {
const error = await tokenResponse.text();
console.error('Token exchange failed:', error);
return new Response(JSON.stringify({ error: 'Authentication failed' }), {
status: 401,
headers: { 'Content-Type': 'application/json' },
});
}
const tokens = (await tokenResponse.json()) as {
access_token: string;
refresh_token: string;
expires_in: number;
user: { id: string; organization_id: string };
};
// Create encrypted session cookie
const cookie = await createSessionCookie(
{
accessToken: tokens.access_token,
refreshToken: tokens.refresh_token,
expiresAt: Math.floor(Date.now() / 1000) + tokens.expires_in,
userId: tokens.user.id,
organizationId: tokens.user.organization_id,
},
env.SESSION_SECRET
);
return new Response(JSON.stringify({ success: true }), {
status: 200,
headers: {
'Content-Type': 'application/json',
'Set-Cookie': cookie,
},
});
}
Token Refresh Flow
Access tokens issued by WorkOS have a short lifetime (typically 15-30 minutes). The BFF Worker transparently handles token refresh so that the client never encounters an expired token.
// workers/bff/src/middleware/refresh.ts
import { decryptSession, createSessionCookie, type SessionData } from '../auth/session';
const REFRESH_BUFFER_SECONDS = 60; // Refresh 60 seconds before actual expiry
export async function refreshMiddleware(
request: Request,
env: Env
): Promise<{ session: SessionData; newCookie?: string }> {
const cookieHeader = request.headers.get('Cookie') ?? '';
const sessionCookie = parseCookie(cookieHeader, 'session');
if (!sessionCookie) {
throw new AuthError('No session cookie', 401);
}
let session: SessionData;
try {
session = await decryptSession(sessionCookie, env.SESSION_SECRET);
} catch {
throw new AuthError('Invalid session', 401);
}
const now = Math.floor(Date.now() / 1000);
const isExpiringSoon = session.expiresAt - now < REFRESH_BUFFER_SECONDS;
if (!isExpiringSoon) {
// Token is still valid, proceed
return { session };
}
// Token is expired or expiring soon, attempt refresh
try {
const refreshResponse = await fetch('https://api.workos.com/user_management/authenticate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
client_id: env.WORKOS_CLIENT_ID,
client_secret: env.WORKOS_CLIENT_SECRET,
grant_type: 'refresh_token',
refresh_token: session.refreshToken,
}),
});
if (!refreshResponse.ok) {
// Refresh token is also expired or revoked
throw new AuthError('Session expired, please log in again', 401);
}
const tokens = (await refreshResponse.json()) as {
access_token: string;
refresh_token: string;
expires_in: number;
};
const newSession: SessionData = {
...session,
accessToken: tokens.access_token,
refreshToken: tokens.refresh_token,
expiresAt: now + tokens.expires_in,
};
const newCookie = await createSessionCookie(newSession, env.SESSION_SECRET);
return { session: newSession, newCookie };
} catch (error) {
if (error instanceof AuthError) throw error;
console.error('Token refresh failed:', error);
throw new AuthError('Session expired, please log in again', 401);
}
}
export class AuthError extends Error {
constructor(
message: string,
public status: number
) {
super(message);
}
}
function parseCookie(cookieHeader: string, name: string): string | null {
const match = cookieHeader.match(new RegExp(`(?:^|;\\s*)${name}=([^;]*)`));
return match ? decodeURIComponent(match[1]) : null;
}
Applying the refresh middleware in the BFF Worker's fetch handler:
// workers/bff/src/index.ts
import { refreshMiddleware, AuthError } from './middleware/refresh';
import { handleTokenExchange } from './routes/auth';
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
// Public endpoints (no auth required)
if (url.pathname === '/auth/token') {
return handleTokenExchange(request, env);
}
// All other endpoints require a valid session
try {
const { session, newCookie } = await refreshMiddleware(request, env);
// Route to the appropriate handler with auth context
let response = await routeRequest(request, env, session);
// If the token was refreshed, set the updated cookie on the response
if (newCookie) {
response = new Response(response.body, response);
response.headers.append('Set-Cookie', newCookie);
}
return response;
} catch (error) {
if (error instanceof AuthError) {
return new Response(JSON.stringify({ error: error.message }), {
status: error.status,
headers: { 'Content-Type': 'application/json' },
});
}
throw error;
}
},
};
Shared Domain for Cookies
For cookie-based authentication to work seamlessly across the shell app, MFEs, and backend Workers, all services must share a common parent domain. The cookie is set with Domain=.example.com, which makes it available to all subdomains.
example.com (root domain)
|
+-- app.example.com Shell app (served by Cloudflare Pages / Worker)
| - Hosts the shell SPA
| - Loads MFE bundles from cdn.example.com
|
+-- cdn.example.com CDN for static assets
| - MFE JavaScript bundles
| - Shared design system assets
| - Served by Cloudflare R2 + CDN
|
+-- api.example.com API Gateway Worker
| - Validates JWT from session cookie
| - Routes to domain-specific service Workers
|
+-- auth.example.com Custom domain for WorkOS AuthKit
| - CNAME to WorkOS
| - Hosts the login/signup UI
|
+-- bff.example.com BFF Worker
- Token exchange
- Session management
- Token refresh
Cookie attributes and their purpose:
| Attribute | Value | Reason |
|---|---|---|
| HttpOnly | (set) | Prevents JavaScript access to the cookie, mitigating XSS token theft |
| Secure | (set) | Cookie only sent over HTTPS |
| SameSite | Strict | Cookie not sent on cross-site requests, mitigating CSRF |
| Domain | .example.com | Shared across all subdomains |
| Path | / | Available on all paths |
| Max-Age | 2592000 (30 days) | Aligned with refresh token lifetime; access tokens are refreshed transparently by the BFF |
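The attribute table can be expressed as a small serializer that mirrors the Set-Cookie string built in `createSessionCookie`; a sketch (function name and parameters are illustrative):

```typescript
// Serialize a session cookie with the attributes from the table above.
// The value is assumed to be already base64url-encoded ciphertext.
function serializeSessionCookie(
  value: string,
  maxAgeSeconds: number,
  domain: string
): string {
  return [
    `session=${value}`,
    'HttpOnly',
    'Secure',
    'SameSite=Strict',
    'Path=/',
    `Domain=${domain}`,
    `Max-Age=${maxAgeSeconds}`,
  ].join('; ');
}
```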
Organization-Based Multi-Tenancy
WorkOS Organizations
WorkOS provides first-class support for multi-tenancy through its Organizations model. Each customer (tenant) of the platform maps to a WorkOS Organization. Key characteristics:
- Users can belong to multiple organizations: A consultant might have access to three different customer accounts. They authenticate once and can switch between organizations without re-authenticating.
- JWT includes an `org_id` claim: Every access token contains the active organization, enabling data isolation at the API layer.
- Organization-scoped roles and permissions: A user might be an `admin` in one organization and a `member` in another.
The API Gateway uses the org_id from the JWT to enforce data isolation:
// workers/api-gateway/src/middleware/tenant.ts
import type { AuthenticatedRequest } from './auth';
/**
* Extracts the organization context from the authenticated request
* and injects it into downstream service requests.
*/
export function tenantMiddleware(request: AuthenticatedRequest): Headers {
const headers = new Headers(request.headers);
// Inject tenant context for downstream services
headers.set('X-Tenant-ID', request.auth.org_id);
headers.set('X-User-ID', request.auth.sub);
headers.set('X-User-Role', request.auth.role);
return headers;
}
Downstream service Workers use the X-Tenant-ID header to scope all database queries:
// workers/projects-service/src/db.ts
export async function getProjects(tenantId: string, db: D1Database) {
// Every query is scoped to the organization
const result = await db
.prepare('SELECT * FROM projects WHERE org_id = ? ORDER BY updated_at DESC')
.bind(tenantId)
.all();
return result.results;
}
Organization Switching
The shell app provides an organization switcher in the global navigation. When a user switches organizations, the session is updated and all MFEs are notified.
// apps/shell/src/components/OrgSwitcher.tsx
import { useState, useEffect } from 'react';
import { useShellAuth } from '../contexts/AuthContext';
interface Organization {
id: string;
name: string;
slug: string;
logoUrl: string | null;
}
export function OrgSwitcher() {
const { user } = useShellAuth();
const [organizations, setOrganizations] = useState<Organization[]>([]);
const [activeOrgId, setActiveOrgId] = useState<string | null>(
user?.organizationId ?? null
);
useEffect(() => {
// Fetch user's organizations from BFF
fetch('/api/user/organizations', { credentials: 'include' })
.then((res) => res.json())
.then((data) => setOrganizations(data.organizations))
.catch(() => setOrganizations([])); // Leave the list empty on failure
}, []);
async function switchOrganization(orgId: string) {
// Tell the BFF to update the session with the new org context
const response = await fetch('/api/auth/switch-org', {
method: 'POST',
credentials: 'include',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ organizationId: orgId }),
});
if (!response.ok) {
console.error('Failed to switch organization');
return;
}
setActiveOrgId(orgId);
// Notify all MFEs about the organization change via CustomEvent
window.dispatchEvent(
new CustomEvent('shell:org-changed', {
detail: {
organizationId: orgId,
organization: organizations.find((o) => o.id === orgId),
},
})
);
// Refresh the current page to reload data for the new org
window.location.reload();
}
const activeOrg = organizations.find((o) => o.id === activeOrgId);
return (
<div className="relative">
<button
className="flex items-center gap-2 px-3 py-2 rounded-md hover:bg-neutral-100"
aria-label="Switch organization"
>
{activeOrg?.logoUrl && (
<img
src={activeOrg.logoUrl}
alt=""
className="w-6 h-6 rounded-full"
/>
)}
<span className="text-sm font-medium">{activeOrg?.name ?? 'Select org'}</span>
</button>
{/* Dropdown with org list */}
<ul className="absolute top-full left-0 mt-1 w-64 bg-white border rounded-lg shadow-lg">
{organizations.map((org) => (
<li key={org.id}>
<button
className={`w-full text-left px-4 py-2 hover:bg-neutral-50 ${
org.id === activeOrgId ? 'bg-neutral-100 font-semibold' : ''
}`}
onClick={() => switchOrganization(org.id)}
>
{org.name}
</button>
</li>
))}
</ul>
</div>
);
}
The BFF Worker endpoint that handles organization switching:
// workers/bff/src/routes/auth.ts
export async function handleOrgSwitch(
request: Request,
env: Env,
session: SessionData
): Promise<Response> {
const { organizationId } = (await request.json()) as { organizationId: string };
// Request a new access token scoped to the target organization
const response = await fetch('https://api.workos.com/user_management/authenticate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
client_id: env.WORKOS_CLIENT_ID,
client_secret: env.WORKOS_CLIENT_SECRET,
grant_type: 'refresh_token',
refresh_token: session.refreshToken,
organization_id: organizationId,
}),
});
if (!response.ok) {
return new Response(JSON.stringify({ error: 'Failed to switch organization' }), {
status: 403,
headers: { 'Content-Type': 'application/json' },
});
}
const tokens = (await response.json()) as {
access_token: string;
refresh_token: string;
expires_in: number;
};
const newSession: SessionData = {
accessToken: tokens.access_token,
refreshToken: tokens.refresh_token,
expiresAt: Math.floor(Date.now() / 1000) + tokens.expires_in,
userId: session.userId,
organizationId,
};
const cookie = await createSessionCookie(newSession, env.SESSION_SECRET);
return new Response(JSON.stringify({ success: true }), {
status: 200,
headers: {
'Content-Type': 'application/json',
'Set-Cookie': cookie,
},
});
}
Auth Context for MFEs
Passing Auth to Remote Components
There are three patterns for making authentication context available to micro frontends. Each has tradeoffs.
Option 1: React Context (Shell provides auth context via a shared provider)
The shell wraps each MFE in an auth context provider. MFEs import the context type and consume it via a hook.
// packages/shared-types/src/auth.ts
export interface MFEAuthContext {
user: {
id: string;
email: string;
firstName: string | null;
lastName: string | null;
organizationId: string | null;
} | null;
permissions: string[];
isAuthenticated: boolean;
}
// apps/shell/src/components/MFEWrapper.tsx
import { createContext, useContext } from 'react';
import { useShellAuth } from '../contexts/AuthContext';
import type { MFEAuthContext } from '@platform/shared-types';
const MFEAuthContext = createContext<MFEAuthContext | null>(null);
export function MFEWrapper({ children }: { children: React.ReactNode }) {
const { user, isAuthenticated } = useShellAuth();
const permissions = usePermissions(); // Fetched from BFF
return (
<MFEAuthContext.Provider value={{ user, permissions, isAuthenticated }}>
{children}
</MFEAuthContext.Provider>
);
}
// Used inside any MFE:
export function useMFEAuth(): MFEAuthContext {
const ctx = useContext(MFEAuthContext);
if (!ctx) throw new Error('useMFEAuth must be used within MFEWrapper');
return ctx;
}
Tradeoff: Requires shared context type agreement between shell and MFEs. Tightly couples MFEs to the shell's auth implementation.
Option 2: Props passed to MFE root component
The shell passes auth data as props when rendering the MFE's root component.
// apps/shell/src/components/MFELoader.tsx
import { Suspense, lazy } from 'react';
import { useShellAuth } from '../contexts/AuthContext';
const DashboardApp = lazy(() => import('dashboard/App'));
export function DashboardMFE() {
const { user, isAuthenticated } = useShellAuth();
const permissions = usePermissions();
return (
<Suspense fallback={<LoadingSkeleton />}>
<DashboardApp
user={user}
permissions={permissions}
isAuthenticated={isAuthenticated}
/>
</Suspense>
);
}
// Inside the dashboard MFE:
// apps/dashboard/src/App.tsx
interface DashboardAppProps {
user: { id: string; email: string } | null;
permissions: string[];
isAuthenticated: boolean;
}
export default function DashboardApp({ user, permissions }: DashboardAppProps) {
// MFE has auth data via props, no need for its own auth provider
return <DashboardLayout user={user} permissions={permissions} />;
}
Tradeoff: Simple and explicit. Each MFE declares what auth data it needs. However, every API call from the MFE still needs to pass tokens somehow.
Option 3: BFF handles auth implicitly via cookies (Recommended)
MFEs make API calls directly to the BFF Worker. The session cookie is included automatically by the browser (because all services share .example.com). MFEs do not need to know about tokens at all.
// Inside any MFE — no auth code needed
// apps/dashboard/src/api/projects.ts
export async function fetchProjects(): Promise<Project[]> {
const response = await fetch('https://api.example.com/v1/projects', {
credentials: 'include', // Sends the session cookie automatically
});
if (response.status === 401) {
// Session expired — dispatch event for shell to handle
window.dispatchEvent(new CustomEvent('shell:auth-expired'));
throw new Error('Session expired');
}
if (!response.ok) {
throw new Error(`API error: ${response.status}`);
}
return response.json();
}
The shell listens for the shell:auth-expired event and redirects to login:
// apps/shell/src/hooks/useAuthExpiredListener.ts
import { useEffect } from 'react';
import { useShellAuth } from '../contexts/AuthContext';
export function useAuthExpiredListener() {
const { signIn } = useShellAuth();
useEffect(() => {
function handleExpired() {
// Store current URL for post-login redirect
sessionStorage.setItem(
'auth:returnTo',
window.location.pathname + window.location.search
);
signIn();
}
window.addEventListener('shell:auth-expired', handleExpired);
return () => window.removeEventListener('shell:auth-expired', handleExpired);
}, [signIn]);
}
Recommended approach: Option 3 is the recommended pattern. MFEs are entirely auth-unaware for API calls. The session cookie flows automatically with every request to *.example.com. The shell still passes basic user info (name, email, avatar) and permissions via props (Option 2) for UI rendering, but MFEs never handle tokens.
MFE-Specific Permissions
WorkOS supports role-based access control (RBAC) with custom roles and permissions. The shell fetches the user's permissions for their current organization and passes them to MFEs.
// packages/shared-utils/src/permissions.ts
/**
* Permission check helper used by MFEs to conditionally render UI elements.
*/
export function hasPermission(
userPermissions: string[],
required: string | string[]
): boolean {
const requiredList = Array.isArray(required) ? required : [required];
return requiredList.every((p) => userPermissions.includes(p));
}
export function hasAnyPermission(
userPermissions: string[],
required: string[]
): boolean {
return required.some((p) => userPermissions.includes(p));
}
A React component for permission-based rendering:
// packages/shared-ui/src/components/Authorize.tsx
import { type ReactNode } from 'react';
import { hasPermission, hasAnyPermission } from '@platform/shared-utils';
interface AuthorizeProps {
/** User's current permissions */
permissions: string[];
/** Required permission(s). All must be present unless `mode` is 'any'. */
required: string | string[];
/** If 'any', at least one permission must match. Default: 'all'. */
mode?: 'all' | 'any';
/** Content to render if authorized */
children: ReactNode;
/** Optional fallback if unauthorized */
fallback?: ReactNode;
}
export function Authorize({
permissions,
required,
mode = 'all',
children,
fallback = null,
}: AuthorizeProps) {
const requiredList = Array.isArray(required) ? required : [required];
const authorized =
mode === 'any'
? hasAnyPermission(permissions, requiredList)
: hasPermission(permissions, requiredList);
return authorized ? <>{children}</> : <>{fallback}</>;
}
Usage inside an MFE:
// apps/settings/src/pages/TeamSettings.tsx
import { Authorize } from '@platform/shared-ui';
interface TeamSettingsProps {
permissions: string[];
}
export function TeamSettings({ permissions }: TeamSettingsProps) {
return (
<div>
<h1>Team Settings</h1>
{/* Visible to all authenticated users */}
<TeamMemberList />
{/* Only visible to users with invite permission */}
<Authorize permissions={permissions} required="team:invite">
<InviteMemberButton />
</Authorize>
{/* Only visible to admins */}
<Authorize
permissions={permissions}
required={['team:manage', 'team:delete']}
mode="all"
fallback={<p className="text-neutral-500">Contact an admin to manage roles.</p>}
>
<RoleManagementPanel />
</Authorize>
{/* Visible to users with either billing or admin permission */}
<Authorize
permissions={permissions}
required={['billing:read', 'org:admin']}
mode="any"
>
<BillingOverviewCard />
</Authorize>
</div>
);
}
Platform-defined permissions (registered in WorkOS dashboard):
| Permission | Description |
|---|---|
| `org:admin` | Full organization admin access |
| `team:read` | View team members |
| `team:invite` | Invite new team members |
| `team:manage` | Manage roles and remove members |
| `team:delete` | Delete team members |
| `billing:read` | View billing information |
| `billing:manage` | Update payment methods and plans |
| `projects:read` | View projects |
| `projects:write` | Create and edit projects |
| `projects:delete` | Delete projects |
| `settings:read` | View organization settings |
| `settings:write` | Modify organization settings |
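The flat permission strings above are typically bundled into roles in the WorkOS dashboard. The bundles below are a hypothetical sketch for illustration, not the registered configuration:

```typescript
// Hypothetical role bundles built from the platform permissions above.
// Real bundles are configured in the WorkOS dashboard; these defaults
// are illustrative only.
const ROLE_PERMISSIONS: Record<string, string[]> = {
  admin: [
    'org:admin', 'team:read', 'team:invite', 'team:manage', 'team:delete',
    'billing:read', 'billing:manage', 'projects:read', 'projects:write',
    'projects:delete', 'settings:read', 'settings:write',
  ],
  member: ['team:read', 'projects:read', 'projects:write', 'settings:read'],
  billing_viewer: ['team:read', 'billing:read'],
};

// Resolve a role name to its permission list (empty for unknown roles)
function permissionsForRole(role: string): string[] {
  return ROLE_PERMISSIONS[role] ?? [];
}
```

The resulting list is what the shell would pass to MFEs as the `permissions` prop consumed by `hasPermission` and `<Authorize>`.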
Security Considerations
CSRF Protection
SameSite=Strict cookies provide strong CSRF protection by default: the browser will not send the cookie on any cross-site request (including top-level navigations from external links). For additional defense-in-depth, the API Gateway requires a custom header on all state-changing requests:
// workers/api-gateway/src/middleware/csrf.ts
export function csrfMiddleware(request: Request): Response | null {
// Only apply to state-changing methods
const method = request.method.toUpperCase();
if (['GET', 'HEAD', 'OPTIONS'].includes(method)) {
return null; // Pass through
}
// Require a custom header that cannot be set by cross-origin forms
const csrfHeader = request.headers.get('X-Requested-With');
if (csrfHeader !== 'XMLHttpRequest') {
return new Response(JSON.stringify({ error: 'CSRF validation failed' }), {
status: 403,
headers: { 'Content-Type': 'application/json' },
});
}
// Verify Origin header matches allowed origins
const origin = request.headers.get('Origin');
const allowedOrigins = ['https://app.example.com', 'https://bff.example.com'];
if (origin && !allowedOrigins.includes(origin)) {
return new Response(JSON.stringify({ error: 'Origin not allowed' }), {
status: 403,
headers: { 'Content-Type': 'application/json' },
});
}
return null; // Pass through
}
XSS Mitigation
- HttpOnly cookies: Tokens are stored in cookies with the `HttpOnly` flag, making them inaccessible to JavaScript. Even if an XSS vulnerability exists, the attacker cannot exfiltrate the session token.
- Content Security Policy: The shell app serves a strict CSP header that prevents inline scripts and restricts script sources.
// workers/shell/src/middleware/csp.ts
export function cspHeaders(): Record<string, string> {
return {
'Content-Security-Policy': [
"default-src 'self'",
"script-src 'self' https://cdn.example.com",
"style-src 'self' 'unsafe-inline' https://cdn.example.com", // Inline styles for CSS-in-JS
"img-src 'self' https://cdn.example.com https://*.workos.com data:",
"connect-src 'self' https://api.example.com https://bff.example.com https://*.workos.com",
"frame-ancestors 'none'",
"base-uri 'self'",
"form-action 'self'",
].join('; '),
'X-Content-Type-Options': 'nosniff',
'X-Frame-Options': 'DENY',
'Referrer-Policy': 'strict-origin-when-cross-origin',
};
}
Token Storage Policy
Tokens are never stored in:
- `localStorage` (persists across sessions, accessible to any JS on the domain)
- `sessionStorage` (accessible to any JS on the domain)
- JavaScript variables in client-side code (accessible via XSS)
- URL parameters or fragments (logged in server logs, browser history, referrer headers)

Tokens are only stored in:
- Encrypted `HttpOnly` cookies (inaccessible to JavaScript)
- Worker KV (server-side session store, if cookie size exceeds limits)
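Putting the policy together, the session cookie's attributes might look like the following sketch. The cookie name `__session` and the 30-day `Max-Age` are assumptions consistent with the rotation timeline later in this section; `Domain=.example.com` matches the shared-domain setup described earlier:

```typescript
// Sketch of the cookie attributes implied by this storage policy. The
// name "__session" is a placeholder; the encrypted value would come from
// encryptSession in the BFF Worker.
function buildSessionCookie(encryptedValue: string): string {
  return [
    `__session=${encryptedValue}`,
    'HttpOnly',             // Inaccessible to JavaScript (XSS mitigation)
    'Secure',               // Only sent over HTTPS
    'SameSite=Strict',      // Not sent on cross-site requests (CSRF mitigation)
    'Domain=.example.com',  // Shared across shell, BFF, and API subdomains
    'Path=/',
    `Max-Age=${60 * 60 * 24 * 30}`, // 30 days
  ].join('; ');
}
```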
CORS Configuration
The API Gateway enforces a strict CORS policy:
// workers/api-gateway/src/middleware/cors.ts
const ALLOWED_ORIGINS = new Set([
'https://app.example.com',
'https://bff.example.com',
]);
export function corsHeaders(request: Request): Record<string, string> {
const origin = request.headers.get('Origin') ?? '';
if (!ALLOWED_ORIGINS.has(origin)) {
return {}; // No CORS headers = browser blocks the response
}
return {
'Access-Control-Allow-Origin': origin,
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, PATCH, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, X-Requested-With',
'Access-Control-Allow-Credentials': 'true', // Required for cookie-based auth
'Access-Control-Max-Age': '86400', // Cache preflight for 24 hours
};
}
Rate Limiting
Authentication endpoints are rate-limited to prevent brute-force attacks:
// workers/bff/src/middleware/rate-limit.ts
interface RateLimitEntry {
count: number;
resetAt: number;
}
const AUTH_RATE_LIMITS = {
'/auth/token': { maxRequests: 10, windowSeconds: 60 },
'/auth/switch-org': { maxRequests: 20, windowSeconds: 60 },
'/auth/refresh': { maxRequests: 30, windowSeconds: 60 },
};
export async function rateLimitMiddleware(
request: Request,
env: Env
): Promise<Response | null> {
const url = new URL(request.url);
const limit = AUTH_RATE_LIMITS[url.pathname as keyof typeof AUTH_RATE_LIMITS];
if (!limit) return null; // No rate limit for this endpoint
const clientIp = request.headers.get('CF-Connecting-IP') ?? 'unknown';
const key = `rate-limit:${url.pathname}:${clientIp}`;
const entry = await env.AUTH_KV.get<RateLimitEntry>(key, 'json');
const now = Math.floor(Date.now() / 1000);
if (entry && entry.resetAt > now && entry.count >= limit.maxRequests) {
return new Response(JSON.stringify({ error: 'Too many requests' }), {
status: 429,
headers: {
'Content-Type': 'application/json',
'Retry-After': String(entry.resetAt - now),
},
});
}
// Update counter. Note: KV read-modify-write is not atomic, so counts
// are approximate under concurrency; acceptable for brute-force limiting.
const newEntry: RateLimitEntry = {
count: entry && entry.resetAt > now ? entry.count + 1 : 1,
resetAt: entry && entry.resetAt > now ? entry.resetAt : now + limit.windowSeconds,
};
await env.AUTH_KV.put(key, JSON.stringify(newEntry), {
expirationTtl: limit.windowSeconds + 10,
});
return null; // Allow the request
}
Audit Logging
All authentication events are logged to a D1 database for compliance and debugging:
// workers/bff/src/audit/log.ts
export interface AuthAuditEvent {
eventType:
| 'login_success'
| 'login_failure'
| 'logout'
| 'token_refresh'
| 'org_switch'
| 'session_expired';
userId: string | null;
organizationId: string | null;
ipAddress: string;
userAgent: string;
metadata: Record<string, string>;
timestamp: string;
}
export async function logAuthEvent(
event: AuthAuditEvent,
db: D1Database
): Promise<void> {
await db
.prepare(
`INSERT INTO auth_audit_log (event_type, user_id, organization_id, ip_address, user_agent, metadata, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?)`
)
.bind(
event.eventType,
event.userId,
event.organizationId,
event.ipAddress,
event.userAgent,
JSON.stringify(event.metadata),
event.timestamp
)
.run();
}
The audit log table schema:
-- migrations/0003_auth_audit_log.sql
CREATE TABLE auth_audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
event_type TEXT NOT NULL,
user_id TEXT,
organization_id TEXT,
ip_address TEXT NOT NULL,
user_agent TEXT,
metadata TEXT DEFAULT '{}',
timestamp TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE INDEX idx_audit_user_id ON auth_audit_log(user_id);
CREATE INDEX idx_audit_org_id ON auth_audit_log(organization_id);
CREATE INDEX idx_audit_event_type ON auth_audit_log(event_type);
CREATE INDEX idx_audit_timestamp ON auth_audit_log(timestamp);
JWKS Key Rotation Strategy
WorkOS may rotate its signing keys at any time. When this happens, JWTs signed with the new key will fail validation against cached copies of the old JWKS. The system must handle key rotation gracefully without causing authentication outages.
Key rotation detection:
The JWT header contains a kid (Key ID) field that identifies which key was used to sign the token. When a token arrives with a kid that does not match any key in the cached JWKS, this signals a potential key rotation event.
Strategy: Try-cached, then re-fetch, then fail
1. Receive JWT with kid="key_NEW"
2. Look up kid="key_NEW" in cached JWKS
3. If found → validate with cached key → done
4. If NOT found → fetch fresh JWKS from WorkOS endpoint
5. Look up kid="key_NEW" in fresh JWKS
6. If found → validate with fresh key → update cache → done
7. If NOT found in fresh JWKS → reject token (genuinely invalid)
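The lookup steps above can be sketched as a small helper. The JWKS is simplified here to a `kid` to public-key map, and `fetchJwks` stands in for the real HTTPS fetch of the WorkOS JWKS endpoint; the names are illustrative:

```typescript
// Sketch of the "try cached, then re-fetch, then fail" lookup.
type JwksMap = Map<string, string>; // kid -> public key (simplified)

async function resolveSigningKey(
  kid: string,
  cache: { keys: JwksMap },
  fetchJwks: () => Promise<JwksMap>
): Promise<string> {
  // Steps 2-3: fast path, kid found in the cached key set
  const cached = cache.keys.get(kid);
  if (cached !== undefined) return cached;

  // Step 4: cache miss signals a possible rotation; fetch a fresh JWKS
  const fresh = await fetchJwks();

  // Step 6: update the shared cache so later requests hit the fast path
  cache.keys = fresh;

  const key = fresh.get(kid);
  if (key === undefined) {
    // Step 7: unknown even in the fresh set, so the token is invalid
    throw new Error(`Unknown signing key: ${kid}`);
  }
  return key;
}
```

In production this logic is largely what `jose`'s `createRemoteJWKSet` provides out of the box, including a cooldown to prevent re-fetch storms.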
JWKS cache TTL recommendations:
| Cache Layer | TTL | Rationale |
|---|---|---|
| In-memory (`createRemoteJWKSet` built-in cache) | 10 minutes | Fast path; avoids KV reads for hot Workers |
| Cloudflare KV | 5 minutes (logical) + 6 minutes (KV expiration) | Shared across isolates; KV TTL slightly longer to allow stale fallback |
| Stale fallback (on fetch failure) | Unlimited (until next successful fetch) | Prevents outages when WorkOS endpoint is temporarily unreachable |
Monitoring key rotation events:
Log every cache miss that triggers a JWKS re-fetch. A sudden spike in re-fetches indicates a key rotation event (or a misconfiguration). The audit log should record these events:
// workers/api-gateway/src/auth/jwks-cache.ts (addition to existing code)
async function logKeyRotationEvent(
oldKids: string[],
newKids: string[],
env: Env
): Promise<void> {
const addedKeys = newKids.filter((kid) => !oldKids.includes(kid));
const removedKeys = oldKids.filter((kid) => !newKids.includes(kid));
if (addedKeys.length > 0 || removedKeys.length > 0) {
console.warn('JWKS key rotation detected', {
addedKeys,
removedKeys,
timestamp: new Date().toISOString(),
});
}
}
Session Encryption Key Rotation
The session cookie is encrypted with a key derived from SESSION_SECRET. When this secret must be rotated (e.g., due to a security incident or as part of regular key hygiene), all active sessions encrypted with the old key would become unreadable, effectively logging out every user.
Strategy: Decrypt-with-any, Encrypt-with-latest
Maintain an ordered list of encryption keys. Always encrypt new sessions with the latest key. When decrypting, try each key in order until one succeeds.
// workers/bff/src/auth/session-keys.ts
interface SessionKeyConfig {
/** Unique identifier for this key version */
version: number;
/** The secret used to derive the encryption key */
secret: string;
/** If true, this key is used for encrypting new sessions */
active: boolean;
}
/**
* Environment variable format (JSON array):
* SESSION_KEYS='[
* {"version": 2, "secret": "new-secret-here", "active": true},
* {"version": 1, "secret": "old-secret-here", "active": false}
* ]'
*
* Rotation procedure:
* 1. Add a new key with active: true
* 2. Set the old key to active: false (keep it in the list)
* 3. Deploy the change
* 4. After the session cookie Max-Age has elapsed (30 days), all sessions
* encrypted with the old key will have expired naturally
* 5. Remove the old key from the list
*/
function parseSessionKeys(keysJson: string): SessionKeyConfig[] {
const keys = JSON.parse(keysJson) as SessionKeyConfig[];
// Sort by version descending so the latest key is tried first
return keys.sort((a, b) => b.version - a.version);
}
function getActiveKey(keys: SessionKeyConfig[]): SessionKeyConfig {
const active = keys.find((k) => k.active);
if (!active) {
throw new Error('No active session encryption key configured');
}
return active;
}
/**
* Encrypt a session using the latest active key.
* The key version is prepended to the ciphertext so that decryption
* knows which key was used without trial-and-error.
*/
export async function encryptSession(
session: SessionData,
keysJson: string
): Promise<string> {
const keys = parseSessionKeys(keysJson);
const activeKey = getActiveKey(keys);
const key = await deriveKey(activeKey.secret);
const plaintext = JSON.stringify(session);
const iv = crypto.getRandomValues(new Uint8Array(12));
const ciphertext = await crypto.subtle.encrypt(
{ name: 'AES-GCM', iv },
key,
new TextEncoder().encode(plaintext)
);
// Prefix with key version (1 byte, supports up to 255 versions)
const combined = new Uint8Array(1 + iv.length + new Uint8Array(ciphertext).length);
combined[0] = activeKey.version;
combined.set(iv, 1);
combined.set(new Uint8Array(ciphertext), 1 + iv.length);
return base64UrlEncode(combined);
}
/**
* Decrypt a session. First tries the key version encoded in the ciphertext.
* Falls back to trying all keys if version lookup fails (handles legacy
* sessions encrypted before versioning was introduced).
*/
export async function decryptSession(
cookieValue: string,
keysJson: string
): Promise<SessionData> {
const combined = base64UrlDecode(cookieValue);
const keys = parseSessionKeys(keysJson);
// Try the versioned key first
const version = combined[0];
const versionedKey = keys.find((k) => k.version === version);
if (versionedKey) {
try {
return await decryptWithKey(combined.slice(1), versionedKey.secret);
} catch {
// Version byte might be coincidental in legacy sessions; fall through
}
}
// Fallback: try all keys (for legacy sessions without version prefix)
for (const keyConfig of keys) {
try {
return await decryptWithKey(combined, keyConfig.secret);
} catch {
continue; // Try next key
}
}
throw new Error('Session decryption failed with all available keys');
}
Rotation timeline:
| Day | Action |
|---|---|
| Day 0 | Add new key (version N+1, active: true), set old key to active: false. Deploy. |
| Day 0+ | All new sessions are encrypted with key N+1. Existing sessions still decrypt with key N. |
| Day 30+ | All sessions encrypted with key N have expired (cookie Max-Age = 30 days). |
| Day 31 | Remove key N from the configuration. Deploy. |
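The Day-0 step can be expressed as a small helper over the `SESSION_KEYS` format. This is a sketch for clarity; in practice the rotation is an operational change to the environment variable, not application code:

```typescript
// Sketch of the Day-0 rotation: add key N+1 as active and demote every
// existing key, matching the SESSION_KEYS JSON format documented above.
interface SessionKeyConfig {
  version: number;
  secret: string;
  active: boolean;
}

function rotateKeys(
  keys: SessionKeyConfig[],
  newSecret: string
): SessionKeyConfig[] {
  const maxVersion = Math.max(...keys.map((k) => k.version));
  return [
    // The new key encrypts all sessions from now on
    { version: maxVersion + 1, secret: newSecret, active: true },
    // Old keys stay in the list for decryption until their sessions expire
    ...keys.map((k) => ({ ...k, active: false })),
  ];
}
```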
Token Refresh Race Condition Prevention
When multiple browser tabs are open or multiple API requests fire simultaneously, each may independently detect that the access token is expiring and attempt a refresh. This causes multiple concurrent refresh requests to WorkOS, which can lead to:
- Refresh token reuse rejection: Some identity providers invalidate a refresh token after its first use. If two requests use the same refresh token concurrently, one succeeds and the other fails, logging the user out.
- Wasted network calls: Multiple redundant refresh requests add latency and consume rate limit budget.
- Cookie race condition: Multiple responses each set a new session cookie; the browser applies the last one, potentially overwriting a newer token with an older one.
Strategy: Distributed mutex via KV with Durable Objects fallback
// workers/bff/src/middleware/refresh-mutex.ts
const REFRESH_LOCK_TTL_SECONDS = 10; // Lock expires after 10 seconds
interface RefreshLockEntry {
lockedAt: number;
lockedBy: string; // Request ID for debugging
}
/**
* Acquires a refresh lock for the given session. Returns true if the lock
* was acquired, false if another request is already refreshing.
*/
async function acquireRefreshLock(
sessionId: string,
requestId: string,
env: Env
): Promise<boolean> {
const lockKey = `refresh-lock:${sessionId}`;
const existing = await env.AUTH_KV.get<RefreshLockEntry>(lockKey, 'json');
if (existing) {
const lockAge = (Date.now() - existing.lockedAt) / 1000;
if (lockAge < REFRESH_LOCK_TTL_SECONDS) {
// Another request holds the lock
return false;
}
// Lock expired, safe to take over
}
const entry: RefreshLockEntry = {
lockedAt: Date.now(),
lockedBy: requestId,
};
await env.AUTH_KV.put(lockKey, JSON.stringify(entry), {
expirationTtl: REFRESH_LOCK_TTL_SECONDS,
});
return true;
}
/**
* Releases the refresh lock after a successful or failed refresh.
*/
async function releaseRefreshLock(
sessionId: string,
env: Env
): Promise<void> {
await env.AUTH_KV.delete(`refresh-lock:${sessionId}`);
}
/**
* Enhanced refresh middleware with mutex to prevent concurrent refreshes.
*/
export async function refreshWithMutex(
request: Request,
env: Env,
session: SessionData
): Promise<{ session: SessionData; newCookie?: string }> {
const now = Math.floor(Date.now() / 1000);
const isExpiringSoon = session.expiresAt - now < REFRESH_BUFFER_SECONDS;
if (!isExpiringSoon) {
return { session };
}
const requestId = crypto.randomUUID();
const sessionId = session.userId; // Use userId as session identifier
const lockAcquired = await acquireRefreshLock(sessionId, requestId, env);
if (!lockAcquired) {
// Another request is already refreshing this session. Fall back to
// the current token, which remains valid for up to
// REFRESH_BUFFER_SECONDS; that request's response sets the new cookie.
console.log('Refresh lock held by another request, using current token');
return { session };
}
try {
const refreshedResult = await performTokenRefresh(session, env);
return refreshedResult;
} finally {
await releaseRefreshLock(sessionId, env);
}
}
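The strategy name also mentions a Durable Objects fallback. A hedged sketch of that variant: because all refresh requests for a given session can be routed to a single DO instance, an in-memory promise acts as a true mutex, closing the small read-then-write race window the KV lock has. The `RefreshCoordinator` name is illustrative, not from the documented codebase:

```typescript
// Durable Object-style refresh coordinator (sketch). Inside a single DO
// instance, requests are serialized per session, so an instance field is
// a correct mutex.
type RefreshFn = () => Promise<string>; // Resolves to the new access token

class RefreshCoordinator {
  private inFlight: Promise<string> | null = null;

  /**
   * Deduplicates concurrent refreshes: the first caller performs the
   * refresh; later callers await the same in-flight promise.
   */
  async refresh(performRefresh: RefreshFn): Promise<string> {
    if (!this.inFlight) {
      this.inFlight = performRefresh().finally(() => {
        this.inFlight = null; // Allow the next refresh cycle
      });
    }
    return this.inFlight;
  }
}
```

The tradeoff versus KV is cost and routing complexity: every refresh becomes a sub-request to the DO, but the refresh token is guaranteed to be used exactly once per burst.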
Client-side deduplication (for multi-tab scenarios):
When the shell app uses BroadcastChannel to coordinate across tabs, only one tab performs the refresh and notifies the others:
// apps/shell/src/auth/refresh-coordinator.ts
const REFRESH_CHANNEL = 'auth:token-refresh';
let refreshInProgress = false;
let refreshPromise: Promise<void> | null = null;
const channel = new BroadcastChannel(REFRESH_CHANNEL);
channel.addEventListener('message', (event) => {
if (event.data.type === 'refresh-complete') {
// Another tab completed the refresh. The new cookie is already set
// by the BFF response in that tab. Our next request will use it.
refreshInProgress = false;
refreshPromise = null;
}
});
export async function coordinatedRefresh(
refreshFn: () => Promise<void>
): Promise<void> {
if (refreshInProgress && refreshPromise) {
// Deduplicate: wait for the in-flight refresh
return refreshPromise;
}
refreshInProgress = true;
refreshPromise = refreshFn()
.then(() => {
channel.postMessage({ type: 'refresh-complete' });
})
.finally(() => {
refreshInProgress = false;
refreshPromise = null;
});
return refreshPromise;
}
Dynamic CORS for Preview Environments
In CI/CD pipelines that produce preview/staging deployments with dynamic URLs (e.g., https://pr-1234.preview.example.com or https://deploy-abc123.vercel.app), a static CORS allowlist is insufficient. Each preview deployment has a unique URL that cannot be pre-registered.
Strategy: Pattern-based origin validation
// workers/api-gateway/src/middleware/cors.ts
// Static allowlist for production and staging
const STATIC_ALLOWED_ORIGINS = new Set([
'https://app.example.com',
'https://bff.example.com',
'https://staging.example.com',
]);
// Regex patterns for dynamic preview environments.
// Each pattern must be carefully scoped to prevent overly permissive matching.
const DYNAMIC_ORIGIN_PATTERNS: RegExp[] = [
// Cloudflare Pages preview deployments: pr-<number>.preview.example.com
/^https:\/\/pr-\d+\.preview\.example\.com$/,
// Vercel preview deployments: deploy-<hash>.vercel.app
/^https:\/\/deploy-[a-z0-9]+\.vercel\.app$/,
// Branch-based previews: <branch-slug>.preview.example.com
/^https:\/\/[a-z0-9-]+\.preview\.example\.com$/,
];
function isAllowedOrigin(origin: string): boolean {
// Check static allowlist first (fastest path)
if (STATIC_ALLOWED_ORIGINS.has(origin)) {
return true;
}
// Check dynamic patterns for preview environments
return DYNAMIC_ORIGIN_PATTERNS.some((pattern) => pattern.test(origin));
}
export function corsHeaders(request: Request): Record<string, string> {
const origin = request.headers.get('Origin') ?? '';
if (!isAllowedOrigin(origin)) {
return {}; // No CORS headers = browser blocks the response
}
return {
'Access-Control-Allow-Origin': origin,
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, PATCH, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, X-Requested-With',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Max-Age': '86400',
};
}
Security considerations for dynamic CORS:
- Never use wildcard (
*) with credentials:Access-Control-Allow-Credentials: trueis incompatible withAccess-Control-Allow-Origin: *. The origin must always be explicitly echoed. - Scope regex patterns tightly: A pattern like
/^https:\/\/.*\.example\.com$/is dangerously broad. An attacker could registerevil.example.comif the DNS is not locked down. Use specific prefixes and constrain character sets. - Log rejected origins: Logging origins that fail the CORS check helps detect misconfigured preview URLs and potential attack attempts.
- Consider a preview registry: For maximum security, have the CI/CD pipeline register each preview URL in KV or a database, and validate against that registry instead of regex patterns. This adds latency (KV read) but eliminates regex bypass risks.
// Alternative: KV-based preview origin registry
// CI/CD pipeline writes: await KV.put(`preview-origin:${previewUrl}`, '1', { expirationTtl: 86400 * 7 })
async function isAllowedPreviewOrigin(
origin: string,
env: Env
): Promise<boolean> {
if (STATIC_ALLOWED_ORIGINS.has(origin)) return true;
const registered = await env.AUTH_KV.get(`preview-origin:${origin}`);
return registered !== null;
}
Updating the CSRF middleware for preview environments:
The CSRF middleware (documented earlier) must also accept the dynamic origins. Update allowedOrigins to use the same isAllowedOrigin function:
// workers/api-gateway/src/middleware/csrf.ts (updated)
export function csrfMiddleware(request: Request): Response | null {
const method = request.method.toUpperCase();
if (['GET', 'HEAD', 'OPTIONS'].includes(method)) {
return null;
}
const csrfHeader = request.headers.get('X-Requested-With');
if (csrfHeader !== 'XMLHttpRequest') {
return new Response(JSON.stringify({ error: 'CSRF validation failed' }), {
status: 403,
headers: { 'Content-Type': 'application/json' },
});
}
// Use the same origin validation as CORS
const origin = request.headers.get('Origin');
if (origin && !isAllowedOrigin(origin)) {
return new Response(JSON.stringify({ error: 'Origin not allowed' }), {
status: 403,
headers: { 'Content-Type': 'application/json' },
});
}
return null;
}
References
- WorkOS AuthKit Documentation -- Setup guides, API reference, and integration patterns for AuthKit.
- WorkOS AuthKit React SDK -- The `@workos-inc/authkit-react` package used in the shell app.
- jose Library (JWT/JWS/JWE for Edge Runtimes) -- Web Crypto API-based JWT validation used in Cloudflare Workers.
- WorkOS JWKS Endpoint -- Public key endpoint for JWT signature verification.
- WorkOS Organizations -- Multi-tenancy model documentation.
- WorkOS Roles and Permissions -- RBAC configuration for fine-grained access control.
- OWASP Session Management Cheat Sheet -- Industry best practices for secure session management.
- OWASP Authentication Cheat Sheet -- Security guidelines for authentication implementations.
- Cloudflare Workers Web Crypto API -- Crypto primitives available in the Workers runtime.