A TypeScript/ESM client library for injecting network chaos (latency, failures, throttling, etc.) into fetch requests. Inspired by chaos-proxy, but designed for programmatic use and composable middleware.
- Simple configuration via JavaScript/TypeScript
- Programmatic API for fetch interception
- Built-in middleware primitives: `latency`, `latencyRange`, `fail`, `failRandomly`, `failNth`, `rateLimit`, `throttle`, `mock`
- Extensible registry for custom middleware
- Route matching by method and path
- Built on Koa components (`@koa/router` and `koa-compose`); supports both request and response interception/modification
- Robust short-circuiting: middleware can halt further processing
```sh
npm install @fetchkit/chaos-fetch
```

```ts
import {
  createClient,
  registerMiddleware,
  replaceGlobalFetch,
  restoreGlobalFetch,
} from '@fetchkit/chaos-fetch';

// Register a custom middleware (optional)
registerMiddleware('customDelay', (opts) => async (ctx, next) => {
  await new Promise(res => setTimeout(res, opts.ms));
  await next();
});

const chaosFetch = createClient({
  global: [ // Global rules
    { customDelay: { ms: 50 } },                  // Use custom middleware
    { failRandomly: { rate: 0.1, status: 503 } }, // 10% random failures
  ],
  routes: {
    // Route keys are method + path only (no domain)
    'GET /users/:id': [ // Specific route rules
      { failNth: { n: 3, status: 500 } }, // Fail every 3rd request with status 500
    ],
  },
});

// Use as a drop-in replacement for fetch
const res = await chaosFetch('https://api.example.com/users/123');

// The same route rule also matches other domains with the same path
await chaosFetch('https://staging.example.net/users/123');

// Or replace the global fetch
replaceGlobalFetch(chaosFetch);
fetch('https://api.example.com/users/123'); // now goes through chaosFetch
restoreGlobalFetch(); // restore the original fetch
```

- `global`: ordered array of middleware nodes applied to every request
- `routes`: map of method + path to an ordered array of middleware nodes
- Both `global` and `routes` are optional. If omitted, no global or route-specific middleware is applied.
- Middleware node: `{ latency: 100 }`, `{ failRandomly: { rate: 0.1, status: 503 } }`, etc.
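Each middleware node is a single-key object: the key names a registered middleware factory and the value is passed to it as options. A minimal sketch of how such a registry could resolve nodes into middleware functions (the names `buildChain` and the `Factory` type are illustrative, not the library's actual internals):

```ts
type Ctx = { url: string };
type Next = () => Promise<void>;
type Middleware = (ctx: Ctx, next: Next) => Promise<void>;
type Factory = (opts: any) => Middleware;

// Hypothetical registry mapping a node key like "latency" to a factory.
const registry = new Map<string, Factory>();

registry.set('latency', (ms: number) => async (_ctx, next) => {
  await new Promise((res) => setTimeout(res, ms));
  await next();
});

// Resolve an ordered list of middleware nodes into middleware functions.
function buildChain(nodes: Record<string, unknown>[]): Middleware[] {
  return nodes.map((node) => {
    const [name, opts] = Object.entries(node)[0];
    const factory = registry.get(name);
    if (!factory) throw new Error(`Unknown middleware: ${name}`);
    return factory(opts);
  });
}

const chain = buildChain([{ latency: 100 }]);
```

This is why `registerMiddleware` only needs a name and a factory: once registered, a custom key can appear in `global` or `routes` exactly like the built-ins.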
chaos-fetch uses `@koa/router` for path matching, supporting named parameters (e.g., `/users/:id`), wildcards (e.g., `*`), and regex routes.
- Example: `GET /api/*` matches any GET request under `/api/`.
- Example: `GET /users/:id` matches GET requests like `/users/123`.

Supported route patterns:
- Named parameters: `/users/:id` matches any path like `/users/123`.
- Wildcards: `/api/*` matches any path under `/api/`.
- Regex: `/files/(.*)` matches any path under `/files/`.
Note: route parameters are used internally for matching; they are not currently exposed on ctx for middleware consumption.
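To make the pattern semantics concrete, here is a simplified conversion of named-parameter and wildcard patterns into regular expressions, in the spirit of path-to-regexp (which `@koa/router` uses). This is an illustration only, not the library's matching code, and it deliberately ignores regex routes and character escaping:

```ts
// Illustrative pattern-to-RegExp conversion (named params and wildcards only).
function patternToRegExp(pattern: string): RegExp {
  const source = pattern
    .replace(/\*/g, '.*')         // wildcard: match anything, including '/'
    .replace(/:[^/]+/g, '[^/]+'); // named parameter: match exactly one segment
  return new RegExp(`^${source}$`);
}

patternToRegExp('/users/:id').test('/users/123');  // true
patternToRegExp('/users/:id').test('/users/1/2');  // false: ':id' is one segment
patternToRegExp('/api/*').test('/api/v1/items');   // true
```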
Rule inheritance:
- Domains are not considered in route matching, only the method and path. This simplification is a tradeoff: it reduces configuration complexity but means you cannot target rules to specific domains. If you need domain-specific behavior, consider using separate clients or custom middleware.
- There is no inheritance between global and route-specific middleware.
- Global middlewares apply to every request.
- Route middlewares only apply to requests matching that route.
- If a request matches a route, only the middlewares for that route (plus global) are applied. Route rules do not inherit or merge from parent routes or wildcards.
- If multiple routes match, the first matching route configuration is used.
- If no route matches, only global middlewares are applied.
- Order of middleware execution: global middlewares run first, followed by route-specific middlewares in the order they are defined. Example: with a global latency of 100ms and a route-specific `failNth`, a request to that route first incurs the 100ms latency and is then subject to the `failNth` logic.
- Routes can be defined with or without HTTP methods. If a method is specified (e.g., `GET /path`), the rule only applies to that method. If no method is specified (e.g., `/path`), the rule applies to all methods for that path.
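The selection rules above can be sketched in a few lines: the effective chain is always the global nodes, plus the nodes of the first matching route (no merging across routes). The names `Rule` and `effectiveChain` are hypothetical, chosen for illustration:

```ts
type Rule = { pattern: RegExp; method?: string; nodes: string[] };

// First matching route wins; its nodes run after the global nodes.
function effectiveChain(
  globalNodes: string[],
  rules: Rule[],
  method: string,
  path: string,
): string[] {
  const match = rules.find(
    (r) => (!r.method || r.method === method) && r.pattern.test(path),
  );
  return match ? [...globalNodes, ...match.nodes] : [...globalNodes];
}

const rules: Rule[] = [
  { method: 'GET', pattern: /^\/users\/[^/]+$/, nodes: ['failNth'] },
  { pattern: /^\/users\/.*$/, nodes: ['mock'] }, // shadowed for GET /users/:id
];

effectiveChain(['latency'], rules, 'GET', '/users/123'); // ['latency', 'failNth']
effectiveChain(['latency'], rules, 'GET', '/health');    // ['latency']
```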
Relative URLs:
- If you use relative URLs (e.g., `/api/data`), the client resolves them against `globalThis.location.origin` in browsers and JSDOM. In Node, Bun, or Deno you must provide a full absolute URL; otherwise the client throws an error.
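This resolution behavior can be reproduced with the standard WHATWG `URL` constructor. A small sketch (the `resolveUrl` helper and the base value are stand-ins for illustration; in a browser the base would be `globalThis.location.origin`):

```ts
// Sketch of relative-URL resolution using the standard URL API.
function resolveUrl(input: string, base?: string): string {
  if (/^https?:\/\//.test(input)) return input; // already absolute
  if (!base) {
    // Mirrors the documented Node/Bun/Deno behavior: no origin to resolve against.
    throw new Error(`Relative URL "${input}" requires an absolute URL in this runtime`);
  }
  return new URL(input, base).href;
}

resolveUrl('/api/data', 'http://localhost:3000'); // 'http://localhost:3000/api/data'
resolveUrl('https://api.example.com/users/1');    // returned unchanged
```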
- `latency(ms)`: delay every request by `ms`
- `latencyRange({ minMs, maxMs })`: random delay between `minMs` and `maxMs` ms
- `fail({ status, body })`: always fail, responding with `status` and `body`
- `mock({ status, body })`: always respond with `status` and `body`. `status` defaults to 200 and `body` defaults to an empty string. Use this to mock responses without making actual network requests.
- `failRandomly({ rate, status, body })`: fail with probability `rate`, responding with `status` and `body`
- `failNth({ n, status, body })`: fail every nth request with `status` and `body`
- `rateLimit({ limit, windowMs, key })`: rate limit to `limit` requests per `windowMs` milliseconds. `key` can be a header name (string), a custom function `(req) => string`, or omitted (all requests share one bucket). Responds with 429 if the limit is exceeded.
- `throttle({ rate, chunkSize })`: limit response bandwidth to `rate` bytes per second, chunking responses into `chunkSize`-byte pieces.
The rateLimit middleware restricts how many requests a client can make in a given time window. It uses an internal cache to track requests per key.
- `limit`: maximum number of requests allowed per window (e.g., 100)
- `windowMs`: time window in milliseconds (e.g., 60000 for 1 minute)
- `key`: how to bucket requests. Options:
  - omitted: all requests share one bucket (`'unknown'`)
  - string: treated as a header name; the header's value is the bucket key. If the header is absent, the bucket key falls back to `'unknown'`
  - function `(req: Request) => string`: full control; return any string as the bucket key
How it works:
- Each incoming request is assigned a key via the `key` option.
- The middleware tracks how many requests each key has made in the current window (fixed window; the window resets relative to the first request in it).
- If the number of requests exceeds `limit`, further requests from that key receive a `429 Too Many Requests` response until the window resets.
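The fixed-window algorithm described above can be sketched in a few lines. This is an illustration of the bookkeeping, not the library's internal cache (the `FixedWindowLimiter` name is invented for this example):

```ts
// Minimal fixed-window rate limiter: one counter per key, reset when
// the window elapses relative to the first request in that window.
interface Bucket { count: number; windowStart: number }

class FixedWindowLimiter {
  private buckets = new Map<string, Bucket>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(key: string, now: number = Date.now()): boolean {
    const b = this.buckets.get(key);
    if (!b || now - b.windowStart >= this.windowMs) {
      // First request in a new window: reset the bucket.
      this.buckets.set(key, { count: 1, windowStart: now });
      return true;
    }
    b.count += 1;
    return b.count <= this.limit;
  }
}

const limiter = new FixedWindowLimiter(2, 60_000);
limiter.allow('a', 0);      // true  (1st request in window)
limiter.allow('a', 10);     // true  (2nd, at the limit)
limiter.allow('a', 20);     // false (limit exceeded -> 429)
limiter.allow('a', 60_000); // true  (new window)
```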
The throttle middleware simulates slow network conditions by limiting the bandwidth of responses. It works by chunking the response body and introducing delays between chunks, based on the configured rate. If streaming is not supported in the runtime, it falls back to delaying the entire response.
- `rate` (required): maximum bandwidth in bytes per second (e.g., `1024` for 1 KB/sec).
- `chunkSize` (optional): size of each chunk in bytes (default: `16384`).
How it works:
- If the response body is a stream (Node.js `Stream` or browser/edge `ReadableStream`), the middleware splits it into chunks and delays each chunk to match the specified rate.
- If the response body is not a stream (e.g., a string or buffer), the middleware calculates the total delay needed to simulate the bandwidth and delays the response accordingly.
- The middleware uses feature detection to choose the best throttling strategy for the current runtime.
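The pacing implied by `rate` and `chunkSize` is simple arithmetic: a chunk of `chunkSize` bytes should take `chunkSize / rate` seconds to "send". These helpers illustrate the math only; they are not the library's implementation:

```ts
// Delay between chunks when streaming: chunkSize bytes at `rate` bytes/sec.
function chunkDelayMs(rate: number, chunkSize: number): number {
  return (chunkSize / rate) * 1000;
}

// Total simulated transfer time for a non-streaming body of `bytes` bytes.
function totalDelayMs(rate: number, bytes: number): number {
  return (bytes / rate) * 1000;
}

chunkDelayMs(1024, 16384); // 16000 ms between 16 KB chunks at 1 KB/s
totalDelayMs(1024, 4096);  // 4000 ms for a 4 KB body at 1 KB/s
```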
Limitations:
- True stream throttling is only available in runtimes that support streaming APIs (Node.js, browser, edge).
- In runtimes without streaming support, only total response delay is simulated, not progressive delivery.
- The accuracy of throttling may vary depending on the runtime and timer precision.
- Not intended for production use; designed for local development and testing.
Register custom middleware:
```ts
registerMiddleware('myMiddleware', (opts) => async (ctx, next) => {
  // custom logic
  await next();
});
```

Under the hood, chaos-fetch uses Koa components (`@koa/router` and `koa-compose`), so your custom middleware can leverage the full Koa middleware pattern. Middleware functions are async and take `(ctx, next)` parameters. Read more in the Koa docs.
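To show what the Koa pattern buys you, here is a minimal koa-compose-style composition, including the short-circuiting behavior mentioned in the features: a middleware that does not call `next()` halts the rest of the chain. This is an illustrative reimplementation; the library uses the real `koa-compose` package:

```ts
type Ctx = { halted?: boolean; log: string[] };
type Middleware = (ctx: Ctx, next: () => Promise<void>) => Promise<void>;

// Minimal koa-compose-style composition (illustrative only).
function compose(middleware: Middleware[]) {
  return (ctx: Ctx): Promise<void> => {
    const dispatch = (i: number): Promise<void> => {
      if (i === middleware.length) return Promise.resolve();
      return middleware[i](ctx, () => dispatch(i + 1));
    };
    return dispatch(0);
  };
}

const chain = compose([
  async (ctx, next) => { ctx.log.push('first'); await next(); },
  async (ctx) => { ctx.log.push('short-circuit'); ctx.halted = true; }, // no next()
  async (ctx, next) => { ctx.log.push('never runs'); await next(); },
]);

const ctx: Ctx = { log: [] };
await chain(ctx); // ctx.log === ['first', 'short-circuit']
```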
chaos-fetch includes an optional OpenTelemetry middleware and a local observability stack for development.
What is included:
- Request-level tracing middleware (`otel`) with W3C Trace Context propagation (`traceparent`)
- OTLP HTTP export to an OpenTelemetry Collector
- Jaeger for trace search and inspection
- Prometheus for spanmetrics
- Grafana with a pre-provisioned dashboard (`chaos-fetch-observability`)
This is entirely optional. If you do not configure `otel`, chaos-fetch runs without telemetry overhead.
Prerequisites:
- Docker Desktop (or equivalent Docker Engine + Compose)
- Dependencies installed (`npm install`)
Start the local stack:
```sh
npm run obs:up
```

Other useful commands:
- Validate compose config: `npm run obs:validate`
- Follow logs: `npm run obs:logs`
- Stop stack: `npm run obs:down`
- Full reset (including volumes): `npm run obs:reset`
Local endpoints:
- Grafana: `http://localhost:3000`
- Prometheus: `http://localhost:9090`
- Jaeger: `http://localhost:16686`
- OTLP ingest (collector): `http://localhost:4318` (HTTP), `localhost:4317` (gRPC)
Enable telemetry by adding an `otel` block to `createClient`:
```ts
import { createClient } from '@fetchkit/chaos-fetch';

const chaosFetch = createClient({
  otel: {
    serviceName: 'checkout-web',
    endpoint: 'http://localhost:4318',
    flushIntervalMs: 1000,
    maxBatchSize: 20,
    maxQueueSize: 1000,
    headers: {
      'x-tenant-id': 'local-dev',
    },
  },
  global: [
    { latencyRange: { minMs: 20, maxMs: 120 } },
    { failRandomly: { rate: 0.1, status: 503 } },
  ],
});

await chaosFetch('https://api.example.com/users/123');
```

otel options:
- `serviceName` (required): service label used in traces/metrics
- `endpoint` (required): OTLP base endpoint (for example `http://localhost:4318`)
- `flushIntervalMs` (optional): export timer interval; default `5000`
- `maxBatchSize` (optional): export batch size; default `100`
- `maxQueueSize` (optional): max queued spans before dropping the oldest; default `1000`
- `headers` (optional): additional OTLP HTTP headers
Notes:
- The middleware marks spans as errors when the HTTP status is `>= 400` or if middleware throws.
- Trace context is extracted from an inbound `traceparent` header if present; otherwise a new trace is started.
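For reference, a W3C `traceparent` header has the shape `version-traceid-parentid-flags`, e.g. `00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01`. A sketch of the extraction logic described above (illustrative parser, not the library's implementation; malformed or all-zero IDs are rejected per the spec, which corresponds to starting a new trace):

```ts
// W3C Trace Context: traceparent = version-traceid-parentid-flags.
const TRACEPARENT = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/;

function parseTraceparent(header: string) {
  const m = TRACEPARENT.exec(header.trim());
  if (!m) return null; // malformed: caller starts a new trace instead
  const [, version, traceId, parentId, flags] = m;
  // All-zero trace or parent IDs are invalid per the spec.
  if (/^0+$/.test(traceId) || /^0+$/.test(parentId)) return null;
  return { version, traceId, parentId, sampled: (parseInt(flags, 16) & 1) === 1 };
}

parseTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01');
// -> { version: '00', traceId: '4bf9...', parentId: '00f0...', sampled: true }
parseTraceparent('not-a-traceparent'); // -> null
```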
The provisioned dashboard is named Chaos Fetch Observability (UID: `chaos-fetch-observability`).
Panels included:
- Latency Percentiles (ms): p50 / p90 / p95 (stat values)
- Request Rate: requests/sec from `calls_total`
- Error Rate: cumulative ratio of 5xx `calls_total` to total `calls_total` (since process start)
- Calls by Route: grouped by `http_method` + `http_target`
If traffic is sparse, percentile and rate panels may appear flat or delayed until enough samples are present.
If Grafana shows no/empty data:
- Confirm containers are up: `npm run obs:ps`
- Confirm the collector target is healthy in Prometheus: `http://localhost:9090/targets`
- Confirm traces appear in Jaeger (`http://localhost:16686`) for your `serviceName`
- Confirm the Grafana datasource points to Prometheus at `http://prometheus:9090`
- Hard refresh Grafana after dashboard changes (`Ctrl+Shift+R`)
- Run tests: `npm run test`
- Check coverage: `npm run test:ci`
- Intended for local/dev/test only
- Not intended for stress testing
- Does not proxy or forward requests; wraps fetch only
Have questions, want to discuss features, or share examples? Join the Fetch-Kit Discord server:
MIT