```
node bin/promptinel.js add
```

You'll be prompted for:

- Prompt text (the prompt you want to monitor)
- Provider (mock, ollama, etc.)
- Model (mock-default, mock-fast, mock-quality)
- Drift threshold (0-1, default 0.3)

Example:

- ID: prompt_1234567890_abc123
- Provider: mock
- Model: mock-default
- Threshold: 0.3

```
✓ Prompt added successfully!
  ID: prompt_1234567890_abc123
  Provider: [MOCK]
  Model: mock-default
  Threshold: 0.3

Run 'node bin/promptinel.js check prompt_1234567890_abc123' to test it.
```
```
node bin/promptinel.js check prompt_1234567890_abc123
```

Output:

```
🔍 Executing Prompt
Prompt ID: prompt_1234567890_abc123
⚠ Mock Mode: Using simulated responses
ℹ Configure a real provider for actual LLMs
✓ Execution complete!
──────────────────────────────────────────────────
Snapshot ID: snap_1234567890_xyz789
Provider: [MOCK]
Model: mock-default
Timestamp: 2026-03-26T12:00:00.000Z
──────────────────────────────────────────────────
Output:
Mock provider generated this deterministic output. [Model: mock-default, Hash: 123456]
```
```
node bin/promptinel.js baseline prompt_1234567890_abc123 --latest
```

Output:

```
✅ Baseline updated for prompt: prompt_1234567890_abc123
   Snapshot: snap_1234567890_xyz789
```
```
node bin/promptinel.js watch
```

Output:

```
⚠ Mock Mode: Using simulated responses
ℹ Configure a real provider for actual LLMs
✓ Executed 1 prompt(s)
────────────────────────────────────────────────────
📊 prompt_1234567890_abc123
   Provider: [MOCK]
   Model: mock-default

   📝 BEHAVIOR CHANGE:
   The model response now includes a refusal or
   hedging that was absent in the original output.

   Output: Mock provider generated this deterministic output. [Model: mock-default...
   Drift: 0.450 ⚠️ DRIFT
────────────────────────────────────────────────────
```
```
node bin/promptinel.js report
```

Output:

```
📊 Drift Report
ℹ Total prompts: 1
──────────────────────────────────────────────────
📝 prompt_1234567890_abc123
   Prompt: What is artificial intelligence?...
   Provider: [MOCK] / mock-default
   Threshold: 0.3
   Baseline: snap_1234567890_xyz789
   Snapshots: 2
   Latest drift: 0.000 ✓ OK
```
If you want to quickly see how much a prompt has drifted from its baseline:

```
node bin/promptinel.js diff <prompt-id>
```

Compare two snapshots side-by-side with a drift score:

```
node bin/promptinel.js diff snap_1234567890_abc snap_1234567891_def
```

Output:

```
================================================================================
🔍 SNAPSHOT COMPARISON
================================================================================
Snapshot 1: snap_1234567890_abc
  Prompt:   prompt_1234567890_abc123
  Time:     2026-03-26T10:00:00.000Z
  Provider: mock / mock-default

Snapshot 2: snap_1234567891_def
  Prompt:   prompt_1234567890_abc123
  Time:     2026-03-26T11:00:00.000Z
  Provider: mock / mock-default

Drift Score: 0.250
Status: 🟡 WARNING (moderate drift)

================================================================================
OUTPUT COMPARISON
================================================================================
┌──────────────────────────────────────┬──────────────────────────────────────┐
│ SNAPSHOT 1                           │ SNAPSHOT 2                           │
├──────────────────────────────────────┼──────────────────────────────────────┤
│ AI is artificial intelligence.       │← AI stands for artificial            │
│                                      │← intelligence.                       │
└──────────────────────────────────────┴──────────────────────────────────────┘
```
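Promptinel's actual drift metric isn't documented here, but as a mental model, a simple lexical distance such as 1 minus the Jaccard similarity of the outputs' word sets yields a 0-1 score like the ones above. The sketch below is purely illustrative and is not the tool's real algorithm:

```javascript
// Hypothetical drift score: 1 - Jaccard similarity over lowercase word sets.
// An illustrative stand-in, NOT Promptinel's actual metric.
function driftScore(baseline, current) {
  const words = (s) => new Set(s.toLowerCase().match(/[a-z0-9']+/g) ?? []);
  const a = words(baseline);
  const b = words(current);
  if (a.size === 0 && b.size === 0) return 0; // both empty: no drift
  let shared = 0;
  for (const w of a) if (b.has(w)) shared++;
  const union = a.size + b.size - shared;
  return 1 - shared / union; // 0 = identical vocab, 1 = fully disjoint
}

// Identical outputs score 0; the reworded answer above scores 0.5:
console.log(driftScore('AI is artificial intelligence.',
                       'AI is artificial intelligence.'));          // 0
console.log(driftScore('AI is artificial intelligence.',
                       'AI stands for artificial intelligence.'));  // 0.5
```

A real implementation would likely weigh semantics (e.g. embeddings), not just vocabulary, which is why a small rewording can still score well below the 0.3 threshold.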
Get machine-readable comparison output:

```
node bin/promptinel.js diff snap_123 snap_456 --format json
```

The report command supports various filters and formats:

```
# Filter by tag
node bin/promptinel.js report --tags production,critical

# Filter by prompt ID
node bin/promptinel.js report --prompt my-prompt-id

# Custom formats
node bin/promptinel.js report --format csv
node bin/promptinel.js report --format json

# Export to file
node bin/promptinel.js report --output ./reports/weekly-drift.txt
```

Pin the baseline to a specific snapshot instead of the latest one:

```
node bin/promptinel.js baseline prompt_123 --snapshot snap_456
```

You can run watch on a schedule using the `--schedule` flag (cron syntax):
```
# Run every hour
node bin/promptinel.js watch --schedule "0 * * * *"
```

Remove old snapshots according to retention policy:

```
# Use retention policy from config (default: 30 days)
node bin/promptinel.js cleanup

# Keep only last 10 snapshots per prompt
node bin/promptinel.js cleanup --keep-last 10

# Keep only snapshots from last 7 days
node bin/promptinel.js cleanup --keep-days 7
```

Output:

```
Cleaning up old snapshots...
  prompt_123: Deleted 15 snapshot(s)
  prompt_456: Deleted 8 snapshot(s)
Cleanup complete! Deleted 23 snapshot(s) total.
Baseline snapshots were preserved.
```

Note: Baseline snapshots are always preserved during cleanup.

Start the local dashboard to inspect drift trends and history visually:

```
# Default port 3000
node bin/promptinel.js dashboard

# Custom port
node bin/promptinel.js dashboard --port 4000
```
Promptinel works out of the box with zero configuration using the Mock provider:
- No API keys required
- Deterministic outputs (same prompt = same output)
- Perfect for testing and demos
- Simulates drift detection
The Mock provider generates responses based on a hash of the prompt text, ensuring consistent behavior for testing.
Promptinel uses mock as the default provider for zero-friction setup.
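The hash-based behavior can be pictured with a minimal sketch. The hash function and response wording below are assumptions for illustration, not the Mock provider's actual code:

```javascript
// Sketch of a deterministic mock provider: the "response" is derived
// purely from a hash of the prompt text, so the same prompt always
// produces the same output. Illustrative only.
function hashPrompt(text) {
  let h = 0;
  for (const ch of text) {
    h = (h * 31 + ch.codePointAt(0)) % 1000000; // keep hash to 6 digits
  }
  return h;
}

function mockComplete(prompt, model = 'mock-default') {
  const hash = hashPrompt(prompt);
  return `Mock provider generated this deterministic output. ` +
         `[Model: ${model}, Hash: ${hash}]`;
}

// Same prompt, same output - every time:
console.log(mockComplete('What is artificial intelligence?'));
```

Because the output depends only on the prompt (and model name), re-running `check` against an unchanged prompt yields zero drift, which makes the mock useful for exercising the snapshot/baseline workflow end to end.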
For cloud adapters (OpenAI/Anthropic/Mistral), there are three testing levels:

- Unit tests (no API, no network): already mocked via `global.fetch` in provider tests.
- Replay mode (no API, no network): run against previously recorded HTTP fixtures.
- Record mode (requires API once): capture fixtures to replay later offline.

HTTP modes:

- `live` (default): real API calls
- `record`: real API calls + save fixtures
- `replay`: offline mode using saved fixtures only

Environment variables:

- `PROMPTINEL_HTTP_MODE=live|record|replay`
- `PROMPTINEL_FIXTURES_DIR=.promptinel/fixtures` (optional custom path)
PowerShell (Windows):

```
$env:PROMPTINEL_HTTP_MODE="record"
$env:OPENAI_API_KEY="sk-..."
node bin/promptinel.js check <prompt-id>
```

Then offline:

```
Remove-Item Env:OPENAI_API_KEY -ErrorAction SilentlyContinue
$env:PROMPTINEL_HTTP_MODE="replay"
node bin/promptinel.js check <prompt-id>
```

If a fixture is missing in replay mode, Promptinel returns an explicit error telling you to run once in record mode.
After using Promptinel, you'll see:

```
.promptinel/
├── watchlist.json              # Your monitored prompts
└── snapshots/
    └── prompt_123/
        ├── 1234567890_snap_abc.json
        └── 1234567891_snap_def.json
```

All data is stored locally in JSON files; no database is required.
Built and maintained by @diegosantdev