Intel Lint is a local-first analyzer for CTI report review (claims extraction, evidence checks, bias signals). It is designed for single-user local processing of sensitive reports.
- Install Ollama: https://ollama.com/download
- Download IntelLint from GitHub Releases for your platform.
- Run IntelLint.
- The app binds to `127.0.0.1` and opens the local web UI automatically.
- In the Setup page, click Pull `FenkoHQ/Foundation-Sec-8B`.
- Use the app from the local UI.
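Before pulling the model, it can help to confirm the local Ollama service is answering. A minimal sketch, assuming Ollama's default port `11434` (this probe is not part of IntelLint):

```python
import urllib.request
import urllib.error

def ollama_reachable(url: str = "http://127.0.0.1:11434/", timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP at the local Ollama address."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, start Ollama first and retry the pull from the Setup page.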
Local storage (user data dir):
- Windows: `%LOCALAPPDATA%\intel-lint`
- macOS: `~/Library/Application Support/intel-lint`
- Linux: `${XDG_DATA_HOME:-~/.local/share}/intel-lint`
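The per-platform paths above can be resolved programmatically; a short sketch (the helper name is ours, not IntelLint's API):

```python
import os
import sys
from pathlib import Path

def intel_lint_data_dir() -> Path:
    """Resolve the IntelLint user data directory for the current platform."""
    if sys.platform == "win32":
        base = Path(os.environ["LOCALAPPDATA"])  # %LOCALAPPDATA%
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support"
    else:
        # Linux: honor XDG_DATA_HOME if set and non-empty, else ~/.local/share
        base = Path(os.environ.get("XDG_DATA_HOME") or (Path.home() / ".local" / "share"))
    return base / "intel-lint"
```

The `or` fallback mirrors the shell expansion `${XDG_DATA_HOME:-~/.local/share}`, which also treats an empty variable as unset.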
The user data dir stores local config, outputs, logs, and cache.
- In `ENGINE=ollama` mode, model inference is local through your Ollama service.
- No telemetry is required for normal local operation.
- This is intended for sensitive CTI reporting workflows where data should remain local.
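As a sketch of how the engine switch might be read from the environment (the default value and validation here are illustrative assumptions, not IntelLint's documented behavior; the two engine names come from this README):

```python
import os

SUPPORTED_ENGINES = {"ollama", "placeholder"}

def resolve_engine(default: str = "ollama") -> str:
    """Read ENGINE from the environment; 'ollama' keeps inference local."""
    engine = os.environ.get("ENGINE", default).strip().lower()
    if engine not in SUPPORTED_ENGINES:
        raise ValueError(f"unsupported ENGINE value: {engine!r}")
    return engine
```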
Development setup:

```
python -m pip install -e ".[dev]"
python scripts/run.py setup --frontend
python scripts/run.py app
python scripts/build_release.py
```

Notes:
- `npm install` is dev-only and is handled by `python scripts/run.py setup --frontend`.
- The dev UI is `http://127.0.0.1:5173` and proxies API calls to the local backend.
The placeholder engine is for CI/tests/smoke checks only; output quality is limited compared with LLM mode.
Run the test suite:

```
python -m pytest
```

Uninstall:
- Close IntelLint.
- Delete the `intel-lint` user data directory for your OS (path above).
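Uninstalling amounts to removing that directory. A hedged sketch (this helper is illustrative, not shipped with IntelLint; the name check is a safety guard we added):

```python
import shutil
from pathlib import Path

def remove_intel_lint_data(data_dir: Path) -> bool:
    """Delete the intel-lint user data directory; return True if it was removed."""
    if data_dir.name != "intel-lint":
        # Guard against deleting an unrelated directory by mistake.
        raise ValueError("refusing to delete a directory not named 'intel-lint'")
    if data_dir.is_dir():
        shutil.rmtree(data_dir)
        return True
    return False
```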
`.env.example` is optional for development overrides.
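For example, a development override file might contain only the engine switch; `ENGINE=ollama` is the one variable this README documents, so no other keys are shown:

```
ENGINE=ollama
```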