
CASEDD — Case Display Daemon

A lightweight, high-performance Python daemon that drives a small USB framebuffer display mounted inside a PC case, while simultaneously serving the same content over WebSocket and HTTP for remote viewing.

Target hardware: Any Linux framebuffer-compatible display device
OS: Ubuntu 24.04 (headless)
Stack: Python 3.12, FastAPI, uvicorn, Pillow, Pydantic v2, psutil, PyYAML


Licensing

casedd is released under the Business Source License 1.1.

  • Free for personal, hobbyist, and home-lab use (including AI workstations and single-user setups).
  • Commercial use, white-labeling, enterprise deployments, or bundling with hardware requires a paid commercial license.

See LICENSE and LICENSE-COMMERCIAL.md for full details.

Interested in commercial use or white-label rights? Feel free to reach out.

Features

  • Dual output — push rendered images to a Linux framebuffer device (for example /dev/fb0, /dev/fb1) AND a browser via WebSocket simultaneously
  • Custom layout engine — declare layouts in .casedd YAML files using CSS Grid Template Areas syntax; widget tree supports unlimited nesting via type: panel
  • Extended widget set — includes system widgets plus plex_now_playing and plex_recently_added table widgets for media dashboards
  • Live data getters — CPU, fan telemetry (CPU/system/GPU), NVIDIA GPU (including multi-GPU keys), RAM, disk, network, system uptime/host, speedtest, Ollama API runtime state, UPS telemetry, Plex server/session/library telemetry
  • Template policy engine — rotate templates, schedule templates by time/day, and trigger template overrides from data-store conditions
  • Speedtest integration — optional Ookla CLI getter (default every 30 min) with plan-relative metrics and status keys
  • External data push — accept JSON updates via Unix domain socket or REST POST; values cached in RAM and used on next render
  • Write-endpoint auth and throttling — protect POST /api/update and POST /update with X-API-Key, HTTP Basic Auth, and optional per-IP rate limiting
  • Template-aware polling — getters run only when their key namespaces are referenced by the active template
  • Operational health — /api/health reports ok, degraded, or per-getter inactive / starting / ok / error states; /api/metrics exports Prometheus-friendly metrics
  • CLI control surface — casedd-ctl wraps the HTTP API for status, health, templates, metrics, snapshots, data dumps, and reloads
  • Dev-friendly — CASEDD_NO_FB=1 disables framebuffer for dev; browser WebSocket view is the primary dev display
  • Emergency tty recovery — pressing ESC or Q on a local keyboard requests a clean daemon shutdown so the base OS regains tty control
  • Multiple deployment modes — plain Python, systemd service, Docker Compose

Quick start

1. Clone and create the venv

git clone https://github.com/mdmoore25404/casedd.git
cd casedd
python3.12 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

2. Configure

cp .env.example .env
# Edit .env — at minimum set CASEDD_NO_FB=1 if no framebuffer hardware

3. Run (dev mode)

./dev.sh start
# Open http://localhost:8080 for the lightweight live viewer
# Open http://localhost:8080/app for the advanced app (Vite dev mode)
./dev.sh logs      # tail the log
./dev.sh status    # check daemon health
./dev.sh stop

Development workflow

./dev.sh start      # start daemon + advanced app (Vite hot-reload) in background
./dev.sh stop       # stop daemon + advanced app cleanly
./dev.sh restart    # stop + start
./dev.sh status     # check daemon/app PIDs + last log lines
./dev.sh logs       # tail -f the log file
./dev.sh lint       # ruff check + mypy --strict (must be zero errors)
./dev.sh test       # pytest with coverage
./dev.sh test --fast # pytest without coverage for quicker iteration
./dev.sh docs       # generate API docs to docs/api.json (local only)

Emergency tty recovery

When CASEDD is running on a local tty, pressing ESC or Q on an attached keyboard requests a clean daemon shutdown. This allows quick recovery when you need the base OS login prompt immediately.

Optional environment controls:

  • CASEDD_EMERGENCY_EXIT_KEYS=0 disables this watcher.
  • CASEDD_EMERGENCY_INPUT_GLOB=/dev/input/event* overrides the input-event device glob (useful for integration tests).
  • The daemon user must have read access to /dev/input/event* (typically by being in Linux group input); otherwise ESC/Q key-exit cannot trigger.

When CASEDD exits and releases the framebuffer, the login prompt should be restored on the display. You can either type your username and press Enter to reach the password prompt, or switch virtual terminals with Ctrl+Alt+F2 (or Ctrl+Alt+F3..F6) to get a login prompt immediately. This avoids running the daemon as root while still allowing easy local recovery.
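The key watcher described above reads raw Linux input events. As a rough illustration (not CASEDD's actual implementation), a watcher can decode each fixed-size input_event record and check for a key-down of ESC or Q; the struct layout and key codes below come from the Linux input headers:

```python
import struct

# Linux input_event (64-bit): struct timeval (two longs), __u16 type,
# __u16 code, __s32 value.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_KEY = 0x01                    # key/button event type
KEY_ESC, KEY_Q = 1, 16           # from linux/input-event-codes.h
EXIT_CODES = {KEY_ESC, KEY_Q}


def is_exit_press(raw: bytes) -> bool:
    """True when a raw input_event record is a key-down of ESC or Q."""
    _sec, _usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, raw)
    return ev_type == EV_KEY and code in EXIT_CODES and value == 1  # 1 = press
```

A real watcher would open each device matched by CASEDD_EMERGENCY_INPUT_GLOB, read EVENT_SIZE bytes at a time, and request shutdown when is_exit_press returns True.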

Advanced React app (Vite)

./dev.sh start already launches the advanced app in Vite development mode for hot-reload editing. You can still run it manually when needed:

cd web
npm install
npm run dev

The advanced app is built with React + Vite + Bootstrap + FontAwesome. It targets the CASEDD API for template overrides, test mode, and simulation.

Linting (must be clean before any commit)

source .venv/bin/activate
ruff check .
mypy --strict casedd/

CLI (casedd-ctl)

casedd-ctl is installed as a console script via pyproject.toml, and the repo also ships a local wrapper at ./casedd-ctl so the command works directly from the checkout without an install step.

./casedd-ctl status
./casedd-ctl health
./casedd-ctl templates list
./casedd-ctl templates set htop
./casedd-ctl metrics
./casedd-ctl data --prefix cpu
./casedd-ctl snapshot --output /tmp/casedd.jpg
./casedd-ctl reload --pid-file run/casedd-dev.pid
./casedd-ctl help
./casedd-ctl help templates
./casedd-ctl help templates set
./casedd-ctl --json health

Use --url to target a non-default daemon address.

./casedd-ctl --url http://localhost:18080 health

Architecture

┌─────────────────────────────────────────────────────────┐
│                        daemon.py                        │
│  async event loop — orchestrates all subsystems         │
└────┬──────────┬──────────┬───────────────┬──────────────┘
     │          │          │               │
 getters/   template/   renderer/       outputs/
 (pollers)  (layout)    (PIL image)     ├─ framebuffer.py
                                        ├─ websocket.py
                                        └─ http_viewer.py
                                    ingestion/
                                    ├─ unix_socket.py
                                    └─ rest.py

Data flow

  1. Getters poll system APIs at their own interval and push values into the data store (in-RAM key/value, dotted keys: cpu.temperature, memory.percent, etc.)
  2. External processes can push values the same way via Unix socket (/run/casedd/casedd.sock) or REST POST (POST /update)
  3. Every render cycle, the renderer reads the active template (a .casedd YAML file) and the data store, and produces a PIL.Image
  4. The image is distributed to all output backends — framebuffer, WebSocket, and any additional backends configured under outputs: in casedd.yaml
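The data store in steps 1-2 can be sketched as a flat dict keyed by dotted names with prefix queries (this mirrors the behavior described above and `casedd-ctl data --prefix`, not CASEDD's actual class):

```python
class DataStore:
    """Minimal in-RAM key/value store with dotted keys and prefix queries."""

    def __init__(self) -> None:
        self._data: dict[str, object] = {}

    def update(self, values: dict) -> None:
        # Getters and external pushes both land here.
        self._data.update(values)

    def get(self, key: str, default=None):
        return self._data.get(key, default)

    def by_prefix(self, prefix: str) -> dict:
        # Namespace query, e.g. everything under "cpu."
        want = prefix.rstrip(".") + "."
        return {k: v for k, v in self._data.items() if k.startswith(want)}


store = DataStore()
store.update({"cpu.percent": 41.5, "cpu.temperature": 62.0, "memory.percent": 37.2})
```

A renderer would then resolve each widget's source (e.g. cpu.percent) with store.get on every cycle.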

Multi-output configuration

Additional output backends can be declared in casedd.yaml under the outputs: key. Each backend runs independently with its own port, rotation, or resolution. All backends share the same data-collection pipeline — getters are never polled more than once per cycle regardless of how many outputs are active.

# casedd.yaml — multiple output backends
outputs:
  usb_panel:
    type: framebuffer
    device: /dev/fb1
    enabled: true

  web_view:
    type: websocket
    port: 8765
    enabled: true

  # Low-res secondary panel on a second USB device
  small_panel:
    type: framebuffer
    device: /dev/fb2
    width: 480
    height: 320
    rotation: 90
    enabled: true

The width and height fields control the resolution passed to each backend. When set, the rendered frame is scaled to those dimensions before dispatch (no separate render pass — one frame, resized per sink). See docs/output-architecture.md for full architecture diagrams.
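The "one frame, resized per sink" rule can be sketched as a dispatch loop: render exactly once per cycle, then hand the frame to each enabled backend with its own target size. This is an illustration of the contract described above, not CASEDD's actual output code (a real backend would resize a PIL.Image; a string stands in for the frame here):

```python
render_calls = 0


def render() -> str:
    """Stand-in for the real renderer producing a PIL.Image."""
    global render_calls
    render_calls += 1
    return f"frame-{render_calls}"


outputs = [
    {"name": "usb_panel", "enabled": True},
    {"name": "small_panel", "enabled": True, "width": 480, "height": 320},
    {"name": "off_panel", "enabled": False},
]


def dispatch_cycle() -> list:
    frame = render()                  # one render pass per cycle
    sent = []
    for sink in outputs:
        if not sink["enabled"]:
            continue
        size = (sink["width"], sink["height"]) if "width" in sink else None
        # A real backend would do frame.resize(size) before writing.
        sent.append((sink["name"], size))
    return sent


sent = dispatch_cycle()
```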


Template format (.casedd)

Templates are YAML files in templates/. See docs/template_format.md for the full specification. Getter key reference lives at docs/getters.md.

Quick example:

name: simple_stats
aspect_ratio: "5:3"
layout_mode: fit
background: "#1a1a2e"
refresh_rate: 2.0

grid:
  template_areas: |
    "cpu  gpu  ram"
    "disk disk net"
  columns: "1fr 1fr 1fr"
  rows: "1fr 1fr"

widgets:
  cpu:
    type: gauge
    source: cpu.percent
    label: "CPU"
  gpu:
    type: gauge
    source: nvidia.percent
    label: "GPU"
  ram:
    type: bar
    source: memory.percent
    label: "RAM"
  disk:     # spans 2 columns because "disk disk" in template_areas
    type: bar
    source: disk.percent
    label: "Disk"
  net:
    type: panel
    direction: column
    children:
      - type: sparkline
        source: net.bytes_recv_rate
        label: ""
      - type: sparkline
        source: net.bytes_sent_rate
        label: ""
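The template_areas grammar follows CSS Grid: repeating a name across adjacent cells makes that widget span them, which is why "disk disk" above gives the disk widget two columns. A minimal parser sketch (not CASEDD's actual layout engine) that recovers each area's position and span:

```python
def parse_template_areas(areas: str) -> dict:
    """Map each named area to (row, col, row_span, col_span)."""
    rows = [line.strip().strip('"').split()
            for line in areas.strip().splitlines()]
    boxes: dict[str, list[int]] = {}
    for r, row in enumerate(rows):
        for c, name in enumerate(row):
            if name not in boxes:
                boxes[name] = [r, c, r, c]      # min_row, min_col, max_row, max_col
            else:
                boxes[name][2] = max(boxes[name][2], r)
                boxes[name][3] = max(boxes[name][3], c)
    return {n: (r0, c0, r1 - r0 + 1, c1 - c0 + 1)
            for n, (r0, c0, r1, c1) in boxes.items()}


areas = '''
"cpu  gpu  ram"
"disk disk net"
'''
grid = parse_template_areas(areas)
```

With the example above, disk resolves to row 1, column 0 with a column span of 2.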

Data push via Unix socket

echo '{"update": {"outside_temp_f": 72.0}}' | nc -U /run/casedd/casedd.sock

Or via REST:

curl -X POST http://localhost:8080/api/update \
  -H "Content-Type: application/json" \
  -d '{"update": {"outside_temp_f": 72.0}}'

Update API auth and rate limiting

The write endpoints POST /api/update and POST /update can be protected with either an API key, HTTP Basic Auth, or both.

Environment variables:

CASEDD_API_KEY=
CASEDD_API_BASIC_USER=
CASEDD_API_BASIC_PASSWORD=
CASEDD_API_RATE_LIMIT=0

Examples:

curl -X POST http://localhost:8080/api/update \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-shared-secret" \
  -d '{"update": {"outside_temp_f": 72.0}}'

curl -X POST http://localhost:8080/api/update \
  -H "Content-Type: application/json" \
  -u devuser:devpass \
  -d '{"update": {"outside_temp_f": 72.0}}'

When CASEDD_API_RATE_LIMIT is greater than 0, writes over the configured per-minute quota for a source IP return 429 Too Many Requests.

Health and metrics endpoints

GET /api/health returns daemon uptime, render count, active panel templates, and per-getter health. Getters that are not needed by the currently active template policy report inactive instead of starting.

GET /api/metrics exposes Prometheus-style metrics including uptime, render count, getter error totals, getter up/down state, and store key count.

The lightweight viewer intentionally stays minimal (live state + panel picker). Use the advanced app for data push/testing/simulation workflows.
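Prometheus's text exposition format is just `name value` lines. A sketch of how a flat metrics dict can be rendered that way (metric names here are illustrative, not CASEDD's actual /api/metrics names):

```python
def to_prometheus(metrics: dict[str, float], prefix: str = "casedd_") -> str:
    """Render a flat metrics dict in Prometheus text exposition format."""
    lines = [f"{prefix}{name} {float(value)}"
             for name, value in sorted(metrics.items())]
    return "\n".join(lines) + "\n"


text = to_prometheus({"uptime_seconds": 120, "render_count": 240})
```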

Push demo template

An example template is provided at templates/push_demo.casedd. It visualizes externally pushed values like outside_temp_f and custom.note.

To try it:

# 1) Set the active template
export CASEDD_TEMPLATE=push_demo
./dev.sh restart

# 2) Push demo values
./scripts/push_demo.sh 72.0 "Patio sensor online"

Fan telemetry template

An example fan dashboard is provided at templates/fans.casedd.

To try it:

export CASEDD_TEMPLATE=fans
./dev.sh restart

This template visualizes:

  • CPU fan count / avg / max
  • system fan count / avg / max
  • GPU fan count / avg / max (percent when sourced from nvidia-smi)

Htop-style process template

A single-widget process table template is provided at templates/htop.casedd.

export CASEDD_TEMPLATE=htop
./dev.sh restart

The htop widget shows top processes sorted by CPU utilization.

Plex media dashboard template

A Plex dashboard template is provided at templates/plex_dashboard.casedd.

export CASEDD_TEMPLATE=plex_dashboard
export CASEDD_PLEX_BASE_URL=http://localhost:32400
export CASEDD_PLEX_TOKEN=your-plex-token
./dev.sh restart

Plex config variables:

CASEDD_PLEX_BASE_URL=http://localhost:32400
CASEDD_PLEX_TOKEN=
CASEDD_PLEX_CLIENT_IDENTIFIER=casedd
CASEDD_PLEX_PRODUCT=CASEDD
CASEDD_PLEX_INTERVAL=5
CASEDD_PLEX_TIMEOUT=4
CASEDD_PLEX_VERIFY_TLS=1
CASEDD_PLEX_MAX_SESSIONS=6
CASEDD_PLEX_MAX_RECENT=6
CASEDD_PLEX_PRIVACY_FILTER_REGEX=
CASEDD_PLEX_PRIVACY_FILTER_LIBRARIES=
CASEDD_PLEX_PRIVACY_REDACTION_TEXT=[hidden]

Privacy options:

  • Set CASEDD_PLEX_PRIVACY_FILTER_REGEX to hide matching names/titles/libraries.
  • Set CASEDD_PLEX_PRIVACY_FILTER_LIBRARIES for explicit library-name redaction.
  • Set CASEDD_PLEX_PRIVACY_REDACTION_TEXT to control replacement text.
  • For widget-level hiding, plex_now_playing and plex_recently_added also support filter_regex.


Weather templates (NWS + external provider example)

Weather templates are provided for both provider modes:

NWS mode (official US APIs):

export CASEDD_TEMPLATE=weather_nws
export CASEDD_WEATHER_PROVIDER=nws
export CASEDD_WEATHER_ZIPCODE=20852
./dev.sh restart

External provider example (Open-Meteo):

export CASEDD_TEMPLATE=weather_external
export CASEDD_WEATHER_PROVIDER=open-meteo
export CASEDD_WEATHER_LAT=38.9856
export CASEDD_WEATHER_LON=-77.0947
./dev.sh restart

Both providers emit the same weather.* keys so the same widgets/templates can be reused without NWS-specific rendering logic.
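Emitting the same weather.* keys from both providers means each getter normalizes its provider-specific response before writing to the data store. A sketch of that pattern, with hypothetical field names (the real NWS and Open-Meteo responses are richer):

```python
def normalize_weather(provider: str, raw: dict) -> dict:
    """Map provider-specific fields onto shared weather.* keys."""
    if provider == "nws":
        temp_f = raw["temperature"]                      # NWS: Fahrenheit
    elif provider == "open-meteo":
        temp_f = raw["temperature_2m_c"] * 9 / 5 + 32    # Open-Meteo: Celsius
    else:
        raise ValueError(f"unknown provider: {provider}")
    return {"weather.temp_f": round(temp_f, 1), "weather.provider": provider}


a = normalize_weather("nws", {"temperature": 72.0})
b = normalize_weather("open-meteo", {"temperature_2m_c": 20.0})
```

Because widgets only ever see weather.temp_f, swapping providers needs no template changes.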

Immediate speedtest push helper

Run an on-demand Ookla speedtest and push the result into CASEDD via REST:

./scripts/speedtest_push.sh

This writes speedtest.* keys including down/up Mb/s, ping, jitter, plan-relative percentages, status, and summary text.


Deployment

systemd

sudo ./deploy/install/install.sh
sudo systemctl status casedd
sudo ./deploy/install/uninstall.sh

Notes:

  • The installer uses the clone you run it from; it does not copy the repo into /opt.
  • It creates or updates the repo-local .venv, installs /etc/casedd/casedd.env if needed, and renders /etc/systemd/system/casedd.service with the current repo path.
  • If you move the repository later, rerun sudo ./deploy/install/install.sh from the new path.
  • Uninstall preserves /etc/casedd/casedd.env, logs, and the repo .venv by default. Use sudo ./deploy/install/uninstall.sh --purge-env --purge-logs --remove-venv for a full cleanup.

Docker Compose

cp .env.example .env  # configure as needed
docker compose up -d
docker compose logs -f

Compose starts two services:

  • casedd backend (HTTP/WS)
  • casedd-web advanced app (Vite dev mode with hot reload)

For host metric visibility, the casedd container bind-mounts host Linux runtime filesystems read-only:

  • /proc -> /host/proc
  • /sys -> /host/sys
  • /run -> /host/run

CASEDD_PROCFS_PATH is set to /host/proc in Compose so psutil-based getters read host process/system views instead of container-local procfs.

Default Docker URLs:

  • lightweight viewer: http://localhost:18080/
  • advanced app redirect entry: http://localhost:18080/app
  • direct advanced app: http://localhost:15173/

API

Interactive API docs are available at http://localhost:8080/docs when the daemon is running.

Key runtime endpoints:

  • GET /api/panels — panel metadata and current/forced template state
  • GET /image?panel=<name> — latest PNG for a specific panel
  • POST /api/template/override — force/clear per-panel template override
  • GET/POST /api/test-mode — global getter-disable test mode
  • POST /api/sim/replay — replay deterministic records
  • POST /api/sim/random — start bounded random simulation
  • POST /api/sim/stop / GET /api/sim/status
  • GET /api/debug/render-state — in-memory sparkline/histogram buffers

To generate a static docs/api.json:

./dev.sh docs

Timezone

The clock widget renders the host machine's local time. If the host timezone is wrong, set it at the OS level (example for US Eastern):

sudo timedatectl set-timezone America/New_York
timedatectl | grep "Time zone"

Speedtest, Ollama, and UPS configuration

Speedtest polling and threshold behavior, plus the Ollama API and UPS telemetry getters, can be tuned via environment variables:

CASEDD_SPEEDTEST_INTERVAL=1800
CASEDD_SPEEDTEST_STARTUP_DELAY=0
CASEDD_SPEEDTEST_BINARY=speedtest
CASEDD_SPEEDTEST_ADVERTISED_DOWN_MBPS=2000
CASEDD_SPEEDTEST_ADVERTISED_UP_MBPS=200
CASEDD_SPEEDTEST_MARGINAL_RATIO=0.9
CASEDD_SPEEDTEST_CRITICAL_RATIO=0.7
CASEDD_OLLAMA_API_BASE=http://localhost:11434
CASEDD_OLLAMA_INTERVAL=10
CASEDD_OLLAMA_TIMEOUT=3
CASEDD_UPS_COMMAND=
CASEDD_UPS_INTERVAL=5
CASEDD_UPS_UPSC_TARGET=ups@localhost

Notes:

  • CASEDD_SPEEDTEST_STARTUP_DELAY delays the first speedtest after startup. Set it to 60-300 seconds in production to avoid startup-time network/CPU spikes.

Framebuffer performance and debug flags

CASEDD_FB_DEVICE=/dev/fb0
CASEDD_FB_ROTATION=0
CASEDD_FB_CLAIM_ON_NO_INPUT=1
CASEDD_DEBUG_FRAME_LOGS=0
CASEDD_LOG_LEVEL=INFO
CASEDD_STARTUP_FRAME_SECONDS=5

Notes:

  • Keep CASEDD_DEBUG_FRAME_LOGS=0 for production; enable only while debugging.
  • CASEDD_FB_CLAIM_ON_NO_INPUT=1 enables inputless display takeover behavior.
  • CASEDD_FB_ROTATION supports 0, 90, 180, 270.
  • CASEDD_STARTUP_FRAME_SECONDS keeps a startup status frame on screen while getters warm up before live data rendering begins.

Dev vs production

  • The production systemd service in deploy/casedd.service forces:
    • CASEDD_LOG_LEVEL=NONE
    • CASEDD_DEBUG_FRAME_LOGS=0
    • framebuffer output enabled
  • deploy/casedd.service is a template; deploy/install/install.sh renders it with the current clone path and selected service user.
  • ./dev.sh forces a development profile:
    • CASEDD_DEV_LOG_LEVEL=DEBUG by default
    • CASEDD_DEV_DEBUG_FRAME_LOGS=1 by default
    • CASEDD_DEV_NO_FB=1 by default so iteration happens in the web UI, not on the real framebuffer
    • if casedd.service is already active, dev mode automatically isolates itself by:
      • forcing CASEDD_NO_FB=1
      • switching to CASEDD_DEV_HTTP_PORT (default 18080)
      • switching to CASEDD_DEV_WS_PORT (default 18765)

You can override dev behavior with:

CASEDD_DEV_LOG_LEVEL=INFO
CASEDD_DEV_DEBUG_FRAME_LOGS=0
CASEDD_DEV_NO_FB=0
CASEDD_DEV_HTTP_PORT=18080
CASEDD_DEV_WS_PORT=18765

Template rotation, schedule, and triggers

Rotation is configured in casedd.yaml and used as the single source of truth by both daemon startup and the advanced web UI.

template: system_stats
template_rotation_enabled: true
template_rotation:
  - template: apod
    seconds: 10
  - template: nzbget_queue
    seconds: 15
template_rotation_interval: 30

In this example, template_rotation_interval is the default hold time and entries with seconds override it per template.

template is not automatically included in rotation; add it explicitly to template_rotation when you want it in the cycle. Set template_rotation_enabled: false to disable rotation and pin to template.
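The hold-time rule above can be sketched as a small expansion step: each rotation entry gets its own seconds if present, otherwise the global template_rotation_interval (this mirrors the described behavior, not CASEDD's actual policy code):

```python
def rotation_plan(config: dict) -> list[tuple[str, int]]:
    """Expand template_rotation into (template, hold_seconds) pairs."""
    default = config.get("template_rotation_interval", 30)
    return [(entry["template"], entry.get("seconds", default))
            for entry in config.get("template_rotation", [])]


plan = rotation_plan({
    "template_rotation_interval": 30,
    "template_rotation": [
        {"template": "apod", "seconds": 10},
        {"template": "nzbget_queue", "seconds": 15},
        {"template": "system_stats"},     # no seconds -> 30s default
    ],
})
```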

Schedules and triggers are also configured in casedd.yaml. See casedd.yaml.example for a complete sample:

template_schedule:
  - template: slideshow
    start: "23:00"
    end: "06:00"
    days: [0, 1, 2, 3, 4, 5, 6]

template_triggers:
  - source: cpu.percent
    operator: gte
    value: 90
    template: system_stats
    duration: 10
    hold_for: 20
    clear_operator: lte
    clear_value: 70
    cooldown: 30
    priority: 10

Selection priority is:

  1. Trigger rules
  2. Schedule rules
  3. Rotation list
  4. Base template (template in casedd.yaml)


Directory structure

casedd/          Python package (daemon source code)
templates/       .casedd layout/widget definition files
assets/          Static assets (images, fonts)
  slideshow/     Images cycled by the slideshow widget
deploy/          systemd unit + install script
docs/            API JSON + template format spec
scripts/         Local dev scripts (not CI)
run/             PID files (dev, git-ignored)
logs/            Log files (dev, git-ignored)

Contributing

  1. Pick an issue (or create one)
  2. Create a branch: git checkout -b issue/<number>-<slug>
  3. Write code — ruff check . and mypy --strict casedd/ must pass
  4. Open a PR to main

All commit messages follow <type>(<scope>): <summary> (e.g. feat(renderer): add gauge widget).
