Serve the home! Inference stack for your Nvidia DGX Spark, aka the Grace Blackwell AI supercomputer on your desk. Mostly vLLM-based for now and single-Spark. For the not-so-rich buddies
High-performance interactive system monitor for NVIDIA DGX systems — GPU, CPU, memory, disk, network in a beautiful TUI
Local diagnostic CLI for NVIDIA DGX Spark (GB10). Detects power caps, unified memory pressure, thermal risk, Docker/runtime issues, and validates vLLM/Ollama/llama.cpp/SGLang recipes.
headless remote desktop to your dgx spark in crystal clear 4k
Real-time hardware and LLM inference monitoring — GPU, CPU, memory, and vLLM metrics streamed to a dashboard.
A kubernetes operator for managing nvidia MIG instances.
GPU-accelerated WhisperX on NVIDIA Blackwell (SM_121) - DGX Spark compatible
Browser-based datacenter lab simulator for NCP-AII certification exam prep — 20 command simulators, 32 guided scenarios, and a full learning progression system.
This is a home brewed menu system for Nvidia's DGX Field Diagnostics Suite.
Pedestrian detector using Faster R-CNN, an application of computer vision and object recognition. It detects pedestrians and cyclists on the road, and its performance is also compared against other models such as YOLOv3.
gpu-thrashing: NVIDIA GPU Unified Memory diagnostic tool — architecture-aware, measurement-based, with PCIe/coherent transport detection
ImageTextDataset is a class that implements torch.utils.data.Dataset for loading an image-and-text dataset. It is designed to simplify training machine learning models that take images with text annotations as input.
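A minimal sketch of what such a class might look like. The torch map-style Dataset protocol only requires `__len__` and `__getitem__`, so this illustrative version avoids a torch dependency; all names, fields, and the path/caption return format are assumptions, not the repository's actual implementation.

```python
import os

class ImageTextDataset:
    """Illustrative image-text dataset: pairs image file paths with captions.

    Implements the map-style Dataset protocol (__len__ / __getitem__)
    expected by torch.utils.data.DataLoader.
    """

    def __init__(self, image_dir, annotations):
        # annotations: mapping of image file name -> caption string (assumed format)
        self.image_dir = image_dir
        self.samples = sorted(annotations.items())

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        name, caption = self.samples[idx]
        path = os.path.join(self.image_dir, name)
        # A real implementation would load and transform the image here
        # (e.g. with PIL); returning the path keeps this sketch dependency-free.
        return path, caption
```

In a real training loop this object would be wrapped in a `torch.utils.data.DataLoader`, with the image loaded and tensorized inside `__getitem__`.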
GPU-native agent-swarm orchestration for the NVIDIA AI stack — NeMo, NIM, Triton, DCGM, NGC, NIXL, OpenShell. Spawn GPU-pinned agent teams across DGX/HGX nodes with NVLink-aware scheduling, task DAGs, adaptive scheduling, and full observability.
🚀 Run 120B AI Models on Spark 2026 - vLLM API & Coding Assistant
Infrastructure-as-code for deploying Apache Spark on Nvidia DGX systems with GPU acceleration
A lightweight GPU architectural proposal — 4 reserve cores per Warp. Revolutionary efficiency for Data Center, HPC, DGX. By Mohamed Rafik Chabira.