A comprehensive Brain-Computer Interface (BCI) platform for real-time EEG streaming, motor imagery classification and ocular feature extraction (pupillometry & rPPG).
The repository is modularly structured into the following main components:
- `backend/` – A FastAPI streaming server and processing engine for BCI and webcam data.
- `frontend/` – A React + Vite based real-time visualization dashboard.
- `docs/` – Supplementary project documentation and algorithmic explanations.
For more in-depth explanations of specific modules within the codebase, refer to our detailed markdown documentation:
- EEG Math & Processing: Details on channel statistics, constraints, and signal filtering.
- Motor Imagery: Explanation of the EEG classification pipeline and Motor Imagery foundations.
- rPPG Implementation: Details on remote photoplethysmography algorithms used for heartbeat and SPO2 detection.
- Ocular Features: Details on pupillometry, blink detection, and eye segmentation using meye.
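To give a concrete feel for the kind of signal filtering the EEG Math & Processing doc covers, here is a minimal sketch (not the repository's exact implementation) that isolates the mu band (8–13 Hz) with a zero-phase Butterworth band-pass:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass (e.g. 8-13 Hz for the mu rhythm)."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, data)  # forward-backward pass: no phase distortion

fs = 250  # OpenBCI Cyton sample rate, Hz
t = np.arange(2 * fs) / fs
# A 10 Hz rhythm buried under 50 Hz line noise and a slow drift
raw = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 50 * t) + 0.5 * t
mu = bandpass(raw, 8, 13, fs)
```

The zero-phase (`filtfilt`) variant is the usual choice for offline analysis; causal filtering is needed for true real-time streams.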
See Thonk in action! Below are video demonstrations of the platform's key features:
Real-time EEG data visualization with channel mapping and 3D brain rendering.
LaBraM foundation model generating real-time EEG embeddings for downstream tasks.
Simultaneous rPPG (heart rate & SpO2) and pupillometry extraction from webcam input.
Real-time motor imagery classification and control interface with live predictions.
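The core idea behind the rPPG demo can be sketched in a few lines (a toy illustration on synthetic data, not the rPPG-Toolbox pipeline the project actually uses): heart rate is the dominant frequency of the mean green-channel intensity trace within a plausible pulse band:

```python
import numpy as np

def estimate_bpm(trace, fps):
    """Estimate pulse rate from a green-channel intensity trace via an FFT peak."""
    sig = trace - np.mean(trace)
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)  # restrict to 42-240 BPM
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 30 fps trace: a 1.2 Hz "pulse" (72 BPM) plus camera noise
rng = np.random.default_rng(0)
fps, secs = 30, 10
t = np.arange(fps * secs) / fps
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(estimate_bpm(trace, fps))  # 72.0
```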
Git Submodules
This project uses Git submodules to include external dependencies. When you clone the repository for the first time, you need to initialize and update the submodules to fetch the external code:
```
git submodule update --init --recursive
```

This ensures all referenced external libraries are properly downloaded and available in your local workspace.
OpenBCI Cyton Board
This project uses the OpenBCI Cyton Board, an 8-channel EEG acquisition device. The board streams brainwave data to the backend server via a USB dongle.
Prepare the OpenBCI Cyton Board (set it to PC mode; a blue light should be on) and the electrodes, then connect the USB dongle to your laptop (the dongle should also show a blue light).
The project requires Python 3.12 for the backend, managed via uv.
Step A: .env file
Create the environment file:
```
cp backend/.env.example backend/.env
```

Step B: Starting the server
```
cd backend
uv sync
source .venv/bin/activate
python app.py
```

You need bun or npm installed for frontend package management.
Step A: .env file
Create the environment file:
```
cp frontend/.env.example frontend/.env
```

Step B: Run frontend
```
cd frontend
bun install  # or npm install
bun dev      # or npm run dev
```

Navigate to http://localhost:5173.
Verify the following functionality to ensure the setup is successful:
- EEG Streaming
- EEG data is being streamed on the time series chart.
- EEG channels can be mapped to electrodes on the headplot.
- The 3D brain changes colors as data is streamed and a channel is mapped to an electrode.
- BFM
- The LaBraM model generates embeddings (it needs time to collect an initial buffer, so allow it a while).
- Webcam
- Displays rPPG data in real-time.
- Displays pupillometry data (pupil diameters) in real-time.
- Motor Imagery
- As EEG data streams in, the dot on the control interface and its corresponding data should update in real time.
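For intuition on what the motor imagery pipeline is doing under the hood, here is a hedged sketch of the standard recipe (band-power features plus a linear classifier, trained on synthetic two-channel epochs; see the Motor Imagery doc for the pipeline the project actually implements):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def log_bandpower(epochs):
    """Log-variance per channel as a crude band-power feature.
    epochs: (trials, channels, samples), assumed already band-pass filtered."""
    return np.log(np.var(epochs, axis=-1))

def make_epochs(n, boost_ch):
    """Synthetic 2-channel epochs; 'imagery' boosts power on one channel."""
    e = rng.normal(size=(n, 2, 250))
    e[:, boost_ch, :] *= 3.0
    return e

# Class 0 ("left") boosts channel 0, class 1 ("right") boosts channel 1
X = np.vstack([log_bandpower(make_epochs(40, 0)), log_bandpower(make_epochs(40, 1))])
y = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
print(clf.score(X[1::2], y[1::2]))  # 1.0 on this easily separable toy data
```

Real pipelines typically add spatial filtering (e.g. CSP) before the band-power step; this sketch only shows the feature-plus-classifier skeleton.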
Distributed under the MIT License. See LICENSE for more information.
This project utilizes several open-source tools and resources:
- NeuralFlight for motor imagery foundations.
- OpenBCI for the core EEG hardware and streaming interfaces.
- rPPG-Toolbox for remote photoplethysmography algorithms.
- meye for efficient eye segmentation and pupillometry.
- LaBraM for EEG foundation model embeddings.
- Various research papers in the fields of BCI, computer vision, and cognitive state estimation.