Circadian Predictive Coding is a research-first repository focused on biologically inspired learning where models adapt their own structure over wake and sleep cycles.
This project is built around one central idea: Circadian Predictive Coding should be compared rigorously against
- traditional backpropagation
- traditional predictive coding
The circadian model is the primary focus and innovation surface of this project. The backprop and predictive-coding baselines stay in the repo as stable references to ensure fair comparison and reproducible evaluation.
```mermaid
flowchart LR
    A[Wake Training] --> B[Chemical Accumulation]
    B --> C[Plasticity Gating]
    C --> D{Sleep Trigger}
    D -- No --> A
    D -- Yes --> E[Sleep Consolidation]
    E --> F[Split / Prune Adaptation]
    F --> G[Replay + Homeostasis]
    G --> H[Optional Rollback Guard]
    H --> A
```
Interactive version (Plotly, with internals replay):
The circadian algorithm models wake and sleep phases:
- Wake: train with predictive-coding updates while each neuron accumulates a chemical usage signal.
- Sleep: consolidate with architecture updates (split high-usage neurons, prune low-usage neurons), optional rollback, and homeostatic controls.
This lets model capacity adapt over time instead of staying fixed.
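The wake/sleep cycle above can be sketched in a few lines of NumPy. This is a deliberately simplified illustration, not the repository's implementation: the real model pairs this loop with predictive-coding weight updates, function-preserving splits, replay, and rollback, and every name and threshold below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy state: one weight matrix plus a per-neuron
# "chemical" usage trace (all values illustrative).
n_in, hidden = 8, 24
W = rng.normal(0.0, 0.1, size=(hidden, n_in))
usage = np.zeros(hidden)

SLEEP_EVERY = 100            # fixed trigger; the repo also supports adaptive triggers
SPLIT_T, PRUNE_T = 0.2, 0.05  # illustrative split/prune thresholds

for step in range(1, 501):
    # Wake: forward pass; each neuron accumulates a usage signal.
    x = rng.normal(size=n_in)
    a = np.maximum(W @ x, 0.0)
    usage += np.abs(a)

    if step % SLEEP_EVERY == 0:
        # Sleep: prune low-usage neurons, duplicate high-usage ones.
        mean_use = usage / SLEEP_EVERY
        keep = mean_use > PRUNE_T
        split = mean_use > SPLIT_T           # split implies keep here
        W = np.vstack([W[keep], W[split]])   # duplicated rows = split neurons
        usage = np.zeros(W.shape[0])         # homeostatic reset after consolidation
```

A real split would also rescale outgoing weights so the network's function is preserved (see below); here the duplication is only meant to show capacity changing over sleep cycles.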
- NumPy circadian predictive coding baseline for small-scale experiments
- Torch ResNet-50 benchmark pipeline for speed and accuracy comparisons
- Adaptive sleep triggers, adaptive split/prune thresholds, dual-timescale chemical dynamics
- Reward-modulated wake learning and adaptive sleep budget scaling (NumPy + ResNet circadian head)
- Function-preserving split behavior and guarded sleep rollback
- Multi-seed benchmark runner with JSON/CSV output
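The function-preserving split behavior listed above can be illustrated on a tiny two-layer network: duplicating a hidden unit leaves the network's output unchanged provided the outgoing weights of the original and the copy are each halved. The names below are illustrative, not the repository's API:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # input -> hidden
W2 = rng.normal(size=(2, 4))   # hidden -> output

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def split_neuron(W1, W2, i):
    """Duplicate hidden unit i, halving its outgoing weights."""
    W1_new = np.vstack([W1, W1[i:i + 1]])      # copy incoming weights
    W2_new = np.hstack([W2, W2[:, i:i + 1]])   # copy outgoing column
    W2_new[:, i] *= 0.5                        # halve the original...
    W2_new[:, -1] *= 0.5                       # ...and the copy
    return W1_new, W2_new

x = rng.normal(size=3)
y_before = forward(W1, W2, x)
W1s, W2s = split_neuron(W1, W2, 2)
y_after = forward(W1s, W2s, x)
assert np.allclose(y_before, y_after)  # output unchanged by the split
```

Because the split is exact, sleep-time capacity growth never degrades the current function; subsequent wake training can then differentiate the two copies.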
Interactive dashboard:
Interactive Plotly chart files:
- Overview (interactive, compact)
- Accuracy (interactive)
- Training speed (interactive)
- Inference latency P95 (interactive)
- Interactive chart source files
Note: GitHub README pages do not execute custom JavaScript, so Plotly interactivity will not run inline inside README itself.
| Model | Accuracy Mean | Train SPS Mean | Inference P95 (ms) |
|---|---|---|---|
| BackpropResNet50 | 0.6901 | 1775.3 | 17.34 |
| PredictiveCodingResNet50 | 0.6810 | 1732.1 | 17.74 |
| CircadianPredictiveCodingResNet50 | 0.6715 | 1643.6 | 18.71 |
| Model | Accuracy | Train SPS | Inference SPS | Notes |
|---|---|---|---|---|
| BackpropResNet50 | 0.706 | 1350.7 | 4672.4 | fixed head |
| PredictiveCodingResNet50 | 0.723 | 2093.4 | 4839.0 | fixed head |
| CircadianPredictiveCodingResNet50 | 0.734 | 2059.9 | 4831.4 | hidden 384->394, splits=12, prunes=2, rollbacks=7 |
Command:
```
python resnet50_benchmark.py --dataset-name cifar100 --classes 100 --dataset-train-subset-size 20000 --dataset-test-subset-size 5000 --epochs 12 --device cuda --target-accuracy -1 --backprop-freeze-backbone --backbone-weights imagenet
```

| Model | Accuracy | Cross-Entropy | Train SPS | Inference P95 (ms) | Notes |
|---|---|---|---|---|---|
| BackpropResNet50 | 0.678 | 1.7144 | 981.3 | 23.03 | fixed head |
| PredictiveCodingResNet50 | 0.692 | 1.1175 | 965.2 | 20.77 | fixed head |
| CircadianPredictiveCodingResNet50 | 0.685 | 1.1082 | 874.2 | 23.27 | hidden 384->384, splits=0, prunes=0, rollbacks=0 |
Raw benchmark output: docs/benchmarks/benchmark_master_cifar100_subset_2026-02-28.txt
Strengths:
- Competitive retention/adaptation behavior under hard continual shift.
- Strong balance in the moderate strength-case stress test (circadian balanced score 0.949 vs predictive coding 0.947 vs backprop 0.946).
- Sources:
- Dynamic capacity adaptation is observable and measurable (updated hardest-case: mean splits 48.57, hidden size 24 -> 72.57).
- Competitive behavior in moderate continual-shift stress tests with stable multi-seed performance.
Weaknesses:
- Not best on every benchmark; on the latest CIFAR-100 subset master check, predictive coding accuracy (0.692) was higher than circadian (0.685).
- In the updated ultra-hard hardest-case setting, the margin between circadian and predictive coding is small (0.812 vs 0.808) with high variance, so the ranking can flip across seeds/configurations.
- Extra algorithmic machinery (sleep scheduling, replay, split/prune controls) adds tuning burden and implementation complexity compared with fixed-width baselines.
- Speed overhead can appear depending on configuration; in the latest CIFAR-100 subset master check, circadian train speed (874.2 SPS) was lower than predictive coding (965.2 SPS).
- Results are regime-dependent; claims should be tied to specific benchmark settings and seeds instead of treated as universal.
```text
src/
  core/      # Learning rules and model definitions
  app/       # Experiment and benchmark orchestration
  adapters/  # CLI entrypoints
  infra/     # Dataset and dataloader construction
  config/    # Environment-backed defaults
  shared/    # Small cross-cutting runtime helpers
tests/       # Unit/integration tests
docs/
  adr/       # Architecture decision records
  modules/   # Module responsibility docs
  figures/   # Generated and static figures for documentation
scripts/     # Reproducible benchmark scripts
```
```
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
```

Optional torch benchmark dependencies:

```
pip install -r requirements-resnet.txt
```

For NVIDIA GPUs (example CUDA wheels):

```
python -m pip install --upgrade --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu128
```

Toy baseline:

```
python predictive_coding_experiment.py
```

Toy baseline with review-driven circadian controls:

```
python predictive_coding_experiment.py --adaptive-sleep-trigger --adaptive-sleep-budget --reward-modulated-learning --reward-scale-min 0.8 --reward-scale-max 1.4
```

Continual shift stress test (retention vs adaptation):

```
python scripts/run_continual_shift_benchmark.py --profile strength-case --seeds 3,7,11,19,23,31,37
```

Hardest continual-shift stress test (expanded hidden capacity + very heavy drift):

```
python scripts/run_continual_shift_benchmark.py --profile hardest-case --seeds 3,7,11,19,23,31,37
```

ResNet benchmark (all 3 models):

```
python resnet50_benchmark.py --dataset-name cifar100 --classes 100 --dataset-train-subset-size 20000 --dataset-test-subset-size 5000 --epochs 12 --device cuda
```

Multi-seed benchmark export:

```
python scripts/run_multiseed_resnet_benchmark.py --dataset-name cifar100 --seeds 7,13,29 --dataset-train-subset-size 20000 --dataset-test-subset-size 5000 --epochs 12 --device cuda --output-prefix benchmark_multiseed_cifar100
```

Regenerate README charts:

```
python scripts/generate_readme_figures.py --summary-csv benchmark_multiseed_cifar100_summary.csv --output-dir docs/figures
```

Deploy dashboard via GitHub Pages:

- Workflow: .github/workflows/pages.yml
- Hosted entrypoint: docs/index.html

```
ruff check .
mypy src tests scripts
pytest -q
```

- License: MIT
- Contributing: CONTRIBUTING.md
- Architecture: ARCHITECTURE.md
- Changelog: CHANGELOG.md
- Security policy: SECURITY.md
- Code of conduct: CODE_OF_CONDUCT.md
- Governance: GOVERNANCE.md
- Support process: SUPPORT.md
- Model Card: docs/model-card.md
- Review Notes: docs/circadian-model-review-notes.md
If this repository contributes to your work, cite it using CITATION.cff.


