The repository is designed to evolve Circadian Predictive Coding as the main algorithm while preserving reproducible comparisons with:
- traditional backpropagation
- traditional predictive coding
- `src/core`: Pure model logic, learning dynamics, and typed model configs
  - No CLI parsing, environment loading, or dataset IO
- `src/app`: Use-case orchestration for experiment runs and benchmark workflows
- `src/infra`: Dataset and dataloader construction only
- `src/adapters`: User-facing CLI parsing and text formatting
- `src/config`: Environment variable mapping into typed settings
- `src/shared`: Small runtime helpers shared across modules (for example, optional torch loading)
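As an illustration of the `src/config` role, here is a minimal sketch of mapping environment variables into a typed settings object. The variable names (`CPC_SEED`, `CPC_EPOCHS`) and the `Settings` shape are hypothetical, not the repository's actual config surface:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Typed, immutable view of the environment (illustrative fields only)."""
    seed: int
    epochs: int


def load_settings(env=os.environ) -> Settings:
    # Read raw strings from the environment and coerce them to typed values,
    # falling back to defaults when a variable is unset.
    return Settings(
        seed=int(env.get("CPC_SEED", "0")),
        epochs=int(env.get("CPC_EPOCHS", "10")),
    )
```

Keeping the mapping in one module means the rest of the code only ever sees typed values, never raw strings.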
- `adapters` -> `app` -> `core`
- `config` -> `adapters`
- `infra` -> `app`
- `shared` -> `core` + `app` + `infra`

`core` must not depend on `app`/`infra`/`adapters`.
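The layering rule can be checked mechanically. A hedged sketch of a test that scans `src/core` sources for forbidden top-level imports (the paths assume the layout above; this is not an existing test in the repo):

```python
import ast
import pathlib

FORBIDDEN = {"app", "infra", "adapters"}


def forbidden_imports(source: str) -> set:
    """Return top-level packages imported by `source` that core must not use."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]  # "app.runner" -> "app"
            if root in FORBIDDEN:
                found.add(root)
    return found


def check_core(src_root: str = "src") -> set:
    """Collect layering violations across every Python file under src/core."""
    violations = set()
    for path in pathlib.Path(src_root, "core").rglob("*.py"):
        violations |= forbidden_imports(path.read_text())
    return violations
```

Wiring `check_core` into the test suite turns the architecture rule into a failing test instead of a review comment.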
- `BackpropMLP`: Baseline one-hidden-layer backprop model for toy tasks
- `PredictiveCodingNetwork`: Baseline predictive coding model with iterative hidden-state inference
- `CircadianPredictiveCodingNetwork`: Primary algorithm with:
  - chemical-gated plasticity
  - wake/sleep phases
  - split/prune structural adaptation
  - replay/homeostasis/threshold control knobs
- `resnet50_variants.py`: Head-to-head benchmark implementations of all three model families on a shared ResNet-50 backbone
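To make "iterative hidden-state inference" concrete, here is a minimal NumPy sketch of the settling loop for a single hidden layer. This is the generic textbook formulation (gradient descent on squared prediction errors), not the repository's actual implementation:

```python
import numpy as np


def infer_hidden(x, y, W1, W2, steps=100, lr=0.05):
    """Settle the hidden state h by gradient descent on prediction errors.

    Energy: 0.5*||h - W1 @ x||^2 + 0.5*||y - W2 @ h||^2
    """
    h = W1 @ x  # initialize at the feedforward prediction
    for _ in range(steps):
        e_hidden = h - W1 @ x   # error between h and its top-down prediction
        e_out = y - W2 @ h      # error between the target and h's prediction
        # dE/dh = e_hidden - W2.T @ e_out; step downhill
        h = h - lr * (e_hidden - W2.T @ e_out)
    return h
```

Weight updates then use the settled errors, which is what distinguishes this family from backprop's single forward/backward pass.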
- `infra.datasets` creates deterministic two-cluster data
- `app.experiment_runner` trains all three toy models
- `adapters.cli` exposes baseline and in-depth modes
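A hedged sketch of what deterministic two-cluster data generation might look like. The real `infra.datasets` API may differ; the function name, cluster centers, and defaults here are illustrative:

```python
import numpy as np


def make_two_clusters(n_per_cluster=100, seed=0):
    """Two Gaussian blobs with fixed centers; the same seed yields the same data."""
    rng = np.random.default_rng(seed)
    centers = np.array([[-2.0, 0.0], [2.0, 0.0]])
    X = np.vstack([
        rng.normal(loc=c, scale=0.5, size=(n_per_cluster, 2)) for c in centers
    ])
    y = np.repeat([0, 1], n_per_cluster)
    return X, y
```

Using a locally constructed `Generator` rather than global NumPy state keeps the determinism self-contained, so unrelated code cannot perturb the dataset.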
- `infra.vision_datasets` creates synthetic or torchvision dataloaders
- `app.resnet50_benchmark` runs all three models with aligned evaluation metrics
- `adapters.resnet_benchmark_cli` exposes benchmark configuration
- `scripts/run_multiseed_resnet_benchmark.py` aggregates cross-seed results
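Cross-seed aggregation presumably reduces per-seed metrics to a mean and a spread; a minimal sketch, assuming each run reports a flat `{metric_name: value}` dictionary (the actual result shape in `scripts/run_multiseed_resnet_benchmark.py` may differ):

```python
import statistics


def aggregate_seeds(results):
    """results: list of {metric_name: value} dicts, one per seed.

    Returns {metric_name: (mean, stdev)} across seeds; stdev is 0.0
    when only a single seed was run.
    """
    metrics = results[0].keys()
    return {
        m: (
            statistics.mean(r[m] for r in results),
            statistics.stdev(r[m] for r in results) if len(results) > 1 else 0.0,
        )
        for m in metrics
    }
```

Reporting mean plus spread (rather than a single best seed) is what keeps the "aligned evaluation metrics" claim honest across model families.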
- Circadian-first with mandatory baseline comparisons
- Why: improvements are only meaningful when measured against stable references.
- Separate wake training and sleep consolidation
- Why: mirrors the circadian concept and keeps adaptation logic explicit and testable.
- Configuration-heavy experiment control
- Why: enables reproducible sweeps and ablations without branching code paths.
- Deterministic seed handling
- Why: avoids flaky claims in model comparisons.
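Deterministic seed handling typically means seeding every RNG source in one place. A sketch of such a helper; the torch branch is guarded since torch is an optional dependency (mirroring the lazy torch loading mentioned for `src/shared`), and the function name is illustrative:

```python
import random

import numpy as np


def seed_everything(seed: int) -> None:
    """Seed Python, NumPy, and (if available) torch from a single value."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch  # optional dependency; skip silently when absent
        torch.manual_seed(seed)
    except ImportError:
        pass
```

Calling this once at the top of each experiment run makes reruns with the same config bit-for-bit comparable.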
- New adaptation strategies should be added via policy/config extension points, not by hardcoding branches across modules.
- New datasets must be added in `infra` and wired via `app`, never directly from `core`.
- Major algorithmic changes require an ADR in `docs/adr/`.