# QuantCoder

Local-first CLI for generating QuantConnect trading algorithms from research papers, powered by Ollama.

QuantCoder transforms academic quant research into compilable QuantConnect LEAN algorithms using local LLMs. No cloud API keys required.
## Models

- `qwen2.5-coder:14b` — code generation, refinement, error fixing
- `mistral` — reasoning, summarization, chat

## Requirements

- Python 3.10+
- Ollama running locally
## Installation

```bash
# Pull the required models
ollama pull qwen2.5-coder:14b
ollama pull mistral

# Clone and install
git clone https://github.com/SL-Mar/quantcoder-cli.git
cd quantcoder-cli
python -m venv .venv
source .venv/bin/activate
pip install -e .
python -m spacy download en_core_web_sm

# Check Ollama is running
curl http://localhost:11434/api/tags
```
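The same health check can be scripted. Below is a minimal sketch that queries Ollama's `/api/tags` endpoint (the same one the `curl` command above hits) and reports which required models still need to be pulled; the helper names and the `:latest` tag on `mistral` are illustrative assumptions, not part of QuantCoder:

```python
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"  # default local Ollama endpoint
# Assumed tag names; `ollama pull mistral` registers as "mistral:latest"
REQUIRED_MODELS = ["qwen2.5-coder:14b", "mistral:latest"]

def list_local_models(base_url: str = OLLAMA_BASE_URL) -> list[str]:
    """Ask the Ollama server which models are available locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        payload = json.load(resp)
    # /api/tags responds with {"models": [{"name": ...}, ...]}
    return [m["name"] for m in payload.get("models", [])]

def missing_models(available: list[str], required: list[str]) -> list[str]:
    """Return the required models that still need an `ollama pull`."""
    return [m for m in required if m not in available]
```

Calling `missing_models(list_local_models(), REQUIRED_MODELS)` returns an empty list once both models are pulled.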
## Quick start

```bash
# Launch QuantCoder
quantcoder   # or: qc

# Search for papers
quantcoder search "momentum trading" --num 5

# Download and summarize
quantcoder download 1
quantcoder summarize 1

# Generate QuantConnect algorithm
quantcoder generate 1
quantcoder generate 1 --open-in-editor

# Validate and backtest (requires QC credentials)
quantcoder validate generated_code/algorithm_1.py
quantcoder backtest generated_code/algorithm_1.py --start 2022-01-01 --end 2024-01-01
```

### Natural-language prompts

```bash
quantcoder --prompt "Find articles about mean reversion"
```

### Autonomous mode

```bash
quantcoder auto start --query "momentum trading" --max-iterations 50
quantcoder auto status
```

### Evolution engine

```bash
quantcoder evolve start 1 --gens 3 --variants 5
quantcoder evolve start 1 --gens 3 --push-to-qc   # Push best to QC
quantcoder evolve list
quantcoder evolve export abc123
```

### Backtest metrics

```bash
# Shows Sharpe, Total Return, CAGR, Max Drawdown, Win Rate, Total Trades
quantcoder backtest generated_code/algorithm_1.py --start 2022-01-01 --end 2024-01-01
```

### Strategy library

```bash
quantcoder library build --comprehensive --max-hours 24
quantcoder library status
```

## Configuration

Configuration is stored in `~/.quantcoder/config.toml`:
```toml
[model]
provider = "ollama"
model = "qwen2.5-coder:14b"
code_model = "qwen2.5-coder:14b"
reasoning_model = "mistral"
ollama_base_url = "http://localhost:11434"
ollama_timeout = 600
temperature = 0.5
max_tokens = 3000

[ui]
theme = "monokai"
editor = "zed"
```

For backtesting and deployment, set credentials in `~/.quantcoder/.env`:

```
QUANTCONNECT_API_KEY=your_key
QUANTCONNECT_USER_ID=your_id
```
To use a remote Ollama instance:

```toml
[model]
ollama_base_url = "http://your-server:11434"
```

## Project structure

```
quantcoder/
├── cli.py          # CLI entry point
├── config.py       # Configuration management
├── chat.py         # Interactive chat
├── llm/            # Ollama provider layer
├── core/           # LLM handler, processor, NLP
├── agents/         # Multi-agent system (Coordinator, Alpha, Risk, Universe)
├── evolver/        # AlphaEvolve-inspired evolution engine
├── autonomous/     # Self-improving pipeline
├── library/        # Batch strategy library builder
├── tools/          # Pluggable tool system
└── mcp/            # QuantConnect MCP integration
```
## History

QuantCoder was initiated in November 2023, building on "Dual Agent Chatbots and Expert Systems Design". The initial version coded a blended momentum/mean-reversion strategy from "Outperforming the Market (1000% in 10 years)", which received over 10,000 impressions on LinkedIn.

v2.0.0 is a complete rewrite: local-only inference, multi-agent architecture, an evolution engine, and autonomous learning.

## License

Apache License 2.0. See `LICENSE`.