Self-hosted Go daemon that polls GitHub repositories and dispatches AI agents to analyze issues automatically. No webhooks, no CI glue — just run and forget.
🚧 Active development — beta. The core pipeline works end-to-end: issues are polled, analyzed by a local LLM, and results are posted as GitHub comments. See it in action. Contributions welcome.
- Polls your GitHub repositories for new and updated issues
- Sends each issue to a local Ollama model for AI analysis
- Posts the result as a comment on the GitHub issue — automatically
```shell
git clone https://github.com/thumbrise/autosolve.git && cd autosolve
go mod download
cp config.yml.example config.yml   # set your token + repos
go run . migrate up -y
go run . schedule
```

Configure in `config.yml`:
```yaml
github:
  token: ghp_your_token            # needs issues:write scope
  repositories:
    - owner: your-org
      name: your-repo
issues:
  requiredLabel: "autosolve"       # optional — only analyze labeled issues
ollama:
  endpoint: "http://localhost:11434"
  model: "qwen2.5-coder:7b"        # any Ollama model
```

That's it. Every issue with the `autosolve` label gets an AI analysis comment within seconds.
📖 thumbrise.github.io/autosolve — full docs, guides, architecture, devlog.
| Section | What's there |
|---|---|
| Quick Start | Setup in 5 minutes |
| Configuration | All config options |
| Architecture | How the system works |
| The Idea | Why this project exists |
| Contributing | Add a worker in 4 steps |
| Devlog | How we got here — design decisions diary |
Epic v1 is in progress — see Epic: v1 architecture redesign.
What works today: end-to-end AI dispatch pipeline — multi-repo polling, issue sync, outbox relay, Ollama analysis, GitHub comment posting, feedback loop prevention, per-error retry with degraded mode, rate limiting, full OTEL observability, SQLite with goose + sqlc.
What's next: re-analysis on issue updates, adaptive polling, CLI commands, GitHub App migration.