PhilLit generates analytical literature reviews with verified bibliographies for philosophy research. Give it a topic description, and it searches academic databases, collects and checks references, and writes a structured review, typically in about 45 minutes.
PhilLit is free and open-source. You only pay Anthropic for using Claude. This is a research project evaluating AI-generated literature reviews for philosophy (see below).
- Searches multiple academic databases — Semantic Scholar, PhilPapers, SEP, IEP, OpenAlex, CORE, arXiv, and NDPR
- Checks every reference — All citations are sourced from academic databases (not generated by the LLM) and validated against CrossRef
- Produces a structured review with a bibliography you can import into Zotero, BibDesk, or any reference manager
- Outputs a Word document and a BibTeX file ready for reference managers or LaTeX
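As an illustration of the reference-checking idea above, here is a minimal syntactic DOI check of the kind that can precede a CrossRef lookup. This is a sketch, not PhilLit's actual validation code; the regex is a simplified version of CrossRef's recommended DOI pattern, and a real check would also query the CrossRef API for the record.

```python
import re

# Simplified DOI pattern: "10.", a 4-9 digit registrant prefix, "/", a suffix.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)

def looks_like_doi(s: str) -> bool:
    """Cheap syntactic check before hitting the CrossRef API."""
    return bool(DOI_RE.match(s.strip()))

print(looks_like_doi("10.1093/mind/LIX.236.433"))  # True
print(looks_like_doi("not-a-doi"))                 # False
```

A syntactic pre-check like this avoids wasting an API request on strings that cannot be DOIs; anything that passes would still be confirmed against the live CrossRef record.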
PhilLit itself is free. Running it requires Claude Code, which needs a subscription or API access. Each literature review uses roughly $9 to $13 in API credits, depending on whether you use Sonnet or Opus.
What you need:
- Claude Code — the AI coding tool that runs PhilLit
- A Brave Search API key (free tier available)
- Optionally, a Semantic Scholar API key (free) for better search results
Setup takes about 10 minutes. See GETTING_STARTED.md for step-by-step instructions.
Once set up, describe your topic and PhilLit does the rest (~45 minutes):
```
I need a literature review on [topic].

[Describe the topic in 1-5 paragraphs]
```
*PhilLit running a literature review: the system decomposes the topic into research domains and searches multiple academic databases in parallel.*
- Plan — Breaks your topic into searchable research domains
- Research — Searches academic databases for each domain, collects and verifies references
- Outline — Designs the structure of the review based on the collected literature
- Write — Drafts each section of the review
- Assemble — Combines sections into a final document with a complete bibliography
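The five stages above can be sketched as a simple sequential pipeline that passes artifacts forward. This is an illustrative stand-in, not PhilLit's agent code: the stage names come from this README, but the function body is placeholder logic.

```python
def run_pipeline(topic: str) -> dict:
    """Run the five stages in order, accumulating artifacts in a dict.

    Each line below stands in for one stage of the README's pipeline.
    """
    artifacts = {"topic": topic}
    # Plan: break the topic into searchable research domains.
    artifacts["domains"] = [f"{topic} - subarea {i}" for i in (1, 2, 3)]
    # Research: collect verified references per domain (empty placeholders here).
    artifacts["refs"] = {d: [] for d in artifacts["domains"]}
    # Outline: design the review structure from the collected literature.
    artifacts["outline"] = ["Introduction", *artifacts["domains"], "Conclusion"]
    # Write: draft each section.
    artifacts["sections"] = {h: f"Draft of {h}" for h in artifacts["outline"]}
    # Assemble: combine sections into the final document.
    artifacts["review"] = "\n\n".join(artifacts["sections"].values())
    return artifacts

result = run_pipeline("epistemic injustice")
print(len(result["outline"]))  # 5
```

In the real system each stage is handled by an agent with database access; the sketch only shows how outputs of one stage feed the next.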
- Instructions on contributing: CONTRIBUTING.md
- Agent architecture: .claude/docs/ARCHITECTURE.md
- Claude instructions: CLAUDE.md
Each review is saved in its own directory under reviews/:
```
reviews/[topic]/
├── literature-review-final.md    # Complete review (markdown)
├── literature-review-final.docx  # Complete review (Word, requires pandoc)
├── literature-all.bib            # Aggregated bibliography
└── intermediate_files/
    ├── json/                     # API response files (archived)
    ├── lit-review-plan.md        # Domain decomposition
    ├── literature-domain-*.bib   # Per-domain BibTeX files
    ├── synthesis-outline.md      # Review structure
    ├── synthesis-section-*.md    # Individual sections
    └── task-progress.md          # Progress tracker
```
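As a sketch of how the per-domain BibTeX files could be aggregated into `literature-all.bib` (a hypothetical helper following the layout above; PhilLit's actual assembly step may differ):

```python
from pathlib import Path

def merge_bib(review_dir: Path) -> None:
    """Concatenate literature-domain-*.bib into literature-all.bib.

    Paths follow the reviews/[topic]/ layout; sorting keeps the
    domain files in a stable order.
    """
    parts = sorted((review_dir / "intermediate_files").glob("literature-domain-*.bib"))
    merged = "\n\n".join(p.read_text(encoding="utf-8").strip() for p in parts)
    (review_dir / "literature-all.bib").write_text(merged + "\n", encoding="utf-8")
```

Because BibTeX files are plain text, simple concatenation suffices as long as citation keys are unique across domains; a fuller implementation would also deduplicate entries that appear in multiple domains.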
We're conducting validation studies (pending IRB approval) to rigorously assess whether PhilLit meets the standards required for serious philosophical research.
- Study 1: Use PhilLit on topics you already know well. We provide technical support and cover API costs.
- Study 2: Review literature overviews generated by others (no technical setup required).
Interested in participating? Sign up here and we'll notify you when the studies launch.
- Johannes Himmelreich — Syracuse University — jrhimmel@syr.edu
- Marco Meyer — University of Hamburg — marco.meyer@uni-hamburg.de
Inspired by LiRA multi-agent patterns.

