
PhilLit


License: Apache 2.0 · Python 3.9+ · Built with Claude Code

PhilLit generates analytical literature reviews with verified bibliographies for philosophy research. Give it a topic description, and it searches academic databases, collects and verifies references, and writes a structured review, typically in under an hour.

PhilLit is free and open-source. You only pay Anthropic for using Claude. This is a research project evaluating AI-generated literature reviews for philosophy (see below).

Example Reviews

Highlights

  • Searches multiple academic databases — Semantic Scholar, PhilPapers, SEP, IEP, OpenAlex, CORE, arXiv, and NDPR
  • Checks every reference — All citations are sourced from academic databases (not generated by the LLM) and validated against CrossRef
  • Produces a structured review with a bibliography you can import into Zotero, BibDesk, or any reference manager
  • Outputs a Word document and a BibTeX file ready for reference managers or LaTeX
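The CrossRef validation mentioned above could be sketched as follows. This is an illustrative minimal example, not PhilLit's actual implementation: the function names and the title-matching rule are assumptions, and only the DOI lookup endpoint (`api.crossref.org/works/{doi}`) comes from CrossRef's public API.

```python
import json
import re
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so titles compare robustly."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def titles_match(bibtex_title: str, crossref_title: str) -> bool:
    """Treat a reference as verified when the normalized titles agree."""
    return normalize(bibtex_title) == normalize(crossref_title)

def verify_doi(doi: str, expected_title: str) -> bool:
    """Look up a DOI on CrossRef and check it matches the BibTeX entry's title."""
    with urllib.request.urlopen(CROSSREF_API + urllib.parse.quote(doi)) as resp:
        record = json.load(resp)["message"]
    return titles_match(expected_title, record["title"][0])
```

Exact-match comparison after normalization is deliberately strict; a real pipeline might fall back to fuzzy matching for minor subtitle or punctuation differences.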

What does it cost?

PhilLit itself is free. Running it requires Claude Code, which needs a subscription or API access. Each literature review uses roughly $9 to $13 in API credits, depending on whether you use Sonnet or Opus.

Quick Start

Setup takes about 10 minutes. See GETTING_STARTED.md for requirements and step-by-step instructions.

Once set up, describe your topic and PhilLit does the rest (~45 minutes):

I need a literature review on [topic].

[Describe the topic in 1-5 paragraphs]

What does it look like?


PhilLit running a literature review. The system decomposes the topic into research domains and searches multiple academic databases in parallel.

How It Works

  1. Plan — Breaks your topic into searchable research domains
  2. Research — Searches academic databases for each domain, collects and verifies references
  3. Outline — Designs the structure of the review based on the collected literature
  4. Write — Drafts each section of the review
  5. Assemble — Combines sections into a final document with a complete bibliography
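The five stages above amount to threading accumulated state through a fixed sequence. The sketch below is a hypothetical illustration of that data flow only: the real system is orchestrated by Claude Code agents, and the `Review` class and function signatures here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    """Accumulates artifacts as the five pipeline stages run."""
    topic: str
    domains: list = field(default_factory=list)
    references: dict = field(default_factory=dict)
    outline: list = field(default_factory=list)
    sections: dict = field(default_factory=dict)

def run_pipeline(topic, plan, research, outline, write, assemble):
    """Run the five stages in order, passing each stage's output forward."""
    review = Review(topic)
    review.domains = plan(topic)                                      # 1. Plan
    review.references = {d: research(d) for d in review.domains}      # 2. Research
    review.outline = outline(review.references)                       # 3. Outline
    review.sections = {s: write(s, review.references)                 # 4. Write
                       for s in review.outline}
    return assemble(review)                                           # 5. Assemble
```

Each stage depends only on earlier outputs, which is what lets the Research step fan out across domains in parallel.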

Development

  • Instructions on contributing: CONTRIBUTING.md
  • Agent architecture: .claude/docs/ARCHITECTURE.md
  • Claude instructions: CLAUDE.md

Output Structure

Each review is saved in its own directory under reviews/:

reviews/[topic]/
├── literature-review-final.md        # Complete review (markdown)
├── literature-review-final.docx      # Complete review (Word, requires pandoc)
├── literature-all.bib                # Aggregated bibliography
└── intermediate_files/
    ├── json/                         # API response files (archived)
    ├── lit-review-plan.md            # Domain decomposition
    ├── literature-domain-*.bib       # Per-domain BibTeX files
    ├── synthesis-outline.md          # Review structure
    ├── synthesis-section-*.md        # Individual sections
    └── task-progress.md              # Progress tracker
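Given this layout, the final Assemble step reduces to concatenating the per-section drafts into `literature-review-final.md`. A minimal sketch, using the file names from the tree above (the helper function itself is an assumption, and it assumes section numbers sort correctly lexicographically):

```python
from pathlib import Path

def assemble_review(review_dir: Path) -> str:
    """Concatenate synthesis-section-*.md drafts into the final review file."""
    inter = review_dir / "intermediate_files"
    # Lexicographic order; assumes single-digit or zero-padded section numbers.
    sections = sorted(inter.glob("synthesis-section-*.md"))
    body = "\n\n".join(p.read_text() for p in sections)
    final = review_dir / "literature-review-final.md"
    final.write_text(body)
    return body
```

The Word version would then be produced from the markdown with pandoc, as noted in the tree.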

Participate in Research

We're conducting validation studies (pending IRB approval) to rigorously assess whether PhilLit meets the standards required for serious philosophical research.

  • Study 1: Use PhilLit on topics you already know well. We provide technical support and cover API costs.
  • Study 2: Review literature overviews generated by others (no technical setup required).

Interested in participating? Sign up here and we'll notify you when the studies launch.

Contact


Inspired by LiRA multi-agent patterns.
