Empty file added Codegen/REPO_NAME_OPERATE.md
161 changes: 161 additions & 0 deletions Codegen/analysis.md
---
name: analysis
description: Spawn parallel agents to produce a deep, comprehensive, multi-dimensional codebase analysis — architecture, flows, APIs, quality, and onboarding
---

Perform an exhaustive analysis of this codebase. Spawn **9 parallel agents** using the Task tool (subagent_type: Explore) in a **single response**. Each agent owns one analytical dimension. No agent may speculate — every finding must reference actual file paths, line numbers, or content read from the repository.

---

## Agent Assignments

### Agent 1 — Repository Topology & Module Map
- List every top-level directory with its precise purpose
- Identify sub-modules, workspaces, packages, or monorepo members
- Identify major architectural layers (e.g., API, domain, data access, UI, infrastructure, scripts, shared libs) and describe how they relate to one another
- Produce a text tree of the repo at 2–3 levels deep with inline annotations
- Flag any directories whose purpose is ambiguous or redundant
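An annotated tree might look like this hypothetical sketch (directory names are invented for illustration, not taken from any real repo):

```
repo/
├── api/          # HTTP route handlers and request schemas
│   └── v1/       # versioned REST endpoints
├── core/         # domain logic; no framework imports
├── infra/        # DB clients, queue adapters, external service wrappers
└── scripts/      # one-off maintenance and migration tools
```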

### Agent 2 — Entrypoints & Execution Flows
- Find ALL entrypoints: CLIs, HTTP servers, background workers, schedulers, event listeners, framework bootstraps (main(), app factories, WSGI/ASGI apps, server start scripts, lambda handlers)
- For each entrypoint, trace the high-level control flow from external trigger → request parsing → business logic dispatch → response/side effect
- Note middleware chains, plugin hooks, and lifecycle hooks involved
- Identify startup/teardown sequences and what they initialize or release
- Flag any entrypoints that are dead, unreachable, or unregistered

### Agent 3 — Data Flows & Transformation Paths
- Trace all major data flows: where data enters (HTTP, CLI args, message queues, files, DB reads, environment), how it is transformed, and where it exits (HTTP response, DB write, file write, queue publish, external API call)
- Identify every read/write path to persistent stores (databases, caches, files, object storage)
- Summarize key data transformation steps: parsing, validation, enrichment, serialization
- Produce text descriptions ready to render as:
- **Component Diagram**: list every major module/service and its named dependencies
- **Sequence Diagram (primary use-case)**: step-by-step actor→system message flow for the single most important operation (e.g., core API endpoint or main CLI command)
- **Sequence Diagram (secondary use-case)**: next most important operation
- Flag any data that flows without validation, sanitization, or error handling
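As one possible shape for the text sequence diagram, a hypothetical "create order" flow might read (all module names invented for illustration):

```
Client → api/orders.create: POST /orders {payload}
api/orders.create → validation.schema: validate(payload)
api/orders.create → core/orders.place: place_order(dto)
core/orders.place → infra/db: INSERT orders
core/orders.place → infra/queue: publish "order.created"
api/orders.create → Client: 201 {order_id}
```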

### Agent 4 — APIs, Interfaces & Public Contracts
- Enumerate ALL public interfaces: exported functions, classes, REST endpoints, gRPC services, CLI commands, WebSocket events, plugin extension points, SDK entry surfaces
- For each, document: purpose, parameters (name + type), return type/shape, side effects, error conditions, and expected caller behavior
- Identify which interfaces are versioned, deprecated, or unstable
- Identify interfaces that lack documentation, input validation, or error contracts
- Flag any breaking changes risk between layers (e.g., internal API used externally)
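A single contract entry might follow this shape (endpoint, fields, and error codes are hypothetical):

```
POST /api/v1/users — create a user account
  Params: email (string, required), role (enum: admin|member, default member)
  Returns: { id: string, created_at: ISO-8601 timestamp }
  Side effects: sends a welcome email via the mailer service
  Errors: 400 invalid email, 409 duplicate email
  Stability: public, versioned (v1)
```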

### Agent 5 — Core Files, Functions & Data Structures
- List the 15–25 most central files in the codebase (most depended-upon, containing the most critical logic)
- For each critical function or class, summarize: inputs, outputs, algorithm, and side effects
- Enumerate all core domain models, entities, DTOs, schemas, and database models — including their fields, types, relationships, and validation constraints
- Identify shared utilities, helpers, and constants that are used across 3+ modules
- Document configuration loading: which files, env vars, feature flags, and secrets are read — and when
- Flag any god files, god classes, or functions with excessive cyclomatic complexity

### Agent 6 — Frameworks, Libraries & Tech Stack
- Identify all programming languages, runtimes, and their versions (from lock files, toolchain files, or manifests)
- List all major frameworks (web, ORM, CLI, testing, auth, queuing, etc.) with versions
- Document the full build pipeline: package manager, bundler/compiler, transpilation steps, asset pipeline
- Document how to run the project locally: all required commands from zero to running
- Document how tests are run, and what coverage tooling is present
- Identify containerization (Docker, Compose, K8s manifests) and CI/CD scripts
- Flag any dependency version conflicts, unresolved peer deps, or critically outdated packages

### Agent 7 — Capabilities, Features & Use-Cases
- Summarize what this program does from an end-user perspective — its core value proposition
- List every discrete user-facing feature or capability
- Produce 5 concrete example use-cases in this format:
```
Use-case N: [User goal]
Trigger: [How user initiates]
Flow: [Modules A → B → C involved]
Output: [What the user gets]
```
- Identify features that are partially implemented, stubbed out, or marked TODO
- Identify any capability gaps relative to what the README or documentation promises

### Agent 8 — Code Quality, Consistency & Onboarding
- Assess naming consistency: files, functions, variables, constants, types — are conventions followed uniformly?
- Assess modularity: single-responsibility adherence, coupling/cohesion balance, circular dependency presence
- Assess test coverage: what is tested vs. what is untested; identify the riskiest untested paths
- Assess documentation level: inline comments, JSDoc/docstrings, README completeness, architecture docs
- Assess error handling consistency: are errors caught, typed, logged, and propagated uniformly?
- Rate onboarding difficulty (Easy / Medium / Hard / Very Hard) with specific justification
- Identify the top 5 most confusing or undiscoverable parts of the codebase for a new developer

### Agent 9 — Strengths, Risks & Strategic Assessment
- Identify the top 5 architectural strengths with specific evidence (file/pattern references)
- Identify the top 5 technical risks: scalability bottlenecks, single points of failure, security exposure, maintainability debt
- Identify any anti-patterns present (e.g., anemic domain model, leaky abstractions, spaghetti dependencies)
- Rate overall implementation comprehensiveness on this scale — with justification:
- `1 — Skeleton`: scaffolding only, nothing functional
- `2 — Prototype`: core path works, major gaps elsewhere
- `3 — MVP`: primary use-cases work end-to-end, many edge cases missing
- `4 — Solid`: production-capable, tested, documented
- `5 — Production-Grade`: hardened, observable, fully documented, extensible
- State explicitly: what is this codebase best suited for, and where would it be ill-suited?

---

## Agent Rules

1. Read actual source files — no assumptions about what code probably does
2. Every claim must reference a specific file path or line number
3. If a file cannot be read, note it explicitly and skip rather than guess
4. Do not report opinions or preferences — only structural facts and verified patterns
5. Agents 1–8 are purely descriptive; Agent 9 is the only agent permitted to make evaluative judgments

---

## Synthesis & Output

After all 9 agents complete, synthesize their findings into a single `ANALYSIS.md` file at the project root using this exact structure:

```markdown
# CODEBASE ANALYSIS: [Project Name]
Generated: [date]
Analyst: Claude (parallel 9-agent exploration)

---

## 1. Repository Topology

[From Agent 1 — tree + layer map]

## 2. Entrypoints & Execution Flows

[From Agent 2 — each entrypoint with control flow]

## 3. Data Flows & Architecture Diagrams

### 3a. Component Diagram (text)
### 3b. Sequence Diagram — [Primary Use-Case Name]
### 3c. Sequence Diagram — [Secondary Use-Case Name]

[From Agent 3]

## 4. APIs, Interfaces & Public Contracts

[From Agent 4 — full enumeration with signatures]

## 5. Core Files, Functions & Data Structures

[From Agent 5 — central files, critical functions, domain models]

## 6. Frameworks, Libraries & Tech Stack

[From Agent 6 — full stack + run instructions]

## 7. Capabilities, Features & Use-Cases

[From Agent 7 — feature list + 5 use-cases]

## 8. Code Quality & Onboarding Assessment

[From Agent 8 — quality metrics + onboarding rating]

## 9. Strengths, Risks & Strategic Assessment

[From Agent 9 — strengths, risks, comprehensiveness rating, suitability]

---
*Analysis produced by parallel codebase exploration. All findings reference actual source files.*
```

Write the file, then tell the user it's ready and how many files were analyzed.
45 changes: 45 additions & 0 deletions Codegen/candy.md
---
name: candy
description: Find low-risk, high-reward wins across the codebase using parallel exploration agents
---

Find quick wins in this codebase. Spawn 5 explore agents in parallel using the Task tool (subagent_type: Explore), each focusing on one area. Adapt each area to what's relevant for THIS project's stack and architecture.

**Agent 1 - Performance**: Inefficient algorithms, unnecessary work, missing early returns, blocking operations, things that scale poorly

**Agent 2 - Dead Weight**: Unused code, unreachable paths, stale comments/TODOs, obsolete files, imports to nowhere

**Agent 3 - Lurking Bugs**: Unhandled edge cases, missing error handling, resource leaks, race conditions, silent failures

**Agent 4 - Security**: Hardcoded secrets, injection risks, exposed sensitive data, overly permissive access, unsafe defaults

**Agent 5 - Dependencies & Config**: Unused packages, vulnerable dependencies, misconfigured settings, dead environment variables, orphaned config files

## The Only Valid Findings

A finding is ONLY valid if it falls into one of these categories:

1. **Dead** - Code that literally does nothing. Unused, unreachable, no-op.
2. **Broken** - Will cause errors, crashes, or wrong behavior. Not "might" - WILL.
3. **Dangerous** - Security holes, data exposure, resource exhaustion.

That's it. Three categories. If it doesn't fit, don't report it.

**NOT valid findings:**
- "This works but could be cleaner" - NO
- "Modern best practice suggests..." - NO
- "This is verbose/repetitive but functional" - NO
- "You could use X instead of Y" - NO
- "This isn't how I'd write it" - NO

If the code works, isn't dangerous, and does something - leave it alone.

## Output Format

For each finding:
```
[DEAD/BROKEN/DANGEROUS] file:line - What it is
Impact: What happens if left unfixed
```

Finding nothing is a valid outcome. Most codebases don't have easy wins - that's fine.
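A filled-in example of the format above (the file, line, and function are hypothetical):

```
[DEAD] src/utils/format.js:88 - formatLegacyDate() has no callers anywhere in the repo
Impact: None today; it rots silently and misleads readers into thinking it is load-bearing
```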
55 changes: 55 additions & 0 deletions Codegen/carrot.md
---
name: carrot
description: Verify implementations against real-world code samples and official documentation using parallel agents
---

Verify this codebase against current best practices and official documentation. Spawn 8 explore agents in parallel using the Task tool (subagent_type: Explore), each focusing on one category. Each agent must VERIFY findings using Grep MCP (real code samples) or WebSearch (official docs) - no assumptions allowed.

**Agent 1 - Core Framework**: Detect the main framework (React, Next, Express, Django, Rails, etc.), verify usage patterns against official documentation via WebSearch

**Agent 2 - Dependencies/Libraries**: Check if library APIs being used are current or deprecated. Verify against library documentation and Grep MCP for how modern codebases use these libraries

**Agent 3 - Language Patterns**: Identify the primary language (TypeScript, Python, Go, etc.), verify idioms and patterns are current. Use Grep MCP to see how modern projects write similar code

**Agent 4 - Configuration**: Examine build tools, bundlers, linters, and config files. Verify settings against current tool documentation via WebSearch

**Agent 5 - Security Patterns**: Review auth, data handling, secrets management. Verify against current security guidance and OWASP recommendations via WebSearch

**Agent 6 - Testing**: Identify test framework in use, verify testing patterns match current library recommendations. Check via docs and Grep MCP for modern test patterns

**Agent 7 - API/Data Handling**: Review data fetching, state management, storage patterns. Verify against current patterns via Grep MCP and framework docs

**Agent 8 - Error Handling**: Examine error handling patterns, verify they match library documentation. Use Grep MCP to compare against real-world implementations

## Agent Workflow

Each agent MUST follow this process:
1. **Identify** - What's relevant in THIS project for your category
2. **Find** - Locate specific implementations in the codebase
3. **Verify** - Check against Grep MCP (real code) OR WebSearch (official docs)
4. **Report** - Only report when verified current practice differs from codebase

## The Only Valid Findings

A finding is ONLY valid if:
1. **OUTDATED** - Works but uses old patterns with verified better alternatives
2. **DEPRECATED** - Uses APIs marked deprecated in current official docs
3. **INCORRECT** - Implementation contradicts official documentation

**NOT valid findings:**
- "I think there's a better way" without verification - NO
- "This looks old" without proof - NO
- Style preferences or subjective improvements - NO
- Anything not verified via Grep MCP or official docs - NO

## Output Format

For each finding:
```
[OUTDATED/DEPRECATED/INCORRECT] file:line - What it is
Current: How it's implemented now
Verified: What the correct/current approach is
Source: Grep MCP (X repos) | URL to official docs
```

Finding nothing is a valid outcome. If implementations match current practices, that's good news.
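A filled-in example of the format above (the file, API, and source are hypothetical):

```
[DEPRECATED] src/http/client.ts:12 - uses the callback-style request API
Current: http.get(url, (err, res) => ...)
Verified: the library's current docs recommend the promise-based API
Source: official library docs (migration guide)
```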