pea stands for Product Engineer Agents.
It is the agent-side service for the Epic Systems Engineering Judgment
Workshop. The separate workshop app owns the learner experience; pea provides
the stakeholder simulations that learners talk to while they clarify ambiguous
requirements, surface constraints, and practice engineering judgment.
The workshop thesis is simple: as AI takes on more implementation work, the
scarce engineering skill becomes judgment. pea exists to help train that
skill.
pea simulates messy, real-world stakeholders in structured exercises. These
agents can play roles such as:
- product managers
- executives
- support and operations teams
- users
- regulators
- other engineers
The point is not to help participants write code. The point is to create the kind of ambiguity, friction, and conflicting incentives that force participants to ask better questions before anything gets built.
pea is not:
- a prompt engineering workshop
- an AI coding tutor
- a framework training project
- the primary workshop application
The learner-facing workshop app already exists. pea is the service that app
uses to run stakeholder conversations, and it will also provide instructor/admin
controls for managing agents, scenarios, and prompts.
```sh
bun install
bun run dev
```

See docs/getting-started.md for local setup and environment details.
Each scenario is meant to drive a critique loop:
- Present an ambiguous situation.
- Let participants ask clarification questions.
- Require them to define the problem, constraints, assumptions, risks, and success criteria.
- Reveal an implementation or proposed solution.
- Critique the gap between intent and outcome.
- Extract heuristics that improve future judgment.
Learning happens through repeated critique cycles, not lectures.
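One way to picture the loop is as an ordered sequence of stages that a scenario advances through. This is an illustrative sketch only: the stage names and types below are assumptions for explanation, not pea's actual data model.

```typescript
// Hypothetical stage names mirroring the critique loop above.
type CritiqueStage =
  | "present-ambiguity"
  | "clarify"
  | "frame-problem"
  | "reveal-solution"
  | "critique-gap"
  | "extract-heuristics";

const STAGE_ORDER: CritiqueStage[] = [
  "present-ambiguity",
  "clarify",
  "frame-problem",
  "reveal-solution",
  "critique-gap",
  "extract-heuristics",
];

// Advance to the next stage; the final stage repeats, since the
// loop ends by extracting heuristics and starting a new scenario.
function nextStage(current: CritiqueStage): CritiqueStage {
  const i = STAGE_ORDER.indexOf(current);
  return STAGE_ORDER[Math.min(i + 1, STAGE_ORDER.length - 1)];
}
```

The important design point is that the "reveal" and "critique" stages come only after participants have committed to a problem framing, so the gap between intent and outcome is visible.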
Stakeholder agents should:
- reveal information progressively instead of dumping requirements
- speak like stakeholders, not like system design interviewers
- have incomplete knowledge
- sometimes contradict themselves or introduce new constraints late
- surface hidden tradeoffs only when participants ask the right questions
Good exercises force participants to uncover missing requirements, conflicting incentives, rollout risks, UX concerns, and vague success criteria.
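Progressive disclosure can be sketched as facts gated behind topics: a stakeholder reveals a constraint only when a participant's question touches the right subject. Everything below (the `GatedFact` shape, the keyword matching) is an invented illustration, not pea's implementation.

```typescript
// Hypothetical shape for a stakeholder's hidden knowledge.
interface GatedFact {
  fact: string;
  triggers: string[]; // topics a question must mention to unlock the fact
  revealed: boolean;
}

// Return any newly unlocked facts; an empty result means the
// stakeholder stays vague, pushing participants to ask better questions.
function answerQuestion(question: string, facts: GatedFact[]): string[] {
  const q = question.toLowerCase();
  const unlocked: string[] = [];
  for (const f of facts) {
    if (!f.revealed && f.triggers.some((t) => q.includes(t))) {
      f.revealed = true;
      unlocked.push(f.fact);
    }
  }
  return unlocked;
}
```

A real agent would use an LLM rather than keyword matching, but the gating idea is the same: generic questions get generic answers, and the hidden tradeoffs surface only under targeted questioning.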
At a high level, pea is responsible for:
- stakeholder conversation runtime
- scenario and agent behavior configuration
- instructor/admin controls for prompts and simulations
- service endpoints the workshop app can call or embed
The workshop app remains responsible for cohort flow, learner UX, facilitation, and the broader curriculum experience.
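From the workshop app's side, "service endpoints the workshop app can call" might look like posting a conversation turn to pea. The route, field names, and host below are placeholders; the actual API is not defined here.

```typescript
// Hypothetical payload for one learner message to a stakeholder agent.
interface ConversationTurn {
  scenarioId: string;
  agentRole: string; // e.g. "product-manager" -- illustrative value
  message: string;
}

// Build the HTTP request the workshop app would send to pea.
// The URL is a placeholder, not a real pea endpoint.
function buildTurnRequest(turn: ConversationTurn): Request {
  return new Request("https://pea.example/api/conversations", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(turn),
  });
}
```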
The current codebase is built on:
| Layer | Technology |
|---|---|
| Runtime | Cloudflare Workers |
| Web stack | Remix 3 (alpha) |
| Package manager | Bun |
| Database | Cloudflare D1 |
| Session/OAuth | Cloudflare KV |
| Stateful agents | Durable Objects |
| Testing | Playwright |
| Bundling | esbuild |
| Document | Description |
|---|---|
| docs/product-overview.md | Purpose, scope, and workshop framing |
| docs/roadmap.md | High-level project phases and priorities |
| docs/getting-started.md | Local setup and development commands |
| docs/architecture/index.md | Runtime architecture and system boundaries |
| docs/environment-variables.md | Environment variable guidance |
| docs/cloudflare-offerings.md | Optional Cloudflare integrations |
| docs/agents/setup.md | Local development and verification expectations |