DAD-M is a structured framework for AI-assisted project work. It separates analysis, design, implementation, and validation into four explicit phases: Discover, Apply, Deploy, and Monitor.
It is published as a public repository for people who want a more controlled alternative to ad-hoc prompting.
DAD-M is a reusable operating model built around milestones, documented outputs, and repeatable delivery rather than one-off chats.
Many AI-supported tasks start fast but become hard to reproduce once a project spans several steps, files, or stakeholders. DAD-M addresses that by:
- collecting facts before design
- separating planning from execution
- requiring concrete deliverables per phase
- feeding review results back into the next cycle
- stopping at defined points for human decisions
| Phase | Purpose | Typical outputs |
|---|---|---|
| Discover | Understand the current state and constraints | system overview, file structure, dependencies, risks, open questions |
| Apply | Turn facts into a solution design | architecture, interfaces, data models, pseudocode, acceptance criteria |
| Deploy | Implement the approved design | code, documents, scripts, configuration, proofs |
| Monitor | Validate results and capture follow-up work | test results, error analysis, regression risks, recommended fixes |
More detail: framework/core/phases.md
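As a rough illustration, the four-phase cycle can be sketched as a minimal state machine. The enum, transition map, and approval gate below are assumptions made for this sketch, not part of the normative definitions in framework/core/phases.md:

```python
from enum import Enum

class Phase(Enum):
    DISCOVER = "Discover"
    APPLY = "Apply"
    DEPLOY = "Deploy"
    MONITOR = "Monitor"

# Each phase hands off to the next; Monitor feeds back into
# Discover for the following milestone cycle.
NEXT = {
    Phase.DISCOVER: Phase.APPLY,
    Phase.APPLY: Phase.DEPLOY,
    Phase.DEPLOY: Phase.MONITOR,
    Phase.MONITOR: Phase.DISCOVER,  # start of the next cycle
}

def advance(current: Phase, approved: bool) -> Phase:
    """Move to the next phase only after an explicit human approval."""
    if not approved:
        return current  # stay put: an AWAITING_HUMAN-style pause
    return NEXT[current]
```

The point of the sketch is the gate: without a recorded approval, the cycle does not move forward on its own.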
DAD-M is useful for people who need structure around AI-assisted delivery, especially in:
- software development
- system analysis
- documentation-heavy work
- automation workflows
- project planning with explicit milestones
It is most useful when work needs reviewable outputs, explicit scope boundaries, and milestone-by-milestone control.
- Read the method summary in docs/overview.md.
- Use the setup sequence in framework/core/bootstrap.md.
- Define milestones with framework/core/milestones.md.
- Run the first cycle and record outputs using framework/core/deliverables.md.
- Keep the work within the boundaries defined in governance/guardrails.md.
- For AI systems: start with runtime/AI_BIOS.md and load only the runtime cards required by the current task and profile.
- For a general summary: AI_CONTEXT.md remains available as a compact repository overview.
- For software implementers: see docs/software-reference.md for interface contracts and schemas.
- docs/ — public explanation and onboarding
- framework/core/ — canonical method artifacts
- framework/templates/ — reusable framework templates
- governance/ — operating rules, evidence boundaries, and policies
- governance/policies/ — machine-readable (YAML) forms of governance rules
- runtime/ — AI runtime loader, profiles, and short execution cards
| File | Purpose |
|---|---|
| framework/core/phases.md | Phase definitions, states, and transition protocol |
| framework/core/milestones.md | Milestone structure including dependencies and priority |
| framework/core/deliverables.md | Expected outputs, decision log, artifact retention |
| framework/core/bootstrap.md | Setup sequence including scope declaration and plan approval |
| framework/core/artifact-retention.md | Immutability and retention classes for artifacts |
| framework/core/modules.md | Module system, registry, profiles, and extensibility |
| File | Purpose |
|---|---|
| governance/guardrails.md | Operating boundaries, human decision triggers, AWAITING_HUMAN |
| governance/quality-principles.md | Method quality principles (normative obligations) |
| governance/evidence-policy.md | Claim strength standard and publication policy |
| governance/severity-framework.md | Five-level severity scale, category overrides, threshold rationale |
| governance/governance-matrix.md | Which checks apply in which phase; blocking vs. advisory |
| governance/rework-and-escalation.md | Intra-phase rework loop and escalation rules |
| governance/framework-change-process.md | How to change framework documents (minor / significant / breaking) |
| governance/document-quality.md | Quality criteria for each document type |
| governance/policies/severity-policy.yaml | Severity level actions and category overrides (machine-readable) |
| governance/policies/human-decision-policy.yaml | Human decision triggers and format (machine-readable) |
| governance/policies/rework-policy.yaml | Rework conditions, limit, escalation (machine-readable) |
| governance/policies/approval-policy.yaml | Milestone plan approval rules (machine-readable) |
| governance/policies/scope-policy.yaml | Scope declaration and violation handling (machine-readable) |
| File | Purpose |
|---|---|
| framework/templates/human-decision-record.md | Template for mandatory human decision artifacts |
| framework/templates/discover-output.md | Template for Discover phase output records |
| framework/templates/apply-output.md | Template for Apply phase output records |
| framework/templates/rework-plan.md | Template for intra-phase rework plans (MR1–MRn) |
| File | Purpose |
|---|---|
| runtime/AI_BIOS.md | Runtime loader entry point for AI systems |
| runtime/file-registry.yaml | Runtime document registry with profiles, routes, and opt-in references |
| AI_CONTEXT.md | Compact structured summary for humans and general AI overview |
| docs/overview.md | Concise method overview including design assumption |
| docs/methodology.md | Methodological positioning, design assumption, comparison table |
| docs/getting-started.md | Step-by-step first use (7 steps including scope + approval) |
| docs/use-cases.md | Where the framework fits and where it does not |
| docs/software-reference.md | Integration contracts and schemas for software implementations |
| docs/reproducibility.md | Reproducibility requirements and checklist |
| docs/glossary.md | Canonical term definitions |
| docs/examples/rbac-case-example.md | Full structured milestone example with phase outputs and decision log |
| docs/variants/education.md | DAD-M Education variant |
| docs/decisions/MILESTONE_PLAN_V1.md | Approved milestone plan for this repository (v1.0) |
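As a sketch of how a runtime loader might use a profile-based registry, consider the following. The card paths, profile names, and field names are assumptions for illustration; runtime/file-registry.yaml defines the real structure:

```python
# Hypothetical in-memory form of a file registry: each runtime card
# lists the load profiles it belongs to.
registry = [
    {"path": "runtime/cards/phases.md", "profiles": ["minimal", "full"]},
    {"path": "runtime/cards/guardrails.md", "profiles": ["minimal", "full"]},
    {"path": "runtime/cards/rework.md", "profiles": ["full"]},
]

def cards_for_profile(profile: str) -> list[str]:
    """Select only the cards required by the chosen load profile."""
    return [e["path"] for e in registry if profile in e["profiles"]]
```

Loading only the cards a profile requires keeps the active context small and task-specific.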
- Choose a runtime load profile and load only the relevant runtime cards.
- Define the project brief, safety boundaries, and scope declaration.
- Break the work into milestones with clear scope, dependencies, and priority.
- Obtain approval for the milestone plan before starting M1.
- Run Discover to collect the facts for milestone M1.
- Run Apply to design the solution within those facts.
- Run Deploy to implement only the approved design and capture proofs.
- Run Monitor to validate the result and prepare the next milestone.
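The per-milestone portion of the steps above can be sketched as a loop with mandatory checkpoints. The function names and the shape of the approval callback are assumptions, not framework APIs:

```python
# Illustrative milestone runner: each phase records an output and
# pauses for a documented human decision before the next phase.
def run_milestone(milestone: str, human_approve) -> dict:
    outputs = {}
    for phase in ["Discover", "Apply", "Deploy", "Monitor"]:
        outputs[phase] = f"{milestone}: {phase} output recorded"
        if not human_approve(milestone, phase):
            outputs["status"] = "AWAITING_HUMAN"
            return outputs  # stop at the checkpoint
    outputs["status"] = "complete"
    return outputs
```

A refused approval leaves the milestone paused with its outputs so far, which is the behavior the guardrails describe as AWAITING_HUMAN.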
For a concrete public example, see docs/examples/rbac-case-example.md.
DAD-M is strict about workflow boundaries. Discover is for facts, Apply is for design, Deploy is for implementation, and Monitor is for validation. The framework pauses at mandatory checkpoints and requires a documented human decision before continuing. Dependency changes, network use, and work outside the declared scope stay explicitly controlled.
This repository documents DAD-M as a public framework. The current focus is a clear method overview, practical starting guidance, and conservative governance notes.
A DAD-M Education variant is also being documented as an early public extension for validated learning and knowledge-building workflows. A public summary for that variant is available in docs/variants/education.md.
This repository does not claim formal approval, organization-wide adoption, or enterprise validation.
I curated this repository to help you structure your projects: it draws on the key advantages of agentic systems while keeping responsibility firmly in the hands of human operators.