Merged
2 changes: 1 addition & 1 deletion .beads/metadata.json
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
{
"database": "beads.db",
"jsonl_export": "issues.jsonl"
}
}
39 changes: 39 additions & 0 deletions .github/workflows/prek.yml
@@ -0,0 +1,39 @@
name: Prek

on:
push:
branches: [main]
pull_request:

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

permissions:
contents: read

jobs:
determine-runner:
runs-on: ubuntu-latest
outputs:
runner: ${{ steps.runner.outputs.use-runner }}
steps:
- name: Determine runner
id: runner
uses: mikehardy/runner-fallback-action@v1
with:
github-token: ${{ secrets.GH_RUNNER_TOKEN }}
primary-runner: self-hosted-16-cores
fallback-runner: ubuntu-latest
organization: fuww
fallback-on-error: true

prek:
runs-on: ${{ fromJson(needs.determine-runner.outputs.runner) }}
needs: [determine-runner]
steps:
- uses: actions/checkout@v4
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- name: Run prek checks
run: nix develop --command prek run --all-files

P1: Add credentials for private flake input in CI

In .github/workflows/prek.yml, the prek job runs nix develop, but this repository's flake uses a private SSH input (prompts, from ssh://git@github.com/fuww/prompts.git in flake.nix), and the workflow does not configure an SSH key or agent before invoking Nix. On a standard GitHub-hosted runner the flake input fetch therefore fails, so the new check workflow fails before any hooks run.

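One way to address the missing credentials (a sketch only — the secret name `PROMPTS_DEPLOY_KEY` is an assumption, not something defined in this PR) is to load an SSH key into an agent before the Nix steps:

```yaml
      - uses: actions/checkout@v4
      # Assumed: a deploy key for fuww/prompts stored as PROMPTS_DEPLOY_KEY
      - name: Set up SSH for private flake input
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.PROMPTS_DEPLOY_KEY }}
      - uses: DeterminateSystems/nix-installer-action@main
```

With the agent running, Nix can fetch the `ssh://git@github.com/...` input on GitHub-hosted runners the same way it does locally.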

6 changes: 6 additions & 0 deletions AGENTS.md
Expand Up @@ -87,3 +87,9 @@ All content is markdown and JSON — edit directly, no build step required. When
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push
- If push fails, resolve and retry until it succeeds

## Code Quality

```bash
nix develop --command prek run --all-files # Run all pre-commit checks
```
6 changes: 3 additions & 3 deletions PROMPT_plan.md
Expand Up @@ -8,7 +8,7 @@
- For each epic, verify child tasks cover all aspects of the specification
- Check for missing dependencies using `bd dep cycles` (should be empty)
- Identify any tasks that should block others but don't

2. Update the beads database to fix any issues found:
- Create missing tasks with `bd create "title" -t task -p <priority> -d "description"`
- Add missing dependencies with `bd dep add <child> <parent> --type blocks`
Expand All @@ -20,7 +20,7 @@
- `bd blocked` should show tasks waiting on dependencies
- `bd stats` should show accurate counts

IMPORTANT: Plan only. Do NOT implement anything. Do NOT assume functionality is missing;
IMPORTANT: Plan only. Do NOT implement anything. Do NOT assume functionality is missing;
use `bd list` and code search to verify first.

ULTIMATE GOAL: Refactor all knowledge-work plugins from generic Anthropic templates to FashionUnited-specific workflows, tools, and domain context. Ensure all necessary tasks exist as beads with proper dependencies so `bd ready` always shows the right next work.
ULTIMATE GOAL: Refactor all knowledge-work plugins from generic Anthropic templates to FashionUnited-specific workflows, tools, and domain context. Ensure all necessary tasks exist as beads with proper dependencies so `bd ready` always shows the right next work.
7 changes: 7 additions & 0 deletions README.md
Expand Up @@ -96,3 +96,10 @@ Plugins are just markdown files. Fork the repo, make your changes, and submit a
## License

This fork is licensed under Apache 2.0, same as the [original Anthropic repository](https://github.com/anthropics/knowledge-work-plugins). See [LICENSE](LICENSE) for details.

## Development Environment

```bash
nix develop # Enter dev shell with all tools
nix develop --command prek run --all-files # Run pre-commit checks
```
Expand Up @@ -21,7 +21,7 @@ description: Generate clinical trial protocols for medical devices or drugs. Thi

## Overview

This skill generates clinical trial protocols for **medical devices or drugs** using a **modular, waypoint-based architecture**
This skill generates clinical trial protocols for **medical devices or drugs** using a **modular, waypoint-based architecture**

## What This Skill Does

Expand Down Expand Up @@ -504,5 +504,3 @@ When this skill is invoked:
- **Research Only:** Display research summary location and offer to continue with full protocol
- **Full Protocol:** Congratulate user, display protocol location and next steps
- Remind user of disclaimers


Expand Up @@ -198,4 +198,4 @@ If `waypoints/intervention_metadata.json` already exists:
- Ensure the intervention_id is filesystem-safe (no spaces, special chars)
- Validate that required fields are not empty
- Write clean, formatted JSON with proper indentation
- Handle both device and drug interventions appropriately with the right terminology
- Handle both device and drug interventions appropriately with the right terminology
Expand Up @@ -250,11 +250,11 @@ STATEMENT OF COMPLIANCE
**Content to Generate:**

STATEMENT OF COMPLIANCE
Provide a statement that the trial will be conducted in compliance with the protocol, International Conference on Harmonisation Good Clinical Practice (ICH GCP) and applicable state, local and federal regulatory requirements. Each engaged institution must have a current Federal-Wide Assurance (FWA) issued by the Office for Human Research Protections (OHRP) and must provide this protocol and the associated informed consent documents and recruitment materials for review and approval by an appropriate Institutional Review Board (IRB) or Ethics Committee (EC) registered with OHRP. Any amendments to the protocol or consent materials must also be approved before implementation. Select one of the two statements below:
Provide a statement that the trial will be conducted in compliance with the protocol, International Conference on Harmonisation Good Clinical Practice (ICH GCP) and applicable state, local and federal regulatory requirements. Each engaged institution must have a current Federal-Wide Assurance (FWA) issued by the Office for Human Research Protections (OHRP) and must provide this protocol and the associated informed consent documents and recruitment materials for review and approval by an appropriate Institutional Review Board (IRB) or Ethics Committee (EC) registered with OHRP. Any amendments to the protocol or consent materials must also be approved before implementation. Select one of the two statements below:

(1) [The trial will be carried out in accordance with International Conference on Harmonisation Good Clinical Practice (ICH GCP) and the following:
(1) [The trial will be carried out in accordance with International Conference on Harmonisation Good Clinical Practice (ICH GCP) and the following:

• United States (US) Code of Federal Regulations (CFR) applicable to clinical studies (45 CFR Part 46, 21 CFR Part 50, 21 CFR Part 56, 21 CFR Part 312, and/or 21 CFR Part 812)
• United States (US) Code of Federal Regulations (CFR) applicable to clinical studies (45 CFR Part 46, 21 CFR Part 50, 21 CFR Part 56, 21 CFR Part 312, and/or 21 CFR Part 812)

National Institutes of Health (NIH)-funded investigators and clinical trial site staff who are responsible for the conduct, management, or oversight of NIH-funded clinical trials have completed Human Subjects Protection and ICH GCP Training.

Expand Down Expand Up @@ -341,7 +341,7 @@ This section contains three major components. Generate each with appropriate det

#### Section 1.2: Schema (30 lines)

**Generate a text-based flow diagram** showing study progression.
**Generate a text-based flow diagram** showing study progression.

**Required Elements:**
- **Screening Period:** Show duration (e.g., "Within 28 days") and key activities (eligibility assessment)
Expand Down
Expand Up @@ -29,11 +29,11 @@ ASM Hierarchy → Flat Column
device-system-document.
device-identifier → instrument_serial_number
model-number → instrument_model

measurement-aggregate-document.
analyst → analyst
measurement-time → measurement_datetime

measurement-document[].
sample-identifier → sample_id
viable-cell-density.value → viable_cell_density
Expand Down Expand Up @@ -185,43 +185,43 @@ import pandas as pd
def flatten_asm(asm_dict, technique="cell-counting"):
"""
Flatten ASM JSON to pandas DataFrame.

Args:
asm_dict: Parsed ASM JSON
technique: ASM technique type

Returns:
pandas DataFrame with one row per measurement
"""
rows = []

# Get aggregate document
agg_key = f"{technique}-aggregate-document"
agg_doc = asm_dict.get(agg_key, {})

# Extract device info
device = agg_doc.get("device-system-document", {})
device_info = {
"instrument_serial_number": device.get("device-identifier"),
"instrument_model": device.get("model-number")
}

# Get technique documents
doc_key = f"{technique}-document"
for doc in agg_doc.get(doc_key, []):
meas_agg = doc.get("measurement-aggregate-document", {})

# Extract common metadata
common = {
"analyst": meas_agg.get("analyst"),
"measurement_datetime": meas_agg.get("measurement-time"),
**device_info
}

# Extract each measurement
for meas in meas_agg.get("measurement-document", []):
row = {**common}

# Flatten measurement fields
for key, value in meas.items():
if isinstance(value, dict) and "value" in value:
Expand All @@ -232,9 +232,9 @@ def flatten_asm(asm_dict, technique="cell-counting"):
row[f"{col}_unit"] = value["unit"]
else:
row[key.replace("-", "_")] = value

rows.append(row)

return pd.DataFrame(rows)

# Usage
Expand Down
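To make the flattening logic above concrete, here is a self-contained, pandas-free sketch run on a minimal hand-built ASM-style dict. All identifiers and values below are invented for illustration; real allotropy output carries many more fields.

```python
def flatten_asm_rows(asm_dict, technique="cell-counting"):
    """Flatten an ASM-style dict to a list of plain row dicts (no pandas)."""
    agg_doc = asm_dict.get(f"{technique}-aggregate-document", {})

    # Device metadata is shared by every row
    device = agg_doc.get("device-system-document", {})
    device_info = {
        "instrument_serial_number": device.get("device-identifier"),
        "instrument_model": device.get("model-number"),
    }

    rows = []
    for doc in agg_doc.get(f"{technique}-document", []):
        meas_agg = doc.get("measurement-aggregate-document", {})
        common = {
            "analyst": meas_agg.get("analyst"),
            "measurement_datetime": meas_agg.get("measurement-time"),
            **device_info,
        }
        for meas in meas_agg.get("measurement-document", []):
            row = dict(common)
            for key, value in meas.items():
                col = key.replace("-", "_")
                if isinstance(value, dict) and "value" in value:
                    # Split {value, unit} pairs into col and col_unit
                    row[col] = value["value"]
                    if "unit" in value:
                        row[f"{col}_unit"] = value["unit"]
                else:
                    row[col] = value
            rows.append(row)
    return rows


# Minimal illustrative input (field names follow the mapping above)
asm = {
    "cell-counting-aggregate-document": {
        "device-system-document": {
            "device-identifier": "SN-12345",
            "model-number": "Vi-CELL XR",
        },
        "cell-counting-document": [
            {
                "measurement-aggregate-document": {
                    "analyst": "jdoe",
                    "measurement-time": "2024-01-15T09:30:00Z",
                    "measurement-document": [
                        {
                            "sample-identifier": "S1",
                            "viable-cell-density": {
                                "value": 1.2e6,
                                "unit": "cells/mL",
                            },
                        }
                    ],
                }
            }
        ],
    }
}

rows = flatten_asm_rows(asm)
print(rows[0]["viable_cell_density"], rows[0]["viable_cell_density_unit"])
```

The `_unit` companion columns mirror how the skill's `flatten_asm` splits `{value, unit}` pairs before handing the rows to `pd.DataFrame`.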
Expand Up @@ -63,16 +63,16 @@
def convert_to_asm(filepath: str) -> Optional[Dict[str, Any]]:
"""
Convert {instrument_name} file to ASM format.

Args:
filepath: Path to input file

Returns:
ASM dictionary or None if conversion fails
"""
if not ALLOTROPY_AVAILABLE:
raise ImportError("allotropy library required. Install with: pip install allotropy")

try:
asm = allotrope_from_file(filepath, Vendor.{vendor})
return asm
Expand All @@ -84,36 +84,36 @@ def convert_to_asm(filepath: str) -> Optional[Dict[str, Any]]:
def flatten_asm(asm: Dict[str, Any]) -> list:
"""
Flatten ASM to list of row dictionaries for CSV export.

Args:
asm: ASM dictionary

Returns:
List of flattened row dictionaries
"""
technique = "{technique}"
rows = []

agg_key = f"{{technique}}-aggregate-document"
agg_doc = asm.get(agg_key, {{}})

# Extract device info
device = agg_doc.get("device-system-document", {{}})
device_info = {{
"instrument_serial_number": device.get("device-identifier"),
"instrument_model": device.get("model-number"),
}}

doc_key = f"{{technique}}-document"
for doc in agg_doc.get(doc_key, []):
meas_agg = doc.get("measurement-aggregate-document", {{}})

common = {{
"analyst": meas_agg.get("analyst"),
"measurement_time": meas_agg.get("measurement-time"),
**device_info
}}

for meas in meas_agg.get("measurement-document", []):
row = {{**common}}
for key, value in meas.items():
Expand All @@ -125,7 +125,7 @@ def flatten_asm(asm: Dict[str, Any]) -> list:
else:
row[clean_key] = value
rows.append(row)

return rows


Expand All @@ -134,36 +134,36 @@ def main():
parser.add_argument("input", help="Input file path")
parser.add_argument("--output", "-o", help="Output JSON path")
parser.add_argument("--flatten", action="store_true", help="Also generate CSV")

args = parser.parse_args()

input_path = Path(args.input)
if not input_path.exists():
print(f"Error: File not found: {{args.input}}")
return 1

# Convert to ASM
print(f"Converting {{args.input}}...")
asm = convert_to_asm(str(input_path))

if asm is None:
print("Conversion failed")
return 1

# Write ASM JSON
output_path = args.output or str(input_path.with_suffix('.asm.json'))
with open(output_path, 'w') as f:
json.dump(asm, f, indent=2, default=str)
print(f"ASM written to: {{output_path}}")

# Optionally flatten
if args.flatten and PANDAS_AVAILABLE:
rows = flatten_asm(asm)
df = pd.DataFrame(rows)
flat_path = str(input_path.with_suffix('.flat.csv'))
df.to_csv(flat_path, index=False)
print(f"CSV written to: {{flat_path}}")

return 0


Expand Down
Expand Up @@ -12,7 +12,7 @@ Research advances generally fall into one of these categories, each with two dim
- *Logic*: Novel ways to manipulate biological systems (e.g., using CRISPR for deep mutational scanning)
- *Technology*: New tools for manipulation (e.g., developing base editors, creating whole-genome CRISPR libraries)

**MEASUREMENT**
**MEASUREMENT**
- *Logic*: Novel applications of existing measurement tools (e.g., using tissue clearing to study liver fibrosis)
- *Technology*: New measurement capabilities (e.g., developing tissue-clearing techniques, super-resolution microscopy)

Expand Down Expand Up @@ -133,7 +133,7 @@ After generating ideas, we must evaluate them critically. Here are the most comm
#### Trap #1: The Truffle Hound
**Warning:** Don't become so good at one system or technique that you fail to ask questions of biological import.

**Bad:** "What is the role of p190 RhoGAP in wing development?"
**Bad:** "What is the role of p190 RhoGAP in wing development?"
**Better:** "How do signaling pathways and cytoskeleton coordinate to control wing development?"

**Self-Check:** Is the question driven by biological curiosity or by what the user is technically capable of?
Expand Down
Expand Up @@ -205,8 +205,8 @@ Claude should produce a **2-page Risk Assessment Document**:
| [Assumption 2] | Bio/Tech | 1-5 | X mo | [Rationale for score] |
| ... | ... | ... | ... | ... |

*Bio = Biological reality, Tech = Technical capability
†Risk: 1=very likely to 5=very unlikely
*Bio = Biological reality, Tech = Technical capability
†Risk: 1=very likely to 5=very unlikely
‡Time to test in months

#### Risk Profile Summary:
Expand Down