
Commit 3dbca2d

v1.3.1
Patch release replacing the mbedTLS crypto backend with OpenSSL 3, fixing the `print` built-in to match OPA semantics, and adding vcpkg packaging support.

**Breaking Changes**

- Removed the `mbedtls` crypto backend. The default `REGOCPP_CRYPTO_BACKEND` is now `openssl3` (previously `mbedtls`). Builds on Linux/macOS require OpenSSL 3.0+ development headers (e.g. `libssl-dev`, `openssl@3`). The `bcrypt` backend for Windows is unchanged.

**Bug Fixes**

- Fixed the `print` built-in (`internal.print`) to match OPA semantics: arguments are now wrapped in set comprehensions, undefined arguments print as `<undefined>` instead of aborting, and multi-valued arguments produce the cross-product of output lines.
- Fixed `regocppConfig.cmake.in` to use `@PACKAGE_INIT@` and `CMakeFindDependencyMacro` instead of re-fetching Trieste from git. Corrected the typo `REGOCPP_lIBRARIES` → `REGOCPP_LIBRARIES`.

**Build System / Packaging**

- Added `REGOCPP_USE_FETCH_CONTENT` option (default `ON`). When `OFF`, Trieste and its parsers are located via `find_package` instead of `FetchContent`, enabling vcpkg and other package-manager workflows.
- Added vcpkg port files (`ports/rego-cpp/`): `vcpkg.json` manifest with `openssl3` and `tools` features, `portfile.cmake`, and usage instructions.
- Header install rules now use `CONFIGURATIONS Release RelWithDebInfo MinSizeRel` to prevent duplicate headers under `debug/include` in multi-config builds (vcpkg compatibility).
- The fuzzer (`rego_fuzzer`) is no longer installed unless tests are also enabled, avoiding stray binaries in vcpkg's `bin/` directory.
- Install targets (`trieste`, `json`, `yaml`, `snmalloc`) are only exported when `REGOCPP_USE_FETCH_CONTENT=ON`.
- Added a minimal vcpkg consumer test project (`tests/vcpkg-consumer/`) that verifies `find_package(regocpp)` and a simple query.

**CI**

- Added `vcpkg-integration` and `vcpkg-integration-minimal` CI jobs testing vcpkg install on Ubuntu, macOS, and Windows.
- Added version consistency check between `VERSION` and `ports/rego-cpp/vcpkg.json`.
- All CI jobs now install OpenSSL development headers where needed.
- Python wheel builds (`build_wheels.yml`) updated to install OpenSSL on macOS (Homebrew), manylinux (yum), and musllinux (apk).

**Wrappers**

- Python, Rust, and .NET wrappers updated to use `openssl3` instead of `mbedtls` as the default crypto backend.
- Rust `build.rs` now selects `bcrypt` on Windows and `openssl3` elsewhere, linking against the appropriate system libraries.

**Test Infrastructure**

- Added `want_output` field to YAML test cases, enabling validation of `print` output via stdout capture.
- Added 10 new `print` test cases covering basic output, multiple arguments, undefined arguments, collections, variables, null/false, empty calls, and multiple print calls.

Signed-off-by: Matthew A Johnson <matjoh@microsoft.com>
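The cross-product behaviour of the fixed `print` built-in can be modelled with a short sketch. This is an illustration of the semantics described in the release notes, not rego-cpp code; `print_lines` and the `UNDEFINED` sentinel are hypothetical names.

```python
from itertools import product

UNDEFINED = object()  # stand-in for an undefined Rego value

def print_lines(args):
    """Model of the documented print semantics: each argument is treated
    as a set of possible values (as if wrapped in a set comprehension),
    undefined arguments render as <undefined> instead of aborting, and
    multi-valued arguments yield the cross-product of output lines."""
    expanded = []
    for arg in args:
        if arg is UNDEFINED:
            expanded.append(["<undefined>"])
        elif isinstance(arg, (set, frozenset)):
            expanded.append(sorted(str(v) for v in arg))
        else:
            expanded.append([str(arg)])
    return [" ".join(combo) for combo in product(*expanded)]

# An undefined argument does not abort the call; it prints as <undefined>.
print(print_lines(["x", UNDEFINED]))   # ['x <undefined>']
# A multi-valued argument fans out into one output line per value.
print(print_lines([{1, 2}, "y"]))      # ['1 y', '2 y']
```

The `want_output` test field mentioned below validates exactly this kind of line-per-combination output via stdout capture.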
1 parent 975bfa0 commit 3dbca2d

29 files changed

Lines changed: 1086 additions & 2153 deletions

.github/copilot-instructions.md

Lines changed: 22 additions & 15 deletions
@@ -169,53 +169,60 @@ For non-trivial features (new syntax, new passes, AST restructuring), analyze th
 
 ### Multi-perspective Planning Process
 
-When planning a non-trivial code change, use four sub-planners running in parallel to generate competing plans, then synthesise the best elements into a final plan.
+When planning a non-trivial code change, use five sub-planners running in parallel to generate competing plans and an adversarial attack report, then synthesise the best elements into a final plan.
 
 #### Step 1 — Gather sub-plans
 
-Spawn **four fresh subagents**, each prompted to use one of the following skills. Each subagent receives the same task description and context but plans through a different lens:
+Spawn **five fresh subagents**, each prompted to use one of the following skills. Each subagent receives the same task description and context but plans through a different lens:
 
 | Subagent | Skill | Focus |
 |----------|-------|-------|
 | Speed Planner | `/plan-speed` | Runtime performance, low allocations, minimal passes, cache efficiency |
 | Security Planner | `/plan-security` | Defence in depth, safe error handling, bounded resources, fuzz coverage |
 | Usability Planner | `/plan-usability` | Clarity, readability, correctness, consistent naming, one-concept-per-pass |
 | Conservative Planner | `/plan-conservative` | Smallest diff, maximum reuse, no speculative generality, backwards compat |
+| Adversarial Planner | `/plan-adversarial` | Red-team attacks, hidden assumptions, untested edge cases, semantic divergence, consensus blind spots |
 
-Prompt each subagent with:
+Prompt each build-planner subagent (Speed, Security, Usability, Conservative) with:
 
 > You are planning a change to the rego-cpp project. Use the `/[skill-name]` skill to guide your planning. Here is the task: [task description and relevant context]. Produce a numbered plan following the output format defined in the skill.
 
-#### Step 2 — Evaluate the four plans
+Prompt the Adversarial Planner with:
+> You are a red-team adversary attacking a proposed change to rego-cpp. Use the `/plan-adversarial` skill to guide your attack. Here is the proposed change: [task description and relevant context]. The other four planners are building this feature — your job is to break it. Produce an attack report following the output format defined in the skill.
 
-Review the four plans yourself and produce a short evaluation covering:
+#### Step 2 — Evaluate the five plans
 
-- **Convergence**: where two or more plans agree on the same approach. High convergence suggests a clearly correct design.
+Review the four build plans and the adversarial attack report yourself and produce a short evaluation covering:
+
+- **Convergence**: where two or more build planners agree on the same approach. High convergence suggests a clearly correct design — but check the adversarial report for challenges to that convergence.
+- **Adversarial findings**: which attacks from the adversarial planner are valid and must be addressed in the final plan. Classify each as MUST-ADDRESS, SHOULD-ADDRESS, or ACKNOWLEDGED (risk accepted).
 - **Unique insights**: ideas that appear in only one plan and are worth incorporating.
 - **Conflicts**: where plans disagree. For each conflict, state which perspective you favour and why.
-- **Gaps**: anything none of the four plans addressed.
+- **Gaps**: anything none of the five plans addressed.
 
 #### Step 3 — Synthesise the final plan
 
-Spawn a **fifth subagent** (the synthesiser). Provide it with:
+Spawn a **sixth subagent** (the synthesiser). Provide it with:
 - The original task description.
-- All four sub-plans (labelled by perspective).
-- Your evaluation from Step 2.
+- All four build sub-plans (labelled by perspective).
+- The adversarial attack report.
+- Your evaluation from Step 2 (including adversarial finding classifications).
 
 Prompt the synthesiser with:
-> You are producing the final plan for a change to rego-cpp. You have received four sub-plans from different perspectives (Speed, Security, Usability, Conservative) and an evaluation of those plans. Synthesise them into a single coherent, numbered plan that balances all four concerns. Where the evaluation favours one perspective, follow it. Where the evaluation is neutral, prefer the Conservative approach. Output the final plan in the standard format: Goal, Steps (with file paths and descriptions balancing all four perspectives), Rationale (explaining the synthesis), and Trade-offs (any conflicts between perspectives and how they were resolved).
+> You are producing the final plan for a change to rego-cpp. You have received four build sub-plans from different perspectives (Speed, Security, Usability, Conservative), an adversarial attack report, and an evaluation of all five. Synthesise them into a single coherent, numbered plan that balances all four build concerns and defends against the adversarial attacks classified as MUST-ADDRESS or SHOULD-ADDRESS. Where the evaluation favours one perspective, follow it. Where the evaluation is neutral, prefer the Conservative approach. For each MUST-ADDRESS adversarial finding, include a specific mitigation step in the plan. Output the final plan in the standard format: Goal, Steps (with file paths and descriptions balancing all perspectives), Adversarial Mitigations (how each MUST-ADDRESS attack is handled), Rationale (explaining the synthesis), and Trade-offs (any conflicts between perspectives and how they were resolved).
 
 #### Step 4 — Review the synthesised plan
 
 Before presenting the plan, run an iterative review loop:
 
-1. Spawn a subagent to review the synthesised plan. Provide it with the original task description, the four sub-plans, your evaluation, and the synthesised plan. Ask it to check for: logical errors in the step ordering, steps that contradict each other, missing error handling or edge cases, violations of rego-cpp conventions, and anything the synthesis dropped that should have been kept.
+1. Spawn a subagent to review the synthesised plan. Provide it with the original task description, the four build sub-plans, the adversarial attack report, your evaluation, and the synthesised plan. Ask it to check for: logical errors in the step ordering, steps that contradict each other, missing error handling or edge cases, violations of rego-cpp conventions, anything the synthesis dropped that should have been kept, and any MUST-ADDRESS adversarial attacks that lack adequate mitigation in the plan.
 2. If the review finds issues, revise the plan yourself and spawn a **different** subagent to review the revised version.
 3. Repeat until a review comes back clean (no issues found).
 
 #### Step 5 — Present for approval
 
 Present the reviewed plan to the user along with a brief summary of:
-- Key points of agreement across the four sub-planners.
+- Key points of agreement across the build sub-planners.
+- Adversarial attacks that were addressed and how, plus any that were acknowledged but not mitigated (with rationale).
 - Notable trade-offs made during synthesis.
 - Any minority opinions from individual sub-planners that were overruled.
 - Issues caught and resolved during the review loop (if any).

@@ -315,7 +322,7 @@ When iterating on a specific feature, **prefer running individual OPA subdirecto
 
 ### Debugging with lldb
 
-Debug builds (e.g., `build-mbedtls` with `CMAKE_BUILD_TYPE=Debug`) include full debug symbols. Use `lldb` to diagnose test failures, crashes, or incorrect results:
+Debug builds (e.g., with `CMAKE_BUILD_TYPE=Debug`) include full debug symbols. Use `lldb` to diagnose test failures, crashes, or incorrect results:
 
 ```bash
 # Break at a specific function and run a single test case

@@ -330,7 +337,7 @@ cd build && lldb ./tests/rego_test -- opa/v1/test/cases/testdata/v1/<suite>/<tes
 (lldb) n / s / c # next / step / continue
 ```
 
-This is particularly useful for debugging backend-specific failures (e.g., a test passes with OpenSSL but fails with mbedTLS) where the issue is in crypto or encoding logic.
+This is particularly useful for debugging backend-specific failures (e.g., a test passes with OpenSSL but fails with bcrypt) where the issue is in crypto or encoding logic.
 
 ### Generative Fuzzer (`rego_fuzzer`)
 
.github/skills/bump-version/SKILL.md

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ library and its wrapper packages.
 | File | Field / Pattern | Example |
 |------|----------------|---------|
 | `VERSION` | Entire file contents | `1.3.0` |
+| `ports/rego-cpp/vcpkg.json` | `"version": "X.Y.Z"` | `"version": "1.3.0"` |
 | `wrappers/python/setup.py` | `VERSION = "X.Y.Z"` | `VERSION = "1.3.0"` |
 | `wrappers/rust/regorust/Cargo.toml` | `version = "X.Y.Z"` | `version = "1.3.0"` |
 | `wrappers/dotnet/Rego/Rego.csproj` | `<Version>X.Y.Z</Version>` | `<Version>1.3.0</Version>` |
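The version-consistency check between `VERSION` and `ports/rego-cpp/vcpkg.json` that this table supports could be sketched as follows. This is a hypothetical helper for illustration, not the actual CI workflow; `versions_match` is an invented name.

```python
import json

def versions_match(version_file_text: str, vcpkg_json_text: str) -> bool:
    """Compare the bare version string in the VERSION file against the
    "version" field of the vcpkg.json manifest."""
    expected = version_file_text.strip()
    actual = json.loads(vcpkg_json_text).get("version")
    return expected == actual

print(versions_match("1.3.1\n", '{"name": "rego-cpp", "version": "1.3.1"}'))  # True
```

A CI job would read both files from the repository and fail the build when this returns False.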

.github/skills/code-review/SKILL.md

Lines changed: 38 additions & 11 deletions
@@ -1,11 +1,11 @@
 ---
 name: code-review
-description: 'Perform a multi-perspective code review of rego-cpp changes. Use when: reviewing a release, auditing a branch diff, evaluating a PR, or performing a pre-merge code review. Launches four parallel review subagents (Security, Performance, Usability, Conservative), verifies key findings, synthesises a unified report with severity-ranked findings, and produces actionable remediation recommendations.'
+description: 'Perform a multi-perspective code review of rego-cpp changes. Use when: reviewing a release, auditing a branch diff, evaluating a PR, or performing a pre-merge code review. Launches five parallel review subagents (Security, Performance, Usability, Conservative, Adversarial), verifies key findings, synthesises a unified report with severity-ranked findings, and produces actionable remediation recommendations.'
 ---
 
 # Multi-Perspective Code Review
 
-Perform a structured code review by examining changes from four independent
+Perform a structured code review by examining changes from five independent
 perspectives, cross-checking findings against source code, and producing a
 unified report with actionable recommendations.
 

@@ -20,8 +20,9 @@ unified report with actionable recommendations.
 
 A single reviewer tends toward their own bias — a security expert over-flags
 performance patterns, a performance expert under-flags input validation. This
-skill runs four parallel reviews, each with a strict lens, then synthesises
-findings where multiple perspectives converge or provide unique insight.
+skill runs five parallel reviews — four constructive perspectives and one
+adversarial red team — then synthesises findings where multiple perspectives
+converge or provide unique insight.
 
 ## Perspectives
 

@@ -31,6 +32,7 @@ findings where multiple perspectives converge or provide unique insight.
 | **Performance** | Allocation minimisation, cache-friendly access, pass count, hot-path awareness, algorithmic complexity | [plan-speed](../plan-speed/SKILL.md) |
 | **Usability** | Correctness, clarity, naming, WF precision, error message quality, one-concept-per-pass, API ergonomics | [plan-usability](../plan-usability/SKILL.md) |
 | **Conservative** | Smallest diff, backwards compatibility, API stability, reuse, no speculative generality, blast radius | [plan-conservative](../plan-conservative/SKILL.md) |
+| **Adversarial** | Red-team attacks, hidden assumptions, untested edge cases, semantic divergence from OPA, consensus blind spots, breaking inputs | [plan-adversarial](../plan-adversarial/SKILL.md) |
 
 ## Procedure
 

@@ -46,17 +48,17 @@ git diff --stat v1.2.0..HEAD
 Group changed files by subsystem (parser, builtins, VM, C API, build system,
 wrappers) to assign review focus areas.
 
-### Step 2: Launch Four Review Subagents
+### Step 2: Launch Five Review Subagents
 
-Spawn four Explore subagents **in parallel**, one per perspective. Each
+Spawn five Explore subagents **in parallel**, one per perspective. Each
 subagent receives:
 
 1. The same list of changed files and feature summary
 2. The perspective-specific review lens (from the table above)
 3. Specific files to examine based on the subsystem grouping
 4. Instructions to classify findings by severity and provide file/line references
 
-**Prompt template for each subagent:**
+**Prompt template for each constructive subagent (Security, Performance, Usability, Conservative):**
 
 > You are performing a {PERSPECTIVE}-focused code review of rego-cpp.
 > The changes add: {FEATURE_SUMMARY}.

@@ -77,9 +79,34 @@ Severity scales per perspective:
 - **Usability**: CONCERN / SUGGESTION / POSITIVE
 - **Conservative**: BREAKING / HIGH-RISK / MEDIUM-RISK / LOW-RISK / OK
 
+**Prompt template for the Adversarial subagent:**
+
+> You are a red-team adversary reviewing rego-cpp changes.
+> The changes add: {FEATURE_SUMMARY}.
+>
+> Your review lens: **Attack the implementation. Find hidden assumptions,
+> untested edge cases, semantic divergence from OPA, consensus blind spots,
+> and inputs that break the new code.**
+>
+> THOROUGHNESS: thorough
+>
+> Please examine these files and report attacks:
+> {FILE_LIST_WITH_SPECIFIC_QUESTIONS}
+>
+> For each attack, classify confidence as HIGH (proven with a test case),
+> MEDIUM (likely based on code analysis), or LOW (theoretical). Provide
+> concrete adversarial inputs (Rego policies, JSON data) wherever possible.
+> Identify any shared assumptions across the other review perspectives that
+> may be wrong. Return a structured attack report.
+
+Severity scale for Adversarial:
+- **HIGH**: Concrete breaking input or proven semantic divergence from OPA
+- **MEDIUM**: Likely failure based on code analysis, no concrete input yet
+- **LOW**: Theoretical concern, conditions for failure are speculative
+
 ### Step 3: Verify Key Findings
 
-After collecting all four reports, identify the highest-severity findings and
+After collecting all five reports, identify the highest-severity findings and
 **spot-check them against source code**. Launch a verification subagent:
 
 > For each claim below, read the relevant code and report whether the claim

@@ -103,9 +130,9 @@ severity scale:
 
 | Unified Severity | Mapping |
 |-----------------|---------|
-| CRITICAL / HIGH | Security CRITICAL/HIGH, Performance HIGH, Usability CONCERN (correctness bug), Conservative BREAKING |
-| MEDIUM | Security MEDIUM, Performance MEDIUM, Usability CONCERN (non-correctness), Conservative HIGH-RISK |
-| LOW | Security LOW, Performance LOW, Usability SUGGESTION, Conservative MEDIUM-RISK |
+| CRITICAL / HIGH | Security CRITICAL/HIGH, Performance HIGH, Usability CONCERN (correctness bug), Conservative BREAKING, Adversarial HIGH |
+| MEDIUM | Security MEDIUM, Performance MEDIUM, Usability CONCERN (non-correctness), Conservative HIGH-RISK, Adversarial MEDIUM |
+| LOW | Security LOW, Performance LOW, Usability SUGGESTION, Conservative MEDIUM-RISK, Adversarial LOW |
 
 Each finding gets: number, description, originating perspective(s), verification
 status, file path and line references.
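The unified-severity table in this diff can be made executable as a small lookup. This is a hypothetical helper sketched from the table, not part of the skill; Usability CONCERN is omitted because its mapping depends on whether the concern is a correctness bug.

```python
# Lookup keyed by (perspective, perspective-specific severity),
# mirroring the unified severity mapping table.
UNIFIED_SEVERITY = {
    ("Security", "CRITICAL"): "CRITICAL/HIGH",
    ("Security", "HIGH"): "CRITICAL/HIGH",
    ("Performance", "HIGH"): "CRITICAL/HIGH",
    ("Conservative", "BREAKING"): "CRITICAL/HIGH",
    ("Adversarial", "HIGH"): "CRITICAL/HIGH",
    ("Security", "MEDIUM"): "MEDIUM",
    ("Performance", "MEDIUM"): "MEDIUM",
    ("Conservative", "HIGH-RISK"): "MEDIUM",
    ("Adversarial", "MEDIUM"): "MEDIUM",
    ("Security", "LOW"): "LOW",
    ("Performance", "LOW"): "LOW",
    ("Usability", "SUGGESTION"): "LOW",
    ("Conservative", "MEDIUM-RISK"): "LOW",
    ("Adversarial", "LOW"): "LOW",
}

def unify(perspective: str, severity: str) -> str:
    """Map a perspective-specific severity onto the unified scale."""
    return UNIFIED_SEVERITY.get((perspective, severity), "UNMAPPED")

print(unify("Adversarial", "HIGH"))        # CRITICAL/HIGH
print(unify("Conservative", "MEDIUM-RISK"))  # LOW
```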
