63 changes: 41 additions & 22 deletions .github/workflows/README-staging-lint-checks.md
@@ -11,37 +11,52 @@ merged into the main codebase.
The workflow serves as a validation tool for linter rule development, following this process:

1. **Rule Development**: Engineers write new validation rules or modify existing ones
2. **Testing Setup**: Engineers create a PR and specify which rules to test via labels or PR description
2. **Testing Setup**: Engineers create a PR and specify which rules to test via labels
3. **Automated Validation**: The workflow runs specified rules against live specification files using AutoRest
4. **Result Analysis**: Engineers review the output to identify false positives, false negatives, or unexpected behavior
5. **Quality Assurance**: Engineers and reviewers validate that rules work correctly before production release

**Important**: This workflow is designed to assist validation, not to block PR merging. The responsibility for
ensuring rule quality lies with the engineer and reviewer based on the workflow output.
**Important**: This workflow enforces three merge-blocking gates:

## How to Use
1. **Rule changes require test rules** — If rule files are modified, the PR must specify rules to test via labels
2. **Command failures block merges** — AutoRest crashes or script errors cause the workflow to fail
3. **Validation errors require acknowledgment** — When errors are found, the author must add an `errors-acknowledged` label after reviewing them

### Specifying Rules to Test
The workflow automatically re-runs when labels are added or removed.
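The three gates can be summarized as a small decision function. This is a hypothetical sketch for illustration only; the workflow's actual logic lives in `checkBlockingConditions()` and the main entry point of `extract-rule-names-and-run-validation.js`.

```javascript
// Hypothetical sketch of the three merge-blocking gates (illustrative only).
function evaluateGates({ ruleFilesChanged, testRules, commandFailures, errors, errorsAcknowledged }) {
  // Gate 1: rule changes require test-<RuleName> labels
  if (ruleFilesChanged && testRules.length === 0) {
    return { blocked: true, reason: "rule changes without test rules" };
  }
  // Gate 2: AutoRest crashes or script errors always block
  if (commandFailures > 0) {
    return { blocked: true, reason: "command failures" };
  }
  // Gate 3: validation errors block until acknowledged via label
  if (errors > 0 && !errorsAcknowledged) {
    return { blocked: true, reason: "unacknowledged validation errors" };
  }
  return { blocked: false, reason: null };
}
```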

You can specify which validation rules to test using either of these methods:
## Validation Requirements

#### Method 1: PR Labels
### When Rule Files Are Changed

Add labels to your pull request with the format `test-<RuleName>`:
If your PR modifies any of these files, you **must** specify test rules:

- `test-PostResponseCodes`
- `test-DeleteMustNotHaveRequestBody`
- `test-LongRunningOperationsWithLongRunningExtension`
- `packages/rulesets/src/spectral/az-arm.ts`
- `packages/rulesets/src/spectral/az-common.ts`
- `packages/rulesets/src/spectral/az-dataplane.ts`
- `packages/rulesets/src/spectral/functions/*.ts`
- `packages/rulesets/src/native/legacyRules/**/*.ts`
- `packages/rulesets/src/native/functions/**/*.ts`
- `packages/rulesets/src/native/rulesets/**/*.ts`

#### Method 2: PR Description
The workflow will fail until test rules are specified.
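These paths are matched by the `RULE_FILE_PATTERN` regex defined in the validation script. The pattern is reproduced below; the sample paths are illustrative.

```javascript
// RULE_FILE_PATTERN as defined in extract-rule-names-and-run-validation.js.
const RULE_FILE_PATTERN =
  /^packages\/rulesets\/src\/spectral\/(az-arm|az-common|az-dataplane|functions\/.+)\.ts$|^packages\/rulesets\/src\/native\/(legacyRules|functions|rulesets)\/.+\.ts$/;

console.log(RULE_FILE_PATTERN.test("packages/rulesets/src/spectral/az-arm.ts"));      // true
console.log(RULE_FILE_PATTERN.test("packages/rulesets/src/native/functions/lro.ts")); // true
console.log(RULE_FILE_PATTERN.test("packages/rulesets/src/spectral/readme.md"));      // false
```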

Add a line in your PR body:
### When Validation Errors Are Found

```text
rules: PostResponseCodes, DeleteMustNotHaveRequestBody
```
1. Download the `linter-findings` artifact and review the errors
2. Add the `errors-acknowledged` label to confirm you have reviewed them
3. The workflow re-runs automatically and passes (reviewer approval is still required)

Note: If both methods are used, rules from both sources are combined.
Removing the label re-triggers the workflow and re-blocks the PR.

## How to Use

### Specifying Rules to Test

Add labels to your pull request with the format `test-<RuleName>`:

- `test-PostResponseCodes`
- `test-DeleteMustNotHaveRequestBody`
- `test-LongRunningOperationsWithLongRunningExtension`
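The label parsing can be sketched as follows. This is a minimal illustration; the real implementation is `extractRulesFromLabels()` in `extract-rule-names-and-run-validation.js`, and its exact details may differ.

```javascript
// Minimal sketch of extracting rule names from test-<RuleName> labels
// (illustrative; not the workflow's actual implementation).
function rulesFromLabels(labels) {
  return labels
    .map((l) => (typeof l === "object" ? l.name : l)) // webhook payloads use { name }
    .filter((name) => name.startsWith("test-"))
    .map((name) => name.slice("test-".length))
    .filter(Boolean);
}

// rulesFromLabels([{ name: "test-PostResponseCodes" }, { name: "errors-acknowledged" }])
// returns ["PostResponseCodes"]
```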

### Workflow Configuration

@@ -50,19 +65,24 @@ The workflow can be configured through environment variables:
```yaml
env:
SPEC_REPO: Azure/azure-rest-api-specs
FAIL_ON_ERRORS: "false"
MAX_FILES: "100"
ALLOWED_RPS: "compute,monitor,sql,hdinsight,network,resource,storage"
```

- **SPEC_REPO**: Source repository for OpenAPI specifications
- **FAIL_ON_ERRORS**: Whether the workflow should fail when validation errors are found
- **MAX_FILES**: Maximum number of specification files to process
- **ALLOWED_RPS**: Comma-separated list of resource providers to include in testing

**Note**: These values can be modified directly in the `.github/workflows/staging-lint-checks.yaml` file to adjust the
workflow behavior based on testing requirements.
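A script consuming these variables might read them as shown below. The variable names come from the workflow; the parsing itself is an illustrative sketch.

```javascript
// Illustrative sketch of reading the workflow's environment variables
// (names from staging-lint-checks.yaml; parsing details are assumptions).
function readConfig(env) {
  return {
    specRepo: env.SPEC_REPO ?? "Azure/azure-rest-api-specs",
    maxFiles: Number(env.MAX_FILES ?? "100"),
    allowedRps: (env.ALLOWED_RPS ?? "")
      .split(",")
      .map((s) => s.trim())
      .filter(Boolean),
  };
}
```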

### Labels

| Label | Purpose |
| --------------------- | ------------------------------------------------------------- |
| `test-<RuleName>` | Specifies a rule to validate (e.g., `test-PostResponseCodes`) |
| `errors-acknowledged` | Confirms the PR author has reviewed validation errors |

## Debugging and Troubleshooting

### Viewing Results
@@ -74,8 +94,7 @@ workflow behavior based on testing requirements.

### Common Issues

**No rules detected**: Ensure your PR labels follow the exact format `test-<RuleName>` or verify the rules line
in your PR description is properly formatted.
**No rules detected**: Ensure your PR labels follow the exact format `test-<RuleName>`.

**Workflow fails**: Check the workflow logs for specific error messages. The artifact will still be uploaded
even if the workflow fails.
@@ -85,5 +104,5 @@ the `ALLOWED_RPS` environment variable.

## Related Components

- `.github/workflows/src/extract-rule-names-and-run-validation.js`: Single consolidated script that parses rule names (from PR labels or body) and runs AutoRest with the selected rules over allowed spec files.
- `.github/workflows/src/extract-rule-names-and-run-validation.js`: Single consolidated script that parses rule names from PR labels and runs AutoRest with the selected rules over allowed spec files.
- GitHub Action workflow file: `.github/workflows/staging-lint-checks.yaml` orchestrates checkout, build, and script execution.
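The `linter-findings` artifact ends with a summary line in the format emitted by the script (`INFO | Runner | summary | Files scanned: …`). A small parser for that line might look like this; the parser is illustrative and not part of the workflow.

```javascript
// Illustrative parser for the artifact's summary line (not part of the workflow).
function parseSummary(line) {
  const m = line.match(
    /Files scanned: (\d+), Errors: (\d+), Warnings: (\d+), Command failures: (\d+)/,
  );
  if (!m) return null;
  const [, specs, errors, warnings, commandFailures] = m.map(Number);
  return { specs, errors, warnings, commandFailures };
}
```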
190 changes: 145 additions & 45 deletions .github/workflows/src/extract-rule-names-and-run-validation.js
@@ -14,7 +14,7 @@
import { existsSync, mkdirSync, readdirSync, writeFileSync } from "fs";
import { basename, dirname, join, relative } from "path";
import { fileURLToPath } from "url";
import { execNpmExec } from "../../shared/src/exec.js";
import { execNpmExec, isExecError } from "../../shared/src/exec.js";

const __dirname = dirname(fileURLToPath(import.meta.url));

@@ -34,37 +34,16 @@ export function extractRulesFromLabels(labels) {
}

/**
* Extract rule names from PR body
* Parses lines matching pattern: rules: RuleName1, RuleName2
* @param {string} body - PR body text
* @returns {string[]} - Array of extracted rule names
*/
export function extractRulesFromBody(body) {
// Find lines like "rules: A, B" or "rule: A"
const matches = body.match(/^\s*rules?\s*:(.*)$/gim) ?? [];
const ruleLine = matches.map((line) => line.split(":")[1] ?? "").join(",");
return ruleLine
.split(/[\n,]/)
.map((s) => s.trim())
.filter(Boolean);
}

/**
* Extract and combine rule names from GitHub context
* Combines rules from both labels and body, removing duplicates
* Extract rule names from GitHub context
* Extracts rules from PR labels with test- prefix
* @param {import('@actions/github-script').AsyncFunctionArguments['context']} context - GitHub Actions context object
* @returns {string[]} - Array of unique rule names
*/
export function extractRuleNames(context) {
const pr = context.payload.pull_request;
if (!pr) return [];

const fromLabels = extractRulesFromLabels(pr.labels ?? []);
const fromBody = extractRulesFromBody(pr.body ?? "");

// Combine both sources and remove duplicates
const allRules = [...fromLabels, ...fromBody];
return Array.from(new Set(allRules));
return extractRulesFromLabels(pr.labels ?? []);
}

/**
@@ -213,12 +192,118 @@ function parseMessages(stdout, stderr) {
return msgs;
}

/**
* Check if the PR has a specific label
* Reads directly from the event payload — no GitHub API calls needed.
* @param {import('@actions/github-script').AsyncFunctionArguments['context']} context
* @param {string} labelName
* @returns {boolean}
*/
export function hasLabel(context, labelName) {
const labels = context.payload.pull_request?.labels ?? [];
return labels.some(
(/** @type {string | {name: string}} */ l) =>
(typeof l === "object" ? l.name : l).toLowerCase() === labelName.toLowerCase(),
);
}

/**
* Evaluate blocking conditions after validation completes.
* Triggers core.setFailed() when merge should be blocked.
*
* Blocking scenarios:
* 1. Command failures — AutoRest or script crashes
* 2. Validation errors without "errors-acknowledged" label
*
* @param {object} params
* @param {import('@actions/github-script').AsyncFunctionArguments['context']} params.context
* @param {import('@actions/github-script').AsyncFunctionArguments['core']} params.core
* @param {{ specs: number, errors: number, warnings: number, commandFailures: number }} params.result
*/
export function checkBlockingConditions({ context, core, result }) {
const { errors, warnings, commandFailures } = result;

// Scenario 1: Command failures always block
if (commandFailures > 0) {
core.setFailed(
`Pipeline execution failed with ${commandFailures} command failure(s). ` +
"Review the logs and linter-findings artifact for details.",
);
return;
}

// Scenario 2: Validation errors block unless acknowledged
if (errors > 0) {
if (hasLabel(context, ERRORS_ACKNOWLEDGED_LABEL)) {
core.warning(
`Found ${errors} validation error(s) and ${warnings} warning(s). ` +
`Proceeding because "${ERRORS_ACKNOWLEDGED_LABEL}" label is present. ` +
"Reviewer approval is still required.",
);
} else {
core.setFailed(
`Found ${errors} validation error(s). Action required:\n` +
" 1. Review the errors in the linter-findings artifact\n" +
` 2. Add the "${ERRORS_ACKNOWLEDGED_LABEL}" label to this PR to confirm you have reviewed them\n` +
" 3. The workflow will automatically re-run when the label is added",
);
}
}
}

/** Label name used to acknowledge validation errors on a PR. */
export const ERRORS_ACKNOWLEDGED_LABEL = "errors-acknowledged";

/**
* Regex pattern matching linter rule source file paths.
* Covers both spectral and native rule paths.
*/
export const RULE_FILE_PATTERN =
/^packages\/rulesets\/src\/spectral\/(az-arm|az-common|az-dataplane|functions\/.+)\.ts$|^packages\/rulesets\/src\/native\/(legacyRules|functions|rulesets)\/.+\.ts$/;

/**
* Detect whether any linter rule source files were changed in the current PR.
* Uses the GitHub REST API (pulls.listFiles) via the Octokit client provided by
 * actions/github-script.
*
* @param {import('@actions/github-script').AsyncFunctionArguments['github']} github - Octokit client
* @param {import('@actions/github-script').AsyncFunctionArguments['context']} context - GitHub Actions context
* @returns {Promise<boolean>} - true if rule files were changed
*/
export async function detectRuleChanges(github, context) {
const pr = context.payload.pull_request;
if (!pr) return false;

// Paginate to handle PRs with many changed files
const files = await github.paginate(github.rest.pulls.listFiles, {
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: pr.number,
per_page: 100,
});

const changedRuleFiles = files.filter(
(f) => (f.status === "added" || f.status === "modified") && RULE_FILE_PATTERN.test(f.filename),
);

if (changedRuleFiles.length > 0) {
console.log("Detected rule file changes:");
changedRuleFiles.forEach((f) => console.log(` ${f.status}\t${f.filename}`));
return true;
}

console.log("No rule file changes detected");
return false;
}

/**
* Main execution function for GitHub Actions
* @param {import('@actions/github-script').AsyncFunctionArguments} AsyncFunctionArguments
*/
export async function runInGitHubActions({ context, core }) {
export default async function runInGitHubActions({ github, context, core }) {
try {
const ruleChangesDetected = await detectRuleChanges(github, context);

// Extract rule names from PR
const selectedRules = extractRuleNames(context);

@@ -227,7 +312,16 @@
);

if (selectedRules.length === 0) {
core.warning("No linting rules specified in labels or PR body");
// Scenario 1: Rule changes without test rules
if (ruleChangesDetected) {
core.setFailed(
"Rule changes detected but no test rules specified. " +
"Add test-<RuleName> labels to this PR.",
);
return;
}

console.log("No linter-related changes found in the PR. Exiting with empty findings file.");
// Create empty output file
const outputFile = join(
process.env.GITHUB_WORKSPACE || "",
@@ -237,12 +331,17 @@
mkdirSync(dirname(outputFile), { recursive: true });
writeFileSync(
outputFile,
"INFO | Runner | summary | Files scanned: 0, Errors: 0, Warnings: 0\n",
"INFO | Runner | summary | Files scanned: 0, Errors: 0, Warnings: 0, Command failures: 0\n",
);
return;
}

return await runValidation(selectedRules, process.env, core);
const result = await runValidation(selectedRules, process.env, core);

// Evaluate blocking conditions (scenarios 2 & 3)
checkBlockingConditions({ context, core, result });

return result;
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
core.setFailed(`Script execution failed: ${errorMessage}`);
@@ -255,6 +354,7 @@ export async function runInGitHubActions({ context, core }) {
* @param {string[]} selectedRules - Array of rule names to validate
* @param {any} env - Environment variables object
* @param {import('@actions/github-script').AsyncFunctionArguments['core'] | null} core - GitHub Actions core object (optional)
* @returns {Promise<{ specs: number, errors: number, warnings: number, commandFailures: number }>}
*/
export async function runValidation(selectedRules, env, core = null) {
const repoRoot = env.GITHUB_WORKSPACE || process.cwd();
@@ -281,7 +381,7 @@ export async function runValidation(selectedRules, env, core = null) {
} else {
console.error(`ERROR | ${message}`);
}
return { specs: 0, errors: 0, warnings: 0 };
return { specs: 0, errors: 0, warnings: 0, commandFailures: 0 };
}
if (!specs.length) {
const message = "No matching spec files found";
@@ -294,14 +394,15 @@
mkdirSync(dirname(outputFile), { recursive: true });
writeFileSync(
outputFile,
"INFO | Runner | summary | Files scanned: 0, Errors: 0, Warnings: 0\n",
"INFO | Runner | summary | Files scanned: 0, Errors: 0, Warnings: 0, Command failures: 0\n",
);
return { specs: 0, errors: 0, warnings: 0 };
return { specs: 0, errors: 0, warnings: 0, commandFailures: 0 };
}

console.log(`Processing ${specs.length} file(s)`);
let errors = 0;
let warnings = 0;
let commandFailures = 0;
const outLines = [];
const seenErrors = new Set(); // Track unique errors to avoid duplicates from $ref resolution

@@ -311,7 +412,16 @@
try {
res = await runAutorest(spec, specRoot, selectedRules, repoRoot);
} catch (error) {
console.log(`Failed ${spec}: ${error}`);
commandFailures++;
const specRelPath = relative(specRoot, spec).replace(/\\/g, "/");
const errorMessage = error instanceof Error ? error.message : String(error);
console.log(`Failed ${spec}: ${errorMessage}`);
if (isExecError(error)) {
if (error.stdout) console.log(` stdout: ${error.stdout}`);
if (error.stderr) console.log(` stderr: ${error.stderr}`);
if (error.code !== undefined) console.log(` exit code: ${error.code}`);
}
outLines.push(`ERROR | CommandFailed | ${specRelPath} | ${errorMessage}`);
continue;
}

@@ -390,22 +500,12 @@ export async function runValidation(selectedRules, env, core = null) {
}

// Output results
const summary = `INFO | Runner | summary | Files scanned: ${specs.length}, Errors: ${errors}, Warnings: ${warnings}`;
const summary = `INFO | Runner | summary | Files scanned: ${specs.length}, Errors: ${errors}, Warnings: ${warnings}, Command failures: ${commandFailures}`;
console.log(summary);

// Write output file
mkdirSync(dirname(outputFile), { recursive: true });
writeFileSync(outputFile, outLines.concat([summary]).join("\n") + "\n", "utf8");

// Handle failure conditions
if (errors > 0 && env.FAIL_ON_ERRORS === "true") {
const message = `Found ${errors} error(s) in validation`;
if (core) core.setFailed(message);
else {
console.error(`ERROR | ${message}`);
process.exit(1);
}
}

return { specs: specs.length, errors, warnings };
return { specs: specs.length, errors, warnings, commandFailures };
}