docs: update with AI Contribution Policy #621
# AI contribution policy

At Pruna, we warmly welcome contributions from the community. However, we want to ensure they are high quality and aligned with our guidelines. To prevent unwanted behavior by the community, we have created this AI contribution policy. Please read it carefully before contributing.

> Greatly inspired by [CodeCarbon's AI policy](https://docs.codecarbon.io/latest/contributing/AI_POLICY/).

## 1. Core Philosophy

Pruna accepts AI-assisted code (e.g., using Copilot, Cursor, etc.), but strictly rejects AI-generated contributions where the submitter acts merely as a proxy. The submitter is the **Sole Responsible Author** for every line of code, comment, and design decision.

> **Contributor comment:** I agree with the accountability principle, but "Sole Responsible Author" may be too absolute for AI-assisted contributions. I'd suggest wording like "The human contributor remains fully responsible for every submitted change" instead. That keeps the accountability point without creating tension with the fact that AI assistance is permitted.

> **Accountability lies with the human contributor, not the AI agent.**

Coding agents (e.g., Copilot, Claude Code) are not conscious entities and cannot be held accountable for their outputs. They can produce code that looks correct and plausible but contains subtle bugs, security vulnerabilities, or design flaws. Maintainers and reviewers are therefore ultimately responsible for catching these issues. The following rules ensure that all contributions are carefully vetted and that there is a human submitter behind the agent, taking full responsibility for the submitted code.

> **Contributor comment:** This rationale is mostly strong, but I'd tweak the responsibility framing. Right now it sounds like maintainers and reviewers are ultimately on the hook for catching AI mistakes. I think the contributor should be framed as primarily responsible for reviewing and validating the submission, with maintainers providing an additional review layer.

## 2. The Laws of Contribution

### Law 1: Proof of Verification

AI tools frequently write code that looks correct but fails execution. Therefore, "vibe checks" are insufficient.

> **Contributor comment:** I agree with the intent here. Optional wording tweak: replace "vibe checks are insufficient" with something slightly more formal, like "superficial review is insufficient".

**Requirement:** Every PR introducing functional changes must be carefully tested locally by the human contributor before submission.

> **Contributor comment:** Should "tested locally" be broadened a bit, to something like "validated through appropriate local and/or CI checks before submission"?

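To make the verification requirement concrete, a quick throwaway check run before opening the PR already goes beyond a "vibe check". The sketch below is illustrative only: `clamp` is a hypothetical helper invented for this example, not a Pruna API.

```python
def clamp(value, lo, hi):
    # Hypothetical helper going into a PR.
    return max(lo, min(hi, value))

# Quick local verification before submission: exercise the normal case
# plus the edge cases an AI draft most often gets wrong.
assert clamp(5, 0, 10) == 5    # in range
assert clamp(-3, 0, 10) == 0   # clipped at the lower bound
assert clamp(42, 0, 10) == 10  # clipped at the upper bound
assert clamp(7, 3, 3) == 3     # degenerate range
```

Running the project's actual test suite (see CONTRIBUTING.md) is the real requirement; ad-hoc assertions like these complement it rather than replace it.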
### Law 2: The Hallucination & Redundancy Ban

AI models often hallucinate comments or reinvent existing utilities.

**Requirement:** You must use existing methods and libraries, and never reinvent the wheel.

> **Contributor comment:** Instead, something like "Use existing project utilities and avoid introducing redundant helpers when an appropriate implementation already exists."

**Failure Conditions:**

- Creating new helper functions when a Pruna equivalent exists is grounds for immediate rejection.
- "Ghost Comments" (comments explaining logic that was deleted or doesn't exist) will result in a request for a full manual rewrite. Unnecessary comments are not allowed. Example: "This function returns the input".

> **Contributor comment:** I'd suggest softer, enforceable wording like "may be asked to revise or justify".

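As a hypothetical illustration of the comment rule (both functions are invented for this example), compare a redundant "ghost"-style comment with one that justifies a non-obvious choice:

```python
def identity(x):
    # Bad ("ghost"-style): restates the code and adds no information.
    # This function returns the input.
    return x

def shift_by_max(logits):
    # Good: explains *why*, not *what* -- subtracting the maximum keeps a
    # subsequent exponentiation from overflowing for large logits.
    m = max(logits)
    return [v - m for v in logits]
```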
### Law 3: The "Explain It" Standard

**Requirement:** If a maintainer or reviewer asks during code review, you must be able to explain the logic of any function you submit.

> **Contributor comment:** Suggest using "need" instead of "must", and changing "any function" to "any submitted change".

**Failure Condition:**

- Answering a review question with "That's what the AI outputted" or "I don't know, it works" leads to immediate closure.

> **Contributor comment:** I agree with this, but "immediate closure" may be unnecessarily strict. Maybe say the PR may be closed if the contributor cannot explain or revise the code after reviewer feedback. That keeps the standard high while allowing room for good-faith iteration.

### Law 4: Transparency in AI Usage Disclosure

**Requirement:** If you used AI tools for coding, but manually reviewed and tested every line following the guidelines, you must mark the PR as "AI-assisted".

> **Contributor comment:** I would suggest something like this: "If a non-trivial portion of this PR was generated or substantially shaped by AI tools, the PR must be marked as 'AI-assisted'. Trivial autocomplete, formatting, or minor line completions do not require disclosure."

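For illustration, a disclosure in the PR description could look like the following. The exact wording and template are hypothetical; the maintainers may define an official format.

```markdown
## Summary
Refactor the model loading path to reduce startup time.

## AI disclosure
AI-assisted: parts of this change were drafted with Copilot. I have
manually reviewed, run, and locally tested every line, per the AI
contribution policy.
```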
**Failure Condition:**

- Lack of transparency about AI tool usage may result in PR closure, especially if the code contains hallucinations or cannot be explained during review.

> **Contributor comment:** The disclosure principle makes sense, but I'd tighten the wording here. "Hallucinations" is understandable informally, though "fabricated or inaccurate code/comments" may be clearer in policy language.

## 3. Cases Where the Human Must Stay in Control

In some cases, such as boilerplate code outside the logic of the product, we could accept AI-generated code reviewed by another AI agent.

> **Contributor comment:** This paragraph seems inconsistent with the earlier accountability principle. If the core message is that humans must remain responsible, I don't think "reviewed by another AI agent" should be the accepted standard, even for boilerplate. I'd suggest framing AI review as optional assistance, while still requiring human review and approval.

But for the core logic of the product, we want to ensure that humans fully understand the code and the design decisions. This keeps the code maintainable, secure, and aligned with the project's goals.

> **Contributor comment:** I agree with the principle here, but "core logic of the product" may be too subjective to enforce consistently. It may help to define this more concretely or provide examples of the kinds of changes that require especially strong human understanding and validation.

## Additional Resources

For comprehensive guidance on contributing to Pruna, including development workflows, code quality standards, testing practices, and AI-assisted development best practices, see [CONTRIBUTING.md](CONTRIBUTING.md).

> **Contributor comment:** Change "prevent unwanted behavior by the community" to "ensure AI-assisted contributions remain high quality, reviewable, and aligned with project standards, ...".