Conversation
minettekaum left a comment:
Thanks for creating this, Sara 😍
I had a few comments on the wording, mostly to make the policy clearer and a bit easier for us to follow. Let me know if you have any questions about my comments :)
> @@ -0,0 +1,58 @@
> # AI contribution policy
>
> At Pruna, we very much welcome any contributions from the community. However, we want to ensure they are high quality and aligned with our guidelines. To prevent unwanted behavior by the community, we have created this AI contribution policy. Please read it carefully before contributing.
change 'prevent unwanted behavior by the community' to 'ensure AI-assisted contributions remain high quality, reviewable, and aligned with project standards,...'
> ## 1. Core Philosophy
> Pruna accepts AI-assisted code (e.g., using Copilot, Cursor, etc.), but strictly rejects AI-generated contributions where the submitter acts merely as a proxy. The submitter is the **Sole Responsible Author** for every line of code, comment, and design decision.
I agree with the accountability principle, but 'Sole Responsible Author' may be too absolute for AI-assisted contributions. I’d suggest wording like 'The human contributor remains fully responsible for every submitted change' instead. That keeps the accountability point without creating tension with the fact that AI assistance is permitted.
> > **Accountability lies with the human contributor, not the AI agent**
> Coding agents (e.g., Copilot, Claude Code) are not conscious entities and cannot be held accountable for their outputs. They can produce code that looks correct and plausible but contains subtle bugs, security vulnerabilities, or design flaws. So, maintainers and reviewers are ultimately responsible for catching these issues. The following rules ensure that all contributions are carefully vetted and that there is a human submitter behind the agent, taking full responsibility for the submitted code.
This rationale is mostly strong, but I’d tweak the responsibility framing. Right now it sounds like maintainers/reviewers are ultimately on the hook for catching AI mistakes. I think the contributor should be framed as primarily responsible for reviewing and validating the submission, with maintainers providing an additional review layer.
> ### Law 1: Proof of Verification
> AI tools frequently write code that looks correct but fails execution. Therefore, "vibe checks" are insufficient.
I agree with the intent here. Optional wording tweak: replace ‘vibe checks are insufficient’ with something slightly more formal like ‘superficial review is insufficient’.
> **Requirement:** Every PR introducing functional changes must be carefully tested locally by the human contributor before submission.
Should 'tested locally' be broadened a bit, to something like 'validated through appropriate local and/or CI checks before submission'?
> **Failure Condition:**
> - Answering a review question with "That's what the AI outputted" or "I don't know, it works" leads to immediate closure.
I agree with this, but 'immediate closure' may be unnecessarily strict. Maybe say the PR may be closed if the contributor cannot explain or revise the code after reviewer feedback. That keeps the standard high while allowing room for good-faith iteration.
> ### Law 4: Transparency in AI Usage Disclosure
> **Requirement:** If you used AI tools for coding, but manually reviewed and tested every line following the guidelines, you must mark the PR as "AI-assisted".
I would suggest something like this 'If a non-trivial portion of this PR was generated or substantially shaped by AI tools, the PR must be marked as "AI-assisted". Trivial autocomplete, formatting, or minor line completions do not require disclosure.'
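To make the disclosure concrete, a PR description could carry a short note along these lines (a hypothetical sketch only; the tools, scope, and field names shown are illustrative, and Pruna's actual PR template may define its own disclosure field):

```markdown
<!-- Hypothetical "AI-assisted" disclosure block for a PR description -->
**AI-assisted:** yes
- Tools used: GitHub Copilot (inline completions), Cursor (refactoring)
- Scope: first draft of one helper module; all code manually reviewed
  and validated locally before submission
```

Keeping the disclosure in the PR description (rather than only in a label) lets reviewers see at a glance which parts of the change warrant closer scrutiny.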
> **Failure Condition:**
> - Lack of transparency about AI tool usage may result in PR closure, especially if the code contains hallucinations or cannot be explained during review.
The disclosure principle makes sense, but I’d tighten the wording here. 'Hallucinations' is understandable informally, though 'fabricated or inaccurate code/comments' may be clearer in policy language.
> ## 3. Cases where Human must stay in control
> In some cases, such as boilerplate code outside the logic of the product, we could accept AI-generated code reviewed by another AI agent.
This paragraph seems inconsistent with the earlier accountability principle. If the core message is that humans must remain responsible, I don’t think 'reviewed by another AI agent' should be the accepted standard, even for boilerplate. I’d suggest framing AI review as optional assistance, while still requiring human review and approval.
> But for the core logic of the product, we want to ensure that humans fully understand the code and the design decisions. This is to ensure that the code is maintainable, secure, and aligned with the project's goals.
I agree with the principle here, but 'core logic of the product' may be too subjective to enforce consistently. It may help to define this more concretely or provide examples of the kinds of changes that require especially strong human understanding and validation.
## Description

Add an AI Contribution Policy to the documentation; a shorter version of CodeCarbon's policy.

## Related Issue

Fixes #(issue number)

## Type of Change

## Testing

Run `uv run pytest -m "cpu and not slow"`. For full setup and testing instructions, see the Contributing Guide.

## Checklist
Thanks for contributing to Pruna! We're excited to review your work.
New to contributing? Check out our Contributing Guide for everything you need to get started.
## First Prune (1-year OSS anniversary)
First Prune marks one year of Pruna’s open-source work. During the initiative window, qualifying merged contributions count toward First Prune. You can earn credits for our performance models via our API.
If you’d like your contribution to count toward First Prune, here’s how it works: