# 🤖 AI Usage Guidelines

Aether Datafixers permits the use of AI tools in contributions - but under strict conditions. Transparency, quality, and human accountability are **non-negotiable**. This document defines the rules that all contributors and maintainers must follow when using AI-assisted tooling.

---
## ⚖️ General Principles

1. **AI is a tool, humans are accountable.** AI does not author contributions - humans do. The person who submits AI-assisted work bears full responsibility for it.
2. **Same quality bar - no exceptions.** AI-generated output must meet the exact same standards as human-written code: conventions, tests, documentation, and review.
3. **Transparency is mandatory.** Disclosure of significant AI assistance is required, not optional. Honest disclosure is always better than concealment.
4. **Quality and security over speed.** AI can accelerate development, but speed must never come at the expense of correctness, maintainability, or security.
5. **When in doubt, write it yourself.** If you are unsure whether AI output is correct, secure, or license-compatible - do not submit it. Rewrite it by hand.
---

AI tools may be used in the following areas, subject to the conditions listed:

| Area | Status | Conditions |
| --- | --- | --- |
| Code generation (features, bug fixes) | ✅ Allowed | Must be fully understood, tested, and reviewed by the contributor |
| Test generation (JUnit 5 / AssertJ) | ✅ Allowed | Assertions and edge cases must be manually verified |
| Documentation and Javadoc | ✅ Allowed | Must be factually accurate - AI may hallucinate API details |
| Boilerplate and scaffolding | ✅ Allowed | Still requires review for project conventions |
| Commit message drafting | ✅ Allowed | Must accurately describe the actual changes |
| PR description drafting | ✅ Allowed | Must reflect the real changes, not AI assumptions |

## 🚫 Restricted & Prohibited Uses

### 🏷️ Good First Issues - Learning First

Issues labeled **`good first issue`** are **strongly encouraged** to be completed **primarily without significant AI assistance**.

**Why:** Good first issues exist to help new contributors learn the codebase and ramp up through hands-on experience. They are intentionally kept simple. Solving them with your own effort lets you truly understand the project’s structure, patterns, and conventions - and keeps the opportunity fair for everyone who wants to make their first contribution through genuine learning.

If you are new to the project, embrace these issues as a learning opportunity. If you are experienced enough to use AI effectively, you are welcome to pick a more challenging issue instead.

The following areas require **extra scrutiny and maintainer approval** before AI-assisted contributions are accepted:

- **Security-sensitive code** - Any code handling untrusted data through `DynamicOps`, cryptographic operations, or artifact signing. Must be thoroughly reviewed by a maintainer with security context. See [SECURITY.md](SECURITY.md).
- **Public API design** - AI may suggest API shapes, but public API decisions must be made deliberately by maintainers. AI suggestions must not be accepted wholesale.
- **Core optics and type system changes** - Changes to the optic hierarchy (`Lens`, `Prism`, `Iso`, `Affine`, `Traversal`) or the type system require deep domain understanding that AI tools may lack.
- **CI/CD pipeline modifications** - Changes to GitHub Actions workflows, Maven profiles, or build configuration require maintainer review for security and correctness.

### 🚫 Prohibited Uses

The following uses of AI are **not permitted**:

- **AI as sole code reviewer** - Every PR must be reviewed and approved by at least one human maintainer. AI review tools may supplement but never replace human review.
- **Submitting unreviewed AI output** - Copy-pasting AI-generated code without reading, understanding, and verifying it is prohibited. You must be able to explain every line you submit.
- **Undisclosed significant AI usage** - Knowingly concealing significant AI assistance violates this policy. See [Accountability](#-accountability).
- **AI-generated vulnerability reports** - Automated AI scanning to generate public vulnerability reports is prohibited. Follow the private disclosure process in [SECURITY.md](SECURITY.md).
- **Bypassing project standards** - Using AI to circumvent tests, Checkstyle rules, or other quality gates is prohibited.
- **AI for license or legal decisions** - AI tools must not be relied upon for license compatibility analysis, DCO compliance interpretation, or other legal matters.

---

Disclosure is required whenever AI tools were used to **generate or substantially modify** code, documentation, or other contributions.

**Rule of thumb:** If the AI produced a block of code, a paragraph of text, or a test case that you then submitted - even if you edited it afterward - disclose it.

**Exempt:** Minor AI-assisted tasks such as IDE autocomplete suggestions, spell-checking, or grammar corrections do not require disclosure.

AI-generated code must meet the same standards as human-written code:

- ✅ Include proper Javadoc for all public API methods
- ✅ Use JetBrains annotations (`@NotNull`, `@Nullable`) where appropriate
- ✅ Use Guava `Preconditions` for argument validation
- ✅ Maintain thread-safety invariants - Aether Datafixers types are immutable and thread-safe by design
- ✅ Not introduce unnecessary dependencies

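As a sketch of the checklist above - Javadoc on public API, eager argument validation, and an immutable, thread-safe design - consider the following. The class is hypothetical (not from the actual codebase), and plain JDK calls stand in for the project's preferred Guava `Preconditions` and JetBrains annotations so the example is self-contained:

```java
import java.util.Objects;

/**
 * Hypothetical example (not a real Aether Datafixers class) illustrating the
 * checklist: documented public API, eager validation, and immutability.
 * Project code would use Guava {@code Preconditions} and JetBrains
 * {@code @NotNull}/{@code @Nullable} annotations instead of plain JDK calls.
 */
public final class ExampleVersion {
    private final int version;
    private final String label;

    /**
     * Creates an immutable version marker.
     *
     * @param version non-negative schema version number
     * @param label   human-readable label; must not be null
     * @throws IllegalArgumentException if {@code version} is negative
     */
    public ExampleVersion(int version, String label) {
        // In project code: Preconditions.checkArgument(...) / checkNotNull(...).
        if (version < 0) {
            throw new IllegalArgumentException("version must be >= 0, got " + version);
        }
        this.version = version;
        this.label = Objects.requireNonNull(label, "label");
    }

    /** @return the non-negative schema version number */
    public int version() {
        return version;
    }

    /** @return the human-readable label, never null */
    public String label() {
        return label;
    }
}
```

Because all fields are `final` and validated at construction, instances can be shared freely across threads without synchronization.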
### Documentation Quality

AI-generated documentation must:

- ✅ Be **factually accurate** - AI tools frequently hallucinate API details, method signatures, and module names
- ✅ Be **consistent** with existing documentation style and structure
- ✅ Reference **correct** class, method, and module names
- ✅ Not contradict existing documentation

### Additional Review Focus Areas

- **Test quality** - AI-generated tests may appear comprehensive but miss edge cases, test only happy paths, or contain incorrect assertions. Review assertions carefully.
- **Hallucinated APIs** - AI may reference methods, classes, or modules that do not exist in this project. Verify all API references against the actual codebase.
- **Naming conventions** - AI-generated names may not match project conventions (e.g., `DataVersion` vs. `Version`, `TypeReference` vs. `TypeRef`). Enforce consistency.
- **Thread safety** - Verify that AI-generated code maintains the project's immutability and thread-safety invariants.
- **Dependency additions** - AI may suggest adding external libraries. Verify necessity and license compatibility.
- **Subtle logic errors** - AI-generated code can appear correct at first glance while containing subtle bugs. Review logic paths carefully.

### Human Review Requirement

- At least **one human maintainer** must review and approve every PR - no exceptions.
- Reviewers may request the contributor to **explain** any AI-generated code.
- Reviewers are not expected to detect undisclosed AI usage. The disclosure responsibility lies with the contributor.

## ⚖️ Intellectual Property & Licensing

All contributions - whether human-written or AI-assisted - must be compatible with the project's [MIT License](LICENSE).

### DCO and AI
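For context on the sign-off mechanics the DCO relies on: the `Signed-off-by` trailer is appended with git's `-s` flag. The sketch below runs in a throwaway repository; the name and email are placeholders, not project values:

```shell
# Demo of DCO sign-off in a temporary repository (placeholder identity).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Example Contributor"
git config user.email "contributor@example.com"
echo "demo" > file.txt
git add file.txt
# -s appends "Signed-off-by: Example Contributor <contributor@example.com>"
git commit -q -s -m "docs: example change"
git log -1 --format=%B
```

The trailer asserts that the contributor has the right to submit the work under the project's license, which is why it applies to AI-assisted commits as well.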
### Responsibility

- The **human contributor** who submits AI-assisted work is **fully responsible** for it - including bugs, security vulnerabilities, test failures, and convention violations.
- For **AI agent PRs** (where no human is the commit author), the maintainer who requested the agent's work and/or merged the PR assumes responsibility.
- "The AI generated it" is **not** a mitigating factor for quality issues or policy violations.
### Enforcement

- Violations of this policy - particularly **undisclosed AI usage** or **submitting unreviewed AI output** - will be handled constructively on a case-by-case basis.
- Repeated or intentional violations may be escalated through the project's [Code of Conduct](CODE_OF_CONDUCT.md) enforcement process.
- Serious violations (e.g., knowingly submitting AI-generated code with license conflicts) may result in **contribution restrictions**.