
AI-Assisted Development at Scratch

AI coding assistants and other AI-driven development tools are becoming part of how people write, explore, and learn code. These tools can support contributors in powerful ways, but only when they are used in ways that center people, creativity, and community.

This document describes how we approach AI-assisted development in our open-source repositories: what we aspire to, and where we draw clear boundaries. This document was inspired, in part, by our more general thoughts about Creative AI at Scratch.


Guiding Stars

These "guiding stars" shape our approach to AI-assisted development:

  • Agency: Developers remain in control and responsible for their work.
  • Creativity: AI should support experimentation and free developers to focus on high-level ideas.
  • Human-Centered: Human insight, context, and collaboration remain essential.
  • Growth: Use AI to foster learning and understanding, not just for speed.

We believe these tools are most powerful when they expand a developer's ability to explore, prototype, and learn, rather than when they simply automate the production of code. Our goal is to empower contributors while ensuring that human judgment and creativity remain at the center of everything we build.

By prioritizing agency and human-in-the-loop collaboration, we foster a community where developers don't just ship code faster, but grow in their understanding and impact. AI should be a partner in curiosity, helping to lower barriers to entry while strengthening the individual voices and shared ownership that make open source thrive.


Lines in the Sand

Alongside our aspirations above, we set these "lines in the sand" as boundaries to protect our contributors and maintain the trust of our global community:

  • Responsibility: People are accountable for all code they submit.
  • Transparency: Contributions must be understandable and clearly explained.
  • Integrity: No misrepresentation of AI involvement or "black box" changes.
  • Ethics & Safety: No harmful uses or data leaks; ensure license compliance and compatibility.
  • Dialogue: AI does not replace peer review or human discussion.

The most fundamental boundary is that responsibility stays with people: we do not support submitting code that a contributor does not fully understand or cannot reasonably explain. Open source depends on shared understanding, and "black box" contributions — regardless of how they were produced — undermine the long-term health of our repositories.

Furthermore, AI must never be used to bypass the essential human elements of our work, such as thoughtful peer review, ethical considerations, and the protection of private data. We value transparency and dialogue over maximal automation, and we will not accept contributions that misrepresent AI involvement or attempt to end a discussion with "the tool says so."

All AI-assisted contributions should include appropriate tests, clear commit messages, and PR descriptions that explain intent and behavior changes. Maintainers may request more detail or reject unreviewable changes.


An Ongoing Conversation

AI tools — and how people use them — will continue to change. This document is not the final word, but a shared starting point.

We are committed to:

  • Centering developers' agency.
  • Supporting creativity and learning.
  • Engaging openly with our community as tools and practices evolve.

Several humans contributed to the creation of this document, with some assistance from AI tools. Humans reviewed, edited, and take ongoing responsibility for its content.