feat: add InnerSource and AI section #107
Changes from all commits: 3a15eaa, 82ec8e0, 71054d8, b97ca32, 69f5a4f, 3a2ff56, bb73576, b0d6f8a, 269b4b3, 9158f7c, 1f32165, 75da1f8
@@ -0,0 +1,18 @@
```json
{
  "act": true,
  "action": "synchronize",
  "repository": {
    "fork": false,
    "default_branch": "main",
    "name": "managing-innersource-projects",
    "full_name": "InnerSourceCommons/managing-innersource-projects"
  },
  "pull_request": {
    "number": 107,
    "head": {
      "repo": {
        "full_name": "InnerSourceCommons/managing-innersource-projects"
      }
    }
  }
}
```
@@ -0,0 +1,7 @@
```json
{
  "act": true,
  "repository": {
    "fork": false,
    "default_branch": "main"
  }
}
```
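These two JSON files look like fixtures for a PR automation that decides whether to act on a webhook event. That purpose is an assumption based on the `act` field; the consumer of these files is not shown in this diff. Under that assumption, a minimal sketch of matching such a fixture against a webhook payload might look like:

```python
# Hypothetical sketch: decide whether an automation should act on a webhook
# event by comparing the payload against a fixture like the ones above.
# The fixture semantics and their consumer are assumptions, not part of this PR.

def matches(expected, payload):
    """Recursively check that every key in `expected` (except `act`)
    appears in `payload` with the same value."""
    for key, want in expected.items():
        if key == "act":  # `act` is the verdict, not a condition to match
            continue
        got = payload.get(key)
        if isinstance(want, dict):
            if not isinstance(got, dict) or not matches(want, got):
                return False
        elif got != want:
            return False
    return True

fixture = {
    "act": True,
    "repository": {"fork": False, "default_branch": "main"},
}

event = {
    "action": "synchronize",
    "repository": {
        "fork": False,
        "default_branch": "main",
        "name": "managing-innersource-projects",
    },
}

should_act = fixture["act"] and matches(fixture, event)
```

With this reading, the first fixture would only match `synchronize` events on the non-fork main repository, while the second matches any event from a non-fork repo whose default branch is `main`.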
@@ -0,0 +1,17 @@
# Authors and Reviewers

## Authors

Chronological order:

* InnerSource Program Office (ISPO) Working Group, [InnerSource Commons](https://innersourcecommons.org/).

## Reviewers

Chronological order:

* Jeff Bailey
* Russ Rutledge
* Micaela Eller

This section was drafted as a discussion starter and is open for contributions. If you would like to be listed as an author or reviewer, please open a pull request or get in touch via [Slack](https://innersourcecommons.org/slack).
@@ -0,0 +1,13 @@
# InnerSource and AI

Organizations are increasingly adopting AI in the workplace—from generative AI assistants to agentic coding tools that can write, refactor, and review code. This shift is changing how developers work: less time typing code, more time defining requirements, guiding AI, and making sure systems are reliable and maintainable. For InnerSource program leads, the question is whether InnerSource still matters in this new landscape.

It does. InnerSource is potentially *more* important than ever. Shared repositories, clear boundaries, documentation, and collaborative practices help AI systems—and the people using them—work with the right context, reuse existing components, and keep quality high. This section explains why InnerSource matters when adopting AI, how to shape your repositories and practices for AI-assisted development, and what risks and guardrails to keep in mind.

The following articles in this section go deeper:

- [Why InnerSource Matters When Adopting AI](why-innersource-matters-with-ai.md) — Relevance of InnerSource in an AI-augmented world, reuse, and production readiness.
- [Shaping Repositories and Practices for AI](shaping-for-ai.md) — Repository design, documentation, and workflow integration so both humans and AI can contribute effectively.
- [Risks and Guardrails](risks-and-guardrails.md) — Balancing speed with safety, the role of code review, and organizational best practices for AI use.

AI tooling and practices are evolving quickly. This section will be updated as the community learns more and as survey and research data become available. If you are new to InnerSource, we recommend starting with [Getting Started with InnerSource](http://www.oreilly.com/programming/free/getting-started-with-innersource.csp) and the [Introduction](/introduction/introduction.md) to this book.
@@ -0,0 +1,9 @@
# Risks and Guardrails

> **Review comment (Contributor):** I think the overall message of this page is that InnerSource practices are the same things that are needed for successful AI adoption. AI is the ultimate InnerSource contributor, so everything that we've done to enable InnerSource also crosses over to enable AI.
>
> **Author:** Removed the section.

AI is the ultimate InnerSource contributor. Like any external contributor, AI agents generate code that must be reviewed, validated, and integrated thoughtfully into your systems. The same InnerSource practices that enable trusted external contributions—code review, clear guidelines, transparent decision-making, and systems thinking—are exactly what you need to safely and sustainably adopt AI in development.

Adopting AI without these guardrails can deliver short-term gains in speed and productivity, but at the cost of long-term risks to quality, security, and maintainability. The good news: if your organization has built a strong InnerSource culture, you already have the foundations in place.

## Transparency and stakeholder involvement

Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.
@@ -0,0 +1,21 @@
# Shaping Repositories and Practices for AI

InnerSource practices that make life easier for human contributors also help AI systems and agentic tools. Clear scope, good documentation, and consistent workflows make it easier for both people and AI to discover, understand, and contribute to shared code safely.

## Repository and boundary design

Well-defined repositories with clear scope and interfaces make it easier for humans and AI to contribute without stepping on each other’s toes. When boundaries are explicit—what belongs in this repo, what the APIs are, what the project is *not* responsible for—AI agents and assistants can operate within a manageable context. The community is exploring a pattern sometimes called “InnerSource the AI way,” which emphasizes clear scope and boundaries; as it matures, it may be documented in the [InnerSource Patterns](https://patterns.innersourcecommons.org/) book and linked from here.

## Documentation and discoverability

InnerSource behaviors like solid READMEs, CONTRIBUTING guides, and architecture decision records are increasingly important when AI is in the loop. They help AI and people alike understand how to use and extend shared code correctly. Documentation that explains *why* decisions were made, not just *what* the code does, supports better AI-generated contributions and reduces misuse. Making repositories searchable and well-described also helps teams and tools find the right building blocks instead of reimplementing them.

## Playbooks for people and agents

Playbooks that describe how to contribute—and what to avoid—benefit both human contributors and AI-assisted workflows. The community is starting to develop playbooks that work for both. As these emerge, they will be reflected in the InnerSource Patterns book and linked from this section. The goal is to make it easy for contributors and tools to follow the same rules and expectations.

## Skills, plugins, and workflow integration

InnerSource can be integrated directly into coding workflows through skills, plugins, and tooling. When reuse and contribution are part of the daily environment—for example, by suggesting existing InnerSource components when starting a new feature—both developers and AI-assisted flows are more likely to reuse rather than duplicate. This is an area of active development; program leads can work with their tooling and platform teams to explore how to surface InnerSource projects and contribution paths where developers (and their tools) already work.
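To make the idea of surfacing reuse concrete, here is a purely illustrative sketch of a pre-scaffold hook that suggests existing components before new code is written. The catalog, component names, and URLs are invented for the example; a real integration would query your organization’s own catalog or search index:

```python
# Illustrative sketch: suggest existing InnerSource components at the moment
# a developer (or an AI coding agent) starts a new feature.
# `Component` and the in-memory CATALOG are hypothetical stand-ins for a
# real internal catalog service or search index.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    description: str
    repo_url: str
    tags: set = field(default_factory=set)

CATALOG = [
    Component("auth-lib", "Shared OAuth2/OIDC client helpers",
              "https://git.example.com/platform/auth-lib", {"auth", "oauth2"}),
    Component("feature-flags", "Central feature-flag SDK",
              "https://git.example.com/platform/feature-flags", {"flags", "rollout"}),
]

def suggest_components(feature_keywords):
    """Return catalog entries whose tags or description mention a keyword,
    so the workflow can propose reuse before new code is scaffolded."""
    keywords = {k.lower() for k in feature_keywords}
    return [c for c in CATALOG
            if keywords & c.tags
            or any(k in c.description.lower() for k in keywords)]

# A pre-scaffold hook might call this and print reuse suggestions:
for c in suggest_components(["oauth2", "login"]):
    print(f"Consider reusing {c.name}: {c.repo_url}")
```

The same lookup could back an IDE plugin or an agent skill, so that humans and AI tools see identical reuse suggestions.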
For more on infrastructure and tooling in InnerSource, see [Tooling](/tooling/innersource-tooling.md) and [Infrastructure](/infrastructure/infrastructure.md).
@@ -0,0 +1,27 @@
# Why InnerSource Matters When Adopting AI

AI and agentic coding are changing how development work gets done. Developers spend more time specifying requirements and guiding AI tools than writing every line of code by hand. Yet collaboration, reuse, and clear boundaries remain critical. InnerSource helps organizations move faster with *shared* components and practices instead of scattered, duplicated solutions.

## InnerSource is more relevant than ever

When many teams use AI to generate or modify code, the risk of duplication and inconsistency grows. InnerSource encourages shared building blocks and a single place to contribute improvements. That reduces waste and keeps quality consistent across the organization. The demand for software architecture and orchestration skills is also rising: understanding system boundaries, interfaces, and processes is essential for building valuable, reliable AI-assisted systems. InnerSource’s emphasis on transparency, documentation, and community aligns with this need.

## Reducing context for AI

AI systems and coding agents work best when they have a well-scoped context with clear boundaries. InnerSource projects that are clearly scoped—with explicit interfaces and a clear purpose—give AI a manageable surface area. That improves reliability and reduces the chance of AI “hallucinating” or misusing code from outside the intended scope. Shaping your repositories for both humans and AI is a theme we explore in [Shaping Repositories and Practices for AI](shaping-for-ai.md).

## Reuse and avoiding duplication

Reuse at the service or component level is especially valuable when many teams use AI to generate code. Without shared standards and shared repos, each team may produce similar solutions in isolation. InnerSource fosters reuse and cost sharing across units, which in turn supports sustainability and efficiency. This is the same benefit InnerSource has always offered; in an AI-augmented world, it becomes harder to ignore.

## Platforms ready for InnerSource

Platforms and tooling play a crucial role in enabling InnerSource at scale. As organizations adopt AI and agentic workflows, collaboration platforms must support discovery, visibility, and contribution across team boundaries. Platforms that make it easy to find reusable components, understand interfaces, and submit improvements reduce friction and encourage participation. Investment in platform capabilities—search, documentation, governance workflows, and integration with development tools—directly multiplies the effectiveness of InnerSource practices in an AI-augmented environment.

## Enterprise AI and production readiness

This section focuses on large-scale enterprise adoption of AI—internal tools, pipelines, and agentic workflows—rather than consumer-facing AI products. In that context, the difference between prototype AI solutions and production-ready ones matters a lot. InnerSource practices—transparency, code review, documentation, and governance—help teams build robust, secure, and maintainable AI-assisted systems. They also help leaders see what is ready for production and what still needs work.

## Evidence and further reading

AI tooling and organizational practices are evolving. This section will be updated with results from the [InnerSource Commons](https://innersourcecommons.org/) survey and from research partnerships (e.g. with universities, [FINOS](https://finos.org/), and other organizations) as data becomes available. If you have case studies or data to share, we encourage you to contribute or get in touch via the [InnerSource Commons Slack](https://innersourcecommons.org/slack).
This file was deleted.