diff --git a/innersource-and-ai/authors.md b/innersource-and-ai/authors.md
index 16d55c6..fa5b0a1 100644
--- a/innersource-and-ai/authors.md
+++ b/innersource-and-ai/authors.md
@@ -5,6 +5,8 @@ Chronological order:
 * InnerSource Program Office (ISPO) Working Group, [InnerSource Commons](https://innersourcecommons.org/).
+* Guilherme Dellagustin ([@dellagustin-sap](https://github.com/dellagustin-sap)) — framing of agentic coding, reuse futures, and emerging standards.
+* Fei Wan — agent skills for enterprise standards.
 
 ## Reviewers
diff --git a/innersource-and-ai/innersource-and-ai.md b/innersource-and-ai/innersource-and-ai.md
index f77ad1a..9775917 100644
--- a/innersource-and-ai/innersource-and-ai.md
+++ b/innersource-and-ai/innersource-and-ai.md
@@ -1,13 +1,15 @@
 # InnerSource and AI
 
-Organizations are increasingly adopting AI in the workplace—from generative AI assistants to agentic coding tools that can write, refactor, and review code. This shift is changing how developers work: less time on typing code, more on defining requirements, guiding AI, and making sure systems are reliable and maintainable. For InnerSource program leads, the question is whether InnerSource still matters in this new landscape.
+Organizations are increasingly adopting AI in the workplace—from generative AI assistants to agentic coding tools that can write, refactor, and review code. In many organizations, developers are now expected to practice agentic coding (sometimes called "vibe coding"), where the role shifts from writing code to providing instructions in natural language and overseeing the work of automated coding agents. Some teams are going further, with multiple agents representing roles like quality engineering, project management, and frontend/backend development working in tandem and interacting directly with tools like issue trackers and source control platforms.
+
+This shift raises important questions: does software reuse still matter when AI can regenerate capabilities on demand? How do you maintain quality when code is produced at unprecedented speed? For InnerSource program leads, the question is whether InnerSource still matters in this new landscape.
 
 It does. InnerSource is potentially *more* important than ever. Shared repositories, clear boundaries, documentation, and collaborative practices help AI systems—and the people using them—work with the right context, reuse existing components, and keep quality high.
 
 This section explains why InnerSource matters when adopting AI, how to shape your repositories and practices for AI-assisted development, and what risks and guardrails to keep in mind. The following articles in this section go deeper:
 
-- [Why InnerSource Matters When Adopting AI](why-innersource-matters-with-ai.md) — Relevance of InnerSource in an AI-augmented world, reuse, and production readiness.
-- [Shaping Repositories and Practices for AI](shaping-for-ai.md) — Repository design, documentation, and workflow integration so both humans and AI can contribute effectively.
-- [Risks and Guardrails](risks-and-guardrails.md) — Balancing speed with safety, the role of code review, and organizational best practices for AI use.
+- [Why InnerSource Matters When Adopting AI](why-innersource-matters-with-ai.md) — Relevance of InnerSource in an AI-augmented world, reuse, the future of software development, and production readiness.
+- [Shaping Repositories and Practices for AI](shaping-for-ai.md) — Repository design, documentation, agent skills, emerging standards, and workflow integration so both humans and AI can contribute effectively.
+- [Risks and Guardrails](risks-and-guardrails.md) — Balancing speed with safety, mitigating AI slop, the role of code review, and organizational best practices for AI use.
 
-AI tooling and practices are evolving quickly. This section will be updated as the community learns more and as survey and research data become available. If you are new to InnerSource, we recommend starting with [Getting Started with InnerSource](http://www.oreilly.com/programming/free/getting-started-with-innersource.csp) and the [Introduction](/introduction/introduction.md) to this book.
+AI tooling and practices are evolving quickly. This section will be updated as the community learns more and as survey and research data become available. If you are new to InnerSource, we recommend starting with [an Introduction to InnerSource](https://innersourcecommons.org/learn/learning-path/introduction/) and the [Introduction](../introduction/introduction.md) to this book.
diff --git a/innersource-and-ai/risks-and-guardrails.md b/innersource-and-ai/risks-and-guardrails.md
index ab91dad..5f48014 100644
--- a/innersource-and-ai/risks-and-guardrails.md
+++ b/innersource-and-ai/risks-and-guardrails.md
@@ -4,6 +4,14 @@ AI is the ultimate InnerSource contributor. Like any external contributor, AI ag
 Adopting AI without these guardrails can deliver short-term gains in speed and productivity, but at the cost of long-term risks to quality, security, and maintainability. The good news: if your organization has built a strong InnerSource culture, you already have the foundations in place.
 
+## Short-term speed vs. long-term risk
+
+AI coding tools can deliver impressive short-term productivity gains. The danger is that teams take on more risk than they realize—releasing AI-generated content with fewer human reviews, skipping tests, or accepting code they do not fully understand. These gains can erode over time as technical debt, security vulnerabilities, and maintenance burden accumulate. InnerSource practices like mandatory code review, clear ownership, and contribution guidelines act as a natural brake on this tendency, ensuring that speed does not come at the expense of reliability.
+
+## Mitigating AI slop
+
+"AI slop" refers to low-quality, generic, or incorrect content produced by AI systems without adequate human oversight. In a development context, this can mean boilerplate code that does not fit the project's conventions, misleading documentation, or subtly incorrect implementations. InnerSource's emphasis on transparency—keeping things traceable and open for inspection—directly mitigates this risk. When contributions (whether from humans or AI) go through visible review processes in shared repositories, quality issues are caught earlier and patterns of slop become visible to the community.
+
 ## Transparency and stakeholder involvement
 
 Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.
diff --git a/innersource-and-ai/shaping-for-ai.md b/innersource-and-ai/shaping-for-ai.md
index bb62eb2..acc44a2 100644
--- a/innersource-and-ai/shaping-for-ai.md
+++ b/innersource-and-ai/shaping-for-ai.md
@@ -18,4 +18,20 @@ Playbooks that describe how to contribute—and what to avoid—benefit both hum
 InnerSource can be integrated directly into coding workflows through skills, plugins, and tooling. When reuse and contribution are part of the daily environment—for example, by suggesting existing InnerSource components when starting a new feature—both developers and AI-assisted flows are more likely to reuse rather than duplicate. This is an area of active development; program leads can work with their tooling and platform teams to explore how to surface InnerSource projects and contribution paths where developers (and their tools) already work.
+## Agent skills and enterprise standards
+
+A particularly promising approach is using agent skills to codify enterprise standards and best practices. Rather than documenting standards in wikis that developers must find and follow manually, organizations can encode them as skills that coding agents use directly. When an agent starts work, it can automatically apply the organization's coding standards, security policies, and architectural guidelines. This makes InnerSource contribution guidelines machine-readable and enforceable at the point of development.
+
+The [Agent Skills](https://agentskills.io/home) open standard provides a framework for packaging and sharing these capabilities. InnerSource programs can maintain shared skill libraries that any team's agents can use, extending the InnerSource model from shared code to shared agent behaviors.
+
+## Emerging standards and protocols
+
+Several open standards are emerging that are relevant to InnerSource programs adopting agentic workflows:
+
+- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) — A standard for connecting AI agents to external tools and data sources, enabling agents to interact with development infrastructure like issue trackers, CI/CD systems, and code review platforms.
+- [Agent2Agent (A2A)](https://a2a-protocol.org/latest/) — A protocol for communication between AI agents, supporting scenarios where multiple specialized agents collaborate on development tasks.
+- [agents.md](https://agents.md/) — A standard for describing how AI agents should interact with a repository, similar to how CONTRIBUTING.md guides human contributors.
+
+InnerSource program leads should monitor these standards as they mature. Organizations that adopt them early can shape how agent-to-agent and agent-to-repository interactions work across team boundaries, reinforcing InnerSource collaboration patterns at the tooling level.
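To make the agent-skills idea concrete, the sketch below shows the general shape of a skill in the Agent Skills format: a `SKILL.md` file with YAML frontmatter naming and describing the skill, followed by the instructions the agent loads when the skill applies. The skill name and the rules are hypothetical examples of how an organization might encode its own standards, not part of any published skill library:

```markdown
---
name: secure-logging-standards
description: Apply the organization's logging and data-handling standards when writing or reviewing code.
---

# Secure logging standards

When adding or changing log statements:

- Never log credentials, tokens, or personally identifiable information.
- Use the organization's shared logging component instead of writing to stdout directly.
- Link the relevant InnerSource logging library in the pull request description.
```

A shared, InnerSource-managed repository of such skills lets every team's agents apply the same standards without each team re-encoding them.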
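As an illustration of the agents.md approach, an `AGENTS.md` file is plain markdown placed at the repository root that tells coding agents how to work in that repository. The project details, commands, and rules below are hypothetical; the standard prescribes no specific sections:

```markdown
# AGENTS.md

## Project overview
Shared payments library maintained by the payments platform team.

## Build and test
- Install dependencies with `npm install`.
- Run `npm test` before proposing any change.

## Contribution rules
- Follow the conventions in CONTRIBUTING.md; agents are held to the same bar as human contributors.
- Prefer reusing existing modules over generating new ones.
```

Pairing `AGENTS.md` with `CONTRIBUTING.md` keeps guidance for agents and humans aligned in one place per repository.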
+
+
 For more on infrastructure and tooling in InnerSource, see [Tooling](/tooling/innersource-tooling.md) and [Infrastructure](/infrastructure/infrastructure.md).
diff --git a/innersource-and-ai/why-innersource-matters-with-ai.md b/innersource-and-ai/why-innersource-matters-with-ai.md
index c079bc4..4f5933e 100644
--- a/innersource-and-ai/why-innersource-matters-with-ai.md
+++ b/innersource-and-ai/why-innersource-matters-with-ai.md
@@ -6,13 +6,21 @@ AI and agentic coding are changing how development work gets done. Developers sp
 When many teams use AI to generate or modify code, the risk of duplication and inconsistency grows. InnerSource encourages shared building blocks and a single place to contribute improvements. That reduces waste and keeps quality consistent across the organization. The demand for software architecture and orchestration skills is also rising: understanding system boundaries, interfaces, and processes is essential for building valuable, reliable AI-assisted systems. InnerSource’s emphasis on transparency, documentation, and community aligns with this need.
 
+## The shifting role of the developer
+
+Agentic coding—sometimes called “vibe coding”—is changing what it means to be a software developer. The role is shifting from one that writes code to one that provides instructions in natural language and oversees the work of automated agents. Some organizations are beginning to deploy agent teams in which specialized agents handle quality engineering, project management, frontend, and backend work, interacting directly with tools like Jira and GitHub.
+
+This makes software architecture and orchestration skills more important than ever. Understanding system boundaries, interfaces, and integration points is essential when you are guiding agents rather than typing code yourself. InnerSource's emphasis on clear ownership, well-documented interfaces, and collaborative governance gives developers and their agents the structure they need to operate effectively across team boundaries.
+
 ## Reducing context for AI
 
-AI systems and coding agents work best when they have a well-scoped, well-boundaried context. InnerSource projects that are clearly scoped—with explicit interfaces and a clear purpose—give AI a manageable surface area. That improves reliability and reduces the chance of AI “hallucinating” or misusing code from outside the intended scope. Shaping your repositories for both humans and AI is a theme we explore in [Shaping Repositories and Practices for AI](shaping-for-ai.md).
+AI systems and coding agents work best when they have a well-scoped, well-boundaried context. InnerSource projects that are clearly scoped—with explicit interfaces and a clear purpose—give AI a manageable surface area. That improves reliability and reduces the chance of AI “hallucinating” or misusing code from outside the intended scope. Context size remains a practical impediment: current models have limits on how much code and conversation history they can process at once, which makes well-boundaried, modular repositories even more valuable. Shaping your repositories for both humans and AI is a theme we explore in [Shaping Repositories and Practices for AI](shaping-for-ai.md).
 
 ## Reuse and avoiding duplication
 
-Reuse at the service or component level is especially valuable when many teams use AI to generate code. Without shared standards and shared repos, each team may produce similar solutions in isolation. InnerSource fosters reuse and cost sharing across units, which in turn supports sustainability and efficiency. This is the same benefit InnerSource has always offered; in an AI-augmented world, it becomes harder to ignore.
+The ease of generating software with AI puts the role of software reuse in question. When a coding agent can rewrite any capability on demand, why reuse an existing component? The answer is that reuse at the service or component level still matters—especially for services that carry their own systems of record, compliance requirements, and organizational governance. Rewriting these introduces risk and duplication that AI speed does not offset.
+
+Without shared standards and shared repos, each team may produce similar solutions in isolation. InnerSource fosters reuse and cost sharing across units, which in turn supports sustainability and efficiency. Reusable InnerSource components can also reduce the cost of AI adoption: well-maintained shared libraries mean agents spend fewer tokens and less compute regenerating solutions that already exist. This is the same benefit InnerSource has always offered; in an AI-augmented world, it becomes harder to ignore.
 
 ## Platforms ready for InnerSource