# OSS Maintainers Can Inject Their Standards Into Contributors' AI Tools

**Author:** Aaddrick Williams
**Date:** February 26, 2026
**URL:** https://nonconvexlabs.com/blog/oss-maintainers-can-inject-their-standards-into-contributors-ai-tools

---

AI-assisted PRs are landing in maintainers' queues with the wrong CSS framework and no tests. Sometimes with no disclosure that AI generated the code at all. The contributor often isn't cutting corners. Their AI tool just had no project context when it generated the code.

There are two files that fix this. CLAUDE.md is read automatically by Claude Code when a contributor opens the project. AGENTS.md is a vendor-neutral standard, already supported by over twenty tools, that does the same thing across all of them.

Both work the same way: when a contributor clones your repo and opens it in their AI tool, these files are loaded into the tool's context before a single line is generated. Your conventions are in the room before you know the contributor is there.

## What Happens Without These Files

Most AI development tools support user-level configuration: a global settings file or system prompt the user has set up for their own work. When a contributor clones your repo and starts working, their tool loads those personal defaults, not your project's standards.

Their defaults are probably calibrated for their own projects. Maybe they default to Tailwind. Maybe their system prompt says nothing about testing conventions at all. The tool generates code that fits the contributor's personal setup. Your project's architecture, CSS approach, test patterns, and contribution policies are invisible to it.

The contributor submits a PR, and the mismatch isn't a sign of bad intent. The tool they were using never saw your standards.

## What Prompted This

In February 2026, an AI agent submitted a performance optimization PR to the matplotlib repository. A volunteer maintainer closed it.
The linked issue was a "good first issue," reserved for human newcomers learning how open source collaboration works. The agent didn't know that policy. When the PR was closed, the agent published a blog post about the maintainer. The post had no human review before it went live. The whole thing went viral.

Scott Shambaugh, the maintainer who closed the PR, wrote up the incident in full. His account is worth reading.[^1] He handled the situation with more grace than most people would. His observation stuck with me: "We are in the very early days of human and AI agent interaction, and are still developing norms of communication."[^2]

The incident got framed a lot of ways: AI gone rogue, maintainer gatekeeping, a preview of an AI-vs-human turf war. I read it differently. The project had no way to communicate its policies to the tool. The tool had no context before it started working. That's an infrastructure problem, and maintainers can fix it.

## The Audience That Matters

This article is written for maintainers. But the people it helps are the well-meaning new contributors who are about to open a PR from their AI-assisted workflow.

The matplotlib incident is an extreme case: automated publishing with no human review gate and a personality configuration designed for confrontation. Most AI-assisted PRs in maintainers' queues don't come from that setup. They come from someone new to development who learned to code with AI assistance, found an open source project they care about, and wants to contribute. Claude or Copilot or Cursor is their development environment. They generate a fix, it looks right to them, they open a PR.

The AI tool had no idea the project uses semantic CSS instead of Tailwind, that the test suite expects real assertions, or that good-first-issue is a reserved label. The attribution requirement and the Code of Conduct were equally invisible. The contributor didn't tell it those things, because they didn't know either.
The project's conventions are documented in a CONTRIBUTING.md they may have skimmed, or in code review comments on other PRs they didn't see, or in the heads of maintainers who've been doing this for years.

These contributors genuinely want to help. Build infrastructure that meets them where they are.

## CLAUDE.md and AGENTS.md

Either one helps. Both together cover more ground.

**CLAUDE.md** is a project-level instruction file that Claude Code reads automatically when it opens a project. Any project can add one. It's plain markdown. You put in whatever you'd put in a senior engineer's onboarding document: architecture decisions, coding conventions, testing requirements, things contributors consistently get wrong.

**AGENTS.md** is a vendor-neutral open standard now stewarded by the Linux Foundation's Agentic AI Foundation.[^3] Over twenty tools support it: OpenAI Codex, Gemini CLI, Cursor, Zed, Aider, Jules, GitHub Copilot Coding Agent, Windsurf, Devin, Factory, RooCode, Warp, goose, and more. It has already been adopted by over 20,000 GitHub repositories. Same concept as CLAUDE.md: plain markdown, flexible headings. Any compliant AI tool reads it.

Claude Code is not on the AGENTS.md supporter list; Claude Code reads CLAUDE.md. That's actually an argument for having both files: AGENTS.md covers the 20+ tools that support it, CLAUDE.md covers Claude Code specifically. A project that wants maximum coverage should have both. They're complementary.

**When a contributor clones your repo and opens it in their AI tool, the tool loads these files into context automatically.** The contributor doesn't have to read them or configure anything. They don't even have to know the files exist. `git clone` + open in tool = your project's standards are already in context before a single line is generated. The repo speaks directly to the tool. No intermediary.

This is structurally the same mechanism as prompt injection.
In the security context, an attacker inserts instructions into an AI's context through content it reads. Here, you're doing the same thing with a file you control, in a repo you own. The AI reads your standards, and those standards shape what it produces. The contributor gets code that fits your project before they've had a single conversation with a maintainer.

The comparison that comes to mind is `.editorconfig`. That file solved tabs-vs-spaces across editors by encoding the preference in a place every participating tool reads automatically. Nobody has to tell each contributor to configure their editor. The tooling handles it.

CLAUDE.md and AGENTS.md do the same thing, but the surface area is much larger. `.editorconfig` handles formatting: two or three settings. These files cover architecture conventions, testing requirements, contribution policies, attribution, and behavioral expectations. And they handle it for every AI-assisted contributor who shows up, regardless of which tool they're using.

## What to Put in Them

You don't need an exhaustive document on day one. Five well-chosen bullet points beat three pages of aspirational guidance. Start with what contributors consistently get wrong. That list usually exists in your PR review history. Look at what you write over and over in review comments and what your CI catches on repeat. That's where to start.

For a project with a good-first-issue policy, an AGENTS.md with the following would load into any compliant tool before it touches the repo:

```markdown
## Contribution Guidelines

- Good first issues (labeled "good first issue") are reserved for new human contributors. Do not submit AI-assisted PRs to these issues.
- If you use AI tools in your development process, disclose it. See our PR template.
- Read CONTRIBUTING.md and our Code of Conduct before opening any issue or PR.
- Do not post AI-generated content to issues or PRs via automated tooling.
- If a PR is closed, accept the maintainer's decision.
```

That's five lines. A compliant agent reads them before touching the repo.

For coding standards, lead with the pattern you want, then add the anti-pattern for clarity. The CLAUDE.md on [claude-desktop-debian](https://github.com/aaddrick/claude-desktop-debian) does this:

**Patterns (what you want):**

- Use tabs for indentation in shell scripts
- Use `config()` helper for accessing configuration values
- Reference issues in branch names: `fix/123-description`

**Anti-patterns (what you don't want):**

- Don't hardcode variable names from minified JS; use regex patterns instead
- Don't suppress shellcheck warnings; fix the underlying issue
- Don't call `env()` outside config files

Both lists are specific and actionable. An AI tool reading those constraints knows what to do and what to avoid. The same principle applies to contribution process, not just coding conventions. If your project has policies around PR scope, testing requirements, or communication norms, put those in too.

### Attribution

If your project has an opinion on AI usage, encode it. Attribution requirements are one of the most useful things to include, and this pattern has started appearing in contributors' commits and PRs.

A generic attribution block that works with any tool:

```
---
AI-assisted development
Tool: [tool name and version]
AI contribution: [what AI did]
Human contribution: [what human did]
```

A tool-specific version (from [claude-desktop-debian](https://github.com/aaddrick/claude-desktop-debian)'s CLAUDE.md):

```
---
Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>

XX% AI / YY% Human
Claude:
Human:
```

For commits: `Co-Authored-By: Claude <noreply@anthropic.com>`

The percentage split should honestly reflect the contribution. A maintainer can evaluate the PR with full context from the start, without having to ask.

### Pointers to existing docs

Include a reference to CONTRIBUTING.md. Most contributing guides have more detail than you'd want to duplicate in an AGENTS.md.
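The commit trailer described under Attribution can be added as a second `-m` paragraph so git records it as a proper trailer. A self-contained demonstration in a throwaway repository (paths, identities, and the commit message are illustrative; the email address is the one Claude Code conventionally uses in its trailer):

```shell
# Throwaway repo just to demonstrate the trailer (everything here is illustrative)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "fix" > fix.txt
git add fix.txt

# Trailers must sit in their own paragraph at the end of the message;
# a second -m creates that paragraph.
git commit -q -m "Fix widget rendering" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"

# Show the full message, trailer included
git log -1 --format=%B
```

GitHub reads `Co-Authored-By` trailers when displaying commit authorship, so the disclosure shows up in the PR without any extra tooling.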
"Read CONTRIBUTING.md before opening any PR" gives a compliant tool a document to fetch and reason against. ## On Banning AI Contributions Some maintainers have responded to incidents like this by wanting to ban AI contributions outright. I understand the impulse. A ban is not enforceable. You can't tell from a PR whether the contributor used AI. Code is code. A contributor who generates a fix with Claude, reads it carefully, understands every line, and submits it with a clear description is indistinguishable from one who wrote it by hand. And that contributor may be exactly the kind of new developer you want to bring into the project. The mechanism that actually works is the same one that makes AGENTS.md useful. You can't stop a contributor from using whatever tool they bring. But you can inject your standards into that tool before they start. Your attribution requirements, contribution policies, and behavioral expectations all load when they clone the repo. A compliant tool communicates those requirements to a contributor who will, in good faith, follow them. Non-compliant configurations don't have an enforcement answer. An operator who configures automated, no-review publishing can override any AGENTS.md. That's a separate problem, and it's the operator's liability, not the project's. What AGENTS.md does is give compliant, good-faith contributors (which is most of them) clear guidance before they make their first mistake. ## Where My Own Pipeline Fits [claude-desktop-debian](https://github.com/aaddrick/claude-desktop-debian) is a published open source project with 2,000+ stars. It packages the Claude desktop app for Debian-based Linux distributions. I mention it because it has a real `.claude/` folder you can look at, and it shows what a more involved implementation of this pattern looks like. 
The project has 10 specialized agents (bash-script-craftsman, bats-test-validator, cc-orchestration-writer, cdd-code-simplifier, ci-workflow-architect, code-reviewer, electron-linux-specialist, packaging-specialist, patch-engineer, spec-reviewer) plus 7 skills covering tasks like implementing issues, running improvement loops, and linting. There are hooks and scripts for automated quality gates.

Its [CLAUDE.md](https://github.com/aaddrick/claude-desktop-debian/blob/main/CLAUDE.md) covers shell script style, linting requirements (shellcheck + actionlint), GitHub workflow conventions, the attribution templates above, working with minified JavaScript, CI/CD patterns, and debugging workflows. Its [AGENTS.md](https://github.com/aaddrick/claude-desktop-debian/blob/main/AGENTS.md) keeps things minimal and points to CLAUDE.md for the detailed rules. Both files are short enough to read in a few minutes.

But the minimum viable version is a CLAUDE.md and an AGENTS.md, each with your project's key constraints. Together they cover Claude Code and the 20+ tools that support the AGENTS.md standard. An hour of writing produces two files that redirect a contributor's AI tool before they write a single line of code.

## What This Gets Maintainers

The maintainer's time is the scarce resource. Every bad PR that arrives is time spent on rejection, explanation, and sometimes de-escalation. Encoding your standards in a file the tool reads means those conversations happen less often.

A contributor whose AI tool read your AGENTS.md before generating anything gets the project's constraints before writing code. The contributor may still make mistakes. But the class of mistakes that come from "the tool had no project context" largely disappears. The PR that arrives has a better baseline. Your review time goes to judgment calls: is this the right approach, does it fit the project's direction. The tenth repeat of the same convention feedback doesn't make it to your queue.
A contributor who's new to development and using AI tools heavily is often learning in public. They're figuring out how open source collaboration works and what a good PR looks like. Your AGENTS.md is part of their education.

Most AI-assisted PRs that get closed for convention violations never generate blog posts. They eat maintainer time on boilerplate rejection comments. The contributors don't know why they got rejected and won't come back.

A markdown file that loads your standards into every tool a contributor brings is the first fix. Write it.

---

[^1]: Scott Shambaugh's account of the incident: [theshamblog.com](https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/)
[^2]: Scott Shambaugh's comment on [matplotlib PR #31132](https://github.com/matplotlib/matplotlib/pull/31132#issuecomment-3884414397)
[^3]: [AGENTS.md specification](https://agents.md/)