The Linux kernel’s process documentation now includes a dedicated section on coding assistants. It is short, specific, and sets a precedent that will matter beyond the kernel itself. The core rules: AI agents must not add Signed-off-by tags; all AI-generated code must be reviewed by the human submitter before they add their own sign-off; and when AI tools contribute materially, the submission should carry an “Assisted-by” tag listing the model and any specialised analysis tools used, in the form “Assisted-by: Claude:claude-3-opus coccinelle sparse”. Standard development tools such as git are excluded from the tag.
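In practice, the trailers can be appended with git’s own trailer tooling rather than typed by hand. A minimal sketch, assuming a commit message drafted in a file named msg.txt; the subject line, body, and submitter name are illustrative, and the Assisted-by value is the example format from the kernel docs:

```shell
# Draft an illustrative commit message (subject and body are placeholders).
printf 'mm/foo: fix example bug\n\nLonger description of the change.\n' > msg.txt

# Append the Assisted-by trailer and the human submitter's own
# Signed-off-by -- the sign-off must come from the person, never the agent.
git interpret-trailers --in-place \
    --trailer 'Assisted-by: Claude:claude-3-opus coccinelle sparse' \
    --trailer 'Signed-off-by: Your Name <you@example.com>' msg.txt

cat msg.txt
```

A subsequent `git commit -F msg.txt` would then use the message exactly as written, with no tool-inserted trailers.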
The legal rationale is straightforward. The Developer Certificate of Origin that Signed-off-by certifies requires a human to attest that they have the right to submit the code and accept the licensing terms. An AI agent cannot make that certification. The human submitter bears full responsibility, which means they cannot delegate the review step, and they cannot claim the AI reviewed it on their behalf. The guidance is not hostile to AI use; it is precise about where human judgement is non-negotiable.
The attribution tag is the more practically significant addition. It does not change the legal responsibility structure, but it creates a transparent record of AI involvement in the contribution. Over time, this gives maintainers and the broader community data about which models are being used for what kinds of work, and what the quality distribution looks like across AI-assisted contributions. Other major open source projects — the Linux Foundation’s member projects, Apache, GNOME, and others — are watching how the kernel handles this. The Assisted-by tag convention is a candidate for becoming a cross-project standard.
For developers contributing to the kernel with AI tools, the immediate practical change is small: add the tag, review everything before signing off, and do not let your IDE or agent add a Signed-off-by on your behalf. Some AI coding tools that auto-generate commit messages with attribution already violate the Signed-off-by rule. If you contribute to any GPL-2.0-only project with a formal DCO process, check whether your toolchain is doing this silently.
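One way to run that check is to inspect the trailers on your recent commits before pushing. A sketch using git’s trailer-aware pretty format; the commit range is arbitrary:

```shell
# Print the Signed-off-by and Assisted-by trailers for the last 10 commits,
# so any sign-off your tooling inserted silently stands out.
git log -10 --format='--- %h %s%n%(trailers:key=Signed-off-by,key=Assisted-by)'
```

Any Signed-off-by line you did not add yourself is a DCO problem, whatever tool wrote it.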
The broader signal is that major open source governance bodies have moved past “should AI be allowed” to “here is how it is attributed and who is accountable.” That is a more mature position than most enterprise AI governance frameworks have reached, and the simplicity of the kernel’s approach (two tags and a review requirement) is worth noting for teams trying to establish their own internal norms.