We currently have no written policy on the use of generative AI in code contributions. We may want to establish one that, for example, requires contributors to disclose when AI tooling was used, holds AI-assisted submissions to the same standards of originality, quality, and attribution as any other contribution, and encourages contributors to learn, understand, and reason through the code they're submitting rather than deferring entirely to tooling.
AI tooling can be a useful accelerant, but if a contributor cannot explain the code they're submitting, or cannot reason through review feedback, that's a red flag.
A few things we might want to consider when developing such a policy:
How do we ensure contributors appropriately credit prior work, whether human- or AI-generated?
Should we require contributors to disclose if AI tools were used? If so, how do we define acceptable usage?
How do we handle low-context, potentially AI-generated PRs that add to the review burden and may reflect limited understanding of the codebase?
How do we handle cases where trust is broken through copy-paste or AI-mediated submissions, particularly when the contributor appears to be early in their OSS journey?