pr template: ask contributors to review and copy-edit AI contributions #8897
While I generally like the idea, I don't consider this a good execution. In my opinion, if we want to pursue something like this, we should orient ourselves toward https://docs.fedoraproject.org/en-US/council/policy/ai-contribution-policy/ and https://discourse.llvm.org/t/rfc-llvm-ai-tool-policy-human-in-the-loop/89159, which resulted in llvm/llvm-project#154441.
Sorry, I had not seen Philip's comment when I approved. I still think this PR is a good improvement. We can add a longer version later if necessary. Does that sound okay?
SGTM. To me this is like adding the bare necessities; writing an actual policy can follow later.
Here is also another nice resource on AI policies that I found recently: https://github.com/chaoss/wg-ai-alignment/tree/main/moderation#readme. It should also be looked at when we write the policy.
The goal is to nudge contributors towards taking on the burden of understanding and editing any AI-generated work they are trying to get reviewed. AFAICT, jj is currently less overwhelmed with low-quality AI contributions than some other projects, but it has started to happen occasionally.

For now, this is partly an experiment as opposed to a fixed rule. Will the people who most need this guidance read it and benefit from it? Hopefully, they'll try to, or will ask questions.

I also considered adding a clarification to the second point, suggesting something like "If in doubt, delete all the AI-generated text and rewrite it in your own words", but that point was already getting long. We could add something like that later.

See #8897 (comment) for some reasons I didn't want to require disclosing whether people used AI tools while preparing the PR.
Thanks everyone! Sorry for the delay. I updated the wording (thanks, Martin) and added a link to #8897 (comment) to the description. If I don't hear anything else, I'll merge this tomorrow.
Also, thanks Philip for the useful links!
Thanks. LGTM
Here's another policy (cilium/community#408) from Cilium, which I think is more aligned with the current status quo.
And another list of policies from the Python community: https://github.com/melissawm/open-source-ai-contribution-policies |
Checklist

If applicable:

- [ ] I have updated `CHANGELOG.md`
- [ ] I have updated the documentation (`README.md`, `docs/`, `demos/`)
- [ ] I have updated the config schema (`cli/src/config-schema.json`)