Context
PR #88 surfaced an issue: an LLM-assisted contributor's tooling auto-removed comments it deemed "self-descriptive code." Some of those comments were contextually important: they document avenues explored during development, design rationale, or non-obvious behavior that isn't self-evident from the code alone.
This is going to be a recurring pattern as more LLM-assisted contributions come in. LLM coding harnesses tend to strip comments aggressively under a "clean code" heuristic, without understanding which comments carry architectural or historical context.
Problem
There's no established guideline for what comments should exist in the codebase. This makes it:
- Hard for contributors (human or LLM-assisted) to know what to keep vs remove
- Easy for drive-by "cleanup" PRs to silently drop important context
- Difficult to review comment removals without re-deriving the original reasoning
Proposal
- Audit existing comments in tests (crates/lineark/tests/) and core modules: which ones document non-obvious behavior, design decisions, or API quirks, and which are truly redundant?
- Establish a comment policy for the project, e.g.:
- Keep comments that explain why, not what
- Keep comments that document Linear API quirks or workarounds
- Keep comments on test cases that explain what scenario is being validated and why
- Remove comments that merely restate the code
- Add the policy to CLAUDE.md so both human contributors and LLM harnesses respect it
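To make the "why, not what" distinction concrete for reviewers, here is a minimal Rust sketch of the proposed policy applied to a filtering function. The function, the comments, and the behavioral detail about missing project fields are all hypothetical illustrations, not code from the lineark codebase:

```rust
/// Returns the titles of issues belonging to the given project.
fn filter_by_project(issues: Vec<(String, Option<String>)>, project: &str) -> Vec<String> {
    issues
        .into_iter()
        // KEEP (explains why): a missing project field must not match any
        // filter, so we compare against Some(project) rather than treating
        // None as a wildcard. (Hypothetical rationale for illustration.)
        .filter(|(_, p)| p.as_deref() == Some(project))
        // A comment like "map each tuple to its title" would be REMOVED
        // under the policy: it merely restates the next line.
        .map(|(title, _)| title)
        .collect()
}

fn main() {
    let issues = vec![
        ("Fix pagination".to_string(), Some("cli".to_string())),
        ("Old issue with no project".to_string(), None),
    ];
    let filtered = filter_by_project(issues, "cli");
    assert_eq!(filtered, vec!["Fix pagination".to_string()]);
    println!("{:?}", filtered);
}
```

A policy document could pair each rule with a short keep/remove example like this, which gives both human reviewers and LLM harnesses a concrete pattern to match against.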
References
- PR #88 ("feat: --project filter on issues list") review discussion