
fix: increase signal in git diff, git log, and json filters (#621) #708

Open

pszymkowiak wants to merge 1 commit into develop from fix/signal-truncation

Conversation

@pszymkowiak
Collaborator

Closes #621

Summary

Three filters were cutting signal that LLMs need:

| Filter | Before | After |
|---|---|---|
| git diff | 30 lines/hunk | 100 lines/hunk |
| git log | 1 body line | 3 body lines |
| json | Schema only (no values) | Values by default; --schema for types-only |

Why

  • git diff at 30 lines/hunk: LLMs miss the end of large refactors and produce incorrect code
  • git log at 1 body line: BREAKING CHANGE notes and migration instructions are lost
  • json without values: useless for config debugging, which is the primary use case
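The hunk-truncation change is the simplest of the three to picture. Here is a minimal sketch of the idea, not RTK's actual code: the constant name `MAX_HUNK_LINES` comes from the commit message below, but `truncate_hunk` and the marker format are illustrative assumptions.

```rust
// Cap raised from 30 to 100 in this PR (per the commit message);
// the helper itself is a hypothetical sketch, not RTK's internals.
const MAX_HUNK_LINES: usize = 100;

fn truncate_hunk(lines: &[String]) -> Vec<String> {
    if lines.len() <= MAX_HUNK_LINES {
        return lines.to_vec();
    }
    // Keep the first MAX_HUNK_LINES lines and note how many were dropped.
    let mut out: Vec<String> = lines[..MAX_HUNK_LINES].to_vec();
    out.push(format!("... ({} lines truncated)", lines.len() - MAX_HUNK_LINES));
    out
}

fn main() {
    let hunk: Vec<String> = (0..150).map(|i| format!("+line {}", i)).collect();
    let out = truncate_hunk(&hunk);
    // 100 kept lines plus one truncation marker.
    assert_eq!(out.len(), 101);
    println!("kept {} of {} lines", out.len() - 1, hunk.len());
}
```

At the old cap of 30, a 150-line refactor hunk would lose its last 120 lines, which is exactly the failure mode described above.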

Test plan

  • 979 tests passing, clippy clean
  • rtk git diff HEAD~3 — shows full hunks up to 100 lines
  • rtk git log -5 — shows 3 body lines per commit
  • rtk json — shows values ("rtk", "4.5", etc.)
  • rtk json --schema — shows types only (old behavior)
  • Tested with phi4:14b (LLM): 8/10 comprehension, correctly reads all values
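The json mode split in the test plan (values by default, types-only under --schema) can be sketched like this. The `Json` enum and `render` function are assumptions for illustration, not RTK's actual types:

```rust
use std::collections::BTreeMap;

// Illustrative JSON value type (not RTK's internals).
enum Json {
    Str(String),
    Num(f64),
    Obj(BTreeMap<String, Json>),
}

// Default mode keeps values; schema mode replaces them with type names,
// matching the old types-only behavior now behind --schema.
fn render(j: &Json, schema_only: bool) -> String {
    match j {
        Json::Str(s) => if schema_only { "string".into() } else { format!("\"{}\"", s) },
        Json::Num(n) => if schema_only { "number".into() } else { n.to_string() },
        Json::Obj(m) => {
            let fields: Vec<String> = m
                .iter()
                .map(|(k, v)| format!("{}: {}", k, render(v, schema_only)))
                .collect();
            format!("{{{}}}", fields.join(", "))
        }
    }
}

fn main() {
    let mut m = BTreeMap::new();
    m.insert("name".to_string(), Json::Str("rtk".into()));
    m.insert("version".to_string(), Json::Str("4.5".into()));
    let cfg = Json::Obj(m);
    println!("{}", render(&cfg, false)); // values (new default)
    println!("{}", render(&cfg, true));  // types only (--schema)
}
```

With values in the output, the LLM can read "rtk" and "4.5" directly instead of only knowing the fields are strings.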

@pszymkowiak force-pushed the fix/signal-truncation branch from 0b45e2d to 9fe7879 on March 18, 2026 at 20:50
- git diff: raise max_hunk_lines from 30 to 100 (LLMs need full hunks)
- git log: show 3 body lines instead of 1 (preserves BREAKING CHANGE, migration notes)
- json: show values by default (LLMs need values for config debugging), add --schema for types-only

Tested with phi4:14b on local LLM — all 3 fixes improve comprehension.

Signed-off-by: Patrick szymkowiak <patrick.szymkowiak@innovtech.eu>
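The git log change in the commit message above (3 body lines instead of 1) amounts to a wider preview of the commit body. A hedged sketch, with `body_preview` as a hypothetical helper name:

```rust
// Keep the first `max_lines` non-empty body lines of a commit message.
// The PR raises this from 1 to 3 so that BREAKING CHANGE and migration
// notes survive; the function itself is illustrative, not RTK's code.
fn body_preview(body: &str, max_lines: usize) -> String {
    body.lines()
        .filter(|l| !l.trim().is_empty())
        .take(max_lines)
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let body = "Refactor auth module.\n\nBREAKING CHANGE: token format changed.\nRun the token migration before deploying.\nExtra detail that can be dropped.";
    let preview = body_preview(body, 3);
    assert_eq!(preview.lines().count(), 3);
    println!("{}", preview);
}
```

With only 1 line, the preview above would stop at "Refactor auth module." and the BREAKING CHANGE note would be lost.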
@pszymkowiak force-pushed the fix/signal-truncation branch from 9fe7879 to 272534b on March 19, 2026 at 08:40
@pszymkowiak
Collaborator Author

LLM Comprehension Test — 57 RTK Commands

Tested all 57 RTK Rust commands with 3 local LLMs to verify filtered output is actionable.

Protocol

For each command, the RTK output is sent to the LLM as "You ran <cmd>, output: <output>. <question>", simulating a real AI coding workflow. The LLM must answer the question correctly to PASS.
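The prompt template quoted above can be built trivially; `build_prompt` is a hypothetical name for this sketch:

```rust
// Assemble the comprehension-test prompt from the protocol's template:
// "You ran <cmd>, output: <output>. <question>"
fn build_prompt(cmd: &str, output: &str, question: &str) -> String {
    format!("You ran {}, output: {}. {}", cmd, output, question)
}

fn main() {
    let p = build_prompt(
        "rtk git log -5",
        "fix: increase signal in filters",
        "What did the latest commit change?",
    );
    assert!(p.starts_with("You ran rtk git log -5"));
    println!("{}", p);
}
```

Each of the 57 commands gets one such prompt; a PASS requires the model to answer the trailing question correctly from the filtered output alone.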

Results

| LLM | Size | Score |
|---|---|---|
| llama3.1:8b | 8B | 57/57 (100%) |
| phi4:14b | 14B | 57/57 (100%) |
| mistral:7b | 7B | 57/57 (100%) |

0% confusion rate — all RTK filtered outputs correctly understood by LLMs as small as 7B. No command blocks the AI from taking the correct next action.

Commands tested

Git (5), Files (8), Data (4), Runners (4), GitHub (3), Network (2), Meta (7), Python (4), Go (2), Docker (2), TOML filters (12), JS/TS (4) = 57 total

Also fixed in this PR
