
feat: gstack-decision-learn — calibrate /autoplan from user overrides #378

Open

HMAKT99 wants to merge 1 commit into garrytan:main from HMAKT99:arun/decision-learn

Conversation

@HMAKT99 (Contributor) commented Mar 23, 2026

Summary

  • Reads Decision Audit Trail tables from past /autoplan runs
  • Identifies decisions users consistently override
  • Writes calibration to ~/.gstack/decision-calibration.json
  • Future /autoplan runs can reference these patterns to calibrate auto-decisions
$ gstack-decision-learn

Analyzed 12 autoplan runs, 184 decisions, 23 overrides.

LEARNED PATTERNS (3):
  scope_expansion_auth — overridden 4/4 (100% override rate)
  test_defer_never     — overridden 6/8 (75% override rate)
  codex_disagree_trust — overridden 5/7 (71% override rate)

Wrote: ~/.gstack/decision-calibration.json

1 file, 156 lines

bin/gstack-decision-learn — bash + inline Python. Parses existing audit trails.
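The aggregation step could be sketched roughly like this (a minimal Python sketch, not the actual implementation: the record shape, function names, and thresholds are assumptions; the real tool parses Decision Audit Trail markdown tables):

```python
import json
import os
from collections import defaultdict

def learn_calibration(decisions, min_overrides=3, threshold=0.7):
    """Aggregate per-pattern override rates from audit-trail records.

    `decisions` is a list of dicts like {"pattern": ..., "overridden": bool}
    (a hypothetical record shape). Patterns with at least `min_overrides`
    overrides at or above `threshold` become learned patterns.
    """
    counts = defaultdict(lambda: [0, 0])  # pattern -> [overrides, total]
    for d in decisions:
        counts[d["pattern"]][1] += 1
        if d["overridden"]:
            counts[d["pattern"]][0] += 1

    learned = {}
    for pattern, (overrides, total) in counts.items():
        rate = overrides / total
        if overrides >= min_overrides and rate >= threshold:
            learned[pattern] = {
                "overrides": overrides,
                "total": total,
                "override_rate": round(rate, 2),
            }
    return learned

def write_calibration(learned, path="~/.gstack/decision-calibration.json"):
    """Persist learned patterns to the calibration file."""
    path = os.path.expanduser(path)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump({"patterns": learned}, f, indent=2)

# Example matching the test_defer_never numbers above: overridden 6/8 (75%).
records = ([{"pattern": "test_defer_never", "overridden": True}] * 6 +
           [{"pattern": "test_defer_never", "overridden": False}] * 2)
print(learn_calibration(records))
```

With the example records this reports test_defer_never at a 0.75 override rate, matching the sample output above.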

Test plan

  • All existing tests pass
  • --show prints current calibration (or empty default)
  • --reset clears calibration file
  • No new dependencies

Reads Decision Audit Trail tables from past /autoplan runs, identifies
decisions users consistently override, and writes a calibration file
at ~/.gstack/decision-calibration.json.

Future /autoplan runs can reference this to calibrate auto-decisions
toward the user's actual preferences.

Usage:
  gstack-decision-learn          # analyze and write calibration
  gstack-decision-learn --show   # print current calibration
  gstack-decision-learn --reset  # clear learned patterns
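A future /autoplan run might consume the calibration file along these lines (again a hypothetical sketch: the file schema, helper names, and the 0.7 threshold are assumptions, not part of this PR):

```python
import json
import os

def load_calibration(path="~/.gstack/decision-calibration.json"):
    """Return learned patterns, or an empty default if no calibration
    exists (mirroring the --show "empty default" behavior in the test plan)."""
    path = os.path.expanduser(path)
    try:
        with open(path) as f:
            return json.load(f).get("patterns", {})
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def should_flip_default(pattern, calibration, threshold=0.7):
    """If users consistently override a pattern's auto-decision,
    prefer the overridden choice instead of the built-in default."""
    entry = calibration.get(pattern)
    return bool(entry and entry.get("override_rate", 0) >= threshold)

# scope_expansion_auth was overridden 4/4, so its default would flip.
calibration = {"scope_expansion_auth": {"override_rate": 1.0}}
print(should_flip_default("scope_expansion_auth", calibration))  # True
```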
@HMAKT99 changed the title from "feat: gstack-decision-learn — /autoplan auto-decisions calibrate over time" to "feat: gstack-decision-learn — calibrate /autoplan from user overrides" on Mar 23, 2026
@sevastyanovio commented:
Cool idea — learning from user overrides is the right instinct. I'm working on something related in PR #405 (/meditate) that approaches "compound over time" from the other direction: instead of learning from explicit override decisions, it proactively scans the repo + past AI conversations (Claude, Codex, Gemini) to build a "User Taste" profile — what you correct, what you ask about repeatedly, what patterns you prefer.

These could work well together: /meditate surfaces what the user cares about, decision-learn calibrates how autoplan should respond to those preferences. Happy to coordinate if both land.
