When running lossless-claw via OpenClaw, setting a summarizer model override in config does not currently affect which model is used for LCM compaction summaries.
**Repro**
- Configure a summary model override, e.g.:
  `plugins.entries.lossless-claw.config.summaryModel = "openai-resp/gpt-5.1-codex-max"` (runtime config)
  or
  `legacyParams.config.summaryModel = "openai-resp/gpt-5.1-codex-max"`
- Trigger compaction (manual `/compact` or automatic compaction).
- Observe that the summarizer still resolves to `legacyParams.model` (often the session/chat model).
**Expected**
If `summaryModel` is provided via config, it should take precedence over the legacy `provider`/`model` hints.
**Root cause**
`createLcmSummarizeFromLegacyParams` currently only considers `legacyParams.provider` + `legacyParams.model` when calling `deps.resolveModel(...)`, and ignores `summaryModel` from config.
**Notes**
When an explicit `summaryModel` override is provided, passing a `providerHint` to `resolveModel` can also clobber a cross-provider model ref such as `"openai-resp/gpt-5.1-codex-max"`. In that case, `providerHint` should be omitted.
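A minimal sketch of the intended resolution order, for illustration only. The type shapes and the `resolveModel` signature are assumptions based on the names in this issue (`legacyParams`, `deps.resolveModel`, `summaryModel`), not the actual plugin API:

```typescript
// Hypothetical shapes mirroring the names used in this issue.
type ResolveModelDeps = {
  // Assumed signature: a model ref plus an optional provider hint.
  resolveModel: (modelRef: string, providerHint?: string) => string;
};

type LegacyParams = {
  provider?: string;
  model?: string;
  config?: { summaryModel?: string };
};

function resolveSummarizerModel(
  deps: ResolveModelDeps,
  legacyParams: LegacyParams,
): string {
  const override = legacyParams.config?.summaryModel;
  if (override) {
    // Explicit override: omit the provider hint so a cross-provider
    // ref like "openai-resp/gpt-5.1-codex-max" is not clobbered.
    return deps.resolveModel(override);
  }
  // Legacy behavior: fall back to provider/model hints.
  return deps.resolveModel(legacyParams.model ?? "", legacyParams.provider);
}
```

With this ordering, a configured `summaryModel` wins outright, and the `providerHint` path is only taken on the legacy fallback branch.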
Related: #65