Nightshift itself is a scheduler and doesn’t need to talk to LLMs. The problem is that the tasks it schedules (crystallization/contemplation) hardcode assumptions about “how to call the LLM”.
Requested change (suite-wide)
Introduce a small abstraction for “call the model” that can use:
OpenClaw’s configured model routing/provider config (preferred)
OR a configurable OpenAI-compatible HTTP endpoint (the current behavior) as a fallback
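A minimal sketch of what that abstraction could look like. All names here (LlmCaller, OpenClawCaller, OpenAICompatCaller) are illustrative, not actual OpenClaw APIs, and the OpenClaw path is stubbed since its routing interface isn't specified in this issue:

```typescript
interface LlmCallOptions {
  model: string;
  prompt: string;
}

// The one interface the scheduled tasks would depend on.
interface LlmCaller {
  complete(opts: LlmCallOptions): Promise<string>;
}

// Preferred path: delegate to OpenClaw's configured model routing.
// Stubbed here; a real implementation would hand the request to
// OpenClaw rather than speaking HTTP directly.
class OpenClawCaller implements LlmCaller {
  async complete(opts: LlmCallOptions): Promise<string> {
    return `openclaw:${opts.model}`;
  }
}

// Fallback path: the current behavior, an OpenAI-compatible HTTP endpoint.
class OpenAICompatCaller implements LlmCaller {
  constructor(private endpoint: string, private apiKeyEnv: string) {}

  async complete(opts: LlmCallOptions): Promise<string> {
    const apiKey = process.env[this.apiKeyEnv] ?? "";
    const res = await fetch(`${this.endpoint}/v1/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: opts.model,
        messages: [{ role: "user", content: opts.prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}
```

With this in place, crystallization/contemplation tasks would take an LlmCaller instead of hardcoding the endpoint details.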
Concretely:
Let the suite accept an llm block like:
{ provider: "openclaw" | "openai_compat", model: "...", endpoint: "...", apiKeyEnv: "...", ... }
The default can remain the local endpoint for easy bootstrapping, but the “use OpenClaw provider routing” path should be first-class.
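Resolving that llm block with the suggested default could look like the sketch below. The field names follow the proposed block above but are not a confirmed schema, and the default endpoint/model values are placeholders:

```typescript
type LlmConfig = {
  provider: "openclaw" | "openai_compat";
  model: string;
  endpoint?: string;
  apiKeyEnv?: string;
};

// Fill in defaults so an empty or partial llm block still bootstraps
// against a local OpenAI-compatible endpoint.
function resolveLlmConfig(cfg?: Partial<LlmConfig>): LlmConfig {
  return {
    provider: cfg?.provider ?? "openai_compat",
    model: cfg?.model ?? "local-model", // placeholder default
    endpoint: cfg?.endpoint ?? "http://localhost:8080", // placeholder default
    apiKeyEnv: cfg?.apiKeyEnv,
  };
}
```

Setting provider: "openclaw" then opts into the routing path without touching the rest of the suite config.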