Nightshift: needs provider-agnostic LLM interface (or standard OpenClaw model routing) #1

@davidrobertson

Description

Nightshift itself is a scheduler and doesn’t need to talk to LLMs. The problem is that the tasks it schedules (crystallization/contemplation) hardcode assumptions about “how to call the LLM”.

Requested change (suite-wide)

Introduce a small abstraction for “call the model” that can use:

- OpenClaw’s configured model routing/provider config (preferred), OR
- a configurable OpenAI-compatible HTTP endpoint (current behavior) as a fallback
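A minimal sketch of what that seam could look like. All names here (`LlmCall`, `OpenClawRouted`, `OpenAiCompat`, `makeLlm`) are hypothetical, not existing Nightshift/OpenClaw APIs; the OpenClaw path is stubbed since its routing internals aren’t specified in this issue:

```typescript
// Hypothetical provider-agnostic "call the model" interface.
interface LlmCall {
  complete(prompt: string): Promise<string>;
}

// Preferred path: delegate to OpenClaw's model routing (stubbed here).
class OpenClawRouted implements LlmCall {
  constructor(readonly model: string) {}
  async complete(_prompt: string): Promise<string> {
    // A real implementation would hand off to OpenClaw's provider config.
    throw new Error("not wired to OpenClaw routing yet");
  }
}

// Fallback path: any OpenAI-compatible HTTP endpoint (current behavior).
class OpenAiCompat implements LlmCall {
  constructor(
    readonly endpoint: string,
    readonly model: string,
    readonly apiKey?: string,
  ) {}
  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.endpoint}/chat/completions`, {
      method: "POST",
      headers: {
        "content-type": "application/json",
        ...(this.apiKey ? { authorization: `Bearer ${this.apiKey}` } : {}),
      },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}

// Tasks ask for "an LlmCall", never for a concrete provider.
function makeLlm(
  provider: "openclaw" | "openai_compat",
  model: string,
  endpoint?: string,
): LlmCall {
  return provider === "openclaw"
    ? new OpenClawRouted(model)
    : new OpenAiCompat(endpoint ?? "http://localhost:8080/v1", model);
}
```

With this shape, crystallization/contemplation tasks depend only on `LlmCall` and never on a transport detail.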

Concretely:

- Let the suite accept an `llm` block like:

  { provider: "openclaw" | "openai_compat", model: "...", endpoint: "...", apiKeyEnv: "...", ... }

- The default can remain the local endpoint for easy bootstrap, but make the “use OpenClaw provider routing” path first-class.
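One way the defaulting could work, as a sketch: the field names follow the `llm` block above, while `normalizeLlmConfig` and the `http://localhost:8080/v1` bootstrap default are illustrative assumptions, not settled values:

```typescript
// Hypothetical shape of the proposed `llm` config block.
type LlmConfig = {
  provider: "openclaw" | "openai_compat";
  model: string;
  endpoint?: string;  // only meaningful for openai_compat
  apiKeyEnv?: string; // env var name holding the key, never the key itself
};

// Fill in bootstrap-friendly defaults: no config at all still points at a
// local OpenAI-compatible endpoint, preserving current behavior.
function normalizeLlmConfig(raw: Partial<LlmConfig>): LlmConfig {
  const provider = raw.provider ?? "openai_compat";
  return {
    provider,
    model: raw.model ?? "local-model", // placeholder default for illustration
    endpoint:
      provider === "openclaw"
        ? undefined // OpenClaw routing owns provider selection; no endpoint
        : raw.endpoint ?? "http://localhost:8080/v1",
    apiKeyEnv: raw.apiKeyEnv,
  };
}
```

An empty block then normalizes to the current local-endpoint behavior, while `{ provider: "openclaw", model: "..." }` opts into routing with no endpoint to manage.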
