[ENHANCEMENT] Stream model thoughts #10949

@jamestmartin

Description

Edited in response to roomote's comments.

Problem (one or two sentences)

Users cannot inspect the internal reasoning of models invoked through Roo Code's Ollama provider, limiting their ability to diagnose failures, tune prompts, or audit model behavior.

Context (who is affected and when)

This affects users of the Ollama provider who would benefit from insight into model reasoning during API requests and subtasks.

Desired behavior (conceptual, not technical)

Roo Code's Ollama provider should support viewing model reasoning and configuring reasoning effort.
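
For reference, a minimal sketch of what enabling this might look like against Ollama's chat API, assuming a recent Ollama build whose `/api/chat` endpoint accepts a `think` option and returns reasoning separately from the answer; field names should be verified against the installed version's API docs.

```ts
// Sketch: request reasoning from Ollama's /api/chat endpoint.
// Assumes an Ollama version that supports the `think` option and a
// reasoning-capable model (deepseek-r1 is used here as an example).
const response = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "deepseek-r1",
    messages: [{ role: "user", content: "Why is the sky blue?" }],
    stream: true,
    // `true` enables reasoning; some models reportedly accept an
    // effort level ("low" | "medium" | "high") here instead, which is
    // how "managing reasoning effort" could map onto the API.
    think: true,
  }),
});
```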

Constraints / preferences (optional)

  • Distinguish clearly between an API request that is queued and one that is actively streaming, so the user can tell loading states apart from live reasoning output.
  • Ollama supports streaming, so reasoning should preferably be surfaced in real time when available (see the sketch after this list).
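
A rough sketch of how the streaming side could satisfy both constraints, assuming Ollama's NDJSON chat stream where each line may carry `message.thinking` and/or `message.content`. The queued-until-first-chunk heuristic is illustrative only, not how Roo Code reports state today.

```ts
// Sketch of consuming Ollama's streamed chat response, routing
// reasoning and answer text to separate sinks so a UI could render
// them distinctly. Runnable on Node 18+ (global fetch, web streams).
async function streamChat(body: object): Promise<void> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!response.body) throw new Error("no response body");

  // Treat the request as queued until the first chunk arrives.
  console.log("state: queued (waiting for first chunk)");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let streaming = false;

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Ollama streams one JSON object per line (NDJSON).
    let newline: number;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line) continue;
      const chunk = JSON.parse(line);

      if (!streaming) {
        streaming = true;
        console.log("state: streaming");
      }
      // Reasoning tokens and answer tokens arrive in separate fields.
      if (chunk.message?.thinking) {
        process.stdout.write(`[thinking] ${chunk.message.thinking}`);
      }
      if (chunk.message?.content) {
        process.stdout.write(chunk.message.content);
      }
      if (chunk.done) console.log("\nstate: done");
    }
  }
}
```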

Request checklist

  • I've searched existing Issues and Discussions for duplicates
  • This describes a specific problem with clear context and impact

Roo Code Task Links (optional)

No response

Acceptance criteria (optional)

No response

Proposed approach (optional)

No response

Trade-offs / risks (optional)

No response
