feat: read from litellm defaults for context window limits#204

Open
shreyashankar wants to merge 1 commit into lotus-data:main from shreyashankar:main
Conversation

@shreyashankar commented Aug 21, 2025

Purpose

This PR reads context limits from litellm's model_cost defaults instead of relying on hardcoded values. The LM class now automatically derives max_ctx_len and max_tokens from litellm's model information whenever these parameters are not explicitly provided.
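The derivation described above can be sketched roughly as follows. litellm exposes a model_cost dict keyed by model name whose entries include max_input_tokens and max_output_tokens; the _MODEL_COST stand-in dict, the fallback constants, and the derive_limits helper below are illustrative assumptions, not the actual lm.py code:

```python
# Sketch: fill in context limits from litellm-style model metadata.
# _MODEL_COST stands in for litellm.model_cost; real code would do
# `import litellm` and read litellm.model_cost instead of this dict.
_MODEL_COST = {
    "gpt-4.1-mini": {"max_input_tokens": 1_047_576, "max_output_tokens": 32_768},
}

# Illustrative fallbacks for models litellm does not know about.
DEFAULT_MAX_CTX_LEN = 4096
DEFAULT_MAX_TOKENS = 512


def derive_limits(model, max_ctx_len=None, max_tokens=None):
    """Return (max_ctx_len, max_tokens), deriving any value the caller
    did not explicitly provide from the model metadata."""
    info = _MODEL_COST.get(model, {})
    if max_ctx_len is None:
        max_ctx_len = info.get("max_input_tokens", DEFAULT_MAX_CTX_LEN)
    if max_tokens is None:
        max_tokens = info.get("max_output_tokens", DEFAULT_MAX_TOKENS)
    return max_ctx_len, max_tokens
```

Explicit arguments still win, so existing callers that pass max_ctx_len or max_tokens see no behavior change; only the defaults become model-aware.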

Test Plan

Ran a lotus tutorial and my own scripts on long documents with the gpt-4.1-mini model (which has a 1M-token context window) to confirm we no longer run into context window issues.

Test Results

N/A

(Optional) Documentation Update

N/A

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Performance improvement
  • Refactoring (no functional changes)

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, updating docstrings
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

mypy was already failing before I made this change; I didn't touch any files outside of lm.py.

