TL;DR: Long-form reasoning (CoT) is a double-edged sword. While models like OpenAI o1 and DeepSeek-R1 are smarter than ever, debugging a 10,000-token reasoning trace is a nightmare. ReasoningLens turns that "Wall of Text" into an interactive, hierarchical map.
[Demo video: reasoninglens.mp4]
The era of Large Reasoning Models (LRMs) has arrived. We love their ability to self-correct and plan, but there's a catch: understanding how they reached a conclusion is getting harder.
When a model produces a massive reasoning trace, the "critical" logic often gets buried under repetitive procedural steps. Finding a single hallucination or a logical pivot feels like finding a needle in a haystack.
Built on top of Open WebUI, ReasoningLens is a developer-centric toolkit designed to help the open-source community visualize, understand, and debug model reasoning chains without losing their minds.
"ReasoningLens doesn't just show you what the model said; it shows you how the model thinks."
Most CoT tokens are just "execution" (doing the math), while only a few are "strategic" (deciding to change course). ReasoningLens separates the signal from the noise:
- Planning Unit Segmentation: We automatically detect pivot phrases such as "Wait, let me re-check..." or "Alternatively..." that mark a shift in strategy.
- The Macro View (Exploration): See the high-level strategy—where the model backtracked, where it validated, and where it struggled.
- The Micro View (Exploitation): Dive deep into specific arithmetic or substitutions only when you need to.
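The segmentation idea can be sketched with a simple regex pass. This is a minimal illustration, not the actual ReasoningLens implementation; the pivot-phrase list and function name are assumptions.

```python
import re

# Hypothetical pivot phrases that open a new "planning unit" in a CoT trace.
PIVOTS = [r"Wait,", r"Alternatively,", r"Let me re-check", r"Actually,", r"Hmm,"]
PIVOT_RE = re.compile("(" + "|".join(PIVOTS) + ")")

def segment_trace(trace: str) -> list[str]:
    """Split a reasoning trace into units, each starting at a pivot phrase."""
    parts = PIVOT_RE.split(trace)
    units = [parts[0].strip()] if parts[0].strip() else []
    # re.split keeps the captured pivots at odd indices; re-attach each
    # pivot to the text that follows it.
    for i in range(1, len(parts), 2):
        units.append((parts[i] + parts[i + 1]).strip())
    return units

trace = "Compute 3*4=12. Wait, let me re-check that. Alternatively, use addition."
for unit in segment_trace(trace):
    print(unit)
```

Each printed unit begins either at the start of the trace or at a detected pivot, which is what makes the macro "exploration" view possible: the viewer only needs the unit boundaries, not the full token stream.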
Longer reasoning doesn't always mean better reasoning. "Length-scaling" can introduce hallucinations that are hard to spot. Our SectionAnalysisAgent acts as a specialized auditor for your traces:
- ⚡ Batch-wise Analysis: Efficiently parses massive traces without losing context, making large-scale debugging feasible.
- 🧠 Rolling Summary Memory: Remembers context from prior sections, catching non-local inconsistencies and logical drift that would exhaust a human reviewer.
- 🧮 Tool-Augmented Verification: Tired of models failing at basic math? ReasoningLens integrates a calculator to verify arithmetic steps automatically.
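The three features above can be sketched together in a few lines: a batch-wise pass over trace sections, a rolling summary carried between sections, and a calculator check on arithmetic claims. Everything here (function names, the `a op b = c` pattern, the toy summarizer) is illustrative, not the actual SectionAnalysisAgent API.

```python
import re

def check_arithmetic(section: str) -> list[str]:
    """Verify simple 'a op b = c' claims found in a section (calculator tool)."""
    issues = []
    for a, op, b, claimed in re.findall(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)", section):
        actual = eval(f"{a}{op}{b}")  # safe here: operands are digit-only by regex
        if actual != int(claimed):
            issues.append(f"{a}{op}{b} = {actual}, not {claimed}")
    return issues

def analyze(sections: list[str], summarize) -> list[dict]:
    """Batch-wise pass: audit each section with a rolling summary of prior ones."""
    memory, findings = "", []
    for i, sec in enumerate(sections):
        findings.append({"section": i, "context": memory,
                         "math_issues": check_arithmetic(sec)})
        memory = summarize(memory + " " + sec)  # compress prior context
    return findings

# Toy summarizer: keep the last 80 characters as "memory".
report = analyze(["Step: 12 * 3 = 36.", "Then 36 + 6 = 41."], lambda s: s[-80:])
print(report[1]["math_issues"])  # → ['36+6 = 42, not 41']
```

The rolling `memory` is what lets the second section be judged with the first one in view, without re-reading the whole trace on every step.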
One-off debugging is great, but systemic patterns matter more. ReasoningLens aggregates data across multiple conversations to build a Reasoning Profile of your model:
- Aggregate: Collect traces across diverse domains (Coding, Math, Logic).
- Compress: Distill recurring patterns into a compact memory state.
- Report: Generate a structured Markdown report highlighting the model's "Blind Spots" and "Consistent Strengths."
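The aggregate → compress → report loop can be sketched as follows. The data shapes, issue tags, and the "seen at least twice" threshold are assumptions chosen for illustration, not the ReasoningLens internals.

```python
from collections import Counter

def build_profile(traces: list[dict]) -> str:
    """Aggregate per-trace findings into a Markdown 'Reasoning Profile'."""
    # Aggregate: count issue and strength tags across domains.
    weaknesses, strengths = Counter(), Counter()
    for t in traces:
        weaknesses.update(t["issues"])
        strengths.update(t["verified"])
    # Compress: keep only recurring patterns (seen at least twice).
    blind_spots = [tag for tag, n in weaknesses.items() if n >= 2]
    consistent = [tag for tag, n in strengths.items() if n >= 2]
    # Report: emit a structured Markdown summary.
    lines = ["# Reasoning Profile", "## Blind Spots"]
    lines += [f"- {t}" for t in blind_spots] or ["- none"]
    lines += ["## Consistent Strengths"]
    lines += [f"- {t}" for t in consistent] or ["- none"]
    return "\n".join(lines)

traces = [
    {"domain": "Math", "issues": ["arithmetic-slip"], "verified": ["backtracking"]},
    {"domain": "Coding", "issues": ["arithmetic-slip"], "verified": ["backtracking"]},
]
print(build_profile(traces))
```

The compression step is the point: a one-off slip stays out of the report, while a pattern that recurs across domains surfaces as a "Blind Spot."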
- Python: Version 3.11 or higher (Required for backend services)
- Node.js: Version 22.10 or higher (Required for frontend development)
- Docker & Docker Compose (For containerized deployment)
git clone https://github.com/icip-cas/reasoning-lens.git
cd reasoning-lens
cd backend
# Create and activate conda environment
conda create --name open-webui python=3.11
conda activate open-webui
# Install dependencies
pip install -r requirements.txt -U
# Start backend server
sh dev.sh

The backend will be running at: http://localhost:8080
Open a new terminal:
# Install frontend dependencies
npm install --force
# Start development server
npm run dev

The frontend will be running at: http://localhost:5173
# Make the script executable
chmod +x dev-docker.sh
# Start development environment
./dev-docker.sh

This will automatically:
- Clean up old containers
- Create necessary data volumes
- Start both frontend and backend services
Access URLs:
- 🌐 Frontend: http://localhost:5173
- 🔧 Backend: http://localhost:8080
# View all logs
docker-compose -f docker-compose.dev.yaml logs -f
# View backend logs only
docker-compose -f docker-compose.dev.yaml logs -f backend
# View frontend logs only
docker-compose -f docker-compose.dev.yaml logs -f frontend
# Stop all services
docker-compose -f docker-compose.dev.yaml down
# Restart backend
docker-compose -f docker-compose.dev.yaml restart backend
# Restart frontend
docker-compose -f docker-compose.dev.yaml restart frontend

# Basic build (CPU only)
docker build -t reasoning-lens:latest .
# Build with CUDA support
docker build --build-arg USE_CUDA=true -t reasoning-lens:cuda .
# Build with Ollama integration
docker build --build-arg USE_OLLAMA=true -t reasoning-lens:ollama .
# Build slim version (without pre-downloaded models)
docker build --build-arg USE_SLIM=true -t reasoning-lens:slim .

| Argument | Default | Description |
|---|---|---|
| `USE_CUDA` | `false` | Enable CUDA/GPU support |
| `USE_CUDA_VER` | `cu128` | CUDA version (e.g., `cu117`, `cu121`, `cu128`) |
| `USE_OLLAMA` | `false` | Include Ollama in the image |
| `USE_SLIM` | `false` | Skip pre-downloading embedding models |
| `USE_EMBEDDING_MODEL` | `sentence-transformers/all-MiniLM-L6-v2` | Sentence transformer model for RAG |
| `USE_RERANKING_MODEL` | `""` | Reranking model for RAG |
# Run the container
docker run -d \
--name reasoning-lens \
-p 8080:8080 \
-v reasoning-lens-data:/app/backend/data \
reasoning-lens:latest
# Run with GPU support
docker run -d \
--name reasoning-lens \
--gpus all \
-p 8080:8080 \
-v reasoning-lens-data:/app/backend/data \
reasoning-lens:cuda

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key |
| `OPENAI_API_BASE_URL` | Custom OpenAI-compatible API endpoint |
| `WEBUI_SECRET_KEY` | Secret key for session management |
| `DEFAULT_USER_ROLE` | Default role for new users (`user` or `admin`) |
reasoning-lens/
├── backend/ # Python backend (FastAPI)
│ ├── open_webui/ # Main application
│ │ ├── routers/ # API routes
│ │ ├── models/ # Data models
│ │ └── utils/ # Utilities
│ └── requirements.txt # Python dependencies
├── src/ # Svelte frontend
│ ├── lib/ # Shared components
│ └── routes/ # Page routes
├── static/ # Static assets
├── Dockerfile # Production Docker build
├── docker-compose.dev.yaml # Development compose file
- Backend: Python 3.11+, FastAPI, SQLAlchemy
- Frontend: Svelte 5, TypeScript, TailwindCSS
- Database: SQLite (default), PostgreSQL (optional)
- Containerization: Docker, Docker Compose
This project is licensed under the MIT License - see the LICENSE file for details.
If you find ReasoningLens useful in your research, please consider citing:
@software{Zhang_ReasoningLens_2026,
author = {Zhang, Jun and Zheng, Jiasheng and Lu, Yaojie and Cao, Boxi},
license = {MIT},
month = feb,
title = {{ReasoningLens}},
url = {https://github.com/icip-cas/ReasoningLens},
version = {0.1.0},
year = {2026}
}

- Jun Zhang - Main Contributor
- Jiasheng Zheng - Contributor
- Yaojie Lu - Contributor
- Boxi Cao - Project Lead
We thank the Open WebUI community and all early users and contributors for their feedback and support. We look forward to continued contributions from the open-source community. ReasoningLens is better because of your time and curiosity.
Have questions or want to discuss ideas? Open an issue on GitHub or join the discussion in our community! Together, let's create an even more powerful tool for the community. 🌟




