Feat/chat streaming #65
base: dev
Conversation
from enum import Enum
from abc import ABC, abstractmethod
from langchain.chat_models import init_chat_model
from langchain_openai import ChatOpenAI
- You can stream without importing a provider-specific model library; use the already existing init_chat_model. Refer here.
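A minimal sketch of what that could look like; the model string and the stream_answer helper are illustrative assumptions, not taken from this PR:

```python
from langchain.chat_models import init_chat_model

# init_chat_model resolves the provider from the "provider:model" string,
# so no provider-specific import (e.g. ChatOpenAI) is needed.
llm = init_chat_model("openai:gpt-4o-mini")  # model name is illustrative

async def stream_answer(prompt: str):
    # .astream() yields message chunks carrying partial content.
    async for chunk in llm.astream(prompt):
        yield chunk.content
```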
- This file's location is not logical.
- Make the post-streaming handler a background task (see the sketch below).
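A minimal sketch of one way to do that with FastAPI's BackgroundTasks, assuming the route returns an SSE StreamingResponse; handle_post_streaming and generate_tokens are hypothetical names standing in for this PR's actual helpers:

```python
from fastapi import APIRouter, BackgroundTasks
from fastapi.responses import StreamingResponse

router = APIRouter()

async def generate_tokens(message: str):
    # Hypothetical stand-in for the real LLM token stream.
    for token in ("Hello", ", ", "world"):
        yield token

def handle_post_streaming(session_id: str, full_text: str) -> None:
    # Hypothetical: persist the assistant message, update stats, etc.
    ...

@router.post("/api/chat")
async def chat(payload: dict, background_tasks: BackgroundTasks):
    parts: list[str] = []

    async def event_stream():
        async for token in generate_tokens(payload["message"]):
            parts.append(token)
            yield f"data: {token}\n\n"
        # Queue the post-stream work instead of doing it inline,
        # so it never blocks token delivery.
        background_tasks.add_task(
            handle_post_streaming, payload["session_id"], "".join(parts)
        )

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```

FastAPI attaches the injected BackgroundTasks to a directly returned response, so tasks queued while streaming run once the stream completes.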
- Ensure the generator is wrapped in a try/finally block that guarantees the message is saved even if the connection drops.
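A minimal sketch of that pattern, assuming a save_message persistence helper (hypothetical name); the finally block runs on normal completion, client disconnect, or cancellation:

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model("openai:gpt-4o-mini")  # model name is illustrative

async def save_message(session_id: str, text: str) -> None:
    # Hypothetical: write the (possibly partial) assistant message to the DB.
    ...

async def event_generator(session_id: str, prompt: str):
    parts: list[str] = []
    try:
        async for chunk in llm.astream(prompt):
            parts.append(chunk.content)
            yield f"data: {chunk.content}\n\n"
    finally:
        # Runs even when the connection drops and the generator is
        # closed mid-stream, so whatever was generated is persisted.
        await save_message(session_id, "".join(parts))
```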
Pull Request
Description
I applied the requested changes.
Backend
LLM Client --> llm_clients.py
RAG Generator --> rag_generator.py
Chat Endpoint --> chat.py
Event Generator --> event_generator.py
Frontend
MessageList.ts
Chat.tsx: passes isSendingMessage down to MessageList as isStreaming to manage scroll during streaming.
MessageBubble.tsx
chatService.ts: streams from /api/chat, reading SSE chunks over fetch.
useChatStore.ts: makes sendMessage streaming-first, updating the thinking bubble in real-time as chunks arrive.
Checklist
How Has This Been Tested?
The changes were tested end-to-end by sending messages through the chat to verify that:
- streaming responses display correctly with partial token updates,
- updated UI elements such as auto-scroll and the “Thinking…” animation behave as intended, and
- the streaming chat functionality works reliably alongside existing features.