Caution
The new version does not support old entries. Before updating the component, back up your agent settings if you need them.
The transition to subentries is complete: it is now convenient to create multiple agents for a single provider.
- Streaming support for LLM responses
- Fix for Mistral AI
- Mistral Vision action
- Mistral Web search agent
- Timestamp recording for each conversation session
and
- `stream_response` action

Usage diagram + demo
You can call the action directly in an automation, but the system will then be unable to end the session correctly. This is not critical, but it is not ideal: with this activation method the session is interrupted at the intent-processing stage, which can be observed in the agent's debug menu (in effect, the announcement breaks the rigid structure of voice automation).
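A minimal sketch of calling the action directly from an automation, as described above. Note that the service domain (`mistral_ai_api`), the trigger entity, and the data fields shown here are assumptions for illustration; check the action's fields in Developer Tools before using it.

```yaml
# Hypothetical sketch only: the domain, action fields, and entity names
# below are assumptions and may differ in your installation.
automation:
  - alias: "Ask the agent on demand"
    trigger:
      - platform: state
        entity_id: input_boolean.ask_agent  # assumed helper entity
        to: "on"
    action:
      - action: mistral_ai_api.stream_response  # assumed domain/action name
        data:
          prompt: "Summarize the state of the house"  # assumed field
    mode: single
```

If you trigger the action this way, expect the session to end at the intent-processing stage as noted above; the agent's debug menu will show the interrupted session.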
Tip
If you encounter problems, try using the files from this branch.
This project started as a copy of Home Assistant's built-in OpenAI Conversation agent, with added support for changing the base URL. Only the minimal changes needed to make it a standalone custom component supporting a different base URL were made, so that it works with other services offering an OpenAI-compatible API.
As development of Home Assistant's built-in OpenAI Conversation agent has progressed, more OpenAI-specific features have been added that are less compatible with other providers offering an OpenAI-compatible API. As a result, this project has the following limitations:
- OpenAI's reasoning parameters are not supported.
- The project currently continues to use the `max_tokens` parameter rather than the newer `max_completion_tokens` parameter, for backwards compatibility.
The default model for Mistral AI is `mistral-small-latest`; you will likely need to change this value.