Has anyone tried running it locally?
I adapted it for use with LM Studio by changing the tokenizer, the LLM calls, and the configuration. The connection to the API endpoint works, and persona creation succeeds. However, calling listen_and_act or run consistently fails with an error on the cognitive state attribute, caused by an empty response from the model.
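For reference, here is a minimal sketch of how I'm talking to LM Studio's OpenAI-compatible local server (the default endpoint is http://localhost:1234/v1; the model name and helper functions here are just illustrative, not the project's actual API). I also added a guard that fails loudly when the completion comes back empty, since that's what seems to trip the cognitive state parsing later:

```python
import json

# LM Studio's default local OpenAI-compatible endpoint (assumption: default port)
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="hermes-3-llama-3.1-8b",
                  max_tokens=8000, temperature=0.8):
    """Build an OpenAI-style chat payload for LM Studio's local server.
    The model name is whatever LM Studio shows for the loaded model."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

def extract_content(response: dict) -> str:
    """Pull the completion text out of the response, raising instead of
    silently passing an empty string downstream."""
    content = response.get("choices", [{}])[0].get("message", {}).get("content")
    if not content or not content.strip():
        raise ValueError("empty completion from local model")
    return content
```

The idea of raising on an empty completion is just to surface the failure at the API layer rather than deep inside the cognitive state update.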
I have tried using LLaMA 3B, 1B, and Hermes 3 8B. I also increased the maximum tokens from 4000 to 8000.
The LLM was also generating nonsensical tokens, so I reduced the temperature from 1.5 to 0.8.
I’m reaching out to see if anyone else has experienced this issue and how they managed to resolve it.
