Setting the response format with LangChain the way we do it right now does not seem to be optimal:
https://github.com/moritz-baumgart/AI4Science/blob/970fe7e5b6a1d005efd38dadc897ab518531389e/treesearch/llm/query.py#L91
It has happened a few times now that the LLM does not return a correctly formatted response.
Apparently the structured output can be enforced more strictly.
See also: https://chatgpt.com/share/69601982-f6d8-8012-9c21-d161229f1c2b
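One option would be LangChain's `with_structured_output`, which binds a schema to the model and returns validated objects instead of relying on format instructions in the prompt. A minimal sketch (the `Verdict` schema and model name are illustrative, not our actual schema in `treesearch/llm/query.py`):

```python
# Sketch: enforcing structured output via LangChain's with_structured_output.
# The schema below is a hypothetical example, not our real response format.
from pydantic import BaseModel, Field

class Verdict(BaseModel):
    """Illustrative response schema."""
    score: float = Field(description="Evaluation score between 0 and 1")
    reasoning: str = Field(description="Short justification for the score")

# Requires langchain-openai; commented out here since it needs an API key:
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model="gpt-4o-mini")
# structured_llm = llm.with_structured_output(Verdict, method="json_schema")
# result = structured_llm.invoke("Evaluate ...")  # returns a validated Verdict

# Pydantic still validates the schema locally either way:
v = Verdict(score=0.9, reasoning="well formatted")
```

With `method="json_schema"` (supported by the OpenAI chat models), the provider enforces the schema server-side, so malformed responses should become much rarer than with prompt-based format instructions.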