@aliarabat

✨ Feature: Support for New GPT-5 Models

This Pull Request introduces full compatibility with OpenAI's new model series: gpt-5 and gpt-5-mini.

This upgrade is necessary because the latest models change which request parameters are accepted, making them incompatible with the current version of TopicGPT.

⚙️ Technical Changes
The primary changes address necessary parameter updates to correctly invoke the new gpt-5 models:

  1. Renamed Max Tokens Parameter:
  • The max_tokens argument is deprecated and no longer supported when invoking the gpt-5 series.

  • Action: I have globally replaced instances of max_tokens with the new, required parameter: max_completion_tokens.

  2. Adjusted Temperature Default:
  • The gpt-5 model series only supports the default temperature value of 1.0; other values are rejected by the API.

  • Action: I have updated all relevant model invocation code to explicitly set the temperature to 1.0 so that requests are accepted and behavior stays consistent across models (a sketch of the updated call follows this list).
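
For illustration, here is a minimal sketch of the updated call shape, assuming the OpenAI Python SDK v1 client (the actual call sites in TopicGPT may be wrapped differently):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Before (rejected by the gpt-5 series):
#   client.chat.completions.create(..., max_tokens=1000, temperature=0.0)

# After: pass max_completion_tokens and the only accepted temperature, 1.0
response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Summarize this topic."}],
    max_completion_tokens=1000,
    temperature=1.0,
)
print(response.choices[0].message.content)
```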

💡 Future Improvement (Follow-up Task)
While the current implementation hardcodes the temperature to 1.0, it would significantly improve the user experience to make this configurable.

Proposal: I suggest opening a follow-up issue (which I am happy to do) to make the temperature parameter configurable, e.g., via an environment variable or a configuration file, so that users can adjust the model's temperature without modifying the codebase (a rough sketch follows).
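
As a rough sketch of that proposal (the names here are illustrative, not existing TopicGPT settings):

```python
import os

# Illustrative sketch: read the sampling temperature from an environment
# variable so users can change it without touching the code.
# TOPICGPT_TEMPERATURE is a hypothetical variable name for this proposal.
DEFAULT_TEMPERATURE = 1.0  # the only value the gpt-5 series currently accepts

def get_temperature() -> float:
    raw = os.getenv("TOPICGPT_TEMPERATURE")
    return float(raw) if raw is not None else DEFAULT_TEMPERATURE
```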

✅ Testing Notes
I have verified successful completion calls with gpt-4o, gpt-4o-mini, gpt-5, and gpt-5-mini, as well as with Ollama models (gemma3:latest and llama3.1:latest).
