- Ctrl+Enter in the prompt editor to send the current prompt.
- Reasoning compatibility expanded:
  - `_is_reasoning_compatible_model()` now enables reasoning effort for models starting with `gpt-5`, `o1`, and `o3`.
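A minimal sketch of a prefix check like the one described; the module-level function name and the exact prefix tuple are assumptions based on this entry:

```python
# Assumed prefix set, taken from this changelog entry.
REASONING_PREFIXES = ("gpt-5", "o1", "o3")

def is_reasoning_compatible_model(model_name: str) -> bool:
    """Return True when the model name indicates reasoning-effort support."""
    name = (model_name or "").lower()
    # str.startswith accepts a tuple, so one call covers all prefixes.
    return name.startswith(REASONING_PREFIXES)
```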
- Cross-platform maximize behavior:
  - Attempt `root.state('zoomed')`, fall back to `root.attributes('-zoomed', True)`, then to a near-fullscreen geometry if needed.
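The fallback chain above can be sketched as a standalone helper. The function name, the broad `except` (Tk raises `TclError`, wrapped here so the sketch is Tk-independent), and the 60-pixel taskbar margin are assumptions:

```python
def maximize_window(root) -> str:
    """Try maximize strategies in order; return the name of the one that worked."""
    try:
        root.state("zoomed")  # Windows (and some Linux window managers)
        return "state"
    except Exception:  # tkinter would raise TclError here
        pass
    try:
        root.attributes("-zoomed", True)  # many X11 window managers
        return "attributes"
    except Exception:
        pass
    # Last resort: size the window to (almost) the full screen.
    w, h = root.winfo_screenwidth(), root.winfo_screenheight()
    root.geometry(f"{w}x{h - 60}+0+0")  # assumed 60 px margin for a taskbar
    return "geometry"
```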
- Reasoning Effort combobox handling:
  - Values now include "(unavailable)" so disabled controls render correctly.
  - "(unavailable)" is a UI-only sentinel; the control is disabled for incompatible models and never propagates this value to the session or API.
  - Applies to both the New Conversation dialog and the main-window control.
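One way to keep the sentinel from leaking out of the UI layer is a small mapping function; the helper name below is hypothetical:

```python
UNAVAILABLE = "(unavailable)"          # UI-only sentinel from the changelog
VALID_EFFORTS = ("low", "medium", "high")

def effort_for_api(combobox_value: str):
    """Map a combobox value to an API-safe effort; the sentinel never leaves the UI."""
    if combobox_value in VALID_EFFORTS:
        return combobox_value
    # "(unavailable)" or anything unexpected: omit the reasoning setting entirely.
    return None
```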
- Removed the unused `struct` import.
- Reasoning Effort selector UX enhancement:
  - Reasoning effort combobox now displays `(unavailable)` when disabled for incompatible models.
  - Both the main window and the New Conversation dialog show clear feedback when reasoning controls aren't available.
  - Default state shows `(unavailable)` at app startup and when no compatible model is selected.
  - Improved user experience with intuitive visual feedback for feature availability.
- Combobox values synchronization:
  - Added `(unavailable)` to the reasoning effort combobox values list for proper UI rendering.
  - Ensures consistent display behavior across all reasoning effort selector states.
- Reasoning Effort selector in `openai_chat_app.py`:
  - New combobox in both the New Conversation dialog and the main window controls, with values `low | medium | high`.
  - Default is `medium`. Controls are enabled only for compatible models (see Notes).
- Per-conversation reasoning setting:
  - `OpenAIChatSession.reasoning_effort` attribute stores the selected effort for each conversation.
  - API calls include `reasoning={"effort": <value>}` when compatible.
- Dialog result now includes reasoning effort:
  - `NewConversationDialog` returns `(api_key, model_name, conversation_name, effort)`; the effort is applied to the new session if compatible.
- UI synchronization:
  - `AIChatApp` updates the reasoning combobox when the active conversation changes and applies user changes live to the session.
- Recursion bug in `AIChatApp._on_active_conversation_change()` eliminated; the method now safely updates related UI.
- Reasoning controls wiring:
  - Implemented `AIChatApp._update_reasoning_controls()` and `AIChatApp._on_reasoning_effort_change()`.
  - Implemented `AIChatApp._on_conversation_combobox_select()` for switching active conversations from the combobox.
- Duplicate methods:
  - Removed an obsolete duplicate of `_add_new_conversation`.
- Missing combobox refresh:
  - Implemented `AIChatApp._update_conversation_combobox()` to refresh the conversation list after creating a new one.
- Compatibility rule: reasoning effort is enabled for models whose name starts with `gpt-5` (via `OpenAIChatSession._is_reasoning_compatible_model`).
- Defaults: when unset or invalid, effort falls back to `medium`; controls are disabled for incompatible models.
- API requests: the reasoning parameter is respected in both the normal and retry code paths.
- OpenAI client initialization and wrappers in `openai_chat_app.py`:
  - `OpenAIChatSession.__init__` now creates an OpenAI client.
  - `_ClientWrapper`, `_ModelsWrapper`, and `_TokenCountResult` provide `client.models.count_tokens(...)` via `tiktoken`.
- Token counting utilities:
  - `_get_encoding_for_model()` with fallbacks (`o200k_base` → `cl100k_base`).
  - `OpenAIChatSession.update_token_count(history)` to compute "Tokens in context".
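The encoding fallback chain could look like the sketch below. This is an assumption modeled on the entry above, not the app's actual code; the `loader` parameter is an added test seam, and by default the chain would go through `tiktoken.encoding_for_model` / `tiktoken.get_encoding`:

```python
def get_encoding_for_model(model_name: str, loader=None):
    """Resolve a token encoding, falling back from o200k_base to cl100k_base.

    `loader` is injectable for testing; by default tiktoken is used.
    """
    if loader is None:
        import tiktoken  # imported lazily so the dependency is optional until needed
        try:
            # Exact match first: tiktoken knows the right encoding for many models.
            return tiktoken.encoding_for_model(model_name)
        except KeyError:
            loader = tiktoken.get_encoding
    for name in ("o200k_base", "cl100k_base"):
        try:
            return loader(name)
        except Exception:
            continue  # try the next fallback encoding
    raise RuntimeError("no usable token encoding found")
```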
- Message sending and conversation threading:
  - `OpenAIChatSession.send_message_to_OpenAI_API(prompt, history=None)` implemented using `client.responses.create(...)`, `store=True`, and `previous_response_id` threading.
  - Robust reply extraction using `response.output_text` with fallback parsing.
  - Role mapping for history replay (`'model'` → `'assistant'`).
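The "output_text with fallback parsing" step can be sketched as below. The function name is hypothetical; the fallback walk over `response.output[*].content[*].text` mirrors the Responses API's output shape but is written defensively with `getattr` so it degrades gracefully:

```python
def extract_reply_text(response) -> str:
    """Prefer response.output_text; fall back to walking the output items."""
    text = getattr(response, "output_text", None)
    if text:
        return text
    # Fallback: concatenate any text content found in the output items.
    parts = []
    for item in getattr(response, "output", []) or []:
        for content in getattr(item, "content", []) or []:
            t = getattr(content, "text", None)
            if t:
                parts.append(t)
    return "".join(parts)
```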
- Conversation integration:
  - `Conversation.send_message()` passes `history=self.chat_history` and updates total tokens after each send.
- Model loading:
  - `NewConversationDialog._load_models()` fetches models via `OpenAI(api_key).models.list()` and populates the combobox.
- Multi-turn 400 error ("previous_response_not_found"):
  - Set `store=True` to persist responses for `previous_response_id` references.
  - Added a retry fallback: on a missing previous response, resend the full history plus the prompt without `previous_response_id`.
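The retry fallback can be sketched as follows. Everything here is assumed for illustration: `create` stands in for `client.responses.create`, and the error is detected by substring match on the exception text rather than a typed SDK exception:

```python
def send_with_retry(create, prompt, history, previous_response_id=None):
    """Try a threaded send first; on a missing previous response, replay the full history."""
    try:
        return create(input=prompt,
                      previous_response_id=previous_response_id,
                      store=True)
    except Exception as exc:
        if "previous_response_not_found" not in str(exc):
            raise  # unrelated failure, do not mask it
        # Fallback: resend the whole conversation without threading.
        full_input = history + [{"role": "user", "content": prompt}]
        return create(input=full_input, store=True)
```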
- GUI unchanged; all integration honors existing widgets and flows.
- Token counts are local approximations using `tiktoken` but consistent across:
  - Selected message tokens (`_on_treeview_message_select()`).
  - Prompt tokens (`_update_prompt_token_count()`).
  - Total tokens in context (top bar via `OpenAIChatSession.token_count`).
Initial commit.