Changelog — OpenAI Interface App

Version 0.6.2

Added

  • Ctrl+Enter in the prompt editor to send the current prompt.

Changed

  • Reasoning compatibility expanded:
    • _is_reasoning_compatible_model() now enables reasoning effort for models starting with gpt-5, o1, and o3.

Fixed

  • Cross-platform maximize behavior:
    • Attempt root.state('zoomed'), fall back to root.attributes('-zoomed', True), then to a near-fullscreen geometry if needed.
  • Reasoning Effort combobox handling:
    • Values now include "(unavailable)" so disabled controls render correctly.
    • "(unavailable)" is a UI-only sentinel; the control is disabled for incompatible models and never propagates this value to the session or API.
    • Applies to both the New Conversation dialog and the main-window control.
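The maximize fallback chain described above can be sketched as follows; the helper name is hypothetical, but the three-step order (`state('zoomed')`, then the `-zoomed` attribute, then explicit geometry) is the one this release implements.

```python
import tkinter as tk

def maximize_window(root) -> None:
    """Maximize `root`, trying platform-specific mechanisms in order:
    Windows-style state('zoomed'), then the X11-style '-zoomed' attribute,
    then a near-fullscreen geometry as a last resort."""
    try:
        root.state("zoomed")              # works on Windows (and some others)
        return
    except tk.TclError:
        pass
    try:
        root.attributes("-zoomed", True)  # many X11 window managers
        return
    except tk.TclError:
        pass
    # Last resort: size the window to the screen, leaving a small margin.
    w, h = root.winfo_screenwidth(), root.winfo_screenheight()
    root.geometry(f"{w}x{h - 60}+0+0")
```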

Removed

  • Unused struct import.

Version 0.6.1

Improved

  • Reasoning Effort selector UX:
    • The reasoning effort combobox now displays "(unavailable)" when disabled for an incompatible model, giving clear visual feedback on feature availability.
    • Both the main window and the New Conversation dialog show this feedback when reasoning controls aren't available.
    • The default state shows "(unavailable)" at app startup and whenever no compatible model is selected.

Fixed

  • Combobox values synchronization:
    • Added "(unavailable)" to the reasoning effort combobox values list for proper UI rendering.
    • Ensures consistent display behavior across all reasoning effort selector states.
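The fix can be sketched as a pure helper that computes the combobox configuration; the sentinel string and the idea of keeping it in the values list come from the changelog, while the helper name and tuple-based return are illustrative.

```python
# Illustrative sketch of the combobox-state logic. A ttk.Combobox only
# renders a display string that appears in its `values` list, which is why
# the "(unavailable)" sentinel has to be added there.
EFFORT_VALUES = ("low", "medium", "high")
UNAVAILABLE = "(unavailable)"

def reasoning_combobox_state(model_name, is_compatible):
    """Return (values, display, state) for the reasoning-effort combobox."""
    values = EFFORT_VALUES + (UNAVAILABLE,)
    if model_name and is_compatible(model_name):
        return values, "medium", "readonly"
    return values, UNAVAILABLE, "disabled"
```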

Version 0.6

Added

  • Reasoning Effort selector in openai_chat_app.py:
    • New combobox in both the New Conversation dialog and the main window controls with values low | medium | high.
    • Default is medium. Controls are enabled only for compatible models (see Notes).
  • Per-conversation reasoning setting:
    • OpenAIChatSession.reasoning_effort attribute stores the selected effort for each conversation.
    • API calls include reasoning={"effort": <value>} when compatible.
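How the per-conversation setting reaches the API call can be sketched as below. The `reasoning={"effort": ...}` shape and the medium fallback are from the changelog; the helper name and signature are assumptions.

```python
# Sketch of building the request kwargs, assuming a compatibility predicate
# is passed in. Only the `reasoning` key's shape is taken from the changelog.
def build_request_kwargs(model, prompt, reasoning_effort, is_compatible):
    kwargs = {"model": model, "input": prompt}
    if is_compatible(model):
        # Fall back to "medium" when the stored value is unset or invalid.
        if reasoning_effort not in ("low", "medium", "high"):
            reasoning_effort = "medium"
        kwargs["reasoning"] = {"effort": reasoning_effort}
    return kwargs
```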

Changed

  • Dialog result now includes reasoning effort:
    • NewConversationDialog returns (api_key, model_name, conversation_name, effort); effort is applied to the new session if compatible.
  • UI synchronization:
    • AIChatApp updates the reasoning combobox when the active conversation changes and applies user changes live to the session.

Fixed

  • Recursion bug in AIChatApp._on_active_conversation_change() eliminated and the method now safely updates related UI.
  • Reasoning controls wiring:
    • Implemented AIChatApp._update_reasoning_controls() and AIChatApp._on_reasoning_effort_change().
    • Implemented AIChatApp._on_conversation_combobox_select() for switching active conversations from the combobox.
  • Duplicate methods:
    • Removed an obsolete duplicate of _add_new_conversation.
  • Missing combobox refresh:
    • Implemented AIChatApp._update_conversation_combobox() to refresh the conversation list after creating a new one.

Notes

  • Compatibility rule: reasoning effort is enabled for models whose name starts with gpt-5 (via OpenAIChatSession._is_reasoning_compatible_model).
  • Defaults: when unset/invalid, effort falls back to medium; controls are disabled for incompatible models.
  • API requests: the reasoning parameter is respected in both normal and retry code paths.

Version 0.5

Added

  • OpenAI client initialization and wrappers in openai_chat_app.py:
    • OpenAIChatSession.__init__ now creates an OpenAI client.
    • _ClientWrapper, _ModelsWrapper, _TokenCountResult provide client.models.count_tokens(...) via tiktoken.
  • Token counting utilities:
    • _get_encoding_for_model() with fallbacks (o200k_base → cl100k_base).
    • OpenAIChatSession.update_token_count(history) to compute “Tokens in context”.
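The fallback chain in _get_encoding_for_model() can be sketched as follows. The two loader callables are injected so the sketch runs without tiktoken installed; in the app they would be tiktoken.encoding_for_model and tiktoken.get_encoding.

```python
# Sketch of the encoding-resolution fallback. Only the order
# (model-specific, then o200k_base, then cl100k_base) is from the changelog.
def pick_encoding(model_name, encoding_for_model, get_encoding):
    """Resolve a tokenizer encoding: model-specific first, then o200k_base,
    then cl100k_base. The callables stand in for tiktoken's loaders."""
    try:
        return encoding_for_model(model_name)
    except KeyError:
        pass  # unknown model name; fall through to named encodings
    for name in ("o200k_base", "cl100k_base"):
        try:
            return get_encoding(name)
        except (KeyError, ValueError):
            continue
    raise RuntimeError("no usable tokenizer encoding found")
```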

Changed

  • Message sending and conversation threading:
    • OpenAIChatSession.send_message_to_OpenAI_API(prompt, history=None) implemented using client.responses.create(...), store=True, and previous_response_id threading.
    • Robust reply extraction using response.output_text with fallback parsing.
    • Role mapping for history replay ('model' → 'assistant').
  • Conversation integration:
    • Conversation.send_message() passes history=self.chat_history and updates total tokens after each send.
  • Model loading:
    • NewConversationDialog._load_models() fetches models via OpenAI(api_key).models.list() and populates the combobox.

Fixed

  • Multi-turn 400 error (“previous_response_not_found”):
    • Set store=True to persist responses for previous_response_id references.
    • Added retry fallback: on missing previous response, resend full history + prompt without previous_response_id.
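The retry fallback can be sketched as below; the `create` callable stands in for client.responses.create so the sketch is runnable offline, and detecting the error by substring match is an assumption about how the app inspects the API exception.

```python
# Sketch of the send-with-retry path. store=True and the retry-without-
# previous_response_id behavior are from the changelog; names are illustrative.
def send_with_retry(create, model, prompt, history, previous_response_id=None):
    """Send via the Responses API, retrying once with the full history if
    the stored previous response can no longer be found."""
    kwargs = {"model": model, "input": prompt, "store": True}
    if previous_response_id:
        kwargs["previous_response_id"] = previous_response_id
    try:
        return create(**kwargs)
    except Exception as exc:
        if previous_response_id and "previous_response_not_found" in str(exc):
            # Retry unthreaded: replay the full history plus the new prompt.
            kwargs.pop("previous_response_id")
            kwargs["input"] = history + [{"role": "user", "content": prompt}]
            return create(**kwargs)
        raise
```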

Notes

  • GUI unchanged; all integration honors existing widgets and flows.
  • Token counts are local approximations using tiktoken but consistent across:
    • Selected message tokens (_on_treeview_message_select()).
    • Prompt tokens (_update_prompt_token_count()).
    • Total tokens in context (top bar via OpenAIChatSession.token_count).

Version 0.4

Initial commit.