AGENTS.md (6 additions, 6 deletions)
@@ -4,9 +4,9 @@ This document provides comprehensive instructions for coding agents working on t
 
 ## Overview
 
-This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API (and compatible APIs like Azure OpenAI and Ollama) to generate chat completions. The repository includes examples of:
+This repository contains a collection of Python scripts that demonstrate how to use the OpenAI Responses API (and compatible APIs like Azure OpenAI and Ollama). The repository includes examples of:
 - Function calling (basic to advanced multi-function scenarios)
 - Structured outputs using Pydantic models
 - Retrieval-Augmented Generation (RAG) with various complexity levels
@@ -20,10 +20,10 @@ The scripts are designed to be educational and can run with multiple LLM provide
 
 All example scripts are located in the root directory. They follow a consistent pattern of setting up an OpenAI client based on environment variables, then demonstrating specific API features.
 
-**Chat Completion Scripts:**
-- `chat.py` - Simple chat completion example
-- `chat_stream.py` - Streaming chat completions
-- `chat_async.py` - Async chat completions with `asyncio.gather` examples
+**Chat Scripts:**
+- `chat.py` - Simple response example
+- `chat_stream.py` - Streaming responses
+- `chat_async.py` - Async responses with `asyncio.gather` examples
 - `chat_history.py` - Multi-turn chat with message history
 - `chat_history_stream.py` - Multi-turn chat with streaming
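The concurrency idea behind `chat_async.py` can be sketched with a stand-in coroutine. In the real script each request is an awaited API call on an async client; `ask()` below is a hypothetical placeholder so the sketch runs offline, and `asyncio.gather` is the part the script actually demonstrates.

```python
import asyncio

async def ask(prompt: str) -> str:
    # Placeholder for an awaited Responses API round-trip;
    # asyncio.sleep(0) yields control the way a network call would.
    await asyncio.sleep(0)
    return f"answer to: {prompt}"

async def main() -> list[str]:
    prompts = ["What is RAG?", "What is function calling?"]
    # gather() runs all requests concurrently and preserves input order.
    return list(await asyncio.gather(*(ask(p) for p in prompts)))

answers = asyncio.run(main())
print(answers)
```

Because `gather` preserves order, the results line up with the prompts even though the requests overlap in time.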
@@ -17,14 +17,14 @@ This repository contains a collection of Python scripts that demonstrate how to
 
 ## Examples
 
-### OpenAI Chat Completions
+### OpenAI Responses
 
-These scripts use the openai Python package to demonstrate how to use the OpenAI Chat Completions API.
+These scripts use the openai Python package to demonstrate how to use the OpenAI Responses API.
 In increasing order of complexity, the scripts are:
 
-1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI API to generate chat completions.
-2. [`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the completion as it is being generated.
-3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call.
+1. [`chat.py`](./chat.py): A simple script that demonstrates how to use the OpenAI Responses API to generate a response.
+2. [`chat_stream.py`](./chat_stream.py): Adds `stream=True` to the API call to return a generator that streams the response text as it is being generated.
+3. [`chat_history.py`](./chat_history.py): Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call.
 4. [`chat_history_stream.py`](./chat_history_stream.py): The same idea, but with `stream=True` enabled.
 
 Plus these scripts to demonstrate additional features:
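The history-keeping loop that `chat_history.py` demonstrates boils down to appending each turn to a list and resending the whole list. A sketch with a stand-in for the API call (`send()` is hypothetical; the real script sends the accumulated messages to the model, and the fixed question list stands in for `input()`):

```python
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def send(history):
    # Placeholder for the real API call made with the full history;
    # here it just echoes the most recent user turn.
    return f"You said: {history[-1]['content']}"

for question in ["Hi!", "What did I just say?"]:  # stands in for input()
    messages.append({"role": "user", "content": question})
    answer = send(messages)  # the model sees the entire conversation
    messages.append({"role": "assistant", "content": answer})

print(len(messages))  # system prompt plus two user/assistant pairs
```

Resending the full list on every call is what lets the model answer questions like "What did I just say?"; the API itself is stateless between calls in these scripts.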
@@ -34,9 +34,9 @@ Plus these scripts to demonstrate additional features:
 
 ### Function calling
 
-These scripts demonstrate using the Chat Completions API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer.
+These scripts demonstrate using the Responses API "tools" (a.k.a. function calling) feature, which lets the model decide when to call developer-defined functions and return structured arguments instead of (or before) a natural language answer.
 
-In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with `message.tool_calls` containing one or more tool calls. Each tool call includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer.
+In all of these examples, a list of functions is declared in the `tools` parameter. The model may respond with one or more tool calls as items in `response.output` (for example, items where `type == "function_call"`). Each tool call item includes the function `name` and a JSON string of `arguments` that match the declared schema. Your application is responsible for: (1) detecting tool calls, (2) executing the corresponding local / external logic, and (3) (optionally) sending the tool result back to the model for a final answer.
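The three responsibilities listed above can be sketched against a hand-written output item. The dict below imitates one element of `response.output`; the tool name `lookup_weather` and its behavior are hypothetical, and a real app would send `follow_up` back to the model for the final answer.

```python
import json

def lookup_weather(city: str) -> str:  # hypothetical local function
    return f"Sunny in {city}"

handlers = {"lookup_weather": lookup_weather}

item = {  # stand-in for one element of response.output
    "type": "function_call",
    "name": "lookup_weather",
    "call_id": "call_123",
    "arguments": '{"city": "Berlin"}',
}

if item["type"] == "function_call":          # (1) detect the tool call
    args = json.loads(item["arguments"])     # arguments arrive as a JSON string
    result = handlers[item["name"]](**args)  # (2) run the matching local logic
    follow_up = {                            # (3) tool result to send back
        "type": "function_call_output",
        "call_id": item["call_id"],
        "output": result,
    }
    print(follow_up["output"])
```

Note that `arguments` is a string, not a parsed object, which is why the `json.loads` step is always needed before dispatching.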
 Then run the scripts (in order of increasing complexity):
 
 * [`rag_csv.py`](./rag_csv.py): Retrieves matching results from a CSV file and uses them to answer the user's question.
-* [`rag_multiturn.py`](./rag_multiturn.py): The same idea, but with a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call.
+* [`rag_multiturn.py`](./rag_multiturn.py): The same idea, but with a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each API call.
 * [`rag_queryrewrite.py`](./rag_queryrewrite.py): Adds a query rewriting step to the RAG process, where the user's question is rewritten to improve the retrieval results.
 * [`rag_documents_ingestion.py`](./rag_documents_ingestion.py): Ingests PDFs by using pymupdf to convert to markdown, then using Langchain to split into chunks, then using OpenAI to embed the chunks, and finally storing in a local JSON file.
 * [`rag_documents_flow.py`](./rag_documents_flow.py): A RAG flow that retrieves matching results from the local JSON file created by `rag_documents_ingestion.py`.
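The retrieve-then-answer flow of `rag_csv.py` can be sketched like this. The CSV rows and the word-overlap scoring are illustrative stand-ins for the repository's actual data and ranking, and the final grounded model call is left as a comment.

```python
import csv
import io

# Toy data standing in for the repository's CSV file.
CSV_DATA = """name,summary
hybrid,A car with both a gas engine and an electric motor
diesel,A car powered by a diesel engine
"""

def retrieve(question: str, rows):
    # Naive retrieval: keep rows whose summary shares a word with the question.
    words = set(question.lower().split())
    return [r for r in rows if words & set(r["summary"].lower().split())]

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
matches = retrieve("which electric motor cars?", rows)

# The matches get pasted into the prompt so the answer stays grounded.
prompt = "Answer using only these sources:\n" + "\n".join(
    f"- {m['name']}: {m['summary']}" for m in matches
)
# client.responses.create(model=..., input=prompt)  # final grounded call
print(prompt)
```

The later scripts keep this same shape and swap in better pieces: query rewriting before `retrieve()`, embeddings instead of word overlap, and a JSON chunk store instead of the CSV.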
spanish/README.md (8 additions, 8 deletions)
@@ -1,9 +1,9 @@
 # Python demos with OpenAI
 
-This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API (and compatible models) to generate chat completions. 📺 [Video tutorial on how to use this repository](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL)
+This repository contains a collection of Python scripts that demonstrate how to use the OpenAI Responses API (and compatible models). 📺 [Video tutorial on how to use this repository](https://youtu.be/0WwpMFMHEOo?si=9K4jFdBYdj-kb_GL)
 
 * [Examples](#ejemplos)
-* [OpenAI chat completions](#completados-de-chat-de-openai)
+* [OpenAI Responses](#responses-de-openai)
 * [Function calling](#llamadas-a-funciones-function-calling)
 * [Retrieval-Augmented Generation (RAG)](#generación-aumentada-con-recuperación-rag)
 * [Structured outputs](#salidas-estructuradas)
@@ -16,11 +16,11 @@ This repository contains a collection of Python scripts that demonstrate how
 
 ## Examples
 
-### OpenAI chat completions
+### OpenAI Responses
 
-These scripts use the `openai` Python package to demonstrate how to use the Chat Completions API. In increasing order of complexity:
-1. [`chat.py`](chat.py): A simple script that shows how to generate a chat completion.
-2. [`chat_stream.py`](chat_stream.py): Adds `stream=True` to receive the completion incrementally.
+These scripts use the `openai` Python package to demonstrate how to use the Responses API. In increasing order of complexity:
+1. [`chat.py`](chat.py): A simple script that shows how to generate a response.
+2. [`chat_stream.py`](chat_stream.py): Adds `stream=True` to receive the response incrementally.
 3. [`chat_history.py`](chat_history.py): Adds a back-and-forth chat that keeps the history and resends it with each call.
 4. [`chat_history_stream.py`](chat_history_stream.py): Same as the previous one, but with `stream=True` as well.
 
@@ -32,9 +32,9 @@ Additional feature scripts:
 
 ### Function calling
 
-These scripts show how to use the "tools" (function calling) feature of the Chat Completions API. It lets the model decide whether to invoke developer-defined functions and return structured arguments instead of (or before) a natural-language answer.
+These scripts show how to use the "tools" (function calling) feature of the Responses API. It lets the model decide whether to invoke developer-defined functions and return structured arguments instead of (or before) a natural-language answer.
 
-In all the examples, a list of functions is declared in the `tools` parameter. The model may respond with `message.tool_calls`, which contains one or more calls. Each call includes the function's `name` and a JSON string of `arguments` that follow the declared schema. Your application must: (1) detect the calls, (2) run the corresponding local/external logic, and (3) (optionally) send the tool result back to the model for a final answer.
+In all the examples, a list of functions is declared in the `tools` parameter. In these Responses demos, tool calls appear in `response.output`, for example as items with `type == "function_call"`. Each of those calls includes the function's `name` and a JSON string of `arguments` that follow the declared schema. Your application must: (1) detect the calls, (2) run the corresponding local/external logic, and (3) (optionally) send the tool result back to the model for a final answer.