feat(langchain): Record run_name in on_chat_model_start (#5924)
Base branch: `webb/langchain/agent-name`
Changes from all commits: 87ed060, ea94bfc, cd08d96, 568e6f7, ed3e824, 1b6ddfa, 52eb5c3, 412af15, 5bddf72, 1efa748, a566ced, fb388a9, efc9460
Diff in the LangChain integration:

```diff
@@ -432,6 +432,13 @@ def on_chat_model_start(
                     SPANDATA.GEN_AI_AGENT_NAME, agent_metadata["lc_agent_name"]
                 )
 
+            run_name = kwargs.get("name")
+            if run_name:
+                span.set_data(
+                    SPANDATA.GEN_AI_FUNCTION_ID,
+                    run_name,
+                )
+
```
Comment on lines +435 to +440 (Cursor Bugbot):

**Inconsistent span attribute for run_name across callbacks** (Medium severity)

The user-provided `run_name` is stored as `gen_ai.function_id` for chat models but as `gen_ai.pipeline.name` for LLMs, creating inconsistent tracing data.

Suggested fix: in `on_chat_model_start`, store the `run_name` under the same key used in `on_llm_start`, which is `SPANDATA.GEN_AI_PIPELINE_NAME` (`gen_ai.pipeline.name`), instead of `SPANDATA.GEN_AI_FUNCTION_ID`, to ensure consistency.

Additional locations: 1. Reviewed by Cursor Bugbot for commit efc9460.
The hunk continues:

```diff
             for key, attribute in DATA_FIELDS.items():
                 if key in all_params and all_params[key] is not None:
                     set_data_normalized(span, attribute, all_params[key], unpack=False)
```
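The added block can be sketched in isolation. This is a minimal, hypothetical stand-in, not the Sentry SDK itself: `Span` is a stub that only collects data attributes, and the literal string `"gen_ai.function_id"` stands in for the value of `SPANDATA.GEN_AI_FUNCTION_ID`.

```python
class Span:
    """Tiny stand-in for the SDK span: just collects data attributes."""

    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value


def on_chat_model_start(span, **kwargs):
    # LangChain forwards a user-supplied run name (e.g. set via
    # .with_config(run_name="...")) in the callback kwargs under "name".
    run_name = kwargs.get("name")
    if run_name:
        span.set_data("gen_ai.function_id", run_name)


span = Span()
on_chat_model_start(span, name="summarize-step")
print(span.data)  # {'gen_ai.function_id': 'summarize-step'}
```

When no `name` is present in the kwargs, the span is left untouched, which matches the `if run_name:` guard in the diff.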
Diff in the chat completions test fixtures:
@@ -1265,26 +1265,31 @@ def streaming_chat_completions_model_response(): | |
|
|
||
| @pytest.fixture | ||
| def nonstreaming_chat_completions_model_response(): | ||
| return openai.types.chat.ChatCompletion( | ||
| id="chatcmpl-test", | ||
| choices=[ | ||
| openai.types.chat.chat_completion.Choice( | ||
| index=0, | ||
| finish_reason="stop", | ||
| message=openai.types.chat.ChatCompletionMessage( | ||
| role="assistant", content="Test response" | ||
| ), | ||
| ) | ||
| ], | ||
| created=1234567890, | ||
| model="gpt-3.5-turbo", | ||
| object="chat.completion", | ||
| usage=openai.types.CompletionUsage( | ||
| prompt_tokens=10, | ||
| completion_tokens=20, | ||
| total_tokens=30, | ||
| ), | ||
| ) | ||
| def inner( | ||
| response_id: str, response_model: str, message_content: str, created: int | ||
| ): | ||
| return openai.types.chat.ChatCompletion( | ||
| id=response_id, | ||
| choices=[ | ||
| openai.types.chat.chat_completion.Choice( | ||
| index=0, | ||
| finish_reason="stop", | ||
| message=openai.types.chat.ChatCompletionMessage( | ||
| role="assistant", content=message_content | ||
| ), | ||
| ) | ||
| ], | ||
| created=created, | ||
| model=response_model, | ||
| object="chat.completion", | ||
| usage=openai.types.CompletionUsage( | ||
| prompt_tokens=10, | ||
| completion_tokens=20, | ||
| total_tokens=30, | ||
| ), | ||
|
Comment (Cursor Bugbot):

**Swapped token counts break existing test assertions** (High severity)

The fixture uses …

Reviewed by Cursor Bugbot for commit a566ced.
The hunk continues:

```diff
+        )
+
+    return inner
 
 
 @pytest.fixture
```
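The diff converts a fixture that returned one fixed `ChatCompletion` into a "factory fixture" that returns an inner function, so each test can choose the id, model, message content, and timestamp. A self-contained sketch of that pattern, with the `openai.types` objects replaced by a plain dict builder so it runs without the library (in the real fixture the outer function is decorated with `@pytest.fixture`):

```python
def nonstreaming_chat_completions_model_response():
    # Factory fixture: return a builder instead of a fixed response object.
    def inner(response_id, response_model, message_content, created):
        return {
            "id": response_id,
            "choices": [
                {
                    "index": 0,
                    "finish_reason": "stop",
                    "message": {"role": "assistant", "content": message_content},
                }
            ],
            "created": created,
            "model": response_model,
            "object": "chat.completion",
            "usage": {
                "prompt_tokens": 10,
                "completion_tokens": 20,
                "total_tokens": 30,
            },
        }

    return inner


make_response = nonstreaming_chat_completions_model_response()
resp = make_response("chatcmpl-test", "gpt-3.5-turbo", "Test response", 1234567890)
print(resp["model"], resp["usage"]["total_tokens"])  # gpt-3.5-turbo 30
```

Note the token counts stay hard-coded in the factory, which is exactly the spot the High-severity review comment flags as sensitive to existing assertions.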


There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Bug: The user-provided
run_nameis stored asgen_ai.function_idfor chat models but asgen_ai.pipeline.namefor LLMs, creating inconsistent tracing data.Severity: MEDIUM
Suggested Fix
In
on_chat_model_start, store therun_nameunder the same key used inon_llm_start, which isSPANDATA.GEN_AI_PIPELINE_NAME(gen_ai.pipeline.name), instead ofSPANDATA.GEN_AI_FUNCTION_IDto ensure consistency.Prompt for AI Agent