test(langchain): Add tool execution test #5739
alexander-alderman-webb wants to merge 11 commits into webb/langchain/add-basic-test from
Conversation
Semver Impact of This PR: 🟢 Patch (bug fixes)

📋 Changelog Preview
This is how your changes will appear in the changelog.
- New Features ✨
- Bug Fixes 🐛: Anthropic
- Documentation 📚
- Internal Changes 🔧: Langchain

🤖 This preview updates automatically when you update the PR.
Codecov Results 📊
✅ 13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 7.97s
All tests are passing successfully. ✅
Patch coverage is 100.00%. Project has 14381 uncovered lines.
Generated by Codecov Action
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Redundant assertion contradicts comment and prior assertion
- I removed the contradictory comment and the redundant weaker assertion because the existing exact-count assertion already covers this condition.
Or push these changes by commenting:
@cursor push f520920c95
Preview (f520920c95)
diff --git a/tests/integrations/langchain/test_langchain.py b/tests/integrations/langchain/test_langchain.py
--- a/tests/integrations/langchain/test_langchain.py
+++ b/tests/integrations/langchain/test_langchain.py
@@ -336,9 +336,6 @@
     assert chat_spans[1]["origin"] == "auto.ai.langchain"
     assert tool_exec_span["origin"] == "auto.ai.langchain"
-    # We can't guarantee anything about the "shape" of the langchain execution graph
-    assert len(list(x for x in tx["spans"] if x["op"] == "gen_ai.chat")) > 0
-
     # Token usage is only available in newer versions of langchain (v0.2+)
     # where usage_metadata is supported on AIMessageChunk
     if "gen_ai.usage.input_tokens" in chat_spans[0]["data"]:
ericapisani left a comment
Non-blocking questions, LGTM otherwise
        "Tool calls should be recorded when send_default_pii=True and include_prompts=True"
    )
    tool_calls_data = chat_spans[0]["data"][SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS]
    assert isinstance(tool_calls_data, (list, str))  # Could be serialized
A loose assertion like this (checking whether the value is a list or a str) makes me a bit nervous in a test where we control everything; we should be certain about what value we can expect back in the response.
Is the serialization mentioned here controlled by the value of send_default_pii or include_prompts?
Thanks, this was copy-pasted from another test.
But I 100% agree we shouldn't be spreading a bad pattern. The value is always a string, and the test now asserts the exact type: 09673b7
Cursor Bugbot has reviewed your changes and found 1 potential issue.


Description
Add a test that uses langchain functionality introduced in v1.0 of the library. Re-use the Responses API response with a tool call request from the openai-agents test.

Issues

Reminders
- Run tox -e linters.
- Name your PR with one of the prefixes feat:, fix:, ref:, meta: