Add agent thought process outputs #1756
thepatrickchin wants to merge 12 commits into NVIDIA:develop
Conversation
Note: Reviews paused. This branch appears to be under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments on an influx of new commits. This behavior can be changed in the review settings.
Walkthrough

Adds a thought-emission feature: new emit_thought APIs and tests, Function-level thought_description metadata propagated during invocation, StepAdaptor extraction/display of thought_text for LLM/TOOL/FUNCTION/SPAN events, corresponding data model fields, docs, and LangChain callback/factory wiring.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Developer
    participant LangChain as LangChain Agent
    participant Context as NAT Context
    participant StepAdaptor as FastAPI Step Adaptor
    participant UI as User Interface
    Developer->>LangChain: Invoke agent
    LangChain->>Context: Emit LLM_START / NEW_TOKEN / LLM_END
    Context->>StepAdaptor: Forward events
    StepAdaptor->>UI: Display placeholder or extracted thought_text
    LangChain->>Context: Emit FUNCTION_START (metadata includes thought_description)
    Context->>StepAdaptor: Process FUNCTION_START (derive thought_text)
    StepAdaptor->>UI: Display thought_description
    LangChain->>Context: Emit FUNCTION_END
    StepAdaptor->>UI: Display function completion thought
```

```mermaid
sequenceDiagram
    participant Function as Function Code
    participant EmitAPI as Thought Emit API
    participant Manager as Intermediate Step Manager
    participant UI as User Interface
    Function->>EmitAPI: emit_thought(context, "Processing")
    EmitAPI->>Manager: Push SPAN_START (uuid)
    EmitAPI->>Manager: Push SPAN_END (same uuid)
    Manager->>UI: Show complete thought
    Function->>EmitAPI: id = emit_thought_start(context, "Starting stream")
    EmitAPI->>Manager: Push SPAN_START (id)
    Function->>EmitAPI: emit_thought_chunk(context, id, "50%")
    EmitAPI->>Manager: Push SPAN_CHUNK (id)
    Manager->>UI: Update streaming thought
    Function->>EmitAPI: emit_thought_end(context, id, "Done")
    EmitAPI->>Manager: Push SPAN_END (id)
    Manager->>UI: Finalize thought
```
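The second diagram's start/chunk/end flow can be sketched with a minimal, self-contained stand-in. Note that `StepPayload`, `StepManager`, and the `emit_*` helpers below are simplified illustrations of the pattern, not the actual NAT API:

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class StepPayload:
    """Simplified stand-in for IntermediateStepPayload."""
    event_type: str  # "SPAN_START", "SPAN_CHUNK", or "SPAN_END"
    name: str
    text: str
    UUID: str


@dataclass
class StepManager:
    """Simplified stand-in for the intermediate step manager."""
    steps: list = field(default_factory=list)

    def push(self, payload: StepPayload) -> None:
        self.steps.append(payload)


def emit_thought(manager: StepManager, text: str, name: str = "custom_thought") -> str:
    """Emit a complete thought: SPAN_START and SPAN_END share one UUID."""
    step_id = str(uuid.uuid4())
    manager.push(StepPayload("SPAN_START", name, text, step_id))
    manager.push(StepPayload("SPAN_END", name, text, step_id))
    return step_id


def emit_thought_start(manager: StepManager, text: str, name: str = "custom_thought") -> str:
    """Open a streaming thought and return the id later calls must reuse."""
    step_id = str(uuid.uuid4())
    manager.push(StepPayload("SPAN_START", name, text, step_id))
    return step_id


def emit_thought_chunk(manager: StepManager, step_id: str, text: str,
                       name: str = "custom_thought") -> None:
    manager.push(StepPayload("SPAN_CHUNK", name, text, step_id))


def emit_thought_end(manager: StepManager, step_id: str, text: str,
                     name: str = "custom_thought") -> None:
    manager.push(StepPayload("SPAN_END", name, text, step_id))


manager = StepManager()
sid = emit_thought_start(manager, "Starting stream")
emit_thought_chunk(manager, sid, "50%")
emit_thought_end(manager, sid, "Done")
assert [p.event_type for p in manager.steps] == ["SPAN_START", "SPAN_CHUNK", "SPAN_END"]
assert len({p.UUID for p in manager.steps}) == 1  # all three events share one UUID
```

The key invariant the diagram encodes is that every chunk and end event carries the UUID returned by the start call, which is what lets the UI update one streaming thought in place.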
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes. 🚥 Pre-merge checks: ✅ 5 of 5 passed.
Force-pushed from e3159ed to 1925cb8.
Actionable comments posted: 6
🧹 Nitpick comments (2)
packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py (1)
29-78: Switch these fixtures to explicit names and `fixture_`-prefixed implementations. The fixtures work, but this block doesn't follow the repository pytest convention. Please expose the current fixture names via `@pytest.fixture(name="...")` and rename the implementation functions to `fixture_...`. As per coding guidelines, "Pytest fixtures should define the name argument when applying the pytest.fixture decorator. The fixture function being decorated should be named using the `fixture_` prefix, using snake_case."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py` around lines 29-78: rename each fixture implementation to use the fixture_ prefix and set an explicit name in the pytest.fixture decorator; e.g. change the function default_config() to fixture_default_config() and decorate with @pytest.fixture(name="default_config"), do the same for custom_config -> fixture_custom_config, disabled_config -> fixture_disabled_config, step_adaptor_default -> fixture_step_adaptor_default, step_adaptor_custom -> fixture_step_adaptor_custom, step_adaptor_disabled -> fixture_step_adaptor_disabled, and make_intermediate_step -> fixture_make_intermediate_step; ensure each decorator uses the name= argument and the returned objects (StepAdaptorConfig, StepAdaptor, IntermediateStepType, StepAdaptorMode) remain unchanged.

packages/nvidia_nat_core/tests/nat/builder/test_thought.py (1)
49-75: Use the repo fixture naming convention here. The exported fixture names are fine, but the implementation functions should use the `fixture_...` prefix rather than the `_fixture` suffix to stay aligned with the test conventions used elsewhere in the repo. As per coding guidelines, "Pytest fixtures should define the name argument when applying the pytest.fixture decorator. The fixture function being decorated should be named using the `fixture_` prefix, using snake_case."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/builder/test_thought.py` around lines 49 - 75, Rename the fixture implementation functions to use the fixture_ prefix (e.g., change ctx_state_fixture to fixture_ctx_state, output_steps_fixture to fixture_output_steps, and ctx_fixture to fixture_ctx) while keeping the existing pytest.fixture(name="...") decorators as-is; update only the function names for the fixtures that define ContextState, output_steps, and ctx (referenced by the decorators pytest.fixture(name="ctx_state"), pytest.fixture(name="output_steps"), and pytest.fixture(name="ctx")) so the exported fixture names remain unchanged but the implementation follows the repo naming convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/source/build-workflows/workflow-configuration.md`:
- Line 78: Reword the sentence to avoid possessive "'s" with inanimate objects:
replace "that function's configuration" and "the tool's help text" with
non-possessive phrasing (e.g., "the configuration for that function" and "help
text for the tool"); update the sentence that mentions thought_description and
description so it reads like "set `thought_description` in the configuration for
that function" and "Unlike `description`, which provides help text for the
tool..." ensuring `thought_description` and `description` remain referenced
exactly as shown.
In `@packages/nvidia_nat_core/src/nat/builder/thought.py`:
- Around line 100-104: emit_thought_chunk currently hardcodes
name="custom_thought" which desyncs from emit_thought_start's custom name;
update emit_thought_chunk to use the same name provided at start (or read the
stored name from the thought's intermediate step in context) instead of the
literal string so IntermediateStepPayload(...,
event_type=IntermediateStepType.SPAN_CHUNK, name=...) uses the real thought
name; adjust the emit_thought_chunk signature to accept a name parameter if
needed (or fetch it from context/intermediate_step_manager) and pass that value
into the IntermediateStepPayload.
- Around line 115-119: emit_thought_end currently hardcodes
name="custom_thought" which breaks consistency with emit_thought_start; change
emit_thought_end to accept a name parameter (e.g., thought_name or name) and use
that value when constructing IntermediateStepPayload
(IntermediateStepPayload(... name=passed_name ...)), and update all call sites
to pass the same name used in emit_thought_start so start/end names match;
ensure the parameter is optional with a sensible default if needed to preserve
backward compatibility.
In `@packages/nvidia_nat_core/tests/nat/builder/test_thought.py`:
- Around line 219-242: The test lacks assertions for the END events — verify
that calling emit_thought_end(ctx, uuid2) and emit_thought_end(ctx, uuid1)
produces two END steps with payload.UUID matching uuid2 then uuid1 (ensuring
LIFO pop/order is correct) by clearing/inspecting output_steps after the end
calls and asserting len == 2 and output_steps[0].payload.UUID == uuid2 and
output_steps[1].payload.UUID == uuid1; use the same output_steps and
payload.UUID checks used for START/CHUNK to locate the relevant assertions and
functions (emit_thought_start, emit_thought_chunk, emit_thought_end).
In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py`:
- Line 673: The test functions are receiving an unused fixture parameter
`make_intermediate_step` (e.g.,
test_function_start_with_thought_description_override) which is lint-noise;
remove the `make_intermediate_step` parameter from the signature of that test
and the three other tests mentioned so they construct IntermediateStepPayload
manually as they already do, then run ruff check --fix to clear the lint warning
and ensure signatures are updated consistently (adjust any references/imports if
present).
In
`@packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py`:
- Around line 116-120: The shared LangchainProfilerHandler() is being created
when building the ReActAgentGraph (graph) so its mutable dicts
(_run_id_to_model_name, _run_id_to_llm_input, _run_id_to_tool_input,
_run_id_to_start_time) are shared across concurrent _response_fn() calls causing
races and leaks; fix by instantiating LangchainProfilerHandler per request when
invoking the graph (i.e., pass a new LangchainProfilerHandler() in the callbacks
argument at graph.ainvoke() / per-invocation config rather than at
ReActAgentGraph construction) or, if you must reuse a single instance, wrap
every mutation of those dicts with the existing _lock (not just last_call_ts)
and add explicit cleanup (e.g., an on_chain_end handler to delete run_id
entries) to prevent unbounded growth.
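The hazard and the fix described here can be illustrated with a stripped-down stand-in; `ProfilerHandler` below is hypothetical and not the real LangchainProfilerHandler:

```python
class ProfilerHandler:
    """Hypothetical callback handler with per-run mutable state."""

    def __init__(self):
        self._run_id_to_model_name: dict[str, str] = {}

    def on_llm_start(self, run_id: str, model: str) -> None:
        self._run_id_to_model_name[run_id] = model


# Anti-pattern: one handler shared by all requests accumulates entries
# forever and is mutated by concurrent invocations.
shared = ProfilerHandler()
for i in range(3):
    shared.on_llm_start(f"run-{i}", "llama-3")
assert len(shared._run_id_to_model_name) == 3  # grows without cleanup


# Fix: build a fresh handler per invocation, so state is isolated and
# garbage-collected when the request finishes.
def handle_request(run_id: str) -> ProfilerHandler:
    handler = ProfilerHandler()  # new instance per request
    handler.on_llm_start(run_id, "llama-3")
    return handler


h1, h2 = handle_request("run-a"), handle_request("run-b")
assert h1._run_id_to_model_name == {"run-a": "llama-3"}
assert h2._run_id_to_model_name == {"run-b": "llama-3"}  # no cross-request state
```

The per-invocation variant is what the comment recommends: pass a newly constructed handler in the callbacks config at invoke time rather than baking one instance into the graph at construction.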
---
Nitpick comments:
In `@packages/nvidia_nat_core/tests/nat/builder/test_thought.py`:
- Around line 49-75: Rename the fixture implementation functions to use the
fixture_ prefix (e.g., change ctx_state_fixture to fixture_ctx_state,
output_steps_fixture to fixture_output_steps, and ctx_fixture to fixture_ctx)
while keeping the existing pytest.fixture(name="...") decorators as-is; update
only the function names for the fixtures that define ContextState, output_steps,
and ctx (referenced by the decorators pytest.fixture(name="ctx_state"),
pytest.fixture(name="output_steps"), and pytest.fixture(name="ctx")) so the
exported fixture names remain unchanged but the implementation follows the repo
naming convention.
In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py`:
- Around line 29-78: Rename each fixture implementation to use the fixture_
prefix and set an explicit name in the pytest.fixture decorator; e.g. change the
function default_config() to fixture_default_config() and decorate with
`@pytest.fixture`(name="default_config"), do the same for custom_config ->
fixture_custom_config, disabled_config -> fixture_disabled_config,
step_adaptor_default -> fixture_step_adaptor_default, step_adaptor_custom ->
fixture_step_adaptor_custom, step_adaptor_disabled ->
fixture_step_adaptor_disabled, and make_intermediate_step ->
fixture_make_intermediate_step; ensure each decorator uses the name= argument
and the returned objects (StepAdaptorConfig, StepAdaptor, IntermediateStepType,
StepAdaptorMode) remain unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b19d4b04-162c-4227-bf14-3a6fc4bb6475
📒 Files selected for processing (11)
- docs/source/build-workflows/workflow-configuration.md
- docs/source/extend/custom-components/custom-functions/functions.md
- packages/nvidia_nat_core/src/nat/builder/function.py
- packages/nvidia_nat_core/src/nat/builder/thought.py
- packages/nvidia_nat_core/src/nat/data_models/api_server.py
- packages/nvidia_nat_core/src/nat/data_models/function.py
- packages/nvidia_nat_core/src/nat/front_ends/fastapi/message_validator.py
- packages/nvidia_nat_core/src/nat/front_ends/fastapi/step_adaptor.py
- packages/nvidia_nat_core/tests/nat/builder/test_thought.py
- packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
Outdated comments (resolved):
- packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
🧹 Nitpick comments (1)
packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py (1)
812-877: Consider adding a test for SPAN_END with thought_text. The tests cover SPAN_START and SPAN_CHUNK events, but there's no explicit test for SPAN_END with data.output containing thought text. Based on the _handle_span implementation, SPAN_END extracts thought_text from data.output, which should be verified.

📝 Suggested test case

```python
def test_span_end_with_custom_thought(step_adaptor_default):
    """Test that SPAN_END events with thought_text in data.output are processed correctly."""
    uuid = "span-uuid-end"

    # Process start event first
    payload_start = IntermediateStepPayload(
        event_type=IntermediateStepType.SPAN_START,
        name="custom_thought",
        data=StreamEventData(input="Processing..."),
        UUID=uuid,
    )
    step_start = IntermediateStep(
        parent_id="root",
        function_ancestry=InvocationNode(parent_id="abc", function_id="def", function_name="xyz"),
        payload=payload_start,
    )
    step_adaptor_default.process(step_start)

    # Process end event with output
    final_thought = "Processing complete!"
    payload_end = IntermediateStepPayload(
        event_type=IntermediateStepType.SPAN_END,
        name="custom_thought",
        data=StreamEventData(output=final_thought),
        UUID=uuid,
    )
    step_end = IntermediateStep(
        parent_id="root",
        function_ancestry=InvocationNode(parent_id="abc", function_id="def", function_name="xyz"),
        payload=payload_end,
    )
    result = step_adaptor_default.process(step_end)

    assert result is not None
    assert isinstance(result, ResponseIntermediateStep)
    assert result.thought_text == final_thought
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py` around lines 812 - 877, Add a unit test that verifies SPAN_END events extract thought_text from data.output: create a span UUID, call step_adaptor_default.process(...) with a SPAN_START IntermediateStepPayload (IntermediateStep/InvocationNode as in existing tests) then call process with a SPAN_END IntermediateStepPayload whose StreamEventData has output set to the final thought; assert the returned result is a ResponseIntermediateStep and that result.thought_text equals the output string (mirrors tests for SPAN_START and SPAN_CHUNK and exercises the _handle_span code path for IntermediateStepPayload.event_type == IntermediateStepType.SPAN_END).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py`:
- Around line 812-877: Add a unit test that verifies SPAN_END events extract
thought_text from data.output: create a span UUID, call
step_adaptor_default.process(...) with a SPAN_START IntermediateStepPayload
(IntermediateStep/InvocationNode as in existing tests) then call process with a
SPAN_END IntermediateStepPayload whose StreamEventData has output set to the
final thought; assert the returned result is a ResponseIntermediateStep and that
result.thought_text equals the output string (mirrors tests for SPAN_START and
SPAN_CHUNK and exercises the _handle_span code path for
IntermediateStepPayload.event_type == IntermediateStepType.SPAN_END).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5909c9cd-cd44-4616-89d1-006088642e25
📒 Files selected for processing (5)
- docs/source/build-workflows/workflow-configuration.md
- packages/nvidia_nat_core/src/nat/builder/thought.py
- packages/nvidia_nat_core/tests/nat/builder/test_thought.py
- packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/nvidia_nat_core/src/nat/builder/thought.py
- docs/source/build-workflows/workflow-configuration.md
Force-pushed from f52198a to 27797af.
🧹 Nitpick comments (2)
packages/nvidia_nat_core/src/nat/builder/thought.py (1)
16-27: Add a module docstring for this new public helper module. The functions are documented, but the file itself still has no module-level docstring.

📝 Suggested addition

```diff
+"""Helpers for emitting UI-visible thought updates via intermediate steps."""
+
 import logging
 import typing
 import uuid
```

As per coding guidelines, "Provide Google-style docstrings for every public module, class, function and CLI command."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/src/nat/builder/thought.py` around lines 16-27: add a Google-style module docstring at the top of this public helper module (packages/nvidia_nat_core/src/nat/builder/thought.py) describing the module's purpose, the main public utilities it provides (mentioning symbols like IntermediateStepPayload, IntermediateStepType, StreamEventData and logger), expected usage, and any important notes or side-effects; place it immediately above the imports so it documents the module itself and follows the project's docstring conventions.

packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py (1)
866-909: Please cover the documented SPAN_END / output=None path. emit_thought_end() in packages/nvidia_nat_core/src/nat/builder/thought.py says omitting the final text should keep the last streamed chunk, but these SPAN tests only exercise SPAN_END with an explicit data.output. Adding that edge case here would lock down the contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py` around lines 866 - 909, Add a new test covering the SPAN_END with output=None case: create a span start (use test helper step_adaptor_default and IntermediateStep with IntermediateStepType.SPAN_START and StreamEventData(input="partial thought")) then send a SPAN_END payload with StreamEventData(output=None) (same UUID) and call step_adaptor_default.process on the end step; assert the returned ResponseIntermediateStep (if any) has thought_text equal to the last streamed chunk ("partial thought"). Reference emit_thought_end, IntermediateStep/IntermediateStepPayload, IntermediateStepType.SPAN_END, StreamEventData, and step_adaptor_default.process to locate the relevant logic and mirror the existing test_span_end_with_custom_thought structure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/nvidia_nat_core/src/nat/builder/thought.py`:
- Around line 16-27: Add a Google-style module docstring at the top of this
public helper module (packages/nvidia_nat_core/src/nat/builder/thought.py)
describing the module’s purpose, the main public utilities it provides
(mentioning symbols like IntermediateStepPayload, IntermediateStepType,
StreamEventData and logger), expected usage, and any important notes or
side-effects; place it immediately above the imports so it documents the module
itself and follows the project’s docstring conventions.
In `@packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py`:
- Around line 866-909: Add a new test covering the SPAN_END with output=None
case: create a span start (use test helper step_adaptor_default and
IntermediateStep with IntermediateStepType.SPAN_START and
StreamEventData(input="partial thought")) then send a SPAN_END payload with
StreamEventData(output=None) (same UUID) and call step_adaptor_default.process
on the end step; assert the returned ResponseIntermediateStep (if any) has
thought_text equal to the last streamed chunk ("partial thought"). Reference
emit_thought_end, IntermediateStep/IntermediateStepPayload,
IntermediateStepType.SPAN_END, StreamEventData, and step_adaptor_default.process
to locate the relevant logic and mirror the existing
test_span_end_with_custom_thought structure.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4827aa50-e6d5-40ff-bcd6-f2d9355c7f4a
📒 Files selected for processing (5)
- docs/source/build-workflows/workflow-configuration.md
- packages/nvidia_nat_core/src/nat/builder/thought.py
- packages/nvidia_nat_core/tests/nat/builder/test_thought.py
- packages/nvidia_nat_core/tests/nat/front_ends/fastapi/test_step_adaptor.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/source/build-workflows/workflow-configuration.md
- packages/nvidia_nat_core/tests/nat/builder/test_thought.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/nvidia_nat_langchain/tests/agent/test_base.py (1)
30-46: ⚠️ Potential issue | 🔴 Critical

MockBaseAgent is missing the _runnable_config property, causing test failures. The MockBaseAgent class doesn't define _runnable_config (now a property in BaseAgent) or _make_runnable_config(). Since it doesn't call super().__init__(), tests at lines 133 and 175 that reference base_agent._runnable_config will raise AttributeError.

🐛 Proposed fix to add the missing property
```diff
 class MockBaseAgent(BaseAgent):
     """Mock implementation of BaseAgent for testing."""

     def __init__(self, detailed_logs=True, log_response_max_chars=1000):
         # Create simple mock objects without pydantic restrictions
         self.llm = Mock()
         self.tools = [Mock(), Mock()]
         self.tools[0].name = "Tool A"
         self.tools[1].name = "Tool B"
         self.callbacks = []
         self.detailed_logs = detailed_logs
         self.log_response_max_chars = log_response_max_chars
         self.graph = None
+
+    @property
+    def _runnable_config(self) -> RunnableConfig:
+        return self._make_runnable_config()
+
+    def _make_runnable_config(self) -> RunnableConfig:
+        from langgraph.runtime import DEFAULT_RUNTIME
+        return RunnableConfig(callbacks=[c() for c in self.callbacks],
+                              configurable={"__pregel_runtime": DEFAULT_RUNTIME})

     async def _build_graph(self, state_schema: type) -> CompiledStateGraph:
         """Mock implementation."""
         return Mock(spec=CompiledStateGraph)
```

You'll also need to add RunnableConfig back to the imports: `from langchain_core.runnables import RunnableConfig`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/tests/agent/test_base.py` around lines 30 - 46, MockBaseAgent lacks the _runnable_config property (and the helper _make_runnable_config) which BaseAgent now expects; add a property method named _runnable_config on MockBaseAgent that returns a RunnableConfig instance (or a minimal mock RunnableConfig) and implement a private helper _make_runnable_config() if needed, and restore the import for RunnableConfig from langchain_core.runnables so tests that access base_agent._runnable_config (lines referencing _runnable_config) no longer raise AttributeError.
🧹 Nitpick comments (3)
packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/base.py (2)
90-92: Consider adding a brief docstring to explain the factory pattern. While _make_runnable_config is internal, a short docstring would clarify why callbacks are instantiated fresh on each call (to avoid memory leaks and concurrency issues as noted in the commit message).

📝 Suggested docstring

```diff
 def _make_runnable_config(self) -> RunnableConfig:
+    """
+    Create a fresh RunnableConfig with newly instantiated callbacks.
+
+    Callbacks are instantiated on each call to avoid memory leaks and
+    concurrency issues from shared handler instances.
+    """
     return RunnableConfig(callbacks=[c() for c in self.callbacks],
                           configurable={"__pregel_runtime": DEFAULT_RUNTIME})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/base.py` around lines 90 - 92, Add a short docstring to the internal method _make_runnable_config explaining that it is a factory for RunnableConfig objects and that it instantiates fresh callback instances (from self.callbacks) on every call to avoid shared-state memory leaks and concurrency issues; mention that the returned RunnableConfig also sets the configurable "__pregel_runtime" to DEFAULT_RUNTIME so callers understand the intended runtime default and why the method creates a new RunnableConfig each time.
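The factory behavior that docstring should describe (fresh callback instances on every build) can be sketched in isolation; a plain dict stands in for RunnableConfig here, and `CountingCallback` is a hypothetical handler:

```python
class CountingCallback:
    """Hypothetical stateful callback; sharing one instance across runs would mix state."""

    def __init__(self):
        self.events = []


def make_runnable_config(callback_factories):
    """Build a config with newly instantiated callbacks, mimicking the factory pattern.

    callback_factories holds classes (or zero-arg callables), not instances,
    so each call yields fresh, unshared handlers.
    """
    return {"callbacks": [factory() for factory in callback_factories]}


factories = [CountingCallback]
config_a = make_runnable_config(factories)
config_b = make_runnable_config(factories)

# Each build gets its own callback instance, so per-run state never leaks.
assert config_a["callbacks"][0] is not config_b["callbacks"][0]
```

Storing factories rather than instances is the design choice the docstring is meant to surface: the list on the agent is immutable configuration, while everything mutable is created per build.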
86-92: Inconsistent usage: property exists but internal code calls the method directly. The _runnable_config property (lines 86-88) delegates to _make_runnable_config(), but lines 112, 142, and 167 call _make_runnable_config() directly. For consistency, consider using the property throughout or removing it if it's only for backward compatibility.

♻️ Option 1: Use the property consistently

```diff
-        async for event in runnable.astream(inputs, config=self._make_runnable_config()):
+        async for event in runnable.astream(inputs, config=self._runnable_config):
```

```diff
-        response = await llm.ainvoke(inputs, config=self._make_runnable_config())
+        response = await llm.ainvoke(inputs, config=self._runnable_config)
```

```diff
-        response = await tool.ainvoke(tool_input, config=self._make_runnable_config())
+        response = await tool.ainvoke(tool_input, config=self._runnable_config)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/base.py` around lines 86-92: the code defines a property _runnable_config that wraps _make_runnable_config(), but other places call _make_runnable_config() directly; make usage consistent by replacing direct calls to _make_runnable_config() with the property _runnable_config (or remove the property if you prefer the method), e.g., update all call sites that invoke _make_runnable_config() (the ones referenced in the review) to access self._runnable_config so behavior and intent are uniform across the class.

packages/nvidia_nat_langchain/tests/agent/test_base.py (1)
49-58: Fixture functions should use the `fixture_` prefix. As per coding guidelines, pytest fixtures should define the `name` argument and use the `fixture_` prefix for the function name.

♻️ Proposed refactor for fixture naming

```diff
-@pytest.fixture
-def base_agent():
+@pytest.fixture(name="base_agent")
+def fixture_base_agent():
     """Create a mock agent for testing with detailed logs enabled."""
     return MockBaseAgent(detailed_logs=True)


-@pytest.fixture
-def base_agent_no_logs():
+@pytest.fixture(name="base_agent_no_logs")
+def fixture_base_agent_no_logs():
     """Create a mock agent for testing with detailed logs disabled."""
     return MockBaseAgent(detailed_logs=False)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/tests/agent/test_base.py` around lines 49 - 58, Rename the fixtures to use the fixture_ prefix and supply the public name via the pytest.fixture name argument: change the function base_agent to fixture_base_agent and annotate with `@pytest.fixture`(name="base_agent") returning MockBaseAgent(detailed_logs=True), and change base_agent_no_logs to fixture_base_agent_no_logs annotated with `@pytest.fixture`(name="base_agent_no_logs") returning MockBaseAgent(detailed_logs=False); keep the returned values and docstrings unchanged so tests referencing "base_agent" and "base_agent_no_logs" continue to work.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@packages/nvidia_nat_langchain/tests/agent/test_base.py`:
- Around line 30-46: MockBaseAgent lacks the _runnable_config property (and the
helper _make_runnable_config) which BaseAgent now expects; add a property method
named _runnable_config on MockBaseAgent that returns a RunnableConfig instance
(or a minimal mock RunnableConfig) and implement a private helper
_make_runnable_config() if needed, and restore the import for RunnableConfig
from langchain_core.runnables so tests that access base_agent._runnable_config
(lines referencing _runnable_config) no longer raise AttributeError.
---
Nitpick comments:
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/base.py`:
- Around line 90-92: Add a short docstring to the internal method
_make_runnable_config explaining that it is a factory for RunnableConfig objects
and that it instantiates fresh callback instances (from self.callbacks) on every
call to avoid shared-state memory leaks and concurrency issues; mention that the
returned RunnableConfig also sets the configurable "__pregel_runtime" to
DEFAULT_RUNTIME so callers understand the intended runtime default and why the
method creates a new RunnableConfig each time.
- Around line 86-92: The code defines a property _runnable_config that wraps
_make_runnable_config(), but other places call _make_runnable_config() directly;
make usage consistent by replacing direct calls to _make_runnable_config() with
the property _runnable_config (or remove the property if you prefer the method),
e.g., update all call sites that invoke _make_runnable_config() (the ones
referenced in the review) to access self._runnable_config so behavior and intent
are uniform across the class.
In `@packages/nvidia_nat_langchain/tests/agent/test_base.py`:
- Around line 49-58: Rename the fixtures to use the fixture_ prefix and supply
the public name via the pytest.fixture name argument: change the function
base_agent to fixture_base_agent and annotate with
`@pytest.fixture`(name="base_agent") returning MockBaseAgent(detailed_logs=True),
and change base_agent_no_logs to fixture_base_agent_no_logs annotated with
`@pytest.fixture`(name="base_agent_no_logs") returning
MockBaseAgent(detailed_logs=False); keep the returned values and docstrings
unchanged so tests referencing "base_agent" and "base_agent_no_logs" continue to
work.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ad666a4f-62cc-416a-b4da-e5c995e3309d
📒 Files selected for processing (3)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/base.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
- packages/nvidia_nat_langchain/tests/agent/test_base.py
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
Force-pushed from 4953c85 to 88f5a06.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/nvidia_nat_langchain/tests/agent/test_base.py (1)
133: ⚠️ Potential issue | 🔴 Critical

Test assertion will fail due to property creating new config on each access. The _runnable_config property in BaseAgent now calls _make_runnable_config() on every access, returning a fresh RunnableConfig instance. This assertion compares against a newly created config instance rather than the one passed during the actual ainvoke call, causing the test to fail. The same issue exists at line 175 for the _call_tool test.

🐛 Proposed fix using unittest.mock.ANY or capturing the config
Option 1 - Use `unittest.mock.ANY` if you only need to verify the call was made:

```diff
-    base_agent.llm.ainvoke.assert_called_once_with(inputs, config=base_agent._runnable_config)
+    from unittest.mock import ANY
+    base_agent.llm.ainvoke.assert_called_once_with(inputs, config=ANY)
```

Option 2 - Capture and verify the config structure:

```diff
-    base_agent.llm.ainvoke.assert_called_once_with(inputs, config=base_agent._runnable_config)
+    base_agent.llm.ainvoke.assert_called_once()
+    call_args = base_agent.llm.ainvoke.call_args
+    assert call_args[0][0] == inputs
+    assert "config" in call_args[1]
+    assert isinstance(call_args[1]["config"], RunnableConfig)
```

Apply similar fix to line 175.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/tests/agent/test_base.py` at line 133, The test fails because BaseAgent._runnable_config calls _make_runnable_config() each time and returns a new RunnableConfig instance, so asserting equality against base_agent._runnable_config is comparing different objects; update the tests at the ainvoke assertion and the _call_tool assertion to either (a) use unittest.mock.ANY for the config argument when calling base_agent.llm.ainvoke/assert_called_once_with to only verify the call happened, or (b) capture the actual call via base_agent.llm.ainvoke.call_args (or mock_calls) and extract the passed config to assert its fields/structure match expected values instead of comparing object identity; reference the test lines that call base_agent.llm.ainvoke(..., config=base_agent._runnable_config) and the _call_tool test for where to apply the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@packages/nvidia_nat_langchain/tests/agent/test_base.py`:
- Line 133: The test fails because BaseAgent._runnable_config calls
_make_runnable_config() each time and returns a new RunnableConfig instance, so
asserting equality against base_agent._runnable_config is comparing different
objects; update the tests at the ainvoke assertion and the _call_tool assertion
to either (a) use unittest.mock.ANY for the config argument when calling
base_agent.llm.ainvoke/assert_called_once_with to only verify the call happened,
or (b) capture the actual call via base_agent.llm.ainvoke.call_args (or
mock_calls) and extract the passed config to assert its fields/structure match
expected values instead of comparing object identity; reference the test lines
that call base_agent.llm.ainvoke(..., config=base_agent._runnable_config) and
the _call_tool test for where to apply the change.
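The two options in the review comment can be demonstrated end to end. The sketch below is illustrative only: `RunnableConfig` and `Agent` are minimal stand-ins (assumptions), not the real LangChain or NAT classes, but the assertion patterns with `unittest.mock.ANY` and `call_args` are exactly what the comment proposes.

```python
import asyncio
from unittest.mock import AsyncMock, ANY


class RunnableConfig(dict):
    """Stand-in for LangChain's RunnableConfig (assumption for this sketch)."""


class Agent:
    def __init__(self):
        self.llm = AsyncMock()

    @property
    def _runnable_config(self):
        # A fresh object on every access, so identity/equality comparison
        # against a later access would not match the config actually passed.
        return RunnableConfig(callbacks=[])

    async def run(self, inputs):
        return await self.llm.ainvoke(inputs, config=self._runnable_config)


agent = Agent()
asyncio.run(agent.run({"q": "hi"}))

# Option (a): verify the call happened, ignoring the config object entirely.
agent.llm.ainvoke.assert_called_once_with({"q": "hi"}, config=ANY)

# Option (b): capture the actual call and assert on structure, not identity.
call = agent.llm.ainvoke.call_args
assert call.args[0] == {"q": "hi"}
assert isinstance(call.kwargs["config"], RunnableConfig)
```

Both approaches avoid comparing two distinct `RunnableConfig` instances for equality.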
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 666c5078-2be0-42cc-bbd5-0b1166f0456c
📒 Files selected for processing (4)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/base.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/dual_node.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
- packages/nvidia_nat_langchain/tests/agent/test_base.py
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/agent/react_agent/register.py
…of agent reasoning

Extract and display ReAct agent "Thought:" text in intermediate steps. Shows a "Thinking..." placeholder during LLM processing, then replaces it with the extracted thought text when complete.

Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
- Prevents flicker in the UI when the escaped output_str is initially empty or no thought is provided by the LLM
- Italicize "thinking" text

Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
- Added thought emission functions to enable custom thought display throughout function/tool execution
- Integrated SPAN event handling in `StepAdaptor` to process custom thoughts

Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
…of thought messages Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
Ensures thought text is properly displayed in observability traces Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
- Use proper fixture naming and declaration convention
- Avoid possessive 's with inanimate objects
- Hardcoded name breaks consistency with emit_thought_start and emit_thought_chunk
- Assert the END events in the concurrent lifecycle test
- Drop the unused make_intermediate_step fixture from tests
- Instantiate the profiler callback per request, or protect state mutations with locks and add cleanup

Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
Thought process events were not displaying because LangchainProfilerHandler was not being invoked during LLM calls. Passing the handler via the ainvoke() config was bypassed by _runnable_config, which was built once at graph construction time with no callbacks.

Replace the stored _runnable_config attribute with _make_runnable_config(), which instantiates callback classes fresh on each LLM/tool call. Pass LangchainProfilerHandler as a class reference to the graph constructor so each invocation gets an isolated handler, also fixing a memory leak and concurrency issues caused by the previously shared instance.

Signed-off-by: Patrick Chin <8509935+thepatrickchin@users.noreply.github.com>
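The "store handler classes, instantiate per call" pattern from this commit can be sketched in isolation. All names below (`ProfilerHandler`, `BaseAgent`, `_make_runnable_config`) are stand-ins assumed from the commit message, not the real NAT implementation:

```python
class ProfilerHandler:
    """Stand-in for LangchainProfilerHandler (assumption for this sketch)."""

    def __init__(self):
        # Per-invocation state: sharing one instance across concurrent
        # requests would mix events together and leak memory over time.
        self.events = []


class BaseAgent:
    def __init__(self, callback_classes=(ProfilerHandler,)):
        # Store class references rather than instances.
        self._callback_classes = tuple(callback_classes)

    def _make_runnable_config(self):
        # Fresh handler instances on every LLM/tool call, so each
        # invocation observes only its own events.
        return {"callbacks": [cls() for cls in self._callback_classes]}


agent = BaseAgent()
c1 = agent._make_runnable_config()
c2 = agent._make_runnable_config()
assert c1["callbacks"][0] is not c2["callbacks"][0]  # isolated per call
```

The key design choice is that a cached config captures handler instances at construction time, while a factory method defers instantiation to call time.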
Force-pushed from 88f5a06 to cde905b
Description
This PR adds thought process outputs for improved visibility of agent reasoning and actions.
It updates the step adaptor to return thought display text for tool/function calls and ReAct LLM thoughts. Agent developers can add custom thought text for entire functions, as well as emit and stream custom thoughts from within function code for fine-grained progress updates. Refer to video below:
Screen.Recording.2026-03-06.at.6.06.44.PM.mov
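The one-shot versus streamed thought APIs described above follow a simple pattern: a complete thought is a SPAN_START immediately followed by a SPAN_END sharing one UUID, while a streamed thought keeps the span open and pushes chunks against the same id. The sketch below is a minimal illustration of that pattern; the event names, `StepManager`, and the chunk event are assumptions, not the actual NAT intermediate-step API:

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class StepManager:
    """Stand-in for the intermediate step manager (assumption)."""
    steps: list = field(default_factory=list)

    def push(self, event_type, span_id, text=""):
        self.steps.append((event_type, span_id, text))


def emit_thought(manager, text):
    """One-shot thought: paired START/END events sharing a single id."""
    span_id = str(uuid.uuid4())
    manager.push("SPAN_START", span_id, text)
    manager.push("SPAN_END", span_id, text)
    return span_id


def emit_thought_start(manager, text):
    """Open a streamed thought and return its id for later chunks."""
    span_id = str(uuid.uuid4())
    manager.push("SPAN_START", span_id, text)
    return span_id


def emit_thought_chunk(manager, span_id, text):
    """Append a progress update to an already-open thought span."""
    manager.push("SPAN_CHUNK", span_id, text)


mgr = StepManager()
emit_thought(mgr, "Processing")                  # complete thought
sid = emit_thought_start(mgr, "Starting stream") # open stream
emit_thought_chunk(mgr, sid, "50%")              # progress update
```

Pairing START and END under one id is what lets the UI replace the placeholder in place instead of appending a second entry.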
This can be enabled or disabled independently of Intermediate Steps:

Please refer to NVIDIA/NeMo-Agent-Toolkit-UI#113 for the frontend implementation.
Closes #1755
By submitting this PR I confirm:
Summary by CodeRabbit
New Features
Documentation
Tests