
Add AG2 (formerly AutoGen) plugin package and demo #1810

Open
VasiliyRad wants to merge 8 commits into NVIDIA:develop from VasiliyRad:vasiliyr/03152026

Conversation


@VasiliyRad VasiliyRad commented Mar 17, 2026

Adds AG2 as a supported framework in NeMo-Agent-Toolkit alongside the existing Microsoft AutoGen integration. AG2 is the community-driven fork of Microsoft AutoGen with a modernized API for multi-agent orchestration.

The two packages coexist — they target different APIs (AG2 0.11+ vs Microsoft AutoGen 0.7.x) and different package names (ag2 vs autogen-agentchat).

Changes:

  • packages/nvidia_nat_ag2/ — LLM wrappers (OpenAI, NIM, Azure),
    tool wrapper (NAT Function → AG2 Tool), profiler callback
  • examples/frameworks/nat_ag2_demo/ — 2-agent research team with
    NAT profiling integration
  • AG2 row in framework capability matrix
  • AG2 = "ag2" in LLMFrameworkEnum
  • Root pyproject.toml: ag2 optional dependency, uv source, demo entry
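The tool wrapper listed above converts a NAT Function into an AG2 Tool, which requires bridging possibly-async NAT functions to plain callables. A minimal self-contained sketch of that sync/async bridging (`make_tool_callable` and `lookup_traffic` are hypothetical illustrations, not the plugin's actual code):

```python
import asyncio
import inspect
from typing import Any, Callable

def make_tool_callable(fn: Callable[..., Any]) -> Callable[..., Any]:
    # Hypothetical helper: a tool framework expects a plain callable,
    # while NAT Functions may be coroutines. Sync functions pass through;
    # async functions get a synchronous bridge.
    if not inspect.iscoroutinefunction(fn):
        return fn

    def sync_call(*args: Any, **kwargs: Any) -> Any:
        # Assumes no event loop is already running in this thread.
        return asyncio.run(fn(*args, **kwargs))

    return sync_call

async def lookup_traffic(city: str) -> str:
    # Toy async "NAT function" standing in for a real tool implementation.
    return f"traffic in {city}: clear"

tool = make_tool_callable(lookup_traffic)
print(tool("Seattle"))  # → traffic in Seattle: clear
```

The real wrapper additionally carries the function's name, description, and JSON schema into the AG2 `Tool` constructor, as shown in the review diffs below.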

Addresses community demand from Issue #366.

Summary by CodeRabbit

  • New Features
    • Added AG2 framework integration with OpenAI, NIM, and Azure LLM support, tool-wrapping to expose toolkit functions to AG2, runtime profiling for LLM calls, and multi-agent workflows including research and time-aware traffic demos with runnable configs.
  • Documentation
    • Updated docs, README, capability matrix, and installation snippets to include AG2 and example usage.
  • Tests
    • Added import and enum presence tests for the AG2 plugin.
  • Chores
    • Introduced packaging and example project entries plus an optional AG2 dependency.

@VasiliyRad VasiliyRad requested a review from a team as a code owner March 17, 2026 05:05

copy-pr-bot bot commented Mar 17, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds AG2 (AutoGen) integration: new AG2 plugin package (LLM wrappers, tool wrapper, profiler), example AG2 demo project with configs and tools, documentation and README updates, packaging entries and tests, and an enum value to register AG2 as a supported framework.

Changes

Cohort / File(s) / Summary:

  • Documentation & README (README.md, docs/source/components/integrations/frameworks.md): Added AG2 to README and acknowledgements; new AG2 docs, capability matrix, installation snippet, and ADK subsection.
  • Core Framework Enum (packages/nvidia_nat_core/src/nat/builder/framework_enum.py): Added AG2 = "ag2" to LLMFrameworkEnum.
  • AG2 Plugin Packaging (packages/nvidia_nat_ag2/pyproject.toml, pyproject.toml): New nvidia-nat-ag2 package pyproject; added optional ag2 dependency group and local package source entries.
  • AG2 Plugin Implementation (packages/nvidia_nat_ag2/src/nat/plugins/ag2/__init__.py, .../llm.py, .../tool_wrapper.py, .../callback_handler.py, .../register.py): New plugin modules: LLM client wrappers (openai/nim/azure), AG2 tool wrapper converting NAT Functions to AG2 Tools (sync/async handling), AG2ProfilerHandler that patches OpenAIWrapper.create for profiling, and side-effect register imports.
  • AG2 Plugin Tests (packages/nvidia_nat_ag2/tests/test_import.py, packages/nvidia_nat_ag2/tests/__init__.py): Added import tests to ensure the plugin imports and that LLMFrameworkEnum includes AG2; test package init header.
  • Example AG2 Demo Packaging (examples/frameworks/nat_ag2_demo/pyproject.toml): New example project pyproject with entry point nat_ag2_demo = nat_ag2_demo.register.
  • Example AG2 Demo Configs (examples/frameworks/nat_ag2_demo/configs/*): Multiple AG2 configs added: demo run config, eval config, research config; these define LLMs (NIM), function groups (MCP), tools, workflows (ag2_team, ag2_research), and evaluation settings.
  • Example AG2 Demo Implementation (examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/__init__.py, .../ag2_team.py, .../ag2_research_team.py, .../traffic_status_tool.py, .../register.py): New example package and modules: AG2TeamConfig and ag2_team workflow, AG2ResearchConfig and ag2_research_team, traffic status tool, and registration imports.
  • Example AG2 Demo Docs & Data (examples/frameworks/nat_ag2_demo/README.md, .../data/toy_data.json): Added README for the demo with run/eval instructions and a toy dataset pointer (LFS).

Sequence Diagram(s)

sequenceDiagram
    participant Builder as Builder
    participant NAT as NAT Core
    participant AG2 as AG2 Plugin
    participant Profiler as AG2ProfilerHandler
    participant LLM as LLM Provider
    participant Tool as AG2 Tool / Function

    Builder->>AG2: register LLM/tool wrappers & functions
    Builder->>AG2: invoke ag2_team / ag2_research_team
    AG2->>Tool: convert NAT Function -> AG2 Tool (ag2_tool_wrapper)
    AG2->>NAT: schedule Function execution via Tool
    Note over Profiler,LLM: Profiler patches OpenAIWrapper.create
    Tool->>Profiler: call OpenAIWrapper.create(...) (patched)
    Profiler->>LLM: emit LLM_START -> call original create
    LLM-->>Profiler: return output
    Profiler->>Builder: emit LLM_END with usage/timing
    Profiler->>Tool: return LLM output
    Tool->>AG2: return Function result
    AG2-->>Builder: workflow returns final content
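The profiling step in the diagram is a classic monkey-patch: save the original create method, install a wrapper that emits start and end events around the delegated call. A minimal self-contained sketch of the pattern (FakeClient and the events list are illustrative stand-ins, not the plugin's actual OpenAIWrapper integration):

```python
import time

class FakeClient:
    """Toy stand-in for the wrapped client class."""
    def create(self, prompt: str) -> str:
        return f"completion for {prompt!r}"

events: list[tuple[str, object]] = []
_original_create = FakeClient.create  # keep a handle for delegation/restore

def patched_create(self: FakeClient, prompt: str) -> str:
    events.append(("LLM_START", prompt))                # emit start event
    t0 = time.monotonic()
    result = _original_create(self, prompt)             # call original create
    events.append(("LLM_END", time.monotonic() - t0))   # emit end with timing
    return result

FakeClient.create = patched_create  # install the patch on the class
out = FakeClient().create("hello")
```

After the patched call, `events` holds an LLM_START entry with the input and an LLM_END entry with the elapsed time, while the caller still receives the original return value.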

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check: ✅ Passed (check skipped; CodeRabbit’s high-level summary is enabled)
  • Title Check: ✅ Passed (the title clearly and concisely describes the main change, adding the AG2 plugin and demo, which aligns with the core modifications across the codebase)
  • Docstring Coverage: ✅ Passed (docstring coverage is 94.44%, above the required 80.00% threshold)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Nitpick comments (5)
docs/source/components/integrations/frameworks.md (1)

93-94: Minor: Consider varying sentence structure.

Three successive sentences begin with "AG2." Consider rewording for better readability, e.g., "The framework provides a modern API for building collaborative agent systems..."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/source/components/integrations/frameworks.md` around lines 93 - 94, The
paragraph contains three successive sentences that all start with "AG2", hurting
readability; rewrite one or two of those sentences to vary sentence openings
(for example start one with "The framework" or "It provides") and rephrase to
keep the same meaning about community maintenance, multi-agent orchestration,
and features (GroupChat patterns, tool integration, flexible LLM configuration
and MCP integration) while ensuring the term "AG2" still appears at least once
for clarity.
packages/nvidia_nat_ag2/pyproject.toml (1)

14-18: Consider adding nvidia-nat-test as an optional test dependency.

Per coding guidelines, packages should likely list nvidia-nat-test as an optional dependency in the test extra to support testing infrastructure.

💡 Suggested addition
 [tool.setuptools_dynamic_dependencies]
 dependencies = [
     "nvidia-nat-core == {version}",
     "ag2[openai] >= 0.11.0",
 ]
+
+[project.optional-dependencies]
+test = [
+    "nvidia-nat-test == {version}",
+]

As per coding guidelines: "nvidia-nat-test should likely be listed as an optional dependency in the test extra."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/pyproject.toml` around lines 14 - 18, Add
`nvidia-nat-test` as an optional test dependency by defining a `test` extra and
including the package there (e.g., add a `project.optional-dependencies` /
`tool.setuptools_dynamic_dependencies` entry for the `test` extra containing
"nvidia-nat-test >= {version}" or the appropriate version constraint); update
the existing dependencies block (referenced by the `dependencies` symbol) to
keep runtime deps unchanged and add the new `test` extra so CI and local test
tooling can install `nvidia-nat-test` when running tests.
examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py (1)

83-87: Prefix unused variables with underscore.

The static analysis correctly identifies that ctx and last are unpacked but never used. Prefix them with underscores to indicate they are intentionally ignored.

Proposed fix
-    result, ctx, last = initiate_group_chat(
+    result, _ctx, _last = initiate_group_chat(
         pattern=pattern,
         messages=config.task,
         max_rounds=config.max_rounds,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`
around lines 83 - 87, The tuple unpack from initiate_group_chat currently binds
unused variables ctx and last; rename them to _ctx and _last (or _ , _) to
signal they are intentionally ignored. Update the call site where
initiate_group_chat is assigned (the line assigning result, ctx, last) to use
result, _ctx, _last so static analysis no longer reports unused variables while
preserving result usage.
packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py (1)

46-50: Consider handling potential schema generation failures.

If fn.input_schema.model_json_schema() raises an exception (e.g., due to an unserializable type), the tool wrapper will fail. A similar pattern is tested in packages/nvidia_nat_strands/tests/test_strands_tool_wrapper.py (lines 60-61).

Proposed defensive handling
+    schema = None
+    if fn.input_schema:
+        try:
+            schema = fn.input_schema.model_json_schema()
+        except Exception:
+            schema = None
+
     return Tool(
         name=name,
         description=fn.description or name,
         func_or_tool=call_function,
-        parameters_json_schema=(
-            fn.input_schema.model_json_schema()
-            if fn.input_schema
-            else None
-        ),
+        parameters_json_schema=schema,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py` around lines 46
- 50, The parameters_json_schema assignment can raise if
fn.input_schema.model_json_schema() fails; update the code in tool_wrapper.py
where parameters_json_schema is set (the expression using
fn.input_schema.model_json_schema()) to defensively call model_json_schema()
inside a try/except, catching exceptions (e.g., TypeError/ValueError/Exception),
fallback to None on failure, and emit a warning or debug log including fn or
function name to aid diagnosis; ensure the variable name parameters_json_schema
and object fn/input_schema are referenced in the fix so reviewers can locate the
change.
examples/frameworks/nat_ag2_demo/pyproject.toml (1)

9-11: Package include pattern may exclude the base package.

The pattern nat_ag2_demo.* will only match submodules but not the base nat_ag2_demo package itself. Consider including both the base package and its submodules.

Proposed fix
 [tool.setuptools.packages.find]
 where = ["src"]
-include = ["nat_ag2_demo.*"]
+include = ["nat_ag2_demo", "nat_ag2_demo.*"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/frameworks/nat_ag2_demo/pyproject.toml` around lines 9 - 11, The
package include pattern currently only matches submodules (nat_ag2_demo.*) and
will omit the base package; update the setuptools find configuration to include
both the base package and its subpackages by adding the base package name
alongside the glob pattern (e.g., include both "nat_ag2_demo" and
"nat_ag2_demo.*") so that the top-level nat_ag2_demo package is packaged as
well.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/frameworks/nat_ag2_demo/configs/config.yml`:
- Around line 1-12: Add the standard SPDX Apache-2.0 header at the top of this
YAML file so it begins with the required license statement; update the file
containing the "functions" block (with function name ag2_research and _type
ag2_research_team) and the "llm" block (with name default and _type nim) by
inserting the SPDX Apache-2.0 header template as the very first lines of the
file before any YAML content.

In `@examples/frameworks/nat_ag2_demo/pyproject.toml`:
- Around line 1-19: Add the SPDX license header to the top of this
pyproject.toml (the same SPDX-License-Identifier: Apache-2.0 line used in the
root pyproject.toml); open the file containing the [build-system] section and
the project entry with name = "nat-ag2-demo" and insert the SPDX header as the
very first line so the file begins with the license identifier before any TOML
tables or keys.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`:
- Around line 1-17: Add the required SPDX Apache-2.0 header at the top of the
file and remove the unused import of BaseModel; specifically, insert the
standard SPDX Apache-2.0 license header as the first lines of this module and
delete the "from pydantic import BaseModel" import since AG2ResearchConfig
already extends FunctionBaseConfig (see AG2ResearchConfig and FunctionBaseConfig
references) to eliminate the unused symbol.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/register.py`:
- Line 1: Add the standard SPDX Apache-2.0 license header at the top of this
source file; currently the file only contains the import statement "from .
import ag2_research_team  # noqa: F401", so prepend the required SPDX Apache-2.0
header template as the very first lines of register.py (above the import) to
satisfy the project licensing guideline.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/__init__.py`:
- Line 1: Add the required SPDX Apache-2.0 file header to the top of this module
(packages/nvidia_nat_ag2/src/nat/plugins/ag2/__init__.py) above the module
docstring: insert the standard SPDX-License-Identifier: Apache-2.0 line plus the
full Apache-2.0 license header block including the updated copyright year and
copyright owner, ensuring the header appears before the existing module
docstring in __init__.py so repository pre-commit checks pass.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py`:
- Around line 1-5: Add the standard SPDX Apache-2.0 license header to the top of
this module (before the module docstring) in
packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py so the file
begins with the required license template; ensure the header appears above the
existing module docstring (the """Profiler callback handler for AG2...""" block)
and matches the project's SPDX Apache-2.0 format exactly.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py`:
- Around line 1-6: Add the standard SPDX Apache-2.0 license header at the very
top of the file packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py (above the
module docstring triple quotes) so the file begins with the required SPDX
boilerplate; ensure the header matches the project’s standard template exactly
(SPDX-License-Identifier: Apache-2.0) and keep the existing module docstring and
LLM wrapper code (LLMConfig-related content) unchanged below it.
- Around line 28-32: The api_key fields in the three AG2 provider wrappers are
passing SecretStr objects instead of raw strings; import get_secret_value from
nat.data_models.common and replace the direct llm_config.api_key usage with
get_secret_value(llm_config.api_key) in openai_ag2 (config_dict["api_key"]),
nim_ag2 (config_dict["api_key"]) and azure_ag2 (config_dict["api_key"]) so the
actual secret string is used for authentication.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py`:
- Around line 1-13: Add the missing SPDX Apache-2.0 license header at the very
top of this file (before the module docstring) in
packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py; ensure the standard
single-line SPDX identifier "SPDX-License-Identifier: Apache-2.0" is the first
line of the file so the header applies to the entire module that contains the
top-level docstring and imports (references: the module itself tool_wrapper.py
and symbols like Builder, Function, register_tool_wrapper, Tool).

In `@packages/nvidia_nat_ag2/tests/test_import.py`:
- Around line 1-13: This file is missing the required SPDX Apache-2.0 header;
add the standard SPDX Apache-2.0 license header as the first line(s) of
packages/nvidia_nat_ag2/tests/test_import.py above the existing module docstring
so the file begins with the SPDX header, preserving the rest of the file
including the test_register_module_imports() and test_framework_enum_has_ag2()
functions and their imports.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f8baf37a-70e4-494f-9875-cae292d201ce

📥 Commits

Reviewing files that changed from the base of the PR and between 7cc5d15 and b88c316.

📒 Files selected for processing (17)
  • README.md
  • docs/source/components/integrations/frameworks.md
  • examples/frameworks/nat_ag2_demo/configs/config.yml
  • examples/frameworks/nat_ag2_demo/pyproject.toml
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/__init__.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/register.py
  • packages/nvidia_nat_ag2/pyproject.toml
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/__init__.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/register.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py
  • packages/nvidia_nat_ag2/tests/__init__.py
  • packages/nvidia_nat_ag2/tests/test_import.py
  • packages/nvidia_nat_core/src/nat/builder/framework_enum.py
  • pyproject.toml


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py (1)

46-47: ⚠️ Potential issue | 🔴 Critical

Unwrap OptionalSecretStr before building AG2 provider config.

All three wrappers pass llm_config.api_key directly. These configs use secret-typed fields, so you should pass the raw string value instead.

🔧 Proposed fix
 from collections.abc import AsyncGenerator
 
 from autogen import LLMConfig
 
 from nat.builder import Builder
 from nat.builder.framework_enum import LLMFrameworkEnum
 from nat.cli.register_workflow import register_llm_client
+from nat.data_models.common import get_secret_value
 from nat.llm.azure_openai_llm import AzureOpenAIModelConfig
 from nat.llm.nim_llm import NIMModelConfig
 from nat.llm.openai_llm import OpenAIModelConfig
@@
     config_dict = {
         "api_type": "openai",
         "model": llm_config.model,
-        "api_key": llm_config.api_key,
+        "api_key": get_secret_value(llm_config.api_key),
     }
@@
     config_dict = {
         "api_type": "openai",
         "model": llm_config.model,
-        "api_key": llm_config.api_key,
+        "api_key": get_secret_value(llm_config.api_key),
         "base_url": (
             llm_config.api_base
             or "https://integrate.api.nvidia.com/v1"
         ),
@@
     config_dict = {
         "api_type": "azure",
         "model": llm_config.model,
-        "api_key": llm_config.api_key,
+        "api_key": get_secret_value(llm_config.api_key),
         "base_url": llm_config.api_base,
         "api_version": llm_config.api_version,
     }

Also applies to: 68-69, 96-97

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py` around lines 46 - 47, The
AG2 provider config is being built using llm_config.api_key directly; since
api_key is an OptionalSecretStr you must unwrap it before use (call
llm_config.api_key.get_secret_value() when not None) in each provider
construction site (the three places where the dict/key "api_key" is set). Update
the config builders to check for None and pass the raw string (or None) instead
of the SecretStr wrapper so the AG2 client receives a plain string.
🧹 Nitpick comments (2)
examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py (1)

46-49: Add a minimum bound for max_rounds.

max_rounds should reject non-positive values to prevent invalid chat loop settings.

🔧 Proposed fix
     max_rounds: int = Field(
         default=10,
+        ge=1,
         description="Max GroupChat rounds.",
     )

As per coding guidelines: "Validate and sanitise all user input, especially in web or CLI interfaces."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`
around lines 46 - 49, The max_rounds Field currently allows non-positive values;
update its validation in the model (the place where max_rounds: int = Field(...)
is defined) to enforce a minimum of 1 (e.g., add ge=1 to the Field call) or add
a Pydantic validator on max_rounds to raise a ValueError for values <= 0 so
non-positive inputs are rejected and the chat loop cannot be configured with
invalid rounds.
packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py (1)

39-42: Add Google-style docstrings to each public wrapper function.

The module has docs, but each public wrapper should also include a Google-style docstring (Args, Yields/Returns).

As per coding guidelines: "Provide Google-style docstrings for every public module, class, function and CLI command."

Also applies to: 61-64, 85-88

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py` around lines 39 - 42, Add
Google-style docstrings to the public wrapper functions in this module (e.g.,
openai_ag2 and the other public wrappers referenced in the review) describing
parameters and return/yield values: include an Args section that documents
llm_config: OpenAIModelConfig and _builder: Builder, and a Yields (or Returns)
section that documents the AsyncGenerator[LLMConfig, None] result; place the
docstring immediately below each function signature using the Google style with
brief one-line summary, Args, and Yields/Returns entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`:
- Line 98: The call to initiate_group_chat unpacks three values into result,
ctx, last but ctx and last are unused and trigger RUF059; fix this by changing
the unpacking to only capture the used value(s) (e.g., assign unused values to _
or use a throwaway tuple like result, _ctx, _last = initiate_group_chat(...) or
result, *_ = initiate_group_chat(...)) so that initiate_group_chat remains
invoked but ruff no longer flags unused variables; update any references to
ctx/last accordingly if they are actually needed elsewhere.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py`:
- Around line 58-61: The patch currently binds a single handler instance so only
the first AG2ProfilerHandler receives events; change patch()/unpatch() to
maintain a collection (e.g., _handlers set/list) instead of a single
closure-bound handler: on patch() add the current handler instance to
AG2ProfilerHandler._handlers and only install the wrapped create once (using
_original_create/_patch_count), and in the installed wrapper iterate over
AG2ProfilerHandler._handlers to dispatch events to every registered handler;
similarly on unpatch() remove the handler from _handlers and only restore the
original create when _handlers becomes empty (update the places around
AG2ProfilerHandler._patch_lock/_patch_count/_original_create and the dispatching
logic referenced at lines ~67-70 and ~141 to use the handlers collection).
- Around line 82-83: The code is currently emitting raw kwargs/result/error via
StreamEventData(input=str(kwargs)) and similar UsageInfo fields; replace these
with a sanitized representation by introducing and calling a sanitizer (e.g.,
sanitize_event_payload(payload)) that returns a redacted dict/string before
emitting to StreamEventData or UsageInfo. Update every emission site that uses
str(kwargs), str(result), or str(error) (references: StreamEventData input and
UsageInfo payloads) to pass the sanitized output, and ensure the sanitizer
strips or masks common sensitive keys (password, token, secret, api_key, auth,
credentials) and handles nested structures safely.
- Around line 145-150: The unpatch path is decrementing
AG2ProfilerHandler._patch_count even when it's already zero, allowing underflow;
update the unpatch logic in AG2ProfilerHandler so you first check if
AG2ProfilerHandler._patch_count is > 0 before decrementing (or clamp it after
decrement), and only proceed to early-return logic when appropriate; keep the
existing guards that check AG2ProfilerHandler._original_create but ensure the
decrement is conditional to prevent negative refcounts and preserve correct
patch/unpatch lifecycle behavior.
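The handler-registry fix described above can be sketched in isolation. This is a hypothetical toy (ProfilerSketch and FakeWrapper are made-up names) assuming the review's suggested design: a class-level _handlers set, the patch installed once, events dispatched to every registered handler, and the original method restored only when the last handler unpatches:

```python
import threading

class FakeWrapper:
    """Toy stand-in for the patched client class."""
    def create(self) -> str:
        return "ok"

class ProfilerSketch:
    _original_create = None
    _patch_lock = threading.Lock()
    _handlers: set["ProfilerSketch"] = set()

    def __init__(self) -> None:
        self.events: list[str] = []

    def patch(self) -> None:
        with ProfilerSketch._patch_lock:
            ProfilerSketch._handlers.add(self)
            if ProfilerSketch._original_create is not None:
                return  # already patched; this handler is now registered
            ProfilerSketch._original_create = FakeWrapper.create
            original = ProfilerSketch._original_create

            def patched(wrapper_self: FakeWrapper) -> str:
                # Dispatch to every registered handler, not a single closure.
                for handler in list(ProfilerSketch._handlers):
                    handler.events.append("LLM_START")
                return original(wrapper_self)

            FakeWrapper.create = patched

    def unpatch(self) -> None:
        with ProfilerSketch._patch_lock:
            ProfilerSketch._handlers.discard(self)
            if not ProfilerSketch._handlers and ProfilerSketch._original_create is not None:
                FakeWrapper.create = ProfilerSketch._original_create  # restore
                ProfilerSketch._original_create = None

a, b = ProfilerSketch(), ProfilerSketch()
a.patch()
b.patch()
FakeWrapper().create()  # both handlers record the event
a.unpatch()
b.unpatch()             # last unpatch restores the original method
```

Because the wrapper reads the handler set at call time, handlers added after the initial patch still receive events, and the conditional restore avoids the refcount-underflow issue flagged above.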


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5f335330-49db-47ce-bbde-27ef482d660e

📥 Commits

Reviewing files that changed from the base of the PR and between b88c316 and f3d4241.

📒 Files selected for processing (13)
  • examples/frameworks/nat_ag2_demo/configs/config.yml
  • examples/frameworks/nat_ag2_demo/pyproject.toml
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/__init__.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/register.py
  • packages/nvidia_nat_ag2/pyproject.toml
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/__init__.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/register.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py
  • packages/nvidia_nat_ag2/tests/__init__.py
  • packages/nvidia_nat_ag2/tests/test_import.py
  • packages/nvidia_nat_ag2/tests/test_import.py
✅ Files skipped from review due to trivial changes (3)
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/__init__.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/__init__.py
  • packages/nvidia_nat_ag2/tests/__init__.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/register.py
  • examples/frameworks/nat_ag2_demo/pyproject.toml
  • packages/nvidia_nat_ag2/tests/test_import.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/register.py
  • packages/nvidia_nat_ag2/pyproject.toml

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (2)
packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py (2)

58-61: ⚠️ Potential issue | 🟠 Major

patch_count implies multi-handler support, but only the first handler gets events.

patch() increments _patch_count, but the closure binds a single handler=self; later handlers are never registered for dispatch.

🔧 Suggested fix
 class AG2ProfilerHandler(BaseProfilerCallback):
     _original_create = None
     _patch_lock = threading.Lock()
     _patch_count = 0
+    _handlers: set["AG2ProfilerHandler"] = set()

     def patch(self) -> None:
         ...
         with AG2ProfilerHandler._patch_lock:
             AG2ProfilerHandler._patch_count += 1
+            AG2ProfilerHandler._handlers.add(self)
             if AG2ProfilerHandler._original_create is not None:
                 return  # Already patched
             AG2ProfilerHandler._original_create = OpenAIWrapper.create
+            original_create = AG2ProfilerHandler._original_create

-        handler = self
         def patched_create(wrapper_self: Any, *args: Any, **kwargs: Any) -> Any:
+            handlers = tuple(AG2ProfilerHandler._handlers)
             ...
-            handler.step_manager.push_intermediate_step(start_payload)
+            for handler in handlers:
+                handler.step_manager.push_intermediate_step(start_payload)
             ...
-            result = AG2ProfilerHandler._original_create(wrapper_self, *args, **kwargs)
+            result = original_create(wrapper_self, *args, **kwargs)
             ...

Also applies to: 67-70, 90-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py` around lines
58 - 61, The patch logic currently increments AG2ProfilerHandler._patch_count
but binds a single handler via the closure so only the first instance receives
events; change to maintain a collection (e.g., AG2ProfilerHandler._handlers =
set() or list) and on patch() add self to that collection, and on
unpatch()/close() remove self; replace the closure-bound single-callback (the
function stored in AG2ProfilerHandler._original_create / _patched_create) so it
iterates over AG2ProfilerHandler._handlers and dispatches events to each handler
instance; ensure _patch_count and restoring of the original create function
still happen (only restore when _patch_count reaches zero) and update references
to _patched_create, _original_create, _patch_count, and the new _handlers set
accordingly.

82-83: ⚠️ Potential issue | 🟠 Major

Avoid emitting raw kwargs/result/error into profiler events.

str(kwargs), str(result), and str(e) can leak secrets and PII to the event stream.

🔒 Suggested fix
+SENSITIVE_KEYS = {"password", "token", "secret", "api_key", "auth", "authorization", "credentials"}
+
+def _sanitize_payload(value: Any) -> Any:
+    if isinstance(value, dict):
+        return {
+            k: ("***REDACTED***" if str(k).lower() in SENSITIVE_KEYS else _sanitize_payload(v))
+            for k, v in value.items()
+        }
+    if isinstance(value, (list, tuple)):
+        return [_sanitize_payload(v) for v in value]
+    return value
...
-                data=StreamEventData(input=str(kwargs)),
+                data=StreamEventData(input=_sanitize_payload(kwargs)),
...
-                        data=StreamEventData(input=str(kwargs), output=str(result)),
+                        data=StreamEventData(
+                            input=_sanitize_payload(kwargs),
+                            output=_sanitize_payload(result),
+                        ),
...
-                        data=StreamEventData(input=str(kwargs), output=str(e)),
+                        data=StreamEventData(
+                            input=_sanitize_payload(kwargs),
+                            output={"error": type(e).__name__, "message": str(e)},
+                        ),

As per coding guidelines: "Validate and sanitise all user input, especially in web or CLI interfaces" and "Check for security vulnerabilities and potential issues."

Also applies to: 106-108, 127-130

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py` around lines
82 - 83, The profiler events are currently emitting raw sensitive data via
StreamEventData(input=str(kwargs)), StreamEventData(input=str(result)), and
StreamEventData(input=str(e)); replace these direct serializations with a
sanitization/summary step (e.g., implement and call a helper like
sanitize_for_event(value) or summarize_for_event(value) that redacts
secrets/PII, returns type, length, and a truncated non-sensitive preview) and
use that helper in the three places where StreamEventData is constructed; update
CallbackHandler (or the class/function containing those StreamEventData calls)
to call the sanitizer and only include non-sensitive metadata in usage/profiler
events.
🧹 Nitpick comments (1)
packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py (1)

18-20: Wrap code entities in backticks inside the module docstring.

In the module docstring, wrap LLMConfig in backticks to match docstring lint expectations.

As per coding guidelines: "Surround code entities with backticks to avoid Vale false-positives in docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py` around lines 18 - 20,
Update the module docstring in
packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py to wrap the code entity
LLMConfig in backticks (`LLMConfig`) wherever it appears (e.g., the sentence
"AG2 uses LLMConfig (a configuration object) ..."). Ensure any other code
entities in that docstring are similarly backticked to satisfy the docstring
lint rule and avoid Vale false-positives.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py`:
- Around line 94-96: patched_create dereferences the class variable
AG2ProfilerHandler._original_create while unpatch() can set it to None
concurrently; capture AG2ProfilerHandler._original_create into a local variable
at the start of patched_create (e.g., local_original =
AG2ProfilerHandler._original_create), check that local_original is not None
(return/raise or no-op if it is), and then call local_original(wrapper_self,
*args, **kwargs) instead of dereferencing the class attribute directly. Apply
the same change to the other patched function(s) around the 145-159 region that
similarly use AG2ProfilerHandler._original_create so they also capture and check
the local copy before invoking it.
- Around line 18-20: Update the module/docstring text in callback_handler.py to
use the project API namespace casing `nat` instead of `NAT` — e.g., change the
sentence that reads "Patches AG2's OpenAIWrapper to capture LLM call timings and
token usage for NAT's profiling pipeline." to use "nat" (or `nat`) and scan the
module for other prose references to "NAT" (particularly around the AG2
OpenAIWrapper patch or any top-level docstring) and replace them with the
correct `nat` casing.
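The local-capture fix described above for `patched_create` can be sketched in isolation; `PatchState` and the function body here are simplified stand-ins for the real `AG2ProfilerHandler` patching machinery:

```python
class PatchState:
    """Hypothetical stand-in for AG2ProfilerHandler's class-level patch state."""

    _original_create = None  # set by patch(), cleared by unpatch()


def patched_create(wrapper_self, *args, **kwargs):
    # Capture the class attribute once up front. A concurrent unpatch() that
    # sets it back to None after this point can no longer break a call that
    # is already in flight.
    local_original = PatchState._original_create
    if local_original is None:
        raise RuntimeError("profiler unpatched: original create() is missing")
    return local_original(wrapper_self, *args, **kwargs)
```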

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py`:
- Around line 40-43: Add concise Google-style docstrings to the three public
wrapper functions openai_ag2, nim_ag2, and azure_ag2: for each function document
its purpose (brief one-line summary), Args (llm_config: OpenAIModelConfig or
equivalent, _builder: Builder), Yields (LLMConfig AsyncGenerator), and any
important behavior/side-effects; place the docstring immediately under each
function definition (openai_ag2, nim_ag2, azure_ag2) using Google-style sections
(Args, Yields, Returns/Raises if applicable) and keep them short and focused.

---

Duplicate comments:
In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py`:
- Around line 58-61: The patch logic currently increments
AG2ProfilerHandler._patch_count but binds a single handler via the closure so
only the first instance receives events; change to maintain a collection (e.g.,
AG2ProfilerHandler._handlers = set() or list) and on patch() add self to that
collection, and on unpatch()/close() remove self; replace the closure-bound
single-callback (the function stored in AG2ProfilerHandler._original_create /
_patched_create) so it iterates over AG2ProfilerHandler._handlers and dispatches
events to each handler instance; ensure _patch_count and restoring of the
original create function still happen (only restore when _patch_count reaches
zero) and update references to _patched_create, _original_create, _patch_count,
and the new _handlers set accordingly.
- Around line 82-83: The profiler events are currently emitting raw sensitive
data via StreamEventData(input=str(kwargs)), StreamEventData(input=str(result)),
and StreamEventData(input=str(e)); replace these direct serializations with a
sanitization/summary step (e.g., implement and call a helper like
sanitize_for_event(value) or summarize_for_event(value) that redacts
secrets/PII, returns type, length, and a truncated non-sensitive preview) and
use that helper in the three places where StreamEventData is constructed; update
CallbackHandler (or the class/function containing those StreamEventData calls)
to call the sanitizer and only include non-sensitive metadata in usage/profiler
events.

---

Nitpick comments:
In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py`:
- Around line 18-20: Update the module docstring in
packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py to wrap the code entity
LLMConfig in backticks (`LLMConfig`) wherever it appears (e.g., the sentence
"AG2 uses LLMConfig (a configuration object) ..."). Ensure any other code
entities in that docstring are similarly backticked to satisfy the docstring
lint rule and avoid Vale false-positives.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 145e15dc-8b31-4868-bcbe-a8a314a7ce4c

📥 Commits

Reviewing files that changed from the base of the PR and between f3d4241 and 82fd0e7.

📒 Files selected for processing (2)
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py (2)

56-60: Add Args and Returns sections to the docstring.

Per coding guidelines, public functions require Google-style docstrings. The current docstring is missing the Args and Returns sections.

📝 Proposed docstring improvement
 async def ag2_research_team(
     config: AG2ResearchConfig,
     builder: Builder,
 ) -> str:
-    """Run a 2-agent research team with AG2."""
+    """Run a 2-agent research team with AG2.
+
+    Args:
+        config: Configuration specifying the research task, LLM config name,
+            and maximum chat rounds.
+        builder: NAT builder instance for resolving LLM configurations.
+
+    Returns:
+        The last substantive message from the chat history, or
+        "Research complete." if no content is found.
+    """

As per coding guidelines: "Provide Google-style docstrings for every public module, class, function and CLI command."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`
around lines 56 - 60, The docstring for the public function ag2_research_team is
missing Google-style Args and Returns sections; update the triple-quoted
docstring of ag2_research_team to include an Args section documenting the
config: AG2ResearchConfig and builder: Builder parameters (types and brief
descriptions) and a Returns section describing the returned str (what it
represents), following the project’s Google-style docstring format.

98-102: Synchronous AG2 call in async context.

initiate_group_chat appears to be a synchronous/blocking call within an async function. For demo purposes this is acceptable, but in production scenarios this could block the event loop. Consider documenting this limitation or wrapping in asyncio.to_thread() if non-blocking behavior is needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`
around lines 98 - 102, The call to initiate_group_chat is a blocking/synchronous
call inside an async function (using pattern, messages=config.task,
max_rounds=config.max_rounds) which can block the event loop; either document
this limitation or make the call non-blocking by running initiate_group_chat in
a thread (e.g., wrap it with asyncio.to_thread) and await that result, ensuring
the surrounding async function awaits the threaded call and preserves result,
_ctx, _last semantics.
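The `asyncio.to_thread` approach suggested above can be sketched as follows; `initiate_group_chat_sync` is a hypothetical blocking stand-in for AG2's group-chat call, not the real API:

```python
import asyncio
import time


def initiate_group_chat_sync(task: str, max_rounds: int) -> str:
    """Hypothetical blocking stand-in for AG2's initiate_group_chat."""
    time.sleep(0.05)  # simulate synchronous chat/network work
    return f"finished {task!r} in <= {max_rounds} rounds"


async def run_team(task: str, max_rounds: int = 3) -> str:
    # Off-load the blocking call to a worker thread so the event loop
    # stays responsive while the group chat runs.
    return await asyncio.to_thread(initiate_group_chat_sync, task, max_rounds)
```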
packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py (1)

39-41: Add ClassVar annotation to clarify class-level mutable attribute.

The static analysis warning RUF012 flags the mutable default [] on line 41. While this is intentionally a shared class variable, adding ClassVar makes the intent explicit and silences the linter.

♻️ Proposed fix
+from typing import Any, ClassVar, Optional
 import threading
 import time
-from typing import Any
 
 ...
 
 class AG2ProfilerHandler(BaseProfilerCallback):
     """Instruments AG2 agents for profiling."""
 
-    _original_create = None
-    _patch_lock = threading.Lock()
-    _handlers: list["AG2ProfilerHandler"] = []
+    _original_create: ClassVar[Optional[Any]] = None
+    _patch_lock: ClassVar[threading.Lock] = threading.Lock()
+    _handlers: ClassVar[list["AG2ProfilerHandler"]] = []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py` around lines
39 - 41, The mutable class-level variable _handlers should be annotated with
typing.ClassVar to make its intent explicit and silence RUF012; update the
declaration of _handlers to use ClassVar (e.g., _handlers:
ClassVar[list["AG2ProfilerHandler"]]) and add an import for ClassVar from typing
if not already present, keeping the list initializer unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py`:
- Around line 67-72: The fallback in patched_create causes infinite recursion
because when AG2ProfilerHandler._original_create is None the code calls
wrapper_self.create which is still bound to patched_create; change the fallback
to not call wrapper_self.create: instead detect original is None and raise a
clear RuntimeError (or log an error and return/raise) with context, e.g. inside
patched_create check AG2ProfilerHandler._original_create and if None call
logger.error(...) and raise RuntimeError("AG2ProfilerHandler unpatched:
original_create missing") rather than calling wrapper_self.create; update
references in patched_create and ensure any tests that expect graceful handling
adjust accordingly.

---

Nitpick comments:
In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`:
- Around line 56-60: The docstring for the public function ag2_research_team is
missing Google-style Args and Returns sections; update the triple-quoted
docstring of ag2_research_team to include an Args section documenting the
config: AG2ResearchConfig and builder: Builder parameters (types and brief
descriptions) and a Returns section describing the returned str (what it
represents), following the project’s Google-style docstring format.
- Around line 98-102: The call to initiate_group_chat is a blocking/synchronous
call inside an async function (using pattern, messages=config.task,
max_rounds=config.max_rounds) which can block the event loop; either document
this limitation or make the call non-blocking by running initiate_group_chat in
a thread (e.g., wrap it with asyncio.to_thread) and await that result, ensuring
the surrounding async function awaits the threaded call and preserves result,
_ctx, _last semantics.

In `@packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py`:
- Around line 39-41: The mutable class-level variable _handlers should be
annotated with typing.ClassVar to make its intent explicit and silence RUF012;
update the declaration of _handlers to use ClassVar (e.g., _handlers:
ClassVar[list["AG2ProfilerHandler"]]) and add an import for ClassVar from typing
if not already present, keeping the list initializer unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: de8daeac-d63c-49f4-8d1d-2031c1ae83f2

📥 Commits

Reviewing files that changed from the base of the PR and between 82fd0e7 and 75d6c7f.

📒 Files selected for processing (3)
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/callback_handler.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py

@willkill07 willkill07 added feature request New feature or request non-breaking Non-breaking change labels Mar 18, 2026
@willkill07 willkill07 (Member) left a comment

Here's a first round of feedback on the PR.

Member:

This is not a complete configuration

[tool.setuptools_dynamic_dependencies]
dependencies = [
"nvidia-nat-core == {version}",
"ag2[openai] >= 0.11.0",
Member:

We usually prefer ~=. Unbounded dependencies can break in the future.

Suggested change
"ag2[openai] >= 0.11.0",
"ag2[openai] ~= 0.11",

README.md Outdated

- 🛠️ **Building Agents**: Accelerate your agent development with tools that make it easier to get your agent into production.
- 🧩 [**Framework Agnostic:**](./docs/source/components/integrations/frameworks.md) Work side-by-side with agentic frameworks to add the instrumentation necessary for observing, profiling, and optimizing your agents. Use the toolkit with popular frameworks such as [LangChain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/), [CrewAI](https://www.crewai.com/), [Microsoft Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/), and [Google ADK](https://google.github.io/adk-docs/), as well as custom enterprise agentic frameworks and simple Python agents.
- 🧩 [**Framework Agnostic:**](./docs/source/components/integrations/frameworks.md) Work side-by-side with agentic frameworks to add the instrumentation necessary for observing, profiling, and optimizing your agents. Use the toolkit with popular frameworks such as [LangChain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/), [CrewAI](https://www.crewai.com/), [AG2](https://docs.ag2.ai), [Microsoft Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/), and [Google ADK](https://google.github.io/adk-docs/), as well as custom enterprise agentic frameworks and simple Python agents.
Member:

This list is already non-comprehensive.

Suggested change
- 🧩 [**Framework Agnostic:**](./docs/source/components/integrations/frameworks.md) Work side-by-side with agentic frameworks to add the instrumentation necessary for observing, profiling, and optimizing your agents. Use the toolkit with popular frameworks such as [LangChain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/), [CrewAI](https://www.crewai.com/), [AG2](https://docs.ag2.ai), [Microsoft Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/), and [Google ADK](https://google.github.io/adk-docs/), as well as custom enterprise agentic frameworks and simple Python agents.
- 🧩 [**Framework Agnostic:**](./docs/source/components/integrations/frameworks.md) Work side-by-side with agentic frameworks to add the instrumentation necessary for observing, profiling, and optimizing your agents. Use the toolkit with popular frameworks such as [LangChain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/), [CrewAI](https://www.crewai.com/), [Microsoft Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/), and [Google ADK](https://google.github.io/adk-docs/), as well as custom enterprise agentic frameworks and simple Python agents.


Should I leave this unchanged?

Member:

Yes, please! We have some tidying to do anyway.

README.md Outdated
- [LangChain](https://github.com/langchain-ai/langchain)
- [Llama-Index](https://github.com/run-llama/llama_index)
- [Mem0ai](https://github.com/mem0ai/mem0)
- [AG2](https://github.com/ag2ai/ag2)
Member:

This is not in alphabetical order

import concurrent.futures
from typing import Any

from autogen.tools import Tool
Member:

Are all ag2 tools synchronous?

If this is based on Autogen, I'd consider looking at the implementation of autogen_tool_wrapper within nvidia_nat_autogen

most = [
"nvidia-nat-a2a == {version}",
"nvidia-nat-adk == {version}",
"nvidia-nat-ag2 == {version}",
Member:

Does ag2 not conflict with autogen at all?

Member:

Also, please ensure that all uv.lock files are generated and that you run all pre-commit hooks locally.

@jsun-m jsun-m requested a review from a team as a code owner March 20, 2026 01:46
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py`:
- Around line 68-69: The code imports and uses the synchronous
initiate_group_chat inside an async function; replace the import from
autogen.agentchat to use a_initiate_group_chat (the async entrypoint) instead of
initiate_group_chat and update the call site where initiate_group_chat(...) is
invoked (in the async routine that configures the group chat) to await
a_initiate_group_chat(...). Keep AutoPattern import unchanged and ensure any
error handling or returned values are handled after awaiting the async call.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_team.py`:
- Around line 126-129: The except block in ag2_team.py logs the exception with
logger.exception("Error in AG2 team workflow") but returns the raw exception
text (f"Error occurred during AG2 team workflow: {e!s}"), which can expose
sensitive/internal details; change the return to a generic, non-sensitive
message (e.g., "An internal error occurred during the AG2 team workflow") while
keeping the detailed exception only in logs (logger.exception) and, if desired,
add an internal error code or correlation id to the response for support
tracing; update the exception handler in the AG2 team workflow function where
logger.exception is called to remove inclusion of {e!s} from the returned
string.
- Around line 64-66: Replace the blocking synchronous call to
initiate_group_chat with the async variant a_initiate_group_chat and await it
inside the async function _ag2_team_workflow; update the import from
autogen.agentchat (or add) to bring in a_initiate_group_chat, and change the
call sites where initiate_group_chat() is used (e.g., the calls around lines
113–117) to await a_initiate_group_chat(...) so the event loop is not blocked.

In `@examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/traffic_status_tool.py`:
- Around line 128-129: The validation currently does a numeric comparison on
hour which can raise TypeError for non-numeric input; update the guard around
the hour check (the block using the variable hour) to first validate or coerce
its type — e.g., if not isinstance(hour, int): try to convert with hour =
int(hour) in a try/except and return the same "Invalid hour" message on
ValueError/TypeError — then perform the existing range check (0 <= hour <= 23)
and return the user-facing message if out of range.
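A minimal sketch of the coerce-then-range-check guard described above; the function name `check_hour` and the exact message text are illustrative, not the real tool's implementation:

```python
def check_hour(hour) -> str:
    """Validate an hour value that may arrive as a string from an LLM tool call."""
    if not isinstance(hour, int):
        try:
            # Coerce string-typed input before comparing; int("noon") or
            # int(None) raise rather than crashing the numeric comparison.
            hour = int(hour)
        except (TypeError, ValueError):
            return "Invalid hour: must be an integer between 0 and 23."
    if not 0 <= hour <= 23:
        return "Invalid hour: must be an integer between 0 and 23."
    return f"Checking traffic for hour {hour}"
```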

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 67ee943e-b5d5-4f58-9f4f-e6f4c2a2f7ee

📥 Commits

Reviewing files that changed from the base of the PR and between 75d6c7f and 8db5e81.

📒 Files selected for processing (13)
  • examples/frameworks/nat_ag2_demo/README.md
  • examples/frameworks/nat_ag2_demo/configs/config-eval.yml
  • examples/frameworks/nat_ag2_demo/configs/config-research.yml
  • examples/frameworks/nat_ag2_demo/configs/config.yml
  • examples/frameworks/nat_ag2_demo/data/toy_data.json
  • examples/frameworks/nat_ag2_demo/pyproject.toml
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_research_team.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/ag2_team.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/register.py
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/traffic_status_tool.py
  • packages/nvidia_nat_ag2/pyproject.toml
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py
✅ Files skipped from review due to trivial changes (6)
  • examples/frameworks/nat_ag2_demo/src/nat_ag2_demo/register.py
  • examples/frameworks/nat_ag2_demo/README.md
  • examples/frameworks/nat_ag2_demo/configs/config-research.yml
  • packages/nvidia_nat_ag2/pyproject.toml
  • examples/frameworks/nat_ag2_demo/configs/config-eval.yml
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/llm.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • examples/frameworks/nat_ag2_demo/pyproject.toml
  • packages/nvidia_nat_ag2/src/nat/plugins/ag2/tool_wrapper.py
  • examples/frameworks/nat_ag2_demo/configs/config.yml

@jsun-m jsun-m force-pushed the vasiliyr/03152026 branch from 0cf98b7 to b6d1e93 Compare March 20, 2026 18:01
@jsun-m jsun-m force-pushed the vasiliyr/03152026 branch from cb8d9d1 to ce90396 Compare March 20, 2026 18:30
@jsun-m jsun-m commented Mar 20, 2026

@willkill07 I can't request another review through GitHub, but can you take another look?
