
chore: sync main -> v2 #4885

Open
Jacksunwei wants to merge 50 commits into v2 from main

Conversation

@Jacksunwei
Collaborator

Automated sync of v1 changes from main into v2. The oncall is responsible for reviewing and merging this PR. Resolve conflicts in favor of the v2 implementation.

google-genai-bot and others added 30 commits March 10, 2026 21:29
…viderRegistry within CredentialManager

The flow for integrating a new auth method will be as follows. The ADK framework contributor will:
1. extend `AuthScheme` to create their own within `adk/auth/auth_scheme.py`
2. implement `BaseAuthProvider` within their dedicated directory in `adk/integrations/auth`
3. statically register the new scheme and provider with the AuthProviderRegistry of CredentialManager
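The three steps can be sketched roughly as follows. This is a hypothetical illustration of the registration pattern, not the real ADK API: `ApiKeyScheme`, `ApiKeyProvider`, and `AUTH_PROVIDER_REGISTRY` are made-up names.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass(frozen=True)
class ApiKeyScheme:
  """Step 1: a new AuthScheme variant (illustrative stand-in)."""
  header_name: str = "X-Api-Key"


class BaseAuthProvider(ABC):
  """Stand-in for the BaseAuthProvider contract."""

  @abstractmethod
  def exchange_credential(self, scheme, raw_credential):
    ...


class ApiKeyProvider(BaseAuthProvider):
  """Step 2: the provider implementation for the new scheme."""

  def exchange_credential(self, scheme, raw_credential):
    # A real provider would validate or transform the credential here.
    return {scheme.header_name: raw_credential}


# Step 3: static registration, keyed by scheme type, roughly how a
# CredentialManager-owned AuthProviderRegistry could map schemes to providers.
AUTH_PROVIDER_REGISTRY = {ApiKeyScheme: ApiKeyProvider()}

scheme = ApiKeyScheme()
provider = AUTH_PROVIDER_REGISTRY[type(scheme)]
headers = provider.exchange_credential(scheme, "secret-123")
```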

PiperOrigin-RevId: 881775983
PiperOrigin-RevId: 881782296
The `_update_type_string` function now recursively processes "properties" at any level of the schema, ensuring that all "type" fields within nested objects are correctly lowercased. This improves handling of complex nested schemas.
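A minimal sketch of the recursive lowercasing (the real function handles many more schema keywords than `properties`; see the reviewed code later in this PR):

```python
def lowercase_type_strings(schema) -> None:
  """Recursively lowercase "type" fields in a JSON-schema-like dict (sketch)."""
  if not isinstance(schema, dict):
    return
  if isinstance(schema.get("type"), str):
    schema["type"] = schema["type"].lower()
  # Recurse into every nested property schema, at any depth.
  for prop in schema.get("properties", {}).values():
    lowercase_type_strings(prop)


schema = {
    "type": "OBJECT",
    "properties": {
        "user": {
            "type": "OBJECT",
            "properties": {"age": {"type": "INTEGER"}},
        }
    },
}
lowercase_type_strings(schema)
```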

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 882050939
Co-authored-by: Yifan Wang <wanyif@google.com>
PiperOrigin-RevId: 882192675
This change adds logic to extract and re-embed the `thought_signature` field associated with function calls in Gemini models when converting between LiteLLM's ChatCompletionMessageToolCall and ADK's types.Part.

Close #4650

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 882212223
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 882212923
The `can_use_output_schema_with_tools` function now checks whether a model is a LiteLlm instance by inspecting its type's Method Resolution Order, rather than directly importing `LiteLlm`.

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 882253446
V4 still uses deprecated Node.js 20.

Co-authored-by: Liang Wu <wuliang@google.com>
PiperOrigin-RevId: 882275971
Co-authored-by: Liang Wu <wuliang@google.com>
PiperOrigin-RevId: 882293566
Co-authored-by: Liang Wu <18244712+wuliang229@users.noreply.github.com>
Co-authored-by: Guoyi Lou <guoyilou@google.com>
PiperOrigin-RevId: 882787811
Merge #4622

Co-authored-by: Xuan Yang <xygoogle@google.com>
COPYBARA_INTEGRATE_REVIEW=#4622 from rohityan:feat/IssueWatchdog 5609370
PiperOrigin-RevId: 882987318
This change introduces a new interceptor that adds the 'https://google.github.io/adk-docs/a2a/a2a-extension/' extension to the request headers in the A2A client from the RemoteAgent side. To send this extension along with requests, the RemoteAgent has to be instantiated with the `use_legacy` flag set to False. The AgentExecutor will default to the new implementation when this extension is requested by the client, but this behavior can be disabled via the `use_legacy` flag.
The 'force_new' flag on the agent_executor side can be used to bypass the presence of the extension, and always activate the new version of the agent_executor.

PiperOrigin-RevId: 883021792
Co-authored-by: Xuan Yang <xygoogle@google.com>
PiperOrigin-RevId: 883246168
…creation due to missing version field

Co-authored-by: Achuth Narayan Rajagopal <achuthr@google.com>
PiperOrigin-RevId: 883336463
Merge #4833

Syncs version bump and CHANGELOG from release v1.27.1 to main.

COPYBARA_INTEGRATE_REVIEW=#4833 from google:release/v1.27.1 bc1b500
PiperOrigin-RevId: 883394436
Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 883401159
Merge #4718

### Link to Issue or Description of Change

**1. Link to an existing issue (if applicable):**

- Closes: N/A
- Related: N/A

**2. Or, if no issue exists, describe the change:**

**Problem**

ADK’s MCP integration currently does not expose the MCP sampling callback capability.
This prevents agent-side LLM sampling handlers from being used when interacting with MCP servers that support sampling.

The MCP Python SDK supports sampling callbacks, but these parameters are not propagated through the ADK MCP integration layers.

**Solution**

Add sampling callback support by propagating the parameters through the MCP stack:

- Add `sampling_callback` and `sampling_capabilities` parameters to `McpToolset`
- Forward them to `MCPSessionManager`
- Forward them to `SessionContext`
- Pass them into `ClientSession` initialization

This enables agent-side sampling handling when interacting with MCP servers.
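The propagation chain above (McpToolset -> MCPSessionManager -> ClientSession) can be sketched as follows. Constructor signatures here are illustrative assumptions, not the real ADK or MCP SDK API.

```python
class ClientSession:
  """Stand-in for the MCP SDK session that accepts a sampling callback."""

  def __init__(self, sampling_callback=None):
    self.sampling_callback = sampling_callback


class MCPSessionManager:
  def __init__(self, sampling_callback=None):
    self._sampling_callback = sampling_callback

  def create_session(self) -> ClientSession:
    # Forwarded into ClientSession initialization.
    return ClientSession(sampling_callback=self._sampling_callback)


class McpToolset:
  def __init__(self, sampling_callback=None):
    # Forwarded to the session manager.
    self.session_manager = MCPSessionManager(sampling_callback=sampling_callback)


async def my_sampling_handler(context, params):
  # An agent-side LLM would produce the sampled completion here.
  return "sampled text"


toolset = McpToolset(sampling_callback=my_sampling_handler)
session = toolset.session_manager.create_session()
```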

---

### Testing Plan

**Unit Tests**

- [x] I have added or updated unit tests for my change.
- [x] All unit tests pass locally.

Added `test_mcp_sampling_callback.py` to verify that the sampling callback is correctly invoked.

Example result:

  pytest tests/unittests/tools/mcp_tool/test_mcp_sampling_callback.py
  1 passed

**Manual End-to-End (E2E) Tests**

Manual testing was performed using a FastMCP sampling example server where the sampling callback was invoked from the agent side and returned the expected response.

---

### Checklist

- [x] I have read the CONTRIBUTING.md document.
- [x] I have performed a self-review of my own code.
- [x] I have commented my code where necessary.
- [x] I have added tests proving the feature works.
- [x] Unit tests pass locally.
- [x] I have manually tested the change end-to-end.

---

### Additional context

This change aligns ADK MCP support with the sampling capabilities available in the MCP Python SDK and enables agent implementations to handle sampling requests via a callback.

Co-authored-by: Kathy Wu <wukathy@google.com>
COPYBARA_INTEGRATE_REVIEW=#4718 from Piyushmrya:fix-mcp-sampling-callback 18f477f
PiperOrigin-RevId: 883401178
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 883401885
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 883403479
Closes issue #4805

Co-authored-by: Liang Wu <wuliang@google.com>
PiperOrigin-RevId: 883403628
…-2-preview

The gemini-embedding-2-preview model requires the Vertex AI
:embedContent endpoint instead of the legacy :predict endpoint used
by older models (text-embedding-004, text-embedding-005).

In google-genai <1.64.0, embed_content() unconditionally routed to
:predict on Vertex AI, which returns FAILED_PRECONDITION for this
model.

v1.64.0 (googleapis/python-genai@af40cc6) introduced model-aware
dispatch in embed_content(): models with "gemini" in the name are
routed to :embedContent via t_is_vertex_embed_content_model(), while
older text-embedding-* models continue to use :predict.

This version also enforces a single-content-per-call limit for the
embedContent API, which is why FilesRetrieval sets embed_batch_size=1.
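The model-aware dispatch described above amounts to routing on the model name; the real logic lives in google-genai's `t_is_vertex_embed_content_model()`, so this helper is only a sketch of the behavior:

```python
def resolve_vertex_embed_endpoint(model: str) -> str:
  # Models with "gemini" in the name use :embedContent; legacy
  # text-embedding-* models stay on :predict.
  return ":embedContent" if "gemini" in model else ":predict"
```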

Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 883689438
…provision an Agent Engine if neither agent_engine_resource_name nor sandbox_resource_name is provided

The AgentEngineSandboxCodeExecutor now has three initialization modes:
1.  Create both an Agent Engine and a sandbox if neither resource name is provided.
2.  Create a new sandbox within a provided agent_engine_resource_name.
3.  Use a provided sandbox_resource_name.
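The mode selection can be sketched as a simple dispatch. `resolve_sandbox_mode` is a hypothetical helper, and the precedence when both names are supplied is an assumption here, not confirmed by the commit message:

```python
def resolve_sandbox_mode(agent_engine_resource_name=None,
                         sandbox_resource_name=None) -> str:
  if sandbox_resource_name is not None:
    return "use_existing_sandbox"          # mode 3
  if agent_engine_resource_name is not None:
    return "create_sandbox_in_engine"      # mode 2
  return "provision_engine_and_sandbox"    # mode 1
```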

PiperOrigin-RevId: 884088248
… execute_sql to support fine grained access controls

PiperOrigin-RevId: 884166439
The metric takes into account all the turns of the multi-turn conversation.

The class delegates the responsibility to the Vertex Gen AI Eval SDK. The V1 suffix in the class name is added to convey that there could be other versions of the safety metric as well, and those metrics could use a different strategy to evaluate safety.

Co-authored-by: Ankur Sharma <ankusharma@google.com>
PiperOrigin-RevId: 884504910
Tool use:
The class delegates the responsibility to the Vertex Gen AI Eval SDK. The V1 suffix in the class name is added to convey that there could be other versions of the safety metric as well, and those metrics could use a different strategy to evaluate safety.

Task trajectory:
This metric is different from `Multi-Turn Overall Task Success` in the sense that task success only concerns itself with whether the goal was achieved or not; how it was achieved is not its concern. This metric, on the other hand, does care about the path the agent took to achieve the goal.

Co-authored-by: Ankur Sharma <ankusharma@google.com>
PiperOrigin-RevId: 884525532
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 884557773
When using OAuth2Session with `client_secret_post`, Authlib automatically includes the client_id and client_secret in the request body. Explicitly passing `client_id` again results in a duplicate parameter in the token exchange request, which can cause issues with some OAuth providers.
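The duplicate-parameter problem can be illustrated with the standard library alone (no Authlib required): with `client_secret_post`, the configured auth method already contributes the client credentials to the token-request body, so a caller who passes `client_id` again produces a second copy.

```python
from urllib.parse import urlencode

# What the client_secret_post auth method already adds to the request body:
auth_method_params = [("client_id", "my-app"), ("client_secret", "s3cret")]
grant_params = [("grant_type", "authorization_code"), ("code", "abc123")]

# Buggy: the caller explicitly adds client_id on top of the auth method's copy.
buggy_body = urlencode(auth_method_params + grant_params + [("client_id", "my-app")])

# Fixed: let the configured auth method supply the client credentials.
fixed_body = urlencode(auth_method_params + grant_params)
```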

Close #4782

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 884574091
seanzhougoogle and others added 13 commits March 16, 2026 12:12
this sample can be used to test the latest gemini embedding model

Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 884574396
Merge #4818

**Please ensure you have read the [contribution guide](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) before creating a pull request.**

### Link to Issue or Description of Change

**2. Or, if no issue exists, describe the change:**

**Problem:**
`src/google/adk/models/google_llm.py` includes a mitigation link for `429 RESOURCE_EXHAUSTED`, but the current URL points to a broken docs anchor:
https://google.github.io/adk-docs/agents/models/#error-code-429-resource_exhausted

**Solution:**
Update the link to the current Gemini-specific docs page so users are directed to the correct troubleshooting section:
https://google.github.io/adk-docs/agents/models/google-gemini/#error-code-429-resource_exhausted

### Testing Plan

This is a small string-only fix for a broken documentation link.
No unit tests were added because there does not appear to be existing test coverage for this message and the change does not affect runtime behavior beyond the emitted URL.

**Unit Tests:**

- [ ] I have added or updated unit tests for my change.
- [ ] All unit tests pass locally.

_Please include a summary of passed `pytest` results._

**Manual End-to-End (E2E) Tests:**

Confirmed the updated URL in the source points to the intended documentation page/anchor.

### Checklist

- [x] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document.
- [x] I have performed a self-review of my own code.
- [ ] I have commented my code, particularly in hard-to-understand areas.
- [ ] I have added tests that prove my fix is effective or that my feature works.
- [ ] New and existing unit tests pass locally with my changes.
- [x] I have manually tested my changes end-to-end.
- [ ] Any dependent changes have been merged and published in downstream modules.

### Additional context

This change only updates the documentation URL shown in the `RESOURCE_EXHAUSTED` guidance message.

COPYBARA_INTEGRATE_REVIEW=#4818 from ftnext:fix-429-doc-link 1c53345
PiperOrigin-RevId: 884581120
The previous test for unmapped LiteLLM finish_reason values was ineffective because LiteLLM's internal models normalize certain values (e.g., "eos" to "stop") before ADK processes them.

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 884678119
Co-authored-by: Xuan Yang <xygoogle@google.com>
PiperOrigin-RevId: 884686010
This change introduces a new `SpannerAdminToolset` with tools for managing Google Cloud Spanner resources. The toolset includes functions to list and get details of Spanner instances and instance configurations, and to create and list databases. The new toolset is marked as experimental. Unit tests for the new admin tools are also added.

PiperOrigin-RevId: 885116111
The Vertex AI session service does not natively support persisting usage_metadata. This change serializes usage_metadata into the custom_metadata field under the key '_usage_metadata' when appending events and deserializes it back when retrieving events. This allows usage information to be round-tripped through the Vertex AI session service.
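The round-trip can be sketched as a pair of helpers: stash `usage_metadata` under the `'_usage_metadata'` key in `custom_metadata` on append, restore it on read. The key name comes from the description above; the helper names and event shape are illustrative.

```python
import json

_USAGE_KEY = "_usage_metadata"


def serialize_for_append(event: dict) -> dict:
  """Move usage_metadata into custom_metadata before persisting (sketch)."""
  stored = dict(event)
  usage = stored.pop("usage_metadata", None)
  if usage is not None:
    custom = dict(stored.get("custom_metadata") or {})
    custom[_USAGE_KEY] = json.dumps(usage)
    stored["custom_metadata"] = custom
  return stored


def deserialize_on_get(stored: dict) -> dict:
  """Restore usage_metadata from custom_metadata on retrieval (sketch)."""
  event = dict(stored)
  custom = dict(event.get("custom_metadata") or {})
  raw = custom.pop(_USAGE_KEY, None)
  if raw is not None:
    event["usage_metadata"] = json.loads(raw)
    if custom:
      event["custom_metadata"] = custom
    else:
      event.pop("custom_metadata", None)
  return event


event = {"id": "e1", "usage_metadata": {"prompt_tokens": 12, "candidate_tokens": 34}}
round_tripped = deserialize_on_get(serialize_for_append(event))
```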

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 885121070
…ntegration

This change enables the LiteLLM adapter to correctly parse and generate Anthropic's structured "thinking_blocks" format, which includes a "signature" for each thought block. The "signature" is crucial for Anthropic models to maintain their reasoning state across multiple turns, particularly when tool calls are made.

Close #4801

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 885131757
Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 885138365
Merge #4862

Syncs version bump and CHANGELOG from release v1.27.2 to main.

COPYBARA_INTEGRATE_REVIEW=#4862 from google:release/v1.27.2 70847b6
PiperOrigin-RevId: 885216591
When PR #2872 migrated RestApiTool from requests (sync) to
httpx.AsyncClient (async), the _request function was left without an
explicit timeout parameter. Unlike requests which has no default timeout,
httpx defaults to 5 seconds — causing ReadTimeout errors for any API
call exceeding that limit. This regression blocked users from upgrading
past 1.23.0.

Set timeout=None on httpx.AsyncClient to restore parity with the
previous requests-based behavior (no timeout).

Fixes #4431

Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 885287948
This change introduces optimistic concurrency control for session updates. Instead of automatically reloading and merging when an append is attempted on a session that has been modified in storage, the service now raises a ValueError.
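A minimal sketch of the new behavior, with illustrative names: an append carries the revision the caller last saw, and a mismatch now raises ValueError instead of transparently reloading and merging.

```python
class SessionStore:
  """Toy store demonstrating optimistic concurrency control (sketch)."""

  def __init__(self):
    self.events = []
    self.revision = 0

  def append_event(self, event, expected_revision: int):
    if expected_revision != self.revision:
      # Previously: reload the session and merge. Now: fail fast.
      raise ValueError("Session was modified in storage; reload and retry.")
    self.events.append(event)
    self.revision += 1


store = SessionStore()
store.append_event({"id": "e1"}, expected_revision=0)
```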

Close #4751

Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 885334444
Merge #4879

COPYBARA_INTEGRATE_REVIEW=#4879 from sasha-gitg:add-v2-release-workflows-to-main f28f3d1
PiperOrigin-RevId: 885627463
@google-cla

google-cla bot commented Mar 18, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on synchronizing the v2 branch with the main branch, introducing new features, improving existing functionalities, and updating dependencies. It includes the addition of a new sample agent for issue monitoring, enhancements to A2A agent execution, and several updates to tool integrations and configurations.

Highlights

  • Version Synchronization: This PR synchronizes changes from the main branch into the v2 branch, ensuring v2 stays up-to-date with the latest developments.
  • Release Please Configuration: The PR updates the release-please configuration files to manage versioning and changelog generation for both the main and v2 branches.
  • New Sample Agent: Introduces a new sample agent, the ADK Issue Monitoring Agent, designed to automatically audit GitHub repository issues for spam and inappropriate content.
  • A2A Executor Implementation: Adds a new integration extension to A2A Agent Executor, and modifies the A2aAgentExecutor to check for the new version extension.
  • Tooling and Dependency Updates: Includes updates to dependencies, new Spanner admin tools, and enhancements to the Langchain and CrewAI tool integrations.


Ignored Files
  • Ignored by pattern: .github/workflows/** (13)
    • .github/workflows/analyze-releases-for-adk-docs-updates.yml
    • .github/workflows/issue-monitor.yml
    • .github/workflows/release-cherry-pick.yml
    • .github/workflows/release-cut.yml
    • .github/workflows/release-finalize.yml
    • .github/workflows/release-please.yml
    • .github/workflows/release-publish.yml
    • .github/workflows/release-v2-cherry-pick.yml
    • .github/workflows/release-v2-cut.yml
    • .github/workflows/release-v2-finalize.yml
    • .github/workflows/release-v2-please.yml
    • .github/workflows/release-v2-publish.yml
    • .github/workflows/v2-sync.yml


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new ADK Issue Monitoring Agent sample for GitHub spam detection, updates release configurations and changelogs to version 1.27.2, and refactors several core components. Key changes include updating dependency versions for google-genai and google-cloud-aiplatform, enhancing A2A agent executor and remote agent implementations with new integration extensions, and introducing new multi-turn evaluation metrics for Vertex AI. Additionally, the SpannerAdminToolset is added, and CrewaiTool and LangchainTool are moved to integrations modules. Review comments highlight potential issues with incorrect copyright years (2026 or 2025) in several new sample files, suggest updating an owner mapping in adk_triaging_agent/agent.py, recommend using a constant directly instead of a dictionary in a2a_agent_executor_impl.py, advise adding a comment to explain a new request interceptor in remote_a2a_agent.py, and point out a potential stack overflow risk in anthropic_llm.py due to unbounded recursion in JSON schema processing.

Comment on lines +1 to +13
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

medium

The copyright header includes the year 2026. Ensure this is the correct and intended year.


Comment on lines 307 to 311
        _get_adk_metadata_key('session_id'): executor_context.session_id,
        # TODO: Remove this metadata once the new agent executor
        # is fully adopted.
        _get_adk_metadata_key('agent_executor_v2'): True,
        _NEW_A2A_ADK_INTEGRATION_EXTENSION: {'adk_agent_executor_v2': True},
    }

medium

This change replaces the metadata key with a constant. It would be better to use the constant directly instead of creating a dictionary.

        _NEW_A2A_ADK_INTEGRATION_EXTENSION: {'adk_agent_executor_v2': True}

Comment on lines +193 to +198
    if not use_legacy:
      if self._config.request_interceptors is None:
        self._config.request_interceptors = []
      self._config.request_interceptors.append(
          _new_integration_extension_interceptor
      )

medium

This change adds a request interceptor when use_legacy is false. Consider adding a comment to explain why this interceptor is needed.

<!doctype html>
<!--
Copyright 2026 Google LLC
Copyright 2025 Google LLC

medium

The copyright header includes the year 2025. Ensure this is the correct and intended year.

Comment on lines +254 to +307
def _update_type_string(value: Any):
  """Lowercases nested JSON schema type strings for Anthropic compatibility."""
  if isinstance(value, list):
    for item in value:
      _update_type_string(item)
    return

  if not isinstance(value, dict):
    return

  schema_type = value.get("type")
  if isinstance(schema_type, str):
    value["type"] = schema_type.lower()

  for dict_key in (
      "$defs",
      "defs",
      "dependentSchemas",
      "patternProperties",
      "properties",
  ):
    child_dict = value.get(dict_key)
    if isinstance(child_dict, dict):
      for child_value in child_dict.values():
        _update_type_string(child_value)

  for single_key in (
      "additionalProperties",
      "additional_properties",
      "contains",
      "else",
      "if",
      "items",
      "not",
      "propertyNames",
      "then",
      "unevaluatedProperties",
  ):
    child_value = value.get(single_key)
    if isinstance(child_value, (dict, list)):
      _update_type_string(child_value)

  for list_key in (
      "allOf",
      "all_of",
      "anyOf",
      "any_of",
      "oneOf",
      "one_of",
      "prefixItems",
  ):
    child_list = value.get(list_key)
    if isinstance(child_list, list):
      _update_type_string(child_list)

medium

This function recursively updates the type strings in the JSON schema. This could potentially lead to unexpected behavior if the schema is very complex or deeply nested. Consider adding a limit to the recursion depth to prevent stack overflow errors.

google-genai-bot and others added 7 commits March 18, 2026 13:28
…tracing into environment simulation

PiperOrigin-RevId: 885759367
…ing to LiveConnectConfig

RunConfig.response_modalities is typed as list[str] for backward
compatibility, but LiveConnectConfig.response_modalities expects
list[Modality] (an enum). Assigning strings directly causes a Pydantic
serialization warning on every live streaming session:

  PydanticSerializationUnexpectedValue(Expected `enum` - serialized
  value may not be as expected [field_name='response_modalities',
  input_value='AUDIO', input_type=str])

Convert each modality value to types.Modality at the assignment point in
basic.py using types.Modality(m), which is a no-op for values that are
already Modality enums and converts plain strings like "AUDIO" to
Modality.AUDIO. The public RunConfig interface remains list[str] so
existing callers are unaffected.
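The reason a single `types.Modality(m)` expression handles both input shapes is standard Enum behavior: calling an Enum with a value string converts it, and calling it with an existing member returns the member unchanged. A stand-in enum demonstrates this; the real `types.Modality` lives in the google-genai SDK.

```python
from enum import Enum


class Modality(str, Enum):  # simplified stand-in for types.Modality
  AUDIO = "AUDIO"
  TEXT = "TEXT"


# A mixed list, as RunConfig.response_modalities (typed list[str]) allows:
response_modalities = ["AUDIO", Modality.TEXT]

# Modality("AUDIO") converts the string; Modality(Modality.TEXT) is a no-op.
converted = [Modality(m) for m in response_modalities]
```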

Fixes: #4869

Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 885781014
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 885801247
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 885813216
Co-authored-by: Kathy Wu <wukathy@google.com>
PiperOrigin-RevId: 885814460
Co-authored-by: Xuan Yang <xygoogle@google.com>
PiperOrigin-RevId: 885829552
Merge #4780

## Bug

`AnthropicLlm.part_to_message_block()` only serializes `FunctionResponse.response` dicts that contain a `"content"` or `"result"` key. When neither key is present the variable `content` stays as `""` and an empty `ToolResultBlockParam` is sent to Claude.

This silently drops the output of several `SkillToolset` tools:

| Tool | Keys returned | Handled before this fix? |
|---|---|---|
| `load_skill` (success) | `skill_name`, `instructions`, `frontmatter` | **No** |
| `run_skill_script` (success) | `skill_name`, `script_path`, `stdout`, `stderr`, `status` | **No** |
| Any skill tool (error) | `error`, `error_code` | **No** |
| `load_skill_resource` (success) | `skill_name`, `path`, `content` | Yes (`"content"` key) |

Because `load_skill` is the entry-point for skill instructions, Claude models using `SkillToolset` **never received skill instructions**, making the feature completely non-functional with Anthropic models.

## Fix

Added an `else` branch in `part_to_message_block()` that JSON-serializes the full response dict when neither `"content"` nor `"result"` is present:

```python
elif response_data:
    # Fallback: serialize the entire response dict as JSON so that tools
    # returning arbitrary key structures (e.g. load_skill returning
    # {"skill_name", "instructions", "frontmatter"}) are not silently
    # dropped.
    content = json.dumps(response_data)
```

This is consistent with how Gemini handles it — the Gemini integration passes `types.Part` objects directly to the Google GenAI SDK which serializes them natively, so there is no key-based filtering at all.

## Testing plan

Added 4 new unit tests to `tests/unittests/models/test_anthropic_llm.py`:

- `test_part_to_message_block_arbitrary_dict_serialized_as_json` — covers the `load_skill` response shape
- `test_part_to_message_block_run_skill_script_response` — covers the `run_skill_script` response shape
- `test_part_to_message_block_error_response_not_dropped` — covers error dict responses
- `test_part_to_message_block_empty_response_stays_empty` — ensures empty dict still produces empty content (no regression)

All 35 tests in `test_anthropic_llm.py` pass:

```
35 passed in 7.32s
```

Run with:
```bash
uv sync --extra test
pytest tests/unittests/models/test_anthropic_llm.py -v
```

Co-authored-by: Kathy Wu <wukathy@google.com>
COPYBARA_INTEGRATE_REVIEW=#4780 from akashbangad:fix/anthropic-llm-skill-toolset-fallback c23ad37
PiperOrigin-RevId: 885831845