Conversation
…arify usage in multimodal tasks
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request primarily addresses two key issues: first, improving compatibility with non-standard OpenAI API responses, in particular handling the potentially missing
Highlights
Changelog
Activity
Hey - I've found 1 issue, and left some high level feedback:
- In the `_query_stream` error handler you now log the entire `chunk` object at warning level with `exc_info=True`; consider either truncating/structuring this data or lowering it to debug to avoid excessively large logs and potential leakage of sensitive content.
- The new empty-response fallback condition in `_iter_llm_responses_with_fallback` only checks `resp.completion_text` and `resp.tools_call_args`; if other non-textual outputs (e.g., images, function call ids, or metadata) can represent a valid reply, you may want to include them in the emptiness check to avoid incorrectly triggering fallback.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the `_query_stream` error handler you now log the entire `chunk` object at warning level with `exc_info=True`; consider either truncating/structuring this data or lowering it to debug to avoid excessively large logs and potential leakage of sensitive content.
- The new empty-response fallback condition in `_iter_llm_responses_with_fallback` only checks `resp.completion_text` and `resp.tools_call_args`; if other non-textual outputs (e.g., images, function call ids, or metadata) can represent a valid reply, you may want to include them in the emptiness check to avoid incorrectly triggering fallback.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="318-324" />
<code_context>
+ if tool_call.type is None:
+ tool_call.type = "function"
+
try:
state.handle_chunk(chunk)
except Exception as e:
- logger.warning("Saving chunk state error: " + str(e))
+ logger.warning(
+ f"Saving chunk state error: {type(e).__name__}: {e}. Chunk data: {chunk}",
+ exc_info=True,
+ )
</code_context>
<issue_to_address>
**🚨 suggestion (security):** Logging full chunk objects on errors may be heavy and potentially sensitive; consider truncating or summarizing.
Logging the full `chunk` can inflate log volume and may include user content you don’t want stored verbatim. Consider logging a truncated/summary view (e.g., `repr(chunk)[:N]` or just key identifiers) or moving the full payload behind a debug-level log, while keeping the exception type/message at warning level.
```suggestion
try:
state.handle_chunk(chunk)
except Exception as e:
# Log a truncated summary of the chunk at warning level to avoid
# excessive log volume and potential sensitive data exposure.
max_chunk_log_len = 500
chunk_repr = repr(chunk)
if len(chunk_repr) > max_chunk_log_len:
chunk_repr = chunk_repr[:max_chunk_log_len] + "... [truncated]"
logger.warning(
"Saving chunk state error: %s: %s. Chunk summary (repr, max %d chars): %s",
type(e).__name__,
e,
max_chunk_log_len,
chunk_repr,
exc_info=True,
)
# Keep full payload only at debug level for deeper troubleshooting.
logger.debug("Full chunk data for error: %r", chunk, exc_info=True)
```
</issue_to_address>
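The truncation pattern in this suggestion can also be factored into a small reusable helper; the sketch below uses a hypothetical `summarize_for_log` name and mirrors the 500-character cap from the suggestion above:

```python
def summarize_for_log(obj, max_len: int = 500) -> str:
    """Return repr(obj) truncated to max_len characters for safe logging.

    Sketch only: the helper name and the 500-character default are
    illustrative, mirroring the review suggestion above.
    """
    text = repr(obj)
    if len(text) > max_len:
        return text[:max_len] + "... [truncated]"
    return text
```

The error handler would then log `summarize_for_log(chunk)` at warning level and keep the full payload behind a debug-level log.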
Code Review
This Pull Request introduces compatibility fixes for non-standard OpenAI adapter responses in openai_source.py, ensuring tool_call.type is not empty in streaming responses, and improves error logging. It also adds a fallback mechanism in tool_loop_agent_runner.py for non-streaming requests with empty upstream responses, enhancing system robustness and debuggability. A potential security vulnerability was identified in openai_source.py where logging raw LLM response chunks could lead to the leakage of sensitive information or PII. It is recommended to sanitize or remove this raw data from log messages. Additionally, there is an opportunity to improve code readability by reducing nesting, which has been detailed in a specific review comment.
@sourcery-ai review
Hey - I've found 1 issue, and left some high level feedback:
- When normalizing `tool_call.type`, consider logging at debug level whenever you patch a missing type so that unexpected upstream behavior can be diagnosed without silently masking malformed responses.
- The broad `except Exception as e` around `state.handle_chunk(chunk)` may hide programming errors; if possible, narrow this to the specific assertion/SDK errors you expect from malformed chunks or re-raise unexpected exceptions after logging.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- When normalizing `tool_call.type`, consider logging at debug level whenever you patch a missing type so that unexpected upstream behavior can be diagnosed without silently masking malformed responses.
- The broad `except Exception as e` around `state.handle_chunk(chunk)` may hide programming errors; if possible, narrow this to the specific assertion/SDK errors you expect from malformed chunks or re-raise unexpected exceptions after logging.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="321-324" />
<code_context>
+ if getattr(tool_call, "type", None) in (None, ""):
+ tool_call.type = "function"
+
try:
state.handle_chunk(chunk)
except Exception as e:
- logger.warning("Saving chunk state error: " + str(e))
+ logger.warning(f"Saving chunk state error: {e}")
if len(chunk.choices) == 0:
continue
</code_context>
<issue_to_address>
**suggestion:** Logging the exception without a traceback makes debugging stream issues harder.
Since this block is specifically guarding `state.handle_chunk(chunk)` and may catch assertion errors from malformed tool calls, logging only `{e}` drops the stack trace and context. Use `logger.exception("Saving chunk state error")` or `logger.warning("Saving chunk state error", exc_info=True)` so the traceback is preserved while the loop continues.
```suggestion
try:
state.handle_chunk(chunk)
except Exception:
logger.exception("Saving chunk state error")
```
</issue_to_address>
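As a small, self-contained illustration of the difference (not code from this PR), the snippet below shows that `logger.exception` preserves the traceback while a plain `logger.warning("%s", e)` records only the message:

```python
import io
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))

try:
    raise AssertionError("malformed tool call")
except Exception as e:
    # Message only -- the stack trace is lost.
    logger.warning("Saving chunk state error: %s", e)
    # logger.exception implies exc_info=True, so the traceback is kept.
    logger.exception("Saving chunk state error")

out = buf.getvalue()
```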
@sourcery-ai review
Hey - I've found 1 issue, and left some high level feedback:
- In the compatibility loop over `choice.delta.tool_calls`, consider handling both object- and dict-like tool_call payloads (e.g., via `hasattr` / `isinstance(dict)` or a small helper) so that assigning `tool_call.type = "function"` doesn't break when adapters return plain dicts or non-Pydantic objects.
- The new logging on `handle_chunk` errors currently emits both a debug log (with `exc_info`) and a warning log with nearly identical messages; you might want to consolidate these into a single structured log call to avoid redundant noise while keeping full exception context.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the compatibility loop over `choice.delta.tool_calls`, consider handling both object- and dict-like tool_call payloads (e.g., via `hasattr` / `isinstance(dict)` or a small helper) so that assigning `tool_call.type = "function"` doesn't break when adapters return plain dicts or non-Pydantic objects.
- The new logging on `handle_chunk` errors currently emits both a debug log (with `exc_info`) and a warning log with nearly identical messages; you might want to consolidate these into a single structured log call to avoid redundant noise while keeping full exception context.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="314-323" />
<code_context>
+ for choice in chunk.choices or []:
</code_context>
<issue_to_address>
**issue (bug_risk):** Use a consistent `chunk.choices` null/empty check to avoid a potential `TypeError`.
In this block, you guard `chunk.choices` with `chunk.choices or []`, but a few lines later you call `len(chunk.choices)` without a null check. If a provider returns `choices=None`, this will raise a `TypeError`. Please update the later check to follow the same pattern, e.g.:
```python
if not chunk.choices:
continue
```
</issue_to_address>
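A minimal reproduction of the inconsistency, using a hypothetical `Chunk` dataclass standing in for the SDK's `ChatCompletionChunk`:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Chunk:
    """Hypothetical stand-in for ChatCompletionChunk."""
    choices: Optional[List[object]] = None

chunk = Chunk(choices=None)

# The guarded loop tolerates choices=None...
seen = [c for c in chunk.choices or []]

# ...but len() raises TypeError on None, so the later check must not rely on it.
try:
    empty = len(chunk.choices) == 0
except TypeError:
    empty = True  # `if not chunk.choices:` covers None and [] alike
```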
@sourcery-ai review
Hey - I've left some high level feedback:
- When patching `tool_call.type`, consider guarding with `hasattr(tool_call, "type")` (or using `setattr`) to avoid potential AttributeError if upstream objects diverge from the expected schema.
- The condition `if not chunk.choices or len(chunk.choices) == 0:` is redundant; `if not chunk.choices:` is sufficient and slightly clearer.
- The new error logging logs both a `debug` message (with stack trace) and a `warning` for the same exception; consider keeping only one log level or making the messages distinct to avoid noisy, duplicated logs.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- When patching `tool_call.type`, consider guarding with `hasattr(tool_call, "type")` (or using `setattr`) to avoid potential AttributeError if upstream objects diverge from the expected schema.
- The condition `if not chunk.choices or len(chunk.choices) == 0:` is redundant; `if not chunk.choices:` is sufficient and slightly clearer.
- The new error logging logs both a `debug` message (with stack trace) and a `warning` for the same exception; consider keeping only one log level or making the messages distinct to avoid noisy, duplicated logs.
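A sketch of the guarded patching suggested above; `normalize_tool_call_type` and `FakeToolCall` are hypothetical names, and the dict branch additionally covers adapters that return plain dicts:

```python
def normalize_tool_call_type(tool_call) -> None:
    """Patch a missing/empty tool_call type in place, tolerating both
    attribute-style objects and plain dicts (compatibility sketch)."""
    if isinstance(tool_call, dict):
        if not tool_call.get("type"):
            tool_call["type"] = "function"
    elif hasattr(tool_call, "type") and tool_call.type in (None, ""):
        tool_call.type = "function"

class FakeToolCall:
    """Hypothetical stand-in for the SDK's streamed tool-call delta."""
    def __init__(self, type=None):
        self.type = type

obj_call = FakeToolCall()                 # object-style payload, type missing
dict_call = {"id": "call_1", "type": ""}  # dict-style payload, type empty
normalize_tool_call_type(obj_call)
normalize_tool_call_type(dict_call)
```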
@sourcery-ai review
Hey - I've found 1 issue, and left some high level feedback:
- The heuristic for triggering fallback on `not resp.completion_text and not resp.tools_call_args and not is_last_candidate` is labeled as non-stream-only but is currently applied to every `resp` in `_iter_llm_responses_with_fallback`; consider explicitly gating this on non-streaming responses (or a terminal response flag) so that early/partial streaming chunks or role-only chunks do not cause premature fallback.
- In the compatibility loop that fills missing `tool_call.type`, you only handle the case where `hasattr(tool_call, "type")` and the value is empty; it may be useful to at least log or guard against tool call objects that completely lack a `type` attribute so they are visible in debugging rather than silently skipped.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The heuristic for triggering fallback on `not resp.completion_text and not resp.tools_call_args and not is_last_candidate` is labeled as non-stream-only but is currently applied to every `resp` in `_iter_llm_responses_with_fallback`; consider explicitly gating this on non-streaming responses (or a terminal response flag) so that early/partial streaming chunks or role-only chunks do not cause premature fallback.
- In the compatibility loop that fills missing `tool_call.type`, you only handle the case where `hasattr(tool_call, "type")` and the value is empty; it may be useful to at least log or guard against tool call objects that completely lack a `type` attribute so they are visible in debugging rather than silently skipped.
## Individual Comments
### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="246-249" />
<code_context>
async for resp in self._iter_llm_responses(include_model=idx == 0):
+            # If the reply is empty with no tool calls, and this is not the last fallback candidate, trigger fallback (non-stream only)
+            # Do not check whether the whole message chain is empty here, because the chain spans the entire conversation and an empty reply can occur at any stage
+ if (
+ not resp.completion_text
+ and not resp.tools_call_args
+ and not is_last_candidate
+ ):
+ logger.warning(
</code_context>
<issue_to_address>
**issue (bug_risk):** Empty-response fallback condition can incorrectly trigger for streaming chunks.
This condition executes before `resp.is_chunk` is checked and only inspects `completion_text` and `tools_call_args`. For streaming responses, many intermediate chunks legitimately have empty text and no tool calls, so this could incorrectly trigger fallback mid-stream. Since the comment notes this is "non-stream only", please guard it with a non-stream check (e.g. `if not resp.is_chunk and ...`) so streaming responses aren't aborted prematurely.
</issue_to_address>
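The guard described here can be expressed as a pure predicate; `LLMResponse` below is a minimal stand-in for the project's response type, not its real definition:

```python
from dataclasses import dataclass, field

@dataclass
class LLMResponse:
    """Minimal stand-in for the project's LLM response object (assumed fields)."""
    completion_text: str = ""
    tools_call_args: list = field(default_factory=list)
    is_chunk: bool = False

def should_fallback(resp: LLMResponse, is_last_candidate: bool) -> bool:
    # Only the final aggregated response may be judged "empty":
    # intermediate streaming chunks legitimately carry no text or tool calls.
    return (
        not resp.is_chunk
        and not resp.completion_text
        and not resp.tools_call_args
        and not is_last_candidate
    )
```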
@sourcery-ai review
Hey - I've found 2 issues, and left some high level feedback:
- The `tool_call.type` auto-fix runs on every chunk and logs a warning each time; consider downgrading to debug or adding enough context (e.g., model/provider id, chunk index) to avoid noisy logs and make it easier to trace which upstream is misbehaving.
- The empty-response fallback check only inspects `completion_text` and `tools_call_args`; you may want to normalize/strip text and explicitly handle cases like whitespace-only responses or other non-text payloads to avoid triggering fallback when the model actually replied.
- Raising a bare `RuntimeError` on empty assistant messages changes error semantics; consider using a more specific exception type (or reusing an existing project-specific one) so callers can distinguish this condition from other runtime failures.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `tool_call.type` auto-fix runs on every chunk and logs a warning each time; consider downgrading to debug or adding enough context (e.g., model/provider id, chunk index) to avoid noisy logs and make it easier to trace which upstream is misbehaving.
- The empty-response fallback check only inspects `completion_text` and `tools_call_args`; you may want to normalize/strip text and explicitly handle cases like whitespace-only responses or other non-text payloads to avoid triggering fallback when the model actually replied.
- Raising a bare `RuntimeError` on empty assistant messages changes error semantics; consider using a more specific exception type (or reusing an existing project-specific one) so callers can distinguish this condition from other runtime failures.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="307-313" />
<code_context>
+ # may omit the type field. Since ChatCompletionStreamState.handle_chunk in the openai SDK
+ # has an internal assert tool.type == "function", a missing field causes an AssertionError.
+ # Therefore, if a tool_call with an empty type is detected, manually fill it in as "function" here.
+ for choice in chunk.choices or []:
+ if not choice.delta or not choice.delta.tool_calls:
+ continue
+ for tool_call in choice.delta.tool_calls:
+ if hasattr(tool_call, "type") and tool_call.type in (None, ""):
+ logger.warning(f"tool_call.type is empty, manually set to 'function'")
+ tool_call.type = "function"
+
</code_context>
<issue_to_address>
**suggestion (performance):** Consider downgrading or rate-limiting the tool_call.type correction warning to avoid log spam.
With providers that routinely omit `tool_call.type`, this will log a warning for every chunk and can overwhelm logs at scale. Since this is an expected compatibility fix, consider using `debug` or logging once per request to keep logs useful while still enabling diagnosis.
```suggestion
state = ChatCompletionStreamState()
# Track whether the tool_call.type correction warning has already been logged once for this request,
# to avoid producing excessive duplicate logs when streaming returns many chunks.
tool_type_warning_logged = False
async for chunk in stream:
    # Compatibility fix: some non-standard aggregation platforms (e.g. Gemini proxied through a newapi adapter layer)
    # may omit the type field when streaming tool_calls. Since ChatCompletionStreamState.handle_chunk in the
    # openai SDK has an internal assert tool.type == "function", a missing field causes an AssertionError.
    # Therefore, if a tool_call with an empty type is detected, manually fill it in as "function" here.
```
</issue_to_address>
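The flag-based deduplication started in this suggestion could be completed with a small closure created once per streaming request; `make_once_logger` is an illustrative name, not project code:

```python
def make_once_logger(log_fn):
    """Wrap log_fn so only the first call is forwarded; later calls are
    dropped (once-per-request log deduplication sketch)."""
    fired = False

    def log_once(*args, **kwargs):
        nonlocal fired
        if not fired:
            fired = True
            log_fn(*args, **kwargs)

    return log_once

# Usage sketch: create one wrapper per request, call it from the chunk loop.
calls = []
warn_once = make_once_logger(calls.append)
for _ in range(5):
    warn_once("tool_call.type is empty, manually set to 'function'")
```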
### Comment 2
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="246-255" />
<code_context>
async for resp in self._iter_llm_responses(include_model=idx == 0):
+            # If the reply is empty with no tool calls, and this is not the last fallback candidate, trigger fallback
+            # Do not check whether the whole message chain is empty here, because the chain spans the entire conversation and an empty reply can occur at any stage
+ if (
+ not resp.completion_text
+ and not resp.tools_call_args
+ and not is_last_candidate
+ ):
+ logger.warning(
+ "Chat Model %s returns empty response, trying fallback to next provider.",
+ candidate_id,
+ )
+ break
+
if resp.is_chunk:
</code_context>
<issue_to_address>
**issue (bug_risk):** Empty-response fallback condition will trigger on streaming chunks and prematurely break streaming.
Because this check runs before `resp.is_chunk`, it will match the very first streaming chunk (which normally has no `completion_text` or tool calls), immediately breaking the loop and triggering fallback. This effectively disables streaming. This logic should only run on the final, aggregated response (e.g. `if not resp.is_chunk and ...`), or be moved outside/after the streaming loop so intermediate chunks don't short-circuit the stream.
</issue_to_address>
Add a new empty-response exception class and use it to report empty-response errors; change the compatibility log in the OpenAI adapter: downgrade it to debug and include detailed information.
@sourcery-ai review
Hey - I've found 3 issues, and left some high level feedback:
- The logic for detecting an "empty" LLM reply in `_iter_llm_responses_with_fallback` has become fairly complex; consider extracting this heuristic (including the scan over `result_chain` and the tool-call checks) into a small helper function so the criteria are centralized and easier to reuse/adjust at other call sites.
- When normalizing `tool_call.type` in `_query_stream`, you already log the model and provider at debug level; consider including the same identifiers in the `Saving chunk state error` warning so that the normalization can be correlated with subsequent failures when debugging in production.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The logic for detecting an "empty" LLM reply in `_iter_llm_responses_with_fallback` has become fairly complex; consider extracting this heuristic (including the scan over `result_chain` and the tool-call checks) into a small helper function so the criteria are centralized and easier to reuse/adjust at other call sites.
- When normalizing `tool_call.type` in `_query_stream`, you already log the model and provider at debug level; consider including the same identifiers in the `Saving chunk state error` warning so that the normalization can be correlated with subsequent failures when debugging in production.
## Individual Comments
### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="541" />
<code_context>
"LLM returned empty assistant message with no tool calls."
)
+ # 若所有fallback使用完毕后依然为空回复 则显示执行报错 避免静默
+ raise LLMEmptyResponseError("LLM returned empty assistant message with no tool calls.")
+
self.run_context.messages.append(Message(role="assistant", content=parts))
</code_context>
<issue_to_address>
**suggestion:** 建议在 LLMEmptyResponseError 中包含更多上下文信息,以便更容易调试
抛出特定的 `LLMEmptyResponseError` 是有帮助的,但当前异常缺少触发它的具体模型 / provider 或运行实例的信息。请在错误消息(或异常属性)中加入 model id、provider id 和/或会话 / 运行 id 等标识,这样日志和异常处理逻辑就能更快定位到出问题的后端。
建议的实现如下:
```python
logger.warning(
"LLM returned empty assistant message with no tool calls."
)
# 若所有fallback使用完毕后依然为空回复 则显示执行报错 避免静默
base_msg = "LLM returned empty assistant message with no tool calls."
# 尝试附加更多上下文信息,便于排查问题
model_id = getattr(self.run_context, "model_id", None)
provider_id = getattr(self.run_context, "provider_id", None)
run_id = getattr(self.run_context, "run_id", None)
ctx_parts = []
if model_id is not None:
ctx_parts.append(f"model_id={model_id}")
if provider_id is not None:
ctx_parts.append(f"provider_id={provider_id}")
if run_id is not None:
ctx_parts.append(f"run_id={run_id}")
if ctx_parts:
base_msg = f"{base_msg} Context: " + ", ".join(ctx_parts) + "."
raise LLMEmptyResponseError(base_msg)
self.run_context.messages.append(Message(role="assistant", content=parts))
```
如果你的 `run_context` 使用了不同的属性名称来表示 model / provider / run 标识,请相应地调整 `model_id`、`provider_id` 和 `run_id` 的获取方式(例如 `self.run_context.model`、`self.run_context.provider`、`self.run_context.id`)。你可能也希望在处理这个异常的日志管线上,镜像记录相同的上下文信息(例如结构化日志)。
</issue_to_address>
### 评论 2
<location path="astrbot/core/provider/sources/openai_source.py" line_range="328-335" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # 如果回复为空且无工具调用 且不是最后一个回退渠道 则引发fallback
</code_context>
<issue_to_address>
**suggestion:** 建议在处理 state.handle_chunk 异常时减少重复日志
当 `state.handle_chunk` 失败时,这里会同时打印 `logger.debug(..., exc_info=True)` 和 `logger.warning(...)`,在高负载场景下可能会产生噪声较大、重复的日志条目。更推荐只保留一条日志:要么使用一个 warning 级别的 `logger.exception`,要么在 “带堆栈的 debug 日志” 与 “不带堆栈的 warning 日志” 中择其一,这样既能保持日志可读性,又便于调试。
```suggestion
try:
state.handle_chunk(chunk)
except Exception as e:
logger.warning(
"Saving chunk state error while handling LLM chunk.",
exc_info=True,
)
```
</issue_to_address>
### 评论 3
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="245" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # 如果回复为空且无工具调用 且不是最后一个回退渠道 则引发fallback
+ # 此处不应判断整个消息链是否为空 因为消息链包含整个对话流 而空回复可能发生在任何阶段
</code_context>
<issue_to_address>
**issue (complexity):** 建议将新增的空响应检测逻辑提取到一个辅助方法中,并在主循环中调用,以保持 `_iter_llm_responses_with_fallback` 专注于高层控制流。
你可以保留当前新增的行为,但将新增的控制流和领域规则集中到一个辅助函数中,这样 `_iter_llm_responses_with_fallback` 会更易读。
### 1. 将 “空响应” 判断逻辑抽取为辅助函数
把归一化和 `result_chain` 遍历的逻辑移动到一个独立的方法中:
```python
def _has_meaningful_content(self, resp: LLMResponse, *, allow_tools: bool = True) -> bool:
# 去除空白字符后判断 completion / reasoning
if (resp.completion_text or "").strip():
return True
if (resp.reasoning_content or "").strip():
return True
# 工具调用是否视为有效内容
if allow_tools and resp.tools_call_args:
return True
# 检查 result_chain 中是否有非空内容
if resp.result_chain and resp.result_chain.chain:
for comp in resp.result_chain.chain:
# 跳过空的 Plain 组件
if isinstance(comp, Comp.Plain):
if comp.text and comp.text.strip():
return True
else:
# 非 Plain 组件(如图片、语音等)视为有效内容
return True
return False
```
### 2. 简化主循环
用一个更高层的条件分支替换内联的逻辑块:
```python
async for resp in self._iter_llm_responses(include_model=idx == 0):
if not self._has_meaningful_content(resp, allow_tools=True) and not is_last_candidate:
logger.warning(
"Chat Model %s returns empty response, trying fallback to next provider.",
candidate_id,
)
break
if resp.is_chunk:
has_stream_output = True
yield resp
continue
if (
resp.role == "err"
and not has_stream_output
and (not is_last_candidate)
):
...
```
这样可以让循环主要关注控制流(流式输出、fallback、错误处理),并把 “什么算空响应?” 的规则集中到一个地方,方便未来调整,且在其他地方复用时更不易出错。
</issue_to_address>帮我变得更有用!请在每条评论上点 👍 或 👎,我会根据反馈改进后续的评审质量。
Original comment in English
Hey - I've found 3 issues, and left some high level feedback:
- The logic for detecting an "empty" LLM response in `_iter_llm_responses_with_fallback` is getting quite involved; consider extracting this heuristic (including the `result_chain` scan and tool-call checks) into a small helper function so the criteria are centralized and easier to reuse/adjust for other call sites.
- When normalizing `tool_call.type` in `_query_stream`, you're already logging the model and provider in the debug log; it may be useful to include the same identifiers in the warning log for `Saving chunk state error` to make correlating normalization and downstream failures easier during production debugging.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The logic for detecting an "empty" LLM response in `_iter_llm_responses_with_fallback` is getting quite involved; consider extracting this heuristic (including the `result_chain` scan and tool-call checks) into a small helper function so the criteria are centralized and easier to reuse/adjust for other call sites.
- When normalizing `tool_call.type` in `_query_stream`, you're already logging the model and provider in the debug log — it may be useful to include the same identifiers in the warning log for `Saving chunk state error` to make correlating normalization and downstream failures easier during production debugging.
## Individual Comments
### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="541" />
<code_context>
"LLM returned empty assistant message with no tool calls."
)
+ # If the reply is still empty after all fallbacks are exhausted, raise an explicit error instead of failing silently
+ raise LLMEmptyResponseError("LLM returned empty assistant message with no tool calls.")
+
self.run_context.messages.append(Message(role="assistant", content=parts))
</code_context>
<issue_to_address>
**suggestion:** Consider including more context in LLMEmptyResponseError for easier debugging
Raising a specific `LLMEmptyResponseError` is helpful, but the exception currently lacks context about which model/provider or run triggered it. Please enrich the error (message or attributes) with identifiers such as model id, provider id, and/or conversation/run id so logs and handlers can quickly pinpoint the failing backend.
Suggested implementation:
```python
logger.warning(
    "LLM returned empty assistant message with no tool calls."
)
# If the reply is still empty after all fallbacks are exhausted, raise an explicit error instead of failing silently
base_msg = "LLM returned empty assistant message with no tool calls."
# Try to attach extra context to make troubleshooting easier
model_id = getattr(self.run_context, "model_id", None)
provider_id = getattr(self.run_context, "provider_id", None)
run_id = getattr(self.run_context, "run_id", None)
ctx_parts = []
if model_id is not None:
    ctx_parts.append(f"model_id={model_id}")
if provider_id is not None:
    ctx_parts.append(f"provider_id={provider_id}")
if run_id is not None:
    ctx_parts.append(f"run_id={run_id}")
if ctx_parts:
    base_msg = f"{base_msg} Context: " + ", ".join(ctx_parts) + "."
raise LLMEmptyResponseError(base_msg)
self.run_context.messages.append(Message(role="assistant", content=parts))
```
If your `run_context` uses different attribute names for model/provider/run identifiers, adjust `model_id`, `provider_id`, and `run_id` lookups to match (e.g. `self.run_context.model`, `self.run_context.provider`, `self.run_context.id`). You may also want to mirror the same contextual information in your logging pipeline (e.g. structured logs) wherever this exception is handled.
</issue_to_address>
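A minimal sketch of a context-carrying exception class along the lines suggested above (the attribute names `model_id` and `provider_id` are hypothetical here, since the project's real exceptions module is not shown):

```python
from typing import Optional

class LLMEmptyResponseError(RuntimeError):
    """Raised when the LLM returns an empty assistant message with no tool calls."""

    def __init__(self, message: str, *, model_id: Optional[str] = None,
                 provider_id: Optional[str] = None) -> None:
        # Keep identifiers as attributes so handlers and structured logs can
        # pinpoint the failing backend without parsing the message string.
        self.model_id = model_id
        self.provider_id = provider_id
        ctx = ", ".join(f"{k}={v}" for k, v in
                        (("model_id", model_id), ("provider_id", provider_id)) if v)
        super().__init__(f"{message} ({ctx})" if ctx else message)

err = LLMEmptyResponseError("LLM returned empty assistant message.",
                            model_id="gpt-x", provider_id="openai-main")
print(str(err))  # → LLM returned empty assistant message. (model_id=gpt-x, provider_id=openai-main)
```

Storing the identifiers as attributes, not only in the message, lets callers branch on them programmatically.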
### Comment 2
<location path="astrbot/core/provider/sources/openai_source.py" line_range="328-335" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # If the reply is empty with no tool calls and this is not the last fallback candidate, trigger fallback
</code_context>
<issue_to_address>
**suggestion:** Reduce duplicate logging when handling state.handle_chunk exceptions
On `state.handle_chunk` failure this will log both `logger.debug(..., exc_info=True)` and `logger.warning(...)`, which can create noisy, duplicated entries under load. Prefer a single log: either one `logger.exception` at warning level, or just one of debug-with-traceback vs warning-without, so logs stay readable but still useful for debugging.
```suggestion
try:
    state.handle_chunk(chunk)
except Exception as e:
    logger.warning(
        "Saving chunk state error while handling LLM chunk.",
        exc_info=True,
    )
```
</issue_to_address>
### Comment 3
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="245" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # If the reply is empty with no tool calls and this is not the last fallback candidate, trigger fallback
+ # Do not check the whole message chain here: the chain spans the entire conversation, while an empty reply can occur at any stage
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the new empty-response detection logic into a helper method and calling it from the main loop to keep `_iter_llm_responses_with_fallback` focused on high-level control flow.
You can keep the new behavior but localize the added control-flow and domain rules into a helper so `_iter_llm_responses_with_fallback` stays readable.
### 1. Extract the "empty response" logic into a helper
Move the normalization and `result_chain` traversal into a dedicated method:
```python
def _has_meaningful_content(self, resp: LLMResponse, *, allow_tools: bool = True) -> bool:
    # Check completion / reasoning after stripping whitespace
    if (resp.completion_text or "").strip():
        return True
    if (resp.reasoning_content or "").strip():
        return True
    # Whether tool calls count as meaningful content
    if allow_tools and resp.tools_call_args:
        return True
    # Check result_chain for non-empty content
    if resp.result_chain and resp.result_chain.chain:
        for comp in resp.result_chain.chain:
            # Skip empty Plain components
            if isinstance(comp, Comp.Plain):
                if comp.text and comp.text.strip():
                    return True
            else:
                # Non-Plain components (e.g. images, audio) count as meaningful content
                return True
    return False
```
### 2. Simplify the main loop
Replace the inline block with a single, high-level conditional:
```python
async for resp in self._iter_llm_responses(include_model=idx == 0):
    if not self._has_meaningful_content(resp, allow_tools=True) and not is_last_candidate:
        logger.warning(
            "Chat Model %s returns empty response, trying fallback to next provider.",
            candidate_id,
        )
        break
    if resp.is_chunk:
        has_stream_output = True
        yield resp
        continue
    if (
        resp.role == "err"
        and not has_stream_output
        and (not is_last_candidate)
    ):
        ...
```
This keeps the loop focused on control flow (streaming, fallback, error handling) and centralizes the “what counts as empty?” rule in one place, making future adjustments less error-prone and reusable elsewhere if needed.
</issue_to_address>
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
SourceryAI left a comment
Hey - I've found 3 issues, and left some high level feedback:
- The empty-response detection logic in `_iter_llm_responses_with_fallback` is getting quite involved (string stripping, `result_chain` inspection, `tools_call_args`, fallback check); consider extracting this into a small helper (e.g. `_has_meaningful_content(resp)`) to improve readability and make future adjustments easier.
- Raising `LLMEmptyResponseError` in `step` changes behavior from a logged warning to an exception; double-check that this new exception type is caught and surfaced appropriately in higher layers so users see a clear message instead of an unexpected generic error.
- For the new `LLMEmptyResponseError` and the 'empty response, trying fallback' log, consider including the `candidate_id`/provider id in the exception message as well to match the logging context and simplify tracing which backend produced the empty output.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The empty-response detection logic in `_iter_llm_responses_with_fallback` is getting quite involved (string stripping, result_chain inspection, tools_call_args, fallback check); consider extracting this into a small helper (e.g. `_has_meaningful_content(resp)`) to improve readability and make future adjustments easier.
- Raising `LLMEmptyResponseError` in `step` changes behavior from a logged warning to an exception; double-check that this new exception type is caught and surfaced appropriately in higher layers so users see a clear message instead of an unexpected generic error.
- For the new `LLMEmptyResponseError` and the 'empty response, trying fallback' log, consider including the `candidate_id`/provider id in the exception message as well to match the logging context and simplify tracing which backend produced the empty output.
## Individual Comments
### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="249-258" />
<code_context>
+ completion_text_stripped = (resp.completion_text or "").strip()
</code_context>
<issue_to_address>
**issue:** Empty-response fallback might misfire on providers that send initial metadata-only chunks before real content.
Right now, any chunk where `completion_text` (and related fields) strip to empty will trigger fallback. For providers that emit metadata/heartbeat chunks with no content before real output, this will prematurely break the stream and fail over. Please gate this logic so it only triggers once the stream is complete or after we’ve confirmed no meaningful content has been produced (e.g., by tracking whether any non-empty content has been seen for this candidate).
</issue_to_address>
### Comment 2
<location path="astrbot/core/provider/sources/openai_source.py" line_range="328-335" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # If the reply is empty with no tool calls and this is not the last fallback candidate, trigger fallback
</code_context>
<issue_to_address>
**suggestion:** Logging the same exception twice (debug + warning) may create noisy logs in high-traffic scenarios.
Logging the same `handle_chunk` exception at both debug (with `exc_info=True`) and warning levels can clutter logs and hinder issue triage in production. Consider logging it only once (either warning with `exc_info=True` or a single debug entry) so you keep the stack trace without duplicating messages.
```suggestion
try:
    state.handle_chunk(chunk)
except Exception as e:
    logger.warning(
        f"Saving chunk state error: {type(e).__name__}: {e}.",
        exc_info=True,
    )
```
</issue_to_address>
### Comment 3
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="245" />
<code_context>
has_stream_output = False
try:
async for resp in self._iter_llm_responses(include_model=idx == 0):
+ # If the reply is empty with no tool calls and this is not the last fallback candidate, trigger fallback
+ # Do not check the whole message chain here: the chain spans the entire conversation, while an empty reply can occur at any stage
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the meaningful-response checks into a reusable helper function so the streaming loop uses a single, high-level condition for fallback and error handling.
You can keep all the new behavior while reducing local complexity in the hot path by extracting the “meaningful response” classification into a helper and using a single high‑level condition in the loop.
For example:
```python
def _has_meaningful_response_content(resp: LLMResponse) -> bool:
    # normalize text
    if (resp.completion_text or "").strip():
        return True
    if (resp.reasoning_content or "").strip():
        return True
    # tools
    if resp.tools_call_args:
        return True
    # result_chain
    chain = getattr(resp, "result_chain", None)
    if not chain or not getattr(chain, "chain", None):
        return False
    for comp in chain.chain:
        # non-Plain components are always meaningful
        if not isinstance(comp, Comp.Plain):
            return True
        # Plain with non-empty text is meaningful
        if comp.text and comp.text.strip():
            return True
    return False
```
Then the streaming loop becomes:
```python
async for resp in self._iter_llm_responses(include_model=idx == 0):
    if (
        not _has_meaningful_response_content(resp)
        and not is_last_candidate
    ):
        logger.warning(
            "Chat Model %s returns empty response, trying fallback to next provider.",
            candidate_id,
        )
        break
    if resp.is_chunk:
        has_stream_output = True
        yield resp
        continue
    if (
        resp.role == "err"
        and not has_stream_output
        and not is_last_candidate
    ):
        ...
```
This preserves all your current checks (completion_text, reasoning_content, tools_call_args, result_chain contents) but keeps the loop focused on control flow and makes the fallback condition self‑documenting and unit‑testable.
If you’d like to keep the post‑processing empty‑message check consistent, you can reuse the same helper there as well:
```python
if not _has_meaningful_response_content(llm_resp):
    logger.warning(
        "LLM returned empty assistant message with no tool calls."
    )
    raise LLMEmptyResponseError(
        "LLM returned empty assistant message with no tool calls."
    )
```
</issue_to_address>Hi @a61995987! 👋
Thanks for trying out Sourcery by commenting with @sourcery-ai review! 🚀
Install the sourcery-ai bot to get automatic code reviews on every pull request ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Streamline the empty-response exception logging; add more detail to the compatibility-handling log messages.
|
@sourcery-ai review |
Hey - I've found 1 issue, and left some high level feedback:
- The per-chunk `logger.debug` in `_query_stream` for filling `tool_call.type` may become very noisy for long streams; consider adding a rate limit, sampling, or logging only on the first occurrence per request.
- The `_is_empty_llm_response` heuristic is embedded as an instance method on `ToolLoopAgentRunner`; if you expect to reuse this logic elsewhere (e.g., other runners or non-streaming paths), consider moving it to a shared utility or making it a `@staticmethod` with clear type hints to keep the behavior consistent across call sites.
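As an illustration of that suggestion, the heuristic could live as a module-level utility. This is a sketch against a simplified stand-in response type (`FakeResp`), not the project's real `LLMResponse`:

```python
from dataclasses import dataclass, field

@dataclass
class FakeResp:  # simplified stand-in for LLMResponse
    completion_text: str = ""
    reasoning_content: str = ""
    tools_call_args: list = field(default_factory=list)

def is_empty_llm_response(resp: FakeResp) -> bool:
    """Shared utility: True when the response carries no usable content."""
    if (resp.completion_text or "").strip():
        return False
    if (resp.reasoning_content or "").strip():
        return False
    if resp.tools_call_args:
        return False
    return True

print(is_empty_llm_response(FakeResp()))                      # → True
print(is_empty_llm_response(FakeResp(completion_text="hi")))  # → False
```

A free function keeps the "what counts as empty" rule in one place and callable from any runner or non-streaming path.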
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The per-chunk `logger.debug` in `_query_stream` for filling `tool_call.type` may become very noisy for long streams; consider adding a rate limit, sampling, or logging only on the first occurrence per request.
- The `_is_empty_llm_response` heuristic is embedded as an instance method on `ToolLoopAgentRunner`; if you expect to reuse this logic elsewhere (e.g., other runners or non-streaming paths), consider moving it to a shared utility or making it a `@staticmethod` with clear type hints to keep the behavior consistent across call sites.
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="315-325" />
<code_context>
+ # may omit the type field. Because the openai SDK's ChatCompletionStreamState.handle_chunk
+ # internally asserts tool.type == "function", a missing field raises an AssertionError.
+ # Therefore, if a tool_call is detected with an empty type, fill it in as "function" here.
+ for choice in chunk.choices or []:
+     if not choice.delta or not choice.delta.tool_calls:
+         continue
+     for tool_call in choice.delta.tool_calls:
+         if hasattr(tool_call, "type") and tool_call.type in (None, ""):
+             logger.debug(
+                 f"[{self.get_model()}] tool_call.type is empty in chunk {chunk_index} "
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Broaden the `tool_call.type` normalization to also cover the case where the attribute is missing entirely.
The compatibility note says some providers may omit the `type` field entirely, not just set it to `None`/empty. The current `hasattr(tool_call, "type") and tool_call.type in (None, "")` check only fixes the latter. If `type` is missing, `handle_chunk` can still hit the assertion. Using `getattr(tool_call, "type", None)` instead would handle both cases, e.g.:
```python
tool_type = getattr(tool_call, "type", None)
if tool_type in (None, ""):
    logger.debug(
        "[%s] tool_call.type is empty in chunk %d (provider: %s), manually set to 'function'",
        self.get_model(),
        chunk_index,
        self.provider_config.get("id", "unknown"),
    )
    tool_call.type = "function"
```
```suggestion
for choice in chunk.choices or []:
    if not choice.delta or not choice.delta.tool_calls:
        continue
    for tool_call in choice.delta.tool_calls:
        tool_type = getattr(tool_call, "type", None)
        if tool_type in (None, ""):
            logger.debug(
                "[%s] tool_call.type is empty in chunk %d (provider: %s), manually set to 'function'",
                self.get_model(),
                chunk_index,
                self.provider_config.get("id", "unknown"),
            )
            tool_call.type = "function"
```
</issue_to_address>
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
fix: handle non-standard returns from the openai adapter
fix: include upstream empty replies in the fallback retry flow
Modifications / 改动点
fix: handle non-standard returns from the openai adapter:
Modified file: astrbot\core\provider\sources\openai_source.py
In the async for chunk in stream loop of the _query_stream method, added compatibility handling for non-standard returns, filling in the tool_call.type field that may be missing.
fix: include upstream empty replies in the fallback retry flow:
Modified file: astrbot\core\agent\runners\tool_loop_agent_runner.py
Added an _is_empty_llm_response helper; in the async for resp in self._iter_llm_responses(include_model=idx == 0) loop of _iter_llm_responses_with_fallback, it is used to decide whether a reply is valid, triggering the fallback mechanism when it is not.
Modified file: astrbot\core\exceptions.py
Screenshots or Test Results / 运行截图或测试结果
With streaming enabled, non-standard returns are now parsed correctly (upstream: Gemini behind newapi)

An empty reply now surfaces an explicit error:

Fallback log for an empty reply:

Checklist / 检查清单
I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
Summary by Sourcery
Handle non-standard OpenAI streaming responses and add fallback behavior for empty upstream replies.
Bug Fixes:
- Fill in the missing tool_call.type field on streaming tool calls to stay compatible with non-standard OpenAI-compatible adapters and avoid assertion errors.
Enhancements:
- Treat upstream empty replies as a fallback trigger during provider retries.
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 流式响应,以保持兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范化流式工具调用载荷,以保持与非标准 OpenAI 适配器的兼容性。增强项:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范化 OpenAI 流式数据块,从而保持与非标准适配器的兼容性。增强功能:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
tool_call.type字段来规范化流式工具调用载荷,以保持与非标准 OpenAI 适配器的兼容性。增强项:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements:
Bug 修复:
type字段来规范化流式工具调用负载,从而保持与非标准的 OpenAI 兼容适配器的兼容性。增强:
Original summary in English
Summary by Sourcery
提升在 LLM 提供方中对 OpenAI 兼容的流式传输和回退处理的鲁棒性与可观测性。
Bug 修复:
tool_call.type字段来规范非标准的 OpenAI 兼容流式响应,避免断言错误。增强项:
Original summary in English
Summary by Sourcery
Improve robustness and observability of OpenAI-compatible streaming and fallback handling in LLM providers.
Bug Fixes:
Enhancements: