feat: implements base optimization logic in the optimization sdk (#116)
andrewklatzke wants to merge 11 commits into main from
Conversation
```python
start_idx = response_str.find('{', start_idx + 1)
if start_idx == -1:
    break
brace_count = 0
```
Balanced-brace scanner retry uses stale start index
Medium Severity
The balanced-brace scanning fallback in extract_json_from_response has broken retry logic. When a balanced {…} block fails json.loads, start_idx is updated to the next { after the original start (which is before i), and brace_count is reset to 0, but the for loop continues from i + 1 — already past the new start_idx. The scanner therefore never re-processes characters from the updated start position. The next time brace_count hits 0, response_str[start_idx:i + 1] spans from the stale inner start_idx to the new closing brace, producing a garbage substring that won't parse. The retry path effectively never works.
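One way to make the retry actually restart the scan is to wrap the brace-counting pass in an outer loop keyed on the candidate start index. This is a hedged, self-contained sketch of that approach — extract_first_json_object is a stand-in name, not the SDK's actual function:

```python
import json
from typing import Any, Optional


def extract_first_json_object(response_str: str) -> Optional[Any]:
    """Return the first parseable {...} block in response_str, or None.

    Restarts the balanced-brace scan from the next '{' whenever a
    candidate block fails to parse, so the retry never continues from
    a position past the new start index.
    """
    start_idx = response_str.find('{')
    while start_idx != -1:
        brace_count = 0
        # Scan forward from the CURRENT start index, not a stale one.
        for i in range(start_idx, len(response_str)):
            ch = response_str[i]
            if ch == '{':
                brace_count += 1
            elif ch == '}':
                brace_count -= 1
                if brace_count == 0:
                    candidate = response_str[start_idx:i + 1]
                    try:
                        return json.loads(candidate)
                    except json.JSONDecodeError:
                        break  # abandon this candidate, retry from next '{'
        start_idx = response_str.find('{', start_idx + 1)
    return None
```

The key difference from the reviewed code is that a failed parse breaks out of the inner scan entirely, so the next pass begins at the new start index rather than resuming mid-string.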
```python
},
"required": ["passed", "rationale"],
},
)
```
Placeholder for now; we want to allow both boolean/range evals
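One possible shape for that generalization — purely a hypothetical sketch, since the thread only says both boolean and range evals should be allowed; build_judge_schema and the "score"/eval_type names are assumptions, not SDK API:

```python
# Hypothetical sketch: a judge output schema that accepts either a
# boolean pass/fail or a numeric score, depending on the eval type.
def build_judge_schema(eval_type: str) -> dict:
    if eval_type == "boolean":
        result = {"passed": {"type": "boolean"}}
        required = ["passed", "rationale"]
    elif eval_type == "range":
        result = {"score": {"type": "number", "minimum": 0, "maximum": 1}}
        required = ["score", "rationale"]
    else:
        raise ValueError(f"unknown eval type: {eval_type}")
    return {
        "type": "object",
        "properties": {**result, "rationale": {"type": "string"}},
        "required": required,
    }
```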
```python
if self.judges is None and self.on_turn is None:
    raise ValueError("Either judges or on_turn must be provided")
if self.judge_model is None:
    raise ValueError("judge_model must be provided")
```
Missing validation for empty variable_choices list
Medium Severity
__post_init__ validates that context_choices and model_choices each have at least one element, but no equivalent check exists for variable_choices. An empty list passes validation but causes random.choice() to raise an IndexError at runtime in _run_optimization and _create_optimization_context.
This should not be validated; it's valid for there to be no variables
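If no variables is a legal configuration, the guard belongs at the selection sites rather than in __post_init__. A minimal sketch of that idea — pick_variables is a hypothetical helper, not the SDK's code:

```python
import random
from typing import Any, Dict, List, Optional


def pick_variables(
    variable_choices: Optional[List[Dict[str, Any]]],
) -> Dict[str, Any]:
    """Pick one variables dict for this iteration, or {} when none exist.

    Guarding here (instead of validating in __post_init__) keeps an
    empty or missing variable_choices a legal, no-op configuration
    while avoiding the IndexError from random.choice([]).
    """
    if not variable_choices:  # None or [] -> no interpolation variables
        return {}
    return random.choice(variable_choices)
```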
| ) | ||
| return judge_results | ||
|
|
||
| async def _evaluate_config_judge( |
Any reason we are not using the built in create_judge method from the ai sdk?
This makes an underlying call to:

```python
def _judge_config(
    self,
    judge_key: str,
    context: Context,
    default: AIJudgeConfigDefault,
    variables: Dict[str, Any],
) -> AIJudgeConfig:
    """
    Fetch a judge configuration from the LaunchDarkly client.

    Thin wrapper around LDAIClient.judge_config so callers do not need a
    direct reference to the client.

    :param judge_key: The key for the judge configuration in LaunchDarkly
    :param context: The evaluation context
    :param default: Fallback config when the flag is disabled or unreachable
    :param variables: Template variables for instruction interpolation
    :return: The resolved AIJudgeConfig
    """
    return self._ldClient.judge_config(judge_key, context, default, variables)
```

where we get the judge config directly. This method does a lot of setup and manipulation of the messages. We don't want the auto-evaluating judges, since they defer execution down to the user. Using just the config directly here made more sense, as we don't want any of the "auto" functionality.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 4 total unresolved issues (including 3 from previous reviews).
Requirements
Related issues
Pulls in the optimize method from the moonshot branch, updates it to be more production-ready, adds tests, logically splits up code into more manageable chunks.
Describe the solution you've provided
This is the initial implementation of the optimization method that we're pulling into this SDK. Right now it covers the same surface area as the moonshot branch: it implements the optimize_from_options() method while leaving optimize_from_config() unimplemented. It also does not yet handle additional features we'll be adding, such as comparing against ground-truth responses or posting results back to LaunchDarkly.
The logs are set to debug level, so enabling them will allow you to trace along with the progress.
Using the manual (passing options) implementation of this looks like:
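The original snippet appears to have been lost in rendering. As a stand-in, here is a rough sketch of what passing options manually might look like. Every name below is inferred from the review thread (model_choices, context_choices, variable_choices, judges, judge_model), and a minimal local dataclass is defined so the sketch runs on its own — it is not the SDK's real OptimizationOptions:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


# Minimal local stand-in for the SDK's OptimizationOptions; field names
# are inferred from the review comments and may not match the real API.
@dataclass
class OptimizationOptions:
    model_choices: List[str]
    context_choices: List[Dict[str, Any]]
    variable_choices: List[Dict[str, Any]] = field(default_factory=list)
    judges: Optional[List[str]] = None
    judge_model: Optional[str] = None


options = OptimizationOptions(
    model_choices=["model-a", "model-b"],
    context_choices=[{"kind": "user", "key": "example-user"}],
    variable_choices=[{"tone": "formal"}, {"tone": "casual"}],
    judges=["accuracy-judge"],
    judge_model="judge-model",
)
```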
Note
Medium Risk
Medium risk due to a large new async control-flow surface (agent calls, judge calls, variation generation, tool handling, JSON extraction) that can affect correctness and error handling, though it doesn’t change auth/data persistence paths.
Overview
Adds a real OptimizationClient (replacing the placeholder ApiAgentOptimizationClient) that runs an optimization loop: call the agent with per-iteration variable interpolation, evaluate the response via one or more judges (LD judge_config-backed or inline acceptance statements), and, on failure, ask an LLM to return a structured improved configuration for the next attempt.
Introduces new public dataclasses (OptimizationOptions, OptimizationContext, OptimizationJudge, AIJudgeCallConfig, ToolDefinition, etc.) plus utility helpers for structured-output tools and robust JSON extraction, and wires status/result callbacks throughout the loop.
Adds extensive test coverage for tool handling, judge evaluation paths, variation prompt generation, and end-to-end success/failure behavior; updates package exports and smoke tests to reflect the new API.
Written by Cursor Bugbot for commit e2ff561.