
feat: partially implements optimize_from_config#119

Open
andrewklatzke wants to merge 2 commits into aklatzke/AIC-2071/optimize-method-move-prompts from aklatzke/AIC-1794/optimize-method-from-ld

Conversation

@andrewklatzke
Contributor

@andrewklatzke andrewklatzke commented Apr 1, 2026

Requirements

  • I have added test coverage for new or changed functionality
  • I have followed the repository's pull request submission guidelines
  • I have validated my changes against all supported platform versions

Describe the solution you've provided

Implements an LDApiClient that can talk to the LaunchDarkly API to fetch configs and post results back to the optimization process. Runs is not yet implemented, so the post back will silently fail for now. The client properly pulls the config from LD and maps it to the correct options to run the optimization from the provided config key (with some options for handling the call, etc.).

This also adds validation on the response/input for the POST call.
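As a rough illustration of what validation on the post-back payload might look like (the field names and rules below are assumptions for illustration only, not the actual API contract):

```python
# Hypothetical validator for the optimization-result post-back payload.
# The required fields ("configKey", "results") are assumptions; the real
# schema is defined by the LaunchDarkly API, not shown in this PR.
def validate_post_payload(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not isinstance(payload.get("configKey"), str) or not payload["configKey"]:
        errors.append("configKey must be a non-empty string")
    if not isinstance(payload.get("results"), list):
        errors.append("results must be a list")
    return errors
```

Collecting all errors (rather than raising on the first one) lets the caller report every problem with the payload at once.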

Additional context

There are two paths for running an optimization:

  • Passing options directly to the SDK
  • Pulling a pre-configured set of options from the LaunchDarkly API

This implements the second path. None of the "core" logic for the optimization has changed.

A call to this method looks like:

    result = await client.optimize_from_config("test-optimization", OptimizationFromConfigOptions(
        project_key="default", 
        context_choices=[
            context_builder("user-123"),
        ],
        handle_agent_call=handle_agent_call,
        handle_judge_call=handle_judge_call,
        base_url="https://ld-stg.launchdarkly.com/", # optional, defaults to app.
    ))

@andrewklatzke andrewklatzke requested a review from jsonbailey April 1, 2026 22:40
@andrewklatzke andrewklatzke requested a review from a team as a code owner April 1, 2026 22:40

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


        "The optimization config has no acceptance statements, judges, or ground truth "
        "responses, and no on_turn callback was provided. At least one is required to "
        "evaluate optimization results."
    )


Ground truth validation bypasses check but fails downstream

Medium Severity

The early validation in _build_options_from_config treats groundTruthResponses as a valid evaluation criterion (has_ground_truth bypasses the "no criteria" check), but ground truth is never passed to OptimizationOptions and that dataclass's __post_init__ only checks for judges or on_turn. When a config has ground truth but no judges and no on_turn, the early check passes, then OptimizationOptions(judges=judges or None, ...) converts the empty dict to None, and __post_init__ raises with the less helpful message "Either judges or on_turn must be provided" instead of the descriptive one mentioning ground truth.
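The failure path described above can be reproduced in a minimal, self-contained sketch. The names (`OptimizationOptions`, the builder function, the config fields) mirror the review comment, but the SDK's actual code and config schema may differ:

```python
# Hypothetical reproduction of the validation gap: ground truth satisfies the
# early check but is never forwarded, so __post_init__ raises the generic error.
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class OptimizationOptions:
    judges: Optional[dict] = None
    on_turn: Optional[Callable[..., Any]] = None

    def __post_init__(self) -> None:
        # The downstream check knows nothing about ground truth.
        if self.judges is None and self.on_turn is None:
            raise ValueError("Either judges or on_turn must be provided")


def build_options_from_config(config: dict) -> OptimizationOptions:
    judges = config.get("judges", {})
    has_ground_truth = bool(config.get("groundTruthResponses"))
    on_turn = config.get("on_turn")

    # Early check: ground truth alone counts as a valid evaluation criterion...
    if not judges and not has_ground_truth and on_turn is None:
        raise ValueError(
            "The optimization config has no acceptance statements, judges, or "
            "ground truth responses, and no on_turn callback was provided. "
            "At least one is required to evaluate optimization results."
        )

    # ...but ground truth is never passed along, so `judges or None` collapses
    # the empty dict to None and __post_init__ raises the less helpful message.
    return OptimizationOptions(judges=judges or None, on_turn=on_turn)
```

A config with only `groundTruthResponses` passes the early check, then fails inside `OptimizationOptions.__post_init__` with the generic message, which is the behavior the comment flags.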

Additional Locations (1)

Contributor Author


This is not yet implemented

@andrewklatzke andrewklatzke changed the title Aklatzke/aic 1794/optimize method from ld feat: partially implements optimize_from_config Apr 1, 2026