
chore: move prompts to own file#117

Open
andrewklatzke wants to merge 2 commits into aklatzke/AIC-1990/optimize-method from
aklatzke/AIC-2071/optimize-method-move-prompts

Conversation


@andrewklatzke andrewklatzke commented Mar 31, 2026

Requirements

  • I have added test coverage for new or changed functionality (N/A)
  • I have followed the repository's pull request submission guidelines
  • I have validated my changes against all supported platform versions

Describe the solution you've provided

Moves the prompts from client.py to a new prompts.py file. This just logically breaks up the class by pulling out what can be extracted, and removes ~400 lines from client.py.

Additionally, this adds the response length to the failure error message (we are seeing failures where OpenAI returns an empty response, and the length is valuable debug info). There was also a small prompt tweak in the tools section to help ameliorate this error.

Describe alternatives you've considered

I went down the route of implementing this via string.Template with multi-line strings but ran into the following:

  • Due to the heavy string interpolation, using the template sections just added more bloat/misdirection vs. the join approach
  • Using multi-line strings with f""" would work, but would require additional escaping for the literal variables (i.e., {{id}} needs to become {{{{id}}}})
  • Using multi-line strings would make lines too long for our linter, and our linter (pycodestyle) does not have a standard for file-level or block-level ignores
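The escaping tradeoff above can be sketched quickly (variable names here are hypothetical, not the actual prompts.py code): with an f-string, braces meant to appear literally in the prompt must be doubled again, while the join approach lets each literal line be written exactly as it should render.

```python
model = "gpt-4o"  # hypothetical interpolated value

# f-string approach: to emit a literal {{id}} for the LLM, the braces
# must be doubled again, which obscures the template:
f_string_prompt = f"""Use model {model}.
Return JSON like {{{{id}}}}"""

# join approach: each line is a plain literal, so no extra escaping;
# only the lines that interpolate values are f-strings:
join_prompt = "\n".join([
    f"Use model {model}.",
    "Return JSON like {{id}}",
])

assert f_string_prompt == join_prompt
```

Both produce the same prompt text; the join version also keeps each line short enough for pycodestyle without block-level ignores.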

Additional context

@jsonbailey and I talked about splitting this up while reviewing the PR.


Note

Medium Risk
Mostly a refactor, but it changes the exact prompt strings used for judging and variation generation, which can affect optimization behavior and LLM output stability.

Overview
Moves prompt-building logic out of OptimizationClient into a new ldai_optimization/prompts.py module, and updates the client/tests to call these shared helpers for message history, reasoning history, and variation prompt generation.

Also tweaks prompt formatting (e.g., explicit System: / Evaluation history: sections and stronger tool-return instructions) and improves variation structured-output parse failures by including the response length in the thrown ValueError.
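The parse-failure improvement might look roughly like this (a minimal sketch with hypothetical names — the actual client code differs): when structured output cannot be parsed, the raised ValueError carries the raw response length, so an empty response from the model is immediately visible in logs.

```python
import json

def parse_variation(response_text: str) -> dict:
    """Parse the model's structured output, surfacing the response
    length on failure so empty responses are obvious in logs."""
    try:
        return json.loads(response_text)
    except json.JSONDecodeError as exc:
        raise ValueError(
            "Failed to parse structured variation output "
            f"(response length: {len(response_text)})"
        ) from exc
```

An empty response then fails with "response length: 0", which distinguishes it from a malformed-but-present payload.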

Written by Cursor Bugbot for commit ea43575.

@andrewklatzke andrewklatzke requested a review from a team as a code owner March 31, 2026 19:31

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


"The improved configuration MUST produce responses that satisfy ALL of the following criteria.",
"These criteria are non-negotiable — every generated variation will be evaluated against them.",
"All variables must be used in the new instructions."
"",


Acceptance prompt line concatenation mistake

Low Severity

In variation_prompt_acceptance_criteria, two adjacent string literals are missing a comma between them, so Python implicitly concatenates them. This drops the intended blank-line separator before the criteria list in prompts.py, changing the generated prompt structure and leaving the acceptance section less clearly delimited.
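For illustration, Python's implicit concatenation of adjacent string literals is what makes this bug silent — the interpreter fuses the two strings into one list element instead of raising an error:

```python
# A missing comma between adjacent string literals silently fuses them
# into a single element (illustrative lines, not the full prompt):
buggy = [
    "All variables must be used in the new instructions."
    "",  # <- no comma above: this empty string fuses with the previous literal
]
fixed = [
    "All variables must be used in the new instructions.",
    "",
]

assert len(buggy) == 1  # one merged string; the "" separator is lost
assert len(fixed) == 2  # the intended separate entries
```

When the list is later joined with newlines, the lost "" element means the blank line that was supposed to delimit the acceptance section never appears.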


