Fix no results returned when no discrete variables are present in MindtPy#3861
bernalde wants to merge 29 commits into Pyomo:main from
Conversation
…dtPy, add test case for this bug fix
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…nto fix/mindtpy-fix
Co-authored-by: Tarik Levent Guler <64302098+tarikLG@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Fix no results returned when no discrete variables are present in Min…
Apply Black formatting to pyomo/contrib/mindtpy/algorithm_base_class.py
Hi team, I've investigated the current CI failures across Linux, macOS, and Windows. The failures in pyomo/contrib/solver/tests/solvers are not caused by the code changes in this PR. The root cause is an expired GAMS license in the test environment. It appears the license might have expired a few days ago, which is why we are seeing identical failures across all platforms. Interestingly, the tests for MindtPy are still passing (likely because they use different solver paths or have different fallback mechanisms), but the core solver tests are blocked. Once the GAMS license is renewed in the CI environment, these tests should return to green.

@Toflamus we are aware of the issue and discussed it during the developer call today. This is indeed an infrastructure issue and we're working on getting it fixed.
Codecov Report ❌ Patch coverage is

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main    #3861      +/-   ##
==========================================
- Coverage   89.93%   89.93%   -0.01%
==========================================
  Files         902      902
  Lines      106393   106416      +23
==========================================
+ Hits        95683    95703      +20
- Misses      10710    10713       +3
```
jsiirola
left a comment
One question about results bounds, but otherwise, this looks good to me.
```python
# explicit bounds, infer them from the objective value. For a direct
# continuous optimal solve, primal==dual.
```
Doesn't that imply convexity? I feel like you are making a promise that may not hold for the actual model you are solving...
Good catch — the "primal==dual" phrasing is too strong. What this fallback actually relies on is not convexity but the solver's own claim: if the solver reports tc.optimal and didn't populate its own bounds, we use the objective value as the best available information for both bounds.
That said, you're right that for a local NLP solver like IPOPT, tc.optimal really only guarantees local optimality, so setting both bounds equal overstates what we actually know. Do you want me to fix the comment?
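To make that concrete, here is a tiny self-contained sketch (a toy quartic and a deliberately crude gradient descent, both invented for illustration and unrelated to MindtPy's code): a locally optimal objective is a valid primal bound for a minimization, but it may sit far above the global minimum, so it proves nothing about the dual bound.

```python
def f(x):
    # nonconvex quartic with two local minima (the global one near x ≈ -1.30)
    return x**4 - 3 * x**2 + x

def df(x):
    return 4 * x**3 - 6 * x + 1

def local_descent(x, step=1e-3, iters=20000):
    # crude gradient descent: converges to whichever basin x starts in
    for _ in range(iters):
        x -= step * df(x)
    return x

x_local = local_descent(1.0)   # lands in the local basin near x ≈ 1.13
x_global = min((i / 1000 for i in range(-3000, 3001)), key=f)  # grid search
# f(x_local) is a valid upper bound on min f, but it is well above
# f(x_global), so reporting it as a lower (dual) bound would be wrong.
```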
Absolutely! Good catch. We cannot promise dual==primal, so in general we should inherit the dual and primal bounds from the subsolver if they provide any. If they don't provide a dual (such as IPOPT), we only report the primal.
I'll let others weigh in, but I think I would prefer to change the results to only return what we know; in this case, only return the objective value for the primal (feasible) bound and then return None for the dual (infeasible) bound.
Btw, just to mention, the "primal==dual" phrasing in the comment is admittedly loose, but the behavior here is safe. A few points:

- `prob` is the result-reporting object (`self.results.problem`), not any subproblem bound used algorithmically. We're just filling in the `lower_bound` and `upper_bound` fields so the returned `SolverResults` isn't incomplete. It's purely for populating the solver results that get returned to the user.
- This only fires in the short-circuit path: when `model_is_valid()` detects zero discrete variables and solves the original model directly as an LP/NLP. MindtPy then returns False and never enters the decomposition loop. So these bounds don't influence any cuts or iterations.
- The fallback only triggers when the solver claims `tc.optimal` but didn't populate its own bounds. We're just mirroring what the solver already told us: if it says optimal, using the objective value as the reported bound is the best available information.
Anyway, I agree that we should only keep what we know.
The fallback in `_mirror_direct_solve_results` was setting both lower and upper bounds from `obj_val` (assuming primal==dual), which requires convexity/global optimality that local NLP solvers like IPOPT cannot guarantee. Now it only infers the primal bound: the upper bound for minimization, the lower bound for maximization. Added unit and integration tests for 100% coverage of the modified method.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix bound inference to only set primal bound, not dual bound
```python
# Fallback: if the solver reports optimal termination but does not
# provide explicit bounds, infer the *primal* bound from the objective
# value. A feasible solution only proves one side of the bound:
#   - minimization → upper bound (any feasible point is an UB)
#   - maximization → lower bound (any feasible point is a LB)
# We cannot infer the dual bound without a guarantee of global
# optimality (e.g. convexity), which a local NLP solver does not give.
if (
    lb is None or ub is None
) and self.results.solver.termination_condition == tc.optimal:
    obj_val = value(obj.expr, exception=False)
    if obj_val is not None:
        if obj.sense == minimize:
            if ub is None:
                prob.upper_bound = obj_val
        else:
            if lb is None:
                prob.lower_bound = obj_val
```
I'm confused about this: Isn't the results object from the solver already reporting whatever bounds it got? Why can't you just pass those through? If it's local, it should only have the primal bound, and not the dual bound, which is exactly what you want.
The code does pass through bounds from the solver first (lines 385-391). The fallback logic here only activates when the solver doesn't provide bounds at all. This happens with NLP solvers like IPOPT, which never populate `lower_bound` or `upper_bound` in their results objects (you can confirm this in `sol.py`: the sol reader sets constraint/variable/objective counts but never sets bounds). So the fallback is necessary to infer the primal bound from the objective value when the solver reports optimal but provides no bound information.
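As a rough illustration of the two-stage behavior described above (pass-through first, fallback second), here is a minimal pure-Python sketch; the function name and the string arguments are hypothetical stand-ins, not the actual MindtPy API:

```python
def mirror_bounds(reported_lb, reported_ub, termination, obj_val, sense):
    # Stage 1: pass through whatever bounds the subsolver reported.
    lb, ub = reported_lb, reported_ub
    # Stage 2: fall back only when a bound is missing AND the solver
    # claims optimality (IPOPT-style sol results carry no bounds).
    if (lb is None or ub is None) and termination == "optimal" and obj_val is not None:
        if sense == "min" and ub is None:
            ub = obj_val  # a feasible point is an upper bound
        elif sense == "max" and lb is None:
            lb = obj_val  # a feasible point is a lower bound
    return lb, ub
```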
```python
from pyomo.opt import TerminationCondition
from pyomo.opt import TerminationCondition as tc
```
Looks like these could be consolidated.
…_discrete.py

Addresses review comment by emma58: consolidated the redundant imports `from pyomo.opt import TerminationCondition` and `from pyomo.opt import TerminationCondition as tc` into a single import statement, and updated all uses to use the `tc` alias.
jsiirola
left a comment
This looks good. I do have a couple of design questions to think about for the future:

- `model_is_valid` is a really expensive implementation. In particular, computing the polynomial degree for every expression is pretty much the same cost as doing a full LP/NL write. Is that duplicate effort really necessary?
- If the model is a QP and the `mip_solver` can handle QPs, shouldn't we use it and not the `nlp_solver`?

I wonder if a future improvement would be to skip the polynomial degree checks and just try solving with the MIP solver. If that fails, then fall back on the NLP solver?
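A sketch of that suggested "try the MIP solver first" flow (purely hypothetical: the callables, the dict-shaped model, and the exception used to signal rejection are placeholders, not any real solver interface):

```python
def solve_with_fallback(model, mip_solve, nlp_solve):
    # Skip the polynomial-degree checks entirely: just attempt the MIP
    # solver, and fall back to the NLP solver if it rejects the model.
    try:
        return mip_solve(model)
    except ValueError:  # placeholder for "solver cannot handle this model"
        return nlp_solve(model)

# Toy stand-ins for the two subsolvers:
def mip_stub(model):
    if model["degree"] > 2:
        raise ValueError("nonlinear model")
    return "solved by MIP"

def nlp_stub(model):
    return "solved by NLP"
```

One design caveat with this approach: a failed MIP attempt is no longer free, so it trades the up-front degree computation for a possibly wasted solve attempt.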
Fixes #3855
Summary/Motivation:
This PR fixes an issue where MindtPy can short-circuit on "no discrete decisions" (LP/NLP) and then fail to reliably return a proper `SolverResults` and/or load primal values onto the input model, even when the direct LP/NLP solve succeeds. This behavior breaks downstream meta-solvers (e.g., GDPopt subproblem solves) that depend on `Var.value` to capture an incumbent.
Reference: #3855
MindtPy contains a validation/short-circuit path intended to directly solve models that do not require decomposition (e.g., LP/NLP, or models where all discrete variables are fixed). In this path, MindtPy may:
- return `None` from `solve()` (a bare `return`)
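To illustrate why a bare `return` breaks downstream callers, here is a toy sketch (the class and function are invented for illustration and are not GDPopt's real interface) of a meta-solver capturing an incumbent:

```python
class ToyVar:
    """Stand-in for a Pyomo Var: .value stays None until a solve loads it."""
    def __init__(self):
        self.value = None

def capture_incumbent(var, solve):
    # Downstream meta-solvers assume solve() returns a results object
    # and that primal values were loaded back onto the model's variables.
    results = solve()
    if results is None or var.value is None:
        raise RuntimeError("direct solve returned no usable results")
    return var.value
```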
Changes proposed in this PR:
Legal Acknowledgement
By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution: