- linopy/constants.py — Added DEFAULT_LABEL_DTYPE = np.int32
- linopy/model.py — Variable and constraint label assignment now uses np.arange(..., dtype=DEFAULT_LABEL_DTYPE) with overflow guards that raise ValueError if labels exceed the int32 maximum.
- linopy/expressions.py — _term coord assignment and all .astype(int) calls for vars arrays now use DEFAULT_LABEL_DTYPE (int32).
- linopy/common.py — fill_missing_coords uses np.arange(..., dtype=DEFAULT_LABEL_DTYPE). Polars schema inference now checks array.dtype.itemsize instead of the old OS/numpy-version hack.
- test/test_constraints.py — Updated 2 dtype assertions to use np.issubdtype instead of == int.
- test/test_dtypes.py (new) — 7 tests covering int32 labels, expression vars, solve correctness, and overflow guards.
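The overflow guard described above can be sketched in plain numpy. This is a minimal illustration, not linopy's actual code: the helper name make_labels and the exact error message are assumptions.

```python
import numpy as np

DEFAULT_LABEL_DTYPE = np.int32

def make_labels(start: int, count: int) -> np.ndarray:
    """Return `count` consecutive labels starting at `start` as int32,
    raising instead of silently wrapping around on overflow."""
    stop = start + count  # computed with Python ints, so no wraparound here
    if stop > np.iinfo(DEFAULT_LABEL_DTYPE).max:
        raise ValueError(
            f"label range [{start}, {stop}) exceeds the int32 maximum"
        )
    return np.arange(start, stop, dtype=DEFAULT_LABEL_DTYPE)
```

The key point is that the bound check happens before np.arange, since numpy itself would happily produce wrapped-around negative labels.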
…k to int64 via astype(int), now use DEFAULT_LABEL_DTYPE. Also Variables.to_dataframe arange for map_labels.
- linopy/constraints.py: Constraints.to_dataframe arange for map_labels.
- linopy/common.py: save_join outer-join fallback was casting to int64.
* Fix multiplication of constant-only LinearExpression
  When multiplying a constant-only LinearExpression with another expression, the code would fail with an IndexError when trying to access _term=0 on an empty term dimension. The fix correctly returns a LinearExpression (not a QuadraticExpression), since multiplying by a constant preserves linearity.
* fix: add type casts for mypy
* fix: use cast instead of isinstance for runtime type check
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
…ons on small) (PyPSA#564)
* perf: use Polars streaming engine for LP file writing
  Extract a _format_and_write() helper that uses lazy().collect(engine="streaming") with automatic fallback, replacing 7 instances of df.select(concat_str(...)).write_csv(...).
* fix: log warning with traceback when Polars streaming fallback triggers
* perf: speed up LP constraint writing by replacing concat+sort with join
  Replace the vertical concat + sort approach in Constraint.to_polars() with an inner join, so every row has all columns populated. This removes the need for the group_by validation step in constraints_to_file() and simplifies the formatting expressions by eliminating null checks on the coeffs/vars columns.
* fix: missing space in LP file
* perf: skip group_terms when unnecessary and avoid xarray broadcast for the short DataFrame
  - Skip group_terms_polars when the _term dim size is 1 (no duplicate vars)
  - Build the short DataFrame (labels, rhs, sign) directly with numpy instead of going through xarray.broadcast + to_polars
  - Add the sign column via pl.lit when uniform (the common case), avoiding a costly numpy string array → polars conversion
* perf: skip group_terms in LinearExpression.to_polars when there are no duplicate vars
  Check n_unique before running the expensive group_by+sum. When all variable references are unique (the common case for objectives), this saves ~31 ms per 320k terms.
* perf: reduce per-constraint overhead in Constraint.to_polars()
  Replace np.unique with a faster numpy equality check for sign uniformity. Eliminate redundant filter_nulls_polars and check_has_nulls_polars calls on the short DataFrame by applying the labels mask directly during construction.
* fix: handle empty constraint slices in the sign_flat check
  Guard against an IndexError when sign_flat is empty (no valid labels) by checking len(sign_flat) > 0 before accessing sign_flat[0].
* docs: add LP write speed improvement to release notes
* bench: add LP write benchmark script with plotting
* bench: larger model
* perf: add a maybe_group_terms_polars() helper in common.py that checks for duplicate (labels, vars) pairs before calling group_terms_polars. Use it in both Constraint.to_polars() and LinearExpression.to_polars() to avoid an expensive group_by when terms already reference distinct variables
* Add variance to plot
* test: add coverage for streaming fallback and maybe_group_terms_polars
* fix: mypy
* Move kwargs into method for readability
* Remove fallback and pin polars >=1.31
* Remove the benchmark_lp_writer.py
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
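The "skip grouping when terms are already unique" idea can be sketched without Polars. The function below is a hypothetical numpy stand-in for maybe_group_terms_polars, not the actual implementation:

```python
import numpy as np

def maybe_group_terms(labels, vars_, coeffs):
    """Sum coefficients over duplicate (label, var) pairs, but return the
    inputs untouched when every pair is already unique -- the common case,
    where the grouping pass would be pure overhead."""
    labels, vars_, coeffs = map(np.asarray, (labels, vars_, coeffs))
    pairs = np.stack([labels, vars_], axis=1)
    uniq, inverse = np.unique(pairs, axis=0, return_inverse=True)
    inverse = np.asarray(inverse).reshape(-1)  # normalize shape across numpy versions
    if len(uniq) == len(pairs):
        return labels, vars_, coeffs  # already distinct: skip the group-by
    summed = np.zeros(len(uniq), dtype=coeffs.dtype)
    np.add.at(summed, inverse, coeffs)  # scatter-add per duplicate pair
    return uniq[:, 0], uniq[:, 1], summed
```

In the real code the uniqueness check (n_unique) is much cheaper than the full group_by+sum, which is what makes the early return worthwhile.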
* Add auto mask option to model.py
* Also capture rhs
* Add benchmark_auto_mask.py
* Use faster numpy operation
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* ruff and release notes
* Optimize mask application and null expression check
  Performance improvements:
  - Use np.where() instead of xarray's where() for mask application (~38x faster)
  - Use max() == -1 instead of all() == -1 for the null expression check (~30% faster)
  These optimizations give auto_mask minimal overhead compared to manual masking.
* Fix mask broadcasting for numpy where in add_constraints
  The switch from xarray's where() to numpy's where() broke dimension-aware broadcasting. A 1D mask with shape (10,) was being broadcast to (1, 10) instead of (10, 1), applying to the wrong dimension. Fix: explicitly broadcast the mask to match data.labels' shape before using np.where.
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
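The broadcasting pitfall above comes from numpy aligning trailing dimensions, whereas xarray aligns by dimension name. A minimal sketch of the fix (broadcast_mask here is illustrative, not linopy's actual helper):

```python
import numpy as np

def broadcast_mask(mask: np.ndarray, shape: tuple, axis: int) -> np.ndarray:
    """Expand a 1-D mask to `shape` along a chosen axis. Relying on plain
    np.where would align the mask with the *trailing* axis, which is the
    wrong dimension whenever the mask belongs to an earlier one."""
    expanded = [1] * len(shape)
    expanded[axis] = mask.shape[0]
    return np.broadcast_to(mask.reshape(expanded), shape)

labels = np.arange(6).reshape(3, 2)   # dims: e.g. (constraint, time)
mask = np.array([True, False, True])  # belongs to axis 0, not the last axis
masked = np.where(broadcast_mask(mask, labels.shape, 0), labels, -1)
```

With the explicit expansion the mask knocks out whole rows; without it, numpy would either error out or silently attach the mask to the last axis.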
* add dummy text
* change linopy version discovery
* remove redundant comments
---------
Co-authored-by: Robbie Muir <robbie.muir@gmail.com>
…yPSA#575)
* fix: revert np.where to xarray.where
* trigger
* reinsert broadcasting of masks
* update release notes
* consolidate broadcast mask into a new function, add tests for subsets
* align test logic to broadcasting
* Reinsert broadcasted mask (PyPSA#581)
  1. Moved the dimension subset check into broadcast_mask
  2. Added a brief docstring to broadcast_mask
* Add tests for superset dims
---------
Co-authored-by: FBumann <117816358+FBumann@users.noreply.github.com>
Replace dead maths.ed.ac.uk links with highs.dev and correct options URL. Use "HiGHS" consistently in docstrings.
* Add Knitro solver support
  - Add Knitro detection to the available_solvers list
  - Implement a Knitro solver class with MPS/LP file support
  - Add solver capabilities for Knitro (quadratic, LP names, no solution file)
  - Add tests for Knitro solver functionality
  - Map Knitro status codes to the linopy Status system
* Fix Knitro solver integration
* Document Knitro and improve file loading
* code: add check to solve mypy issue
* code: remove unnecessary candidate loaders
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* code: use just KN_read_problem for lp
* add read_options
* code: update KN_read_problem calling
* code: new changes from Daniele Lerede
* code: add reported runtime
* code: remove unnecessary code
* doc: update README.md and release_notes
* code: add new unit tests for Knitro
* code: add test for lp for knitro
* code: add-back again skip
* code: remove uncomment to skipif
* add namedtuple
* include pre-commit checks
* fix type checking
* simplify Knitro solver class
  Remove excessive error handling, getattr usage, and unpack_value_and_rc. Use direct Knitro API calls; extract _set_option and _extract_values helpers. Add the missing INTEGER_VARIABLES and READ_MODEL_FROM_FILE capabilities. Fix test variable names and remove dead warmstart/basis no-ops.
* code: update pyproject.toml and solver attributes
* code: update KN attribute dependence
---------
Co-authored-by: Fabrizio Finozzi <fabrizio.finozzi.business@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
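The "map Knitro status codes to the linopy Status system" step amounts to a lookup table with a safe fallback. A sketch of that pattern follows; the Condition enum, the table entries, and the numeric codes are all illustrative assumptions, since the real Knitro return codes and linopy's Status classes differ in detail:

```python
from enum import Enum

class Condition(Enum):
    OPTIMAL = "optimal"
    INFEASIBLE = "infeasible"
    UNBOUNDED = "unbounded"
    UNKNOWN = "unknown"

# Hypothetical code-to-condition table, for illustration only.
STATUS_MAP = {
    0: Condition.OPTIMAL,
    -200: Condition.INFEASIBLE,
    -300: Condition.UNBOUNDED,
}

def map_status(return_code: int) -> Condition:
    """Fall back to UNKNOWN for any unmapped solver return code,
    so new or unexpected codes never crash the solve path."""
    return STATUS_MAP.get(return_code, Condition.UNKNOWN)
```

The fallback matters: solver libraries add return codes over time, and an unmapped code should degrade to "unknown", not raise.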
…sos features (PyPSA#549)
* Implement SOS constraint reformulation. Summary:
  New file: linopy/sos_reformulation.py — core reformulation functions:
  - validate_bounds_for_reformulation() — validates that variables have finite bounds
  - compute_big_m_values() — computes Big-M values from variable bounds
  - reformulate_sos1() — reformulates SOS1 constraints using binary indicators and Big-M constraints
  - reformulate_sos2() — reformulates SOS2 constraints using segment indicators and adjacency constraints
  - reformulate_all_sos() — reformulates all SOS constraints in a model
  Modified: linopy/model.py
  - Added import for reformulate_all_sos
  - Added a reformulate_sos_constraints() method to the Model class
  - Added a reformulate_sos: bool = False parameter to the solve() method
  - Updated the SOS constraint check to automatically reformulate when reformulate_sos=True and the solver doesn't support SOS natively
  New test file: test/test_sos_reformulation.py — 36 tests covering bound validation (finite/infinite), Big-M computation, SOS1 reformulation (basic, negative bounds, multi-dimensional), SOS2 reformulation (basic, trivial cases, adjacency), integration with solve() and HiGHS, equivalence with native Gurobi SOS support, and edge cases (zero bounds, multiple SOS, custom prefix).
  Usage example:
    m = linopy.Model()
    x = m.add_variables(lower=0, upper=1, coords=[pd.Index([0, 1, 2], name='i')], name='x')
    m.add_sos_constraints(x, sos_type=1, sos_dim='i')
    m.add_objective(x.sum(), sense='max')
    # Works with HiGHS (which doesn't support SOS natively)
    m.solve(solver_name='highs', reformulate_sos=True)
* Documentation: new section "SOS Reformulation for Unsupported Solvers" (~300 lines) covering:
  1. Enabling reformulation — the reformulate_sos=True parameter and the manual reformulate_sos_constraints() method
  2. Requirements — the finite-bounds requirement for the Big-M method
  3. Mathematical formulation — LaTeX math for both:
     - SOS1: binary indicators y_i, upper/lower linking constraints, cardinality constraint
     - SOS2: segment indicators z_j, first/middle/last element constraints, cardinality constraint
  4. Interpretation — how the constraints work intuitively, with examples
  5. Auxiliary variables and constraints — the naming convention (_sos_reform_ prefix)
  6. Multi-dimensional variables — how broadcasting works
  7. Edge-cases table — all handled edge cases (single-element, zero bounds, all-positive, etc.)
  8. Performance considerations — trade-offs between native SOS and reformulation
  9. Complete example — piecewise linear approximation of x² with HiGHS
  10. API reference — method signatures for Model.add_sos_constraints(), Model.remove_sos_constraints(), Model.reformulate_sos_constraints(), and the Variables.sos property
* Added tests for multi-dimensional SOS
  Unit tests: test_sos2_multidimensional verifies that SOS2 reformulation with multi-dimensional variables (i, j) creates segment indicators z with shape (i: n-1, j: m) and that the cardinality constraint preserves the j dimension.
  Integration tests: test_multidimensional_sos2_with_highs solves a multi-dimensional SOS2 problem with HiGHS and verifies the optimal objective value (4 total — two adjacent non-zeros per column) and that the SOS2 constraint holds for each j (at most 2 non-zeros, adjacent if 2).
  Test results: test_sos1_multidimensional, test_sos2_multidimensional, test_multidimensional_sos1_with_highs, and test_multidimensional_sos2_with_highs all PASSED. The implementation handles multi-dimensional variables by leveraging xarray's broadcasting — the SOS constraint is applied along sos_dim for each combination of the other dimensions.
* Add custom big_m parameter for SOS reformulation
  Allow users to specify custom Big-M values in add_sos_constraints() for tighter LP relaxations when variable bounds are conservative.
  - Add big_m parameter: scalar or tuple(upper, lower)
  - Store as variable attrs (big_m_upper, big_m_lower) for NetCDF persistence
  - Use the tighter of big_m and the variable bounds: min() for upper, max() for lower
  - Skip bound validation when a custom big_m is provided (allows infinite bounds)
  The scalar-only design ensures NetCDF persistence works correctly. For per-element Big-M values, users should adjust variable bounds directly.
* Simplification summary:
  ┌──────────────────────┬───────────┬───────────┬───────────┐
  │ File                 │ Before    │ After     │ Reduction │
  ├──────────────────────┼───────────┼───────────┼───────────┤
  │ sos_reformulation.py │ 377 lines │ 223 lines │ 41%       │
  ├──────────────────────┼───────────┼───────────┼───────────┤
  │ sos-constraints.rst  │ 647 lines │ 164 lines │ 75%       │
  └──────────────────────┴───────────┴───────────┴───────────┘
  Code changes:
  - Merged validate_bounds_for_reformulation into compute_big_m_values
  - Factored out an add_linking_constraints helper in SOS2
  - Used np.minimum/np.maximum instead of xr.where
  - Kept proper docstrings with Parameters/Returns sections
  Doc changes:
  - Removed: Variable Representation, LP File Export, Common Patterns, Performance Considerations
  - Trimmed: examples to one each, mathematical formulation to equations only
  - Condensed: API reference, multi-dimensional explanation
* Revert some docs changes to be more surgical
* Add math to docs
* Improve docs
* Code simplifications:
  1. sos_reformulation.py (230 → 203 lines):
     - compute_big_m_values now returns a single DataArray (not a tuple)
     - Removed all lower-bound handling — only non-negative variables are supported
     - Removed the add_linking_constraints helper function
     - Simplified SOS1/SOS2 to only add upper linking constraints
  2. model.py:
     - Simplified the big_m parameter from float | tuple[float, float] | None to float | None
     - Removed big_m_lower attribute handling
  3. Documentation (sos-constraints.rst):
     - Updated the big_m type signature
     - Removed the asymmetric Big-M example
     - Added an explicit requirement that variables have non-negative lower bounds
  4. Tests (46 → 38 tests):
     - Removed tests for negative bounds and for tuple big_m
     - Added tests for the negative-lower-bound validation error
  Rationale: the mathematical formulation in the docs assumes x ∈ ℝⁿ₊ (non-negative reals). This matches 99%+ of SOS use cases (selection indicators, piecewise linear weights). The simplified code is now consistent with the documented formulation.
* Fix mypy
* Add constants for SOS attr keys
* Add release notes
* Fix SOS reformulation: undo after solve, validate big_m, vectorize
  - solve() now undoes the SOS reformulation after solving, preserving model state
  - Validate big_m > 0 in add_sos_constraints (fail fast)
  - Vectorize SOS2 middle constraints; eliminate duplicate compute_big_m_values
  - Warn when reformulate_sos=True is ignored for SOS-capable solvers
  - Add tests for model immutability, double solve, big_m validation, undo
* tiny refac, plus uncovered test
* refac: move reformulating function to module
* Fix SOS reformulation: rollback, skipped attrs, undo in solve, sort coords
  - Remove SOS attrs for skipped variables (size <= 1, M == 0) so solvers don't see them as SOS constraints
  - Wrap the reformulation loop in try/except for transactional rollback
  - Move the undo into a finally block in Model.solve() for exception safety
  - Sort variables by coord values before building adjacency constraints, matching native SOS weight-based ordering
* update release notes [skip ci]
---------
Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
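The SOS1 Big-M logic described above can be illustrated with a small feasibility check. This is a sketch of the documented formulation (x ∈ ℝⁿ₊, finite upper bounds), not linopy's implementation; the function names mirror the changelog but the bodies are illustrative:

```python
import numpy as np

def compute_big_m_values(upper) -> np.ndarray:
    """Per-element Big-M from finite upper bounds. The documented
    formulation assumes non-negative variables, so the upper bound
    itself serves as a valid Big-M."""
    upper = np.asarray(upper, dtype=float)
    if not np.all(np.isfinite(upper)):
        raise ValueError("SOS reformulation requires finite variable bounds")
    return upper

def sos1_bigm_holds(x, y, big_m) -> bool:
    """Check a point against the SOS1 Big-M reformulation:
    x_i <= M_i * y_i (linking) and sum(y_i) <= 1 (cardinality),
    which together force at most one x_i to be non-zero."""
    x, y, big_m = (np.asarray(a, dtype=float) for a in (x, y, big_m))
    return bool(np.all(x <= big_m * y) and y.sum() <= 1)
```

For example, a point with a single non-zero x and its indicator set satisfies both constraint families, while two active indicators violate the cardinality constraint.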
…tive) (PyPSA#576)
* feat: add piecewise linear constraint API
  Add an `add_piecewise_constraint` method to the Model class that creates piecewise linear constraints using the SOS2 formulation. Features:
  - Single Variable or LinearExpression support
  - Dict of Variables/Expressions for linking multiple quantities
  - Auto-detection of link_dim from breakpoint coordinates
  - NaN-based masking with a skip_nan_check option for performance
  - Counter-based name generation for efficiency
  The SOS2 formulation creates:
  1. Lambda variables with bounds [0, 1] for each breakpoint
  2. An SOS2 constraint ensuring at most two adjacent lambdas are non-zero
  3. A convexity constraint: sum(lambda) = 1
  4. Linking constraints: expr = sum(lambda * breakpoints)
* Fix lambda coords
* rename to add_piecewise_constraints
* fix types (mypy)
* Incremental formulation:
  - linopy/constants.py — added PWL_DELTA_SUFFIX = "_delta" and PWL_FILL_SUFFIX = "_fill"
  - linopy/model.py — added a method: str = "sos2" parameter to add_piecewise_constraints(); updated the docstring with the new parameter and incremental formulation notes; extracted _add_pwl_sos2() (existing SOS2 logic) and added _add_pwl_incremental() (new delta formulation); added a _check_strict_monotonicity() static method; method="auto" checks monotonicity and picks accordingly; numeric coordinate validation is only enforced for SOS2
  - test/test_piecewise_constraints.py — added TestIncrementalFormulation (10 tests) covering single variable, two breakpoints, dict case, non-monotonic error, decreasing monotonic, auto-select incremental/sos2, invalid method, and extra coordinates; added TestIncrementalSolverIntegration (Gurobi-gated)
* Vectorization:
  1. Step sizes: replaced the manual loop + xr.concat with breakpoints.diff(dim).rename()
  2. Filling-order constraints: replaced per-segment add_constraints calls with a single vectorized constraint via xr.concat + LinearExpression
  3. Mask computation: replaced the loop over segments with a vectorized slice + rename
  4. Coordinate lists: unified extra_coords/lambda_coords — lambda_coords = extra_coords + [bp_dim_index], eliminating duplicate list comprehensions
* rewrite filling order constraint
* Fix monotonicity check
* Disjunctive formulation:
  1. linopy/constants.py — added 3 constants: PWL_BINARY_SUFFIX = "_binary", PWL_SELECT_SUFFIX = "_select", DEFAULT_SEGMENT_DIM = "segment"
  2. linopy/model.py — updated imports for the new constants; updated _resolve_pwl_link_dim with an optional exclude_dims parameter (backward-compatible) so auto-detection skips both dim and segment_dim; added an _add_dpwl_sos2 private method implementing the disaggregated convex-combination formulation (binary indicators, per-segment SOS2 lambdas, convexity, and linking constraints); added the add_disjunctive_piecewise_constraints public method with full validation, mask computation, and dispatch
  3. test/test_piecewise_constraints.py — added 7 test classes with 17 tests:
     - TestDisjunctiveBasicSingleVariable (3) — equal segments, NaN padding, single-breakpoint segments
     - TestDisjunctiveDictOfVariables (2) — dict with segments, auto-detect link_dim
     - TestDisjunctiveExtraDimensions (1) — extra generator dimension
     - TestDisjunctiveValidationErrors (5) — missing dim, missing segment_dim, same dim/segment_dim, non-numeric coords, invalid expr
     - TestDisjunctiveNameGeneration (2) — shared counter, custom name
     - TestDisjunctiveLPFileOutput (1) — LP file contains SOS2 + binary sections
     - TestDisjunctiveSolverIntegration (3) — min/max picks the correct segment, dict case with solver
* docs: add piecewise linear constraints documentation
  Create a dedicated documentation page covering all three PWL formulations: SOS2 (convex combination), incremental (delta), and disjunctive (disaggregated convex combination). Includes math formulations, usage examples, a comparison table, a generated-variables reference, and solver compatibility. Update index.rst, api.rst, and sos-constraints.rst.
* test: improve disjunctive piecewise linear test coverage
  Add 17 new tests covering masking details, expression inputs, multi-dimensional cases, multi-breakpoint segments, and parametrized multi-solver testing. Disjunctive tests grow from 17 to 34 unique methods.
* docs: add a notebook to showcase piecewise linear constraints, with cross-references; improve the notebook
* docs: add release notes and cross-reference for PWL constraints
* fix mypy issue in test
* Improve docs about incremental
* refactor and add tests
* fix: reject non-trailing NaN in incremental piecewise formulation
  Validate that NaN breakpoints are trailing-only along dim. For method='incremental', raise ValueError on gaps; for method='auto', fall back to SOS2 instead. Add a _has_trailing_nan_only helper.
* extract piecewise linear logic into linopy/piecewise.py
* feat: allow broadcasted mask
* fix merge conflict in release notes
* refactor: remove link_dim from piecewise constraint API
  The linking dimension is now always auto-detected from breakpoint coordinates matching the expression dict keys, simplifying the public API of add_piecewise_constraints and add_disjunctive_piecewise_constraints.
* refactor: use LinExprLike type alias and consolidate piecewise validation
  Extract a _validate_piecewise_expr helper to replace duplicated isinstance checks in _auto_broadcast_breakpoints and _resolve_expr. Add the LinExprLike type alias to types.py. Update docs, tests, and the breakpoints factory.
* fix: resolve mypy errors in piecewise module
* update release notes [skip ci]
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
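The SOS2 convex-combination reading of a piecewise constraint can be sketched numerically: with sum(lambda) = 1 and at most two adjacent non-zero lambdas, the resulting point lies on the piecewise curve. A minimal checker (pwl_point is an illustrative name, not linopy API):

```python
import numpy as np

def pwl_point(lambdas, x_bp, y_bp):
    """Evaluate a PWL point from SOS2 lambdas and breakpoints:
    x = sum(lambda_i * x_i), y = sum(lambda_i * y_i), asserting the
    convexity and adjacency conditions the formulation relies on."""
    lam = np.asarray(lambdas, dtype=float)
    assert np.isclose(lam.sum(), 1.0), "convexity: sum(lambda) must be 1"
    nz = np.flatnonzero(lam)
    assert len(nz) <= 2 and (len(nz) < 2 or nz[1] - nz[0] == 1), \
        "SOS2: at most two adjacent lambdas may be non-zero"
    return float(lam @ np.asarray(x_bp, dtype=float)), \
           float(lam @ np.asarray(y_bp, dtype=float))
```

For example, with breakpoints x = (0, 1, 2) and y = (0, 1, 4), lambdas (0.5, 0.5, 0) interpolate the midpoint of the first segment.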
* feat: add reformulate_sos='auto' support to solve()
  - Accept 'auto' as a string literal in the reformulate_sos parameter (line 1230)
  - When reformulate_sos='auto' and the solver lacks SOS support, silently reformulate
  - When reformulate_sos='auto' and the solver supports SOS natively, pass through without warning
  - Update the error message to mention both the True and 'auto' options (line 1424)
  - Add a test suite with 5 new test cases covering all scenarios
  - All 57 SOS reformulation tests pass
* fix: improve reformulate_sos validation, DRY up branching, strengthen tests
  Validate the reformulate_sos input early, collapse duplicate True/'auto' branches, fix docstring type notation, add tests for invalid values and the no-SOS no-op, and strengthen the SOS2 test to actually verify adjacency constraint enforcement.
* fix: resolve mypy errors in piecewise and SOS reformulation tests
  Widen segment types from list[list[float]] to list[Sequence[float]] and add missing type annotations in test fixtures.
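The dispatch behaviour described above can be sketched as a small pure function. This is a hypothetical condensation of the changelog's description (early validation, collapsed True/'auto' branches, warn when True is ignored), not the actual solve() code:

```python
import warnings

def should_reformulate(reformulate_sos, solver_supports_sos: bool) -> bool:
    """Decide whether to reformulate SOS constraints before solving.
    'auto' reformulates silently only when the solver lacks native SOS
    support; True does the same but warns when ignored; False errors
    out if the solver cannot handle SOS."""
    if reformulate_sos not in (True, False, "auto"):
        raise ValueError(f"invalid reformulate_sos value: {reformulate_sos!r}")
    if solver_supports_sos:
        if reformulate_sos is True:
            warnings.warn("reformulate_sos=True ignored: solver supports SOS natively")
        return False  # native support wins, no reformulation needed
    if reformulate_sos is False:
        raise ValueError(
            "solver lacks native SOS support; pass reformulate_sos=True or 'auto'"
        )
    return True  # True or 'auto' with an SOS-incapable solver
```

Collapsing True and 'auto' into one final branch is what the "DRY up branching" commit refers to; only the warning distinguishes them.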
…PSA#589)
Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
Bumps the github-actions group with 3 updates: [actions/download-artifact](https://github.com/actions/download-artifact), [actions/upload-artifact](https://github.com/actions/upload-artifact), and [crazy-max/ghaction-chocolatey](https://github.com/crazy-max/ghaction-chocolatey).
- `actions/download-artifact`: 7 → 8 ([release notes](https://github.com/actions/download-artifact/releases), commits: actions/download-artifact@v7...v8)
- `actions/upload-artifact`: 6 → 7 ([release notes](https://github.com/actions/upload-artifact/releases), commits: actions/upload-artifact@v6...v7)
- `crazy-max/ghaction-chocolatey`: 3 → 4 ([release notes](https://github.com/crazy-max/ghaction-chocolatey/releases), commits: crazy-max/ghaction-chocolatey@v3...v4)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* code: expose knitro context and modify _extract_values
* doc: update release_notes.rst
* code: include pre-commit checks
* enable quadratic for win with scip
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Add release note
* Drop reference to SCIP bug
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
* Refactor piecewise constraints: add piecewise/segments/slopes_to_points API, LP formulation for convex/concave cases, and simplify tests
* piecewise: replace bp_dim/seg_dim params with constants, remove dead code, improve errors
* Fix piecewise linear constraints: add binary indicators to the incremental formulation, add domain bounds to the LP formulation
  - The incremental method now uses binary indicator variables with link/order constraints to enforce proper segment filling order (Markowitz & Manne)
  - The LP method now adds x ∈ [min(xᵢ), max(xᵢ)] domain-bound constraints to prevent extrapolation beyond the breakpoints
* update signatures of breakpoints and segments, apply convexity check only where needed
* update doc
* Reject interior NaN and skip_nan_check+NaN in piecewise formulations
  Validate trailing-NaN-only for the SOS2 and disjunctive methods to prevent corrupted adjacency. Fail fast when skip_nan_check=True but the breakpoints actually contain NaN.
* Allow piecewise() on either side of comparison operators
  Support the reversed syntax (y == piecewise(...)) via __le__/__ge__/__eq__ dispatch in BaseExpression and ScalarLinearExpression. Fix the LP example to use power == demand for more illustrative results.
* Fix mypy type errors for piecewise constraint types
  - Add @overload to the comparison operators (__le__, __ge__, __eq__) in BaseExpression and Variable to distinguish PiecewiseExpression from SideLike return types
  - Update the ConstraintLike type alias to include PiecewiseConstraintDescriptor
  - Fix PiecewiseConstraintDescriptor.lhs type from object to LinExprLike
  - Fix dict/sequence type mismatches in _dict_to_array, _dict_segments_to_array, _segments_list_to_array
  - Remove unused type: ignore comments
  - Narrow ScalarLinearExpression/ScalarVariable return types to exclude PiecewiseConstraintDescriptor (impossible at runtime)
* rename header of jupyter notebook
* doc: rename notebook again
* feat: add active parameter to piecewise linear constraints (PyPSA#604)
  Add an `active` parameter to the `piecewise()` function that accepts a binary variable to gate piecewise linear functions on/off. This enables unit-commitment formulations where a commitment binary controls the operating range. The parameter modifies each formulation as follows:
  - Incremental: δ_i ≤ active (tightened bounds) + base terms × active
  - SOS2: Σλ_i = active (instead of 1)
  - Disjunctive: Σz_k = active (instead of 1)
  When active=0, all auxiliary variables are forced to zero, collapsing x and y to zero. When active=1, the normal PWL domain applies.
  Follow-up commits:
  - docs: tighten active parameter docstrings — clarify that zero-forcing is the only linear formulation possible; relaxing the constraint would require big-M or indicator constraints
  - docs: add the active parameter to release notes
  - fix: resolve mypy type errors for x_base/y_base assignment
  - docs: add a unit-commitment example to the piecewise notebook — Example 6 demonstrates the active parameter with a gas unit that stays off at t=1 (low demand) and commits at t=2,3 (high demand), showing power=0 and fuel=0 when the commitment binary is off
  - Update notebook
  - test: comprehensive active parameter test coverage — inequality + active (incremental and SOS2, on and off), auto method selection + active (equality and auto-LP rejection), active with LinearExpression (not just Variable), active with NaN-masked breakpoints, LP file output comparison (active vs plain), multi-dimensional solver test (per-entity on/off), SOS2 non-zero base + active off, SOS2 inequality + active off, disjunctive active on (solver), and a fix to reject active when auto resolves to LP; 159 tests pass (was 122)
  - refactor: extract the PWL_ACTIVE_BOUND_SUFFIX constant into constants.py, consistent with the other PWL suffix constants
  - test: remove redundant active parameter tests, keeping only those that exercise unique code paths or verify distinct mathematical properties
---------
Co-authored-by: FBumann <117816358+FBumann@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
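The effect of the `active` gate on the SOS2 convexity constraint can be checked numerically. A sketch (the helper name is illustrative, not linopy API): the convexity constraint sum(lambda) = 1 becomes sum(lambda) = active, so active = 0 forces every lambda, and hence x and y, to zero.

```python
import numpy as np

def sos2_convexity_with_active(lambdas, active: int) -> bool:
    """Check the gated convexity constraint sum(lambda) == active.
    With lambda >= 0, active == 0 already forces all entries to zero,
    which is exactly the zero-forcing behaviour described above."""
    lam = np.asarray(lambdas, dtype=float)
    if np.any(lam < 0):
        return False  # lambdas are bounded in [0, 1]
    return bool(np.isclose(lam.sum(), float(active)))
```

So ((0.3, 0.7), active=1) is feasible, the all-zero vector is the only feasible point for active=0, and any non-zero lambdas with active=0 are rejected.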
…sets and supersets (PyPSA#572)
* refac: introduce consistent convention for linopy operations with subsets and supersets
* move scalar addition to add_constant
* add overwriting logic to add_constant
* add join parameter to control alignment in operations
* Add le, ge, eq methods with join parameter for constraints
  Add le(), ge(), eq() methods to the LinearExpression and Variable classes, mirroring the pattern of the add/sub/mul/div methods. These methods support the join parameter for flexible coordinate alignment when creating constraints.
* Extract constant alignment logic into an _align_constant helper
  Consolidate the repetitive alignment handling in _add_constant and _apply_constant_op into a single _align_constant method. This eliminates code duplication and makes the alignment behavior (join parameter handling, fill_value, size-aware defaults) testable and maintainable in one place.
* update notebooks
* update release notes
* fix types
* add regression test
* fix numpy array dim mismatch in constraints and add RHS dim tests
  numpy_to_dataarray no longer inflates ndim beyond arr.ndim, fixing lower-dimensional numpy arrays as constraint RHS. Also reject higher-dimensional constant arrays (numpy/pandas) consistently with DataArray behavior.
* remove pandas reindexing warning
* Fix mypy errors: type ignores for xr.align/merge, match override signature, add test type hints
* remove outdated warning tests
* reintroduce expansions of extra rhs dims, fix multiindex alignment
* refactor test fixtures and use sign constants
* add tests for pandas series subset/superset
* test: add TestMissingValues for same-shape constants with NaN entries
* Fix broken test imports, a stray docstring char, and an incorrect test assertion from the fixture refactor
* Fill NaN with neutral elements in expression arithmetic, preserve NaN as 'no constraint' in the RHS
  - Fill NaN with 0 (add/sub) or fill_value (mul/div) in _add_constant/_apply_constant_op
  - Fill NaN coefficients with 0 in Variable.to_linexpr
  - Restore the NaN mask in to_constraint() so a subset RHS still signals unconstrained positions
* Fix CI doctest collection by deferring the linopy import in test/conftest.py
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Strengthen masked IIS regression test
* Fix Xpress IIS mapping for masked constraints
* Fix typing in masked IIS regression test
…PSA#601)

* handle missing dual values when barrier solution has no crossover
* Add release notes

---------

Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
* Add semi-continuous variables as an option
* Run the pre-commit
* Fix mypy issues
* Add release notes note
* Fabian feedback
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci
* Missing to_culpdx

---------

Co-authored-by: Fabian Hofmann <fab.hof@gmx.de>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
…ords. Here's what changed:

- test_linear_expression_sum / test_linear_expression_sum_with_const: `v.loc[:9].add(v.loc[10:], join="override")` → `v.loc[:9] + v.loc[10:].assign_coords(dim_2=v.loc[:9].coords["dim_2"])`
- test_add_join_override → test_add_positional_assign_coords: uses `v + disjoint.assign_coords(...)`
- test_add_constant_join_override → test_add_constant_positional: now uses different coords [5, 6, 7] + assign_coords to make the test meaningful
- test_same_shape_add_join_override → test_same_shape_add_assign_coords: uses `+ c.to_linexpr().assign_coords(...)`
- test_add_constant_override_positional → test_add_constant_positional_different_coords: `expr + other.assign_coords(...)`
- test_sub_constant_override → test_sub_constant_positional: `expr - other.assign_coords(...)`
- test_mul_constant_override_positional → test_mul_constant_positional: `expr * other.assign_coords(...)`
- test_div_constant_override_positional → test_div_constant_positional: `expr / other.assign_coords(...)`
- test_variable_mul_override → test_variable_mul_positional: `a * other.assign_coords(...)`
- test_variable_div_override → test_variable_div_positional: `a / other.assign_coords(...)`
- test_add_same_coords_all_joins: removed "override" from loop, added assign_coords variant
- test_add_scalar_with_explicit_join → test_add_scalar: simplified to `expr + 10`
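The pattern the tests moved to can be shown with a toy model: instead of an "override" join, the right operand's coordinates are positionally reassigned to match the left operand, after which alignment is a no-op. Dicts stand in for coordinate-labelled arrays here; this `assign_coords` is an illustrative stand-in, not xarray's method:

```python
def assign_coords(values: dict, new_coords: list) -> dict:
    """Relabel entries positionally with new_coords, keeping their order."""
    assert len(values) == len(new_coords)
    return dict(zip(new_coords, values.values()))

left = {0: 1.0, 1: 2.0}
disjoint = {10: 3.0, 11: 4.0}                 # no coordinates in common with left
relabelled = assign_coords(disjoint, list(left))
summed = {k: left[k] + relabelled[k] for k in left}
```

This makes the intent explicit in each test: the operands are combined by position, not by label.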
- Move DEFAULT_LABEL_DTYPE from constants.py into options["label_dtype"]
- Widen OptionSettings types from int to Any
- Add validation: label_dtype only accepts np.int32 or np.int64
- Fix matrices.py empty clabels fallback to use configured dtype
- Fix f-string quoting and trailing spaces in overflow error messages
- Add -> None annotations and importorskip guard in test_dtypes.py
- Add tests for int64 override and invalid dtype rejection
- Add release notes entry

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
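The validation rule described above (only np.int32 or np.int64 accepted for the option) can be sketched as follows; the function name is hypothetical, not linopy's actual implementation:

```python
import numpy as np

def validate_label_dtype(value):
    """Accept only np.int32 or np.int64 as a label dtype, as the commit
    describes; anything else is rejected with a ValueError."""
    if value not in (np.int32, np.int64):
        raise ValueError(f"label_dtype must be np.int32 or np.int64, got {value!r}")
    return value
```

Usage would be a one-time check when `options["label_dtype"]` is set, so downstream `np.arange(..., dtype=...)` calls can trust the configured value.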
Dimension coordinates (fill_missing_coords, _term coord) are small index arrays, not the large label/vars arrays that benefit from int32. xarray's index creation is slower with int32 than the default int64, causing a 13-38% build regression. Revert these to default int while keeping int32 for labels and vars where the memory savings matter. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
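The resulting convention can be sketched in a few lines (illustrative, not linopy's code): large label arrays use int32 with a guard against silent overflow, while small dimension coordinates keep numpy's default integer so xarray index creation stays fast. The helper name and sizes are made up for the example:

```python
import numpy as np

def new_labels(start: int, count: int) -> np.ndarray:
    """Allocate a label range as int32, refusing ranges past the int32 max
    (2_147_483_647) instead of silently overflowing."""
    end = start + count
    if end > np.iinfo(np.int32).max:
        raise ValueError(f"label {end} exceeds the int32 maximum")
    return np.arange(start, end, dtype=np.int32)

labels = new_labels(0, 10_000_000)  # large array: half the bytes of int64
term_coord = np.arange(8)           # small coord array: default integer dtype
```

The memory savings live entirely in arrays like `labels`; reverting the tiny coordinate arrays to the default dtype costs almost nothing in bytes.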
Changes proposed in this Pull Request
Cut memory for internal integer arrays (labels, vars indices, `_term` coords) by ~20% and improve build speed on large models by defaulting to `int32` instead of `int64`.

What changed
- `linopy/constants.py`: Added `DEFAULT_LABEL_DTYPE = np.int32`
- `linopy/model.py`: Variable and constraint label assignment uses `np.arange(..., dtype=DEFAULT_LABEL_DTYPE)` with an overflow guard that raises `ValueError` if labels exceed the int32 max (~2.1 billion)
- `linopy/expressions.py`: `_term` coord assignment and `.astype(int)` for vars arrays now use `DEFAULT_LABEL_DTYPE`
- `linopy/common.py`: `fill_missing_coords` uses an int32 arange; polars schema infers `Int32`/`Int64` based on the actual array dtype instead of an OS/numpy-version heuristic
- `test/test_constraints.py`: Updated dtype assertions to use `np.issubdtype` (compatible with both int32 and int64)
- `test/test_dtypes.py` (new): Tests for int32 labels, expression vars, solve correctness, and the overflow guard
- `dev-scripts/benchmark_lp_writer.py` (new): Benchmark script supporting `--phase memory|build|lp_write` with a `--plot` comparison mode

Benchmark results
Reproduce with:
Memory (dataset `.nbytes`)

Consistent 1.25x reduction across all problem sizes (e.g. 640 MB → 512 MB at 8M vars). The `labels` and `vars` arrays shrink 50% (int64 → int32) while `lower`/`upper`/`coeffs`/`rhs` stay float64.

Build speed
No regression on small/medium models. ~2x speedup at largest sizes (4.5M–8M vars) due to reduced memory pressure.
Similar results on a real PyPSA model
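The 1.25x memory figure reported above is easy to sanity-check: if roughly 40% of the dataset bytes sit in int64 label/vars arrays and the rest in float64 bounds/coefficients/RHS, halving the integer arrays shrinks the total by exactly 1.25x. The array sizes below are illustrative stand-ins, not benchmark data:

```python
import numpy as np

n = 1_000_000
int_arrays = np.arange(4 * n, dtype=np.int64)     # toy stand-in for labels + vars
float_arrays = np.zeros(6 * n, dtype=np.float64)  # lower/upper/coeffs/rhs stay float64

before = int_arrays.nbytes + float_arrays.nbytes                   # 80 MB
after = int_arrays.astype(np.int32).nbytes + float_arrays.nbytes   # 64 MB
ratio = before / after                                             # 1.25
```

This also explains why the reduction is flat across problem sizes: the int/float byte split is fixed by the model structure, not its size.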
Checklist
- Code changes are documented in `doc`.
- A note for the release notes `doc/release_notes.rst` of the upcoming release is included.