
Small corrections to LBFGSData constructor #414

Open
pkofod wants to merge 1 commit into JuliaSmoothOptimizers:main from pkofod:patch-1

Conversation


@pkofod pkofod commented Mar 12, 2026

`mem` is clamped to positive integers. I'm not sure whether it's intentional that no error is thrown here: in the existing code the memory would be clamped to 1, but the arrays needed to maintain the approximation were sized with the original, non-positive value, for example:
```
julia> LinearOperators.LBFGSData(Float64, 1; mem=-1, inverse=true)
ERROR: ArgumentError: invalid GenericMemory size: too large for system address width
Stacktrace:
 [1] GenericMemory
   @ ./boot.jl:516 [inlined]
 [2] Array
   @ ./boot.jl:578 [inlined]
 [3] Array
   @ ./boot.jl:591 [inlined]
 [4] zeros
   @ ./array.jl:589 [inlined]
 [5] zeros(::Type{Float64}, dims::Int64)
   @ Base ./array.jl:585
 [6] LinearOperators.LBFGSData(T::Type, n::Int64; mem::Int64, scaling::Bool, damped::Bool, inverse::Bool, σ₂::Float64, σ₃::Float64)
   @ LinearOperators ~/.julia/packages/LinearOperators/p7GRU/src/lbfgs.jl:36
 [7] top-level scope
   @ REPL[33]:1
```
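A minimal sketch of the mismatch the trace above shows, assuming the clamping uses something like `max(mem, 1)` (hypothetical variable names; the actual code lives in `lbfgs.jl`):

```julia
# Hypothetical reconstruction: the memory count is clamped to a positive
# integer, but a workspace array is sized with the raw (negative) value.
mem = -1
mem_clamped = max(mem, 1)      # clamped to 1
zeros(Float64, mem_clamped)    # fine: a 1-element zero vector
# zeros(Float64, mem)          # throws ArgumentError, as in the trace above
```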

Then, a small, mostly theoretical thing. The vectors `a` and `b` are declared as `Vector{Vector{T}}`, but when `inverse == true` they are constructed as plain `Vector{T}`. This works at runtime only because Julia converts the value by accident: it widens the element type and copies the elements over, and there happen to be zero elements to copy. I just figured it was clearer to construct the correct type directly rather than rely on that convert behavior.
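The accidental conversion can be reproduced with plain Base Julia, independent of LinearOperators:

```julia
# An empty Vector{Float64} converts to Vector{Vector{Float64}} only because
# there are zero elements to copy:
a = convert(Vector{Vector{Float64}}, Float64[])
@assert a isa Vector{Vector{Float64}} && isempty(a)

# With even one element the same conversion throws, since a Float64
# cannot itself be converted to a Vector{Float64}:
# convert(Vector{Vector{Float64}}, [1.0])   # MethodError
```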

codecov bot commented Mar 16, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 94.83%. Comparing base (32dbc5e) to head (0131eaa).
⚠️ Report is 77 commits behind head on main.

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main     #414      +/-   ##
==========================================
- Coverage   95.00%   94.83%   -0.17%
==========================================
  Files          17       21       +4
  Lines        1100     1180      +80
==========================================
+ Hits         1045     1119      +74
- Misses         55       61       +6
```
☔ View full report in Codecov by Sentry.