
Changes for non-CPU array support#375

Draft
kshyatt wants to merge 7 commits into main from ksh/cu

Conversation

@kshyatt
Member

@kshyatt kshyatt commented Feb 6, 2026

Mostly allows passing an array type, instead of just an element type, so that GPU arrays can back MPS tensors.
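The distinction the PR is after can be sketched in plain Julia. This is a toy illustration only; `make_tensor` is a hypothetical stand-in, not MPSKit's actual constructor API:

```julia
# Toy sketch: accepting an element type vs. accepting an array type.
# `make_tensor` is hypothetical, not MPSKit's real API.

# Element-type signature: the storage is implicitly a CPU Array.
make_tensor(::Type{T}, dims...) where {T <: Number} = zeros(T, dims...)

# Array-type signature: the caller chooses the backing storage, so a GPU
# package could pass e.g. CuArray{ComplexF64} here instead.
make_tensor(::Type{A}, dims...) where {A <: AbstractArray} =
    fill!(A(undef, dims...), zero(eltype(A)))

make_tensor(ComplexF64, 2, 2)            # storage defaults to Array
make_tensor(Array{ComplexF64, 2}, 2, 2)  # storage chosen explicitly
```

With the second signature, "backing MPS tensors with GPU arrays" reduces to passing a different array type, with no other code changes at the call site.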

@kshyatt kshyatt requested a review from lkdvos February 6, 2026 10:32
@codecov

codecov bot commented Feb 11, 2026

Codecov Report

❌ Patch coverage is 0% with 33 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
src/states/infinitemps.jl 0.00% 11 Missing ⚠️
src/states/ortho.jl 0.00% 6 Missing ⚠️
src/states/abstractmps.jl 0.00% 4 Missing ⚠️
src/states/quasiparticle_state.jl 0.00% 4 Missing ⚠️
src/algorithms/toolbox.jl 0.00% 2 Missing ⚠️
src/operators/mpohamiltonian.jl 0.00% 2 Missing ⚠️
src/states/orthoview.jl 0.00% 2 Missing ⚠️
ext/MPSKitAdaptExt.jl 0.00% 1 Missing ⚠️
src/operators/multilinempo.jl 0.00% 1 Missing ⚠️
Files with missing lines Coverage Δ
ext/MPSKitAdaptExt.jl 4.54% <0.00%> (-47.84%) ⬇️
src/operators/multilinempo.jl 0.00% <0.00%> (-30.00%) ⬇️
src/algorithms/toolbox.jl 0.00% <0.00%> (-96.51%) ⬇️
src/operators/mpohamiltonian.jl 0.00% <0.00%> (-91.98%) ⬇️
src/states/orthoview.jl 0.00% <0.00%> (-89.69%) ⬇️
src/states/abstractmps.jl 2.27% <0.00%> (-52.28%) ⬇️
src/states/quasiparticle_state.jl 0.00% <0.00%> (-81.07%) ⬇️
src/states/ortho.jl 0.00% <0.00%> (-83.68%) ⬇️
src/states/infinitemps.jl 0.00% <0.00%> (-78.27%) ⬇️

... and 72 files with indirect coverage changes


@github-actions
Contributor

github-actions bot commented Feb 11, 2026

Your PR no longer requires formatting changes. Thank you for your contribution!

@kshyatt kshyatt force-pushed the ksh/cu branch 4 times, most recently from 046f4d0 to 44d6fe6 Compare February 11, 2026 13:57
@kshyatt kshyatt force-pushed the ksh/cu branch 2 times, most recently from 11674d6 to d4e1a03 Compare February 18, 2026 12:47
@kshyatt
Member Author

kshyatt commented Feb 18, 2026

I had to rebase on top of main, which introduced many conflicts; all but two of the examples are now working. The problem with examples 2 and 3 in cu_quantum1d/ is some very annoying conversion from BlockTensorMap slamming into TensorKit's conversion rules.

@lkdvos lkdvos marked this pull request as draft February 26, 2026 18:15
@lkdvos
Member

lkdvos commented Feb 26, 2026

(just converted to draft to avoid rerunning all the tests for the various rebases)

@kshyatt
Member Author

kshyatt commented Feb 26, 2026

> (just converted to draft to avoid rerunning all the tests for the various rebases)

Oh good call, thank you!

@kshyatt
Member Author

kshyatt commented Mar 2, 2026

FWIW all this is why I normally don't cherry-pick off other branches 😇

Comment on lines +105 to +106
TensorKit.storagetype(imps::InfiniteMPS{A, B}) where {A, B} = storagetype(A)
TensorKit.storagetype(::Type{InfiniteMPS{A, B}}) where {A, B} = storagetype(A)
Member

I think this is already defined for AbstractMPS now

Suggested change
TensorKit.storagetype(imps::InfiniteMPS{A, B}) where {A, B} = storagetype(A)
TensorKit.storagetype(::Type{InfiniteMPS{A, B}}) where {A, B} = storagetype(A)

kwargs...
) where {S <: IndexSpace}
-    return InfiniteMPS(MPSTensor.(pspaces, circshift(Dspaces, 1), Dspaces); kwargs...)
+    return InfiniteMPS(MPSTensor.(ComplexF64, pspaces, circshift(Dspaces, 1), Dspaces); kwargs...)
Member

This looks a bit odd to me. I would prefer not to have to fill out defaults in more places, keeping that centralized in the MPSTensor calls. Additionally, I think you had to add a method specifically to support the signature you introduce here?


for j in Iterators.reverse((i + 1):center)
-    v.parent.Cs[j], tmp = lq_compact!(_transpose_tail(v.parent.ACs[j]; copy = true); positive = true)
+    v.parent.Cs[j], tmp = lq_compact!(_transpose_tail(v.parent.ACs[j]; copy = true), Defaults.alg_lq())
Member

This change seems slightly counterproductive to me: by passing only positive = true, we select the algorithm at runtime, which should pick out the correct one no matter what kind of array we are using, whereas explicitly filling in the algorithm requires us to choose between LAPACK_, CUSOLVER_, ...
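The trade-off described in the comment above can be illustrated with a toy dispatch pattern. All names here (`default_lq`, `my_lq`, the algorithm structs) are hypothetical stand-ins, not MatrixAlgebraKit's actual internals:

```julia
# Toy sketch of runtime algorithm selection vs. hard-coding one backend.
# All names here are hypothetical, not MatrixAlgebraKit's real API.

struct LAPACK_HouseholderLQ end    # stand-in for the CPU algorithm type
struct CUSOLVER_HouseholderLQ end  # stand-in for the GPU algorithm type

# Runtime selection: dispatch on the array type picks the backend.
default_lq(::Array) = LAPACK_HouseholderLQ()
# A GPU extension would add, e.g.:
# default_lq(::CuArray) = CUSOLVER_HouseholderLQ()

# Keyword-only entry point defers the choice of algorithm to dispatch...
my_lq(A; positive = true) = my_lq(A, default_lq(A); positive)
# ...while the explicit-algorithm method commits to one backend.
my_lq(A, alg; positive = true) = (alg, positive)  # placeholder for the real kernel
```

Hard-coding `LAPACK_HouseholderLQ()` (or a helper that always returns it) at a call site would then silently break for any non-CPU array, which is the concern raised in this thread.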

Member Author

Then you need to merge the Driver PR in MatrixAlgebraKit, because the way this is currently set up forces the LAPACK algorithm to be used, with no choice for the user, irrespective of the backing array type.

Member

Does that mean that if you call lq_compact!(CUDA.rand(2, 2)), it defaults to LAPACK_HouseholderLQ? That seems to imply we are missing a MatrixAlgebraKit.default_algorithm implementation somewhere.

Member Author

No, it means that deep in the bowels of MPSKit someone hard-coded that the LAPACK method be used; this happens dozens of times across the QKH ecosystem.

Member

Yes, I agree with this, but I think this is one of the very few places where we didn't do that, and the change suggested here does make it use the LAPACK algorithm, since Defaults.alg_lq() is the LAPACK one.

Member

@lkdvos lkdvos left a comment

Since I stole the commits from here, I thought I might as well go through and flag some merge changes that are probably no longer needed, and meanwhile left some more comments.


for j in center:i
-    v.parent.ALs[j], v.parent.Cs[j + 1] = qr_compact(v.parent.ACs[j]; positive = true)
+    v.parent.ALs[j], v.parent.Cs[j + 1] = qr_compact(v.parent.ACs[j], Defaults.alg_qr())
Member

Same comment here

Comment on lines +17 to +18
TensorKit.storagetype(l::LeftGaugedQP{S, T1, T2, E}) where {S, T1, T2, E} = storagetype(S)
TensorKit.storagetype(::Type{LeftGaugedQP{S, T1, T2, E}}) where {S, T1, T2, E} = storagetype(S)
Member

Suggested change
TensorKit.storagetype(l::LeftGaugedQP{S, T1, T2, E}) where {S, T1, T2, E} = storagetype(S)
TensorKit.storagetype(::Type{LeftGaugedQP{S, T1, T2, E}}) where {S, T1, T2, E} = storagetype(S)

Should be defined in terms of the generic QP
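The suggestion to define this once for the generic quasiparticle type, rather than repeating it per concrete subtype, can be sketched as follows. The types and the `storagetype` function here are toy stand-ins for the real MPSKit/TensorKit definitions:

```julia
# Toy sketch of defining `storagetype` once on an abstract supertype
# instead of duplicating it on each concrete subtype.
# These are stand-ins, not MPSKit's actual type hierarchy.

abstract type AbstractQP{S} end
struct LeftGaugedQP{S} <: AbstractQP{S} end
struct RightGaugedQP{S} <: AbstractQP{S} end

# One pair of methods covers both Left- and RightGaugedQP.
storagetype(::Type{<:AbstractQP{S}}) where {S} = storagetype(S)
storagetype(qp::AbstractQP) = storagetype(typeof(qp))

# Base case for this sketch (TensorKit provides the real one).
storagetype(::Type{Matrix{T}}) where {T} = Matrix{T}
```

With this pattern, the two per-subtype definitions flagged in the diff become redundant, which is presumably why the review marks them for removal.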

Comment on lines +30 to +31
TensorKit.storagetype(l::RightGaugedQP{S, T1, T2, E}) where {S, T1, T2, E} = storagetype(S)
TensorKit.storagetype(::Type{RightGaugedQP{S, T1, T2, E}}) where {S, T1, T2, E} = storagetype(S)
Member

Suggested change
TensorKit.storagetype(l::RightGaugedQP{S, T1, T2, E}) where {S, T1, T2, E} = storagetype(S)
TensorKit.storagetype(::Type{RightGaugedQP{S, T1, T2, E}}) where {S, T1, T2, E} = storagetype(S)

@lkdvos
Member

lkdvos commented Mar 2, 2026

> FWIW all this is why I normally don't cherry-pick off other branches 😇

I know, since this is my bad I thought I would try and help fix it 😨

@kshyatt
Member Author

kshyatt commented Mar 2, 2026

It might take me a few days to get back to all this as I have a few other things in flight

kshyatt and others added 3 commits March 3, 2026 14:29
Co-authored-by: Lukas Devos <ldevos98@gmail.com>