Conversation

Your PR no longer requires formatting changes. Thank you for your contribution!
Had to rebase on top of

(just converted to draft to avoid rerunning all the tests for the various rebases)

Oh good call, thank you!

FWIW all this is why I normally don't cherry-pick off other branches 😇
TensorKit.storagetype(imps::InfiniteMPS{A, B}) where {A, B} = storagetype(A)
TensorKit.storagetype(::Type{InfiniteMPS{A, B}}) where {A, B} = storagetype(A)

I think this is already defined for AbstractMPS now.
    kwargs...
) where {S <: IndexSpace}
-    return InfiniteMPS(MPSTensor.(pspaces, circshift(Dspaces, 1), Dspaces); kwargs...)
+    return InfiniteMPS(MPSTensor.(ComplexF64, pspaces, circshift(Dspaces, 1), Dspaces); kwargs...)

This looks a bit odd to me: I would prefer not to have to fill out defaults in more places, and to keep that centralized in the MPSTensor calls. Additionally, I think you had to add a method to support the signature you added here?
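The "keep the default centralized" preference can be sketched outside Julia; here is a minimal Python analogue under assumed names (`mps_tensor`, `infinite_mps`, and `DEFAULT_ELTYPE` are illustrative stand-ins, not the actual MPSKit API):

```python
import numpy as np

DEFAULT_ELTYPE = np.complex128  # single source of truth for the default scalar type

def mps_tensor(pspace, left, right, eltype=DEFAULT_ELTYPE):
    # The default lives only here; callers such as infinite_mps below
    # never have to spell out a ComplexF64-style default themselves.
    return np.zeros((left, pspace, right), dtype=eltype)

def infinite_mps(pspaces, Dspaces, **kwargs):
    # Mirrors the reviewed line: pair each site with shifted bond dimensions,
    # forwarding kwargs (including an optional eltype) to mps_tensor.
    shifted = Dspaces[-1:] + Dspaces[:-1]  # Python analogue of circshift(Dspaces, 1)
    return [mps_tensor(p, l, r, **kwargs) for p, l, r in zip(pspaces, shifted, Dspaces)]

tensors = infinite_mps([2, 2], [4, 4])
```

With this layout, changing the default element type is a one-line edit instead of a hunt through every constructor.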
for j in Iterators.reverse((i + 1):center)
-    v.parent.Cs[j], tmp = lq_compact!(_transpose_tail(v.parent.ACs[j]; copy = true); positive = true)
+    v.parent.Cs[j], tmp = lq_compact!(_transpose_tail(v.parent.ACs[j]; copy = true), Defaults.alg_lq())

This change seems slightly counterproductive to me: by only passing positive = true, we select the algorithm at runtime, which should pick the correct one no matter what kind of array we are using, whereas explicitly filling in the algorithm requires us to make a choice of LAPACK_, CUSOLVER_, ...

Then you need to merge the Driver PR at MatrixAlgebraKit, because the way you currently have this set up, it forces the LAPACK algorithm to be used with no choice for the user, irrespective of the backing array type.

Does that mean that if you call lq_compact!(CUDA.rand(2, 2)) it defaults to LAPACK_HouseholderLQ? Because that seems to imply we are missing a MatrixAlgebraKit.default_algorithm implementation somewhere.

No, it means that deep in the bowels of MPSKit someone hard-coded that the LAPACK method be used; this happens dozens of times over the QKH ecosystem.

Yes, I agree with this, but I think this is one of the very few places where we didn't do that, and the change suggested here does make it use the LAPACK one, since Defaults.alg_lq() is the LAPACK one.
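The trade-off under discussion, dispatching on the array type at runtime versus pinning one backend, can be sketched in Python; the registry and algorithm names below are hypothetical illustrations in the spirit of MatrixAlgebraKit.default_algorithm, not its actual API:

```python
import numpy as np

# Hypothetical registry mapping array types to LQ backends (names illustrative).
_DEFAULT_LQ = {}

def register_lq(array_type, algorithm):
    _DEFAULT_LQ[array_type] = algorithm

def default_lq_algorithm(a):
    # Walk the type's MRO so subclasses inherit their parent's backend.
    for cls in type(a).__mro__:
        if cls in _DEFAULT_LQ:
            return _DEFAULT_LQ[cls]
    raise TypeError(f"no LQ backend registered for {type(a).__name__}")

register_lq(np.ndarray, "LAPACK_HouseholderLQ")
# A GPU array package would register its own backend, e.g.:
# register_lq(cupy.ndarray, "CUSOLVER_HouseholderLQ")

def lq_compact(a, algorithm=None):
    # No explicit algorithm: choose one based on the array type at runtime.
    # Passing a Defaults.alg_lq()-style constant would instead pin the backend.
    algorithm = algorithm or default_lq_algorithm(a)
    if algorithm == "LAPACK_HouseholderLQ":
        # Compact LQ of A via the reduced QR of A^T: A = L Q, L lower-triangular.
        q, r = np.linalg.qr(a.T)
        return r.T, q.T
    raise NotImplementedError(algorithm)

a = np.arange(6.0).reshape(2, 3)
l, q = lq_compact(a)
```

Hard-coding `Defaults.alg_lq()` at a call site corresponds to always passing the LAPACK string above, which would break as soon as `a` is a GPU array.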
lkdvos left a comment:

Since I stole the commits from here, I thought I might as well also go through and indicate some merge changes that were probably no longer needed, and meanwhile left some more comments.
for j in center:i
-    v.parent.ALs[j], v.parent.Cs[j + 1] = qr_compact(v.parent.ACs[j]; positive = true)
+    v.parent.ALs[j], v.parent.Cs[j + 1] = qr_compact(v.parent.ACs[j], Defaults.alg_qr())
TensorKit.storagetype(l::LeftGaugedQP{S, T1, T2, E}) where {S, T1, T2, E} = storagetype(S)
TensorKit.storagetype(::Type{LeftGaugedQP{S, T1, T2, E}}) where {S, T1, T2, E} = storagetype(S)

Should be defined in terms of the generic QP.
TensorKit.storagetype(l::RightGaugedQP{S, T1, T2, E}) where {S, T1, T2, E} = storagetype(S)
TensorKit.storagetype(::Type{RightGaugedQP{S, T1, T2, E}}) where {S, T1, T2, E} = storagetype(S)
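The "define it once on the generic supertype" suggestion is the standard deduplication move; a minimal Python analogue, with class names as illustrative stand-ins for the QP types:

```python
class QP:
    """Common supertype: derive the storage type once, here."""

    def __init__(self, storage):
        self._storage = storage

    def storagetype(self):
        # Single generic definition; the Left/Right subtypes inherit it
        # instead of each repeating an identical method.
        return type(self._storage)

class LeftGaugedQP(QP):
    pass

class RightGaugedQP(QP):
    pass

left = LeftGaugedQP(storage=[1.0, 2.0])
right = RightGaugedQP(storage=[3.0])
```

In Julia the same effect comes from defining `storagetype` on the abstract QP type so both concrete gauges pick it up by dispatch.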
I know, since this is my bad I thought I would try and help fix it 😨

It might take me a few days to get back to all this as I have a few other things in flight.
Co-authored-by: Lukas Devos <ldevos98@gmail.com>
Mostly this allows passing an array type, instead of just an element type, so that GPU arrays can back MPS tensors.
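The PR's core idea, parameterizing construction on the array type rather than only the element type, can be sketched as follows; the function name is hypothetical and numpy stands in for the CPU backend (a GPU type would be e.g. a CuPy array, not shown):

```python
import numpy as np

def make_mps_tensor(shape, arraytype=np.ndarray, eltype=np.complex128):
    """Build a randomly initialized tensor backed by the requested array type.

    Passing only eltype (the old behaviour) selects the default CPU backend;
    passing a GPU array type (hypothetical here) would move the storage to
    the device while the calling code stays unchanged.
    """
    data = np.random.standard_normal(shape) + 1j * np.random.standard_normal(shape)
    if arraytype is np.ndarray:
        return data.astype(eltype)
    # A GPU backend would convert here, e.g. `return cupy.asarray(data, dtype=eltype)`.
    raise NotImplementedError(f"no backend registered for {arraytype!r}")

t = make_mps_tensor((2, 3, 2))
```

The point is that the array type, not just the scalar type, becomes part of the constructor's contract, which is what lets the runtime algorithm selection discussed above pick a matching factorization backend.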