Conversation
lkdvos left a comment:
I think I agree with your assessment that this is probably not entirely what we want, and we should simply provide `adapt` functionality for operators in MPSKit.
My main motivation here is that:
- I am not sure we really need to support swapping out every part of the pipeline to run on GPU. Since this is typically not a bottleneck, I would much rather reduce the amount of code and avoid adding extensions and dependencies here that we then also have to maintain.
- Given the `@allowscalar` calls, I don't think this is any more efficient than simply constructing everything on CPU and then sending it over.
- In principle, all of this code for creating the tensors should eventually be replaced by the `TensorKitTensors.jl` implementations; that just hasn't happened yet.

Therefore I would prefer not to add this here, if that would work?
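The "construct on CPU, then send it over" workflow suggested above could look roughly like this. This is a hypothetical sketch: it assumes `Adapt.adapt` support for MPSKit operators, which this very comment notes would still have to be provided, and `transverse_field_ising` stands in for any existing CPU operator builder.

```julia
# Hypothetical sketch of the proposed workflow: build the operator on CPU
# with the existing builders, then transfer it to GPU in one step, instead
# of constructing it element-by-element on the GPU under @allowscalar.
using MPSKit, TensorKit
using CUDA, Adapt

# Construct the Hamiltonian on CPU as usual (illustrative builder name).
H = transverse_field_ising(; g = 1.0)

# Move the underlying tensor data to the GPU. This relies on `adapt`
# methods for MPSKit operators, which (per the discussion) do not exist
# yet and would be the piece MPSKit provides instead of this PR.
H_gpu = adapt(CuArray, H)
```

The point of this design is that the (cheap) operator construction stays in plain, maintainable CPU code, and GPU support reduces to a single storage-conversion hook.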
Well, not having this is completely blocking a lot of the PEPSKit and MPSKit examples from working... we could get rid of the …
Is there a reason not to change the GPU examples to simply have the workflow of using …
Add support for passing array types instead of just element types, as well as CUDA-compatible operator builders. For some of these it may make more sense to just turn the ops into `CuTensorMap`s at the end, rather than using `@allowscalar`.
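The "convert at the end" route could be sketched as follows. This is a hypothetical illustration, not the PR's implementation: it assumes a `TensorMap` with `CuArray` storage can be obtained via `Adapt.adapt`, and the tensor shown is just a stand-in for whatever the operator builders produce.

```julia
# Hypothetical sketch: build a plain CPU TensorMap first, then convert its
# storage to CuArray once at the end, instead of filling GPU arrays
# entry-by-entry under @allowscalar.
using TensorKit, CUDA, Adapt

# A two-site operator on CPU (illustrative; real builders would go here).
op = TensorMap(randn, Float64, ℂ^2 ⊗ ℂ^2, ℂ^2 ⊗ ℂ^2)

# One bulk host-to-device transfer at the end: a "CuTensorMap", i.e. a
# TensorMap backed by CuArray storage (assumes adapt support for TensorMap).
op_gpu = adapt(CuArray, op)
```

Compared with `@allowscalar`, this does one contiguous copy instead of many scalar GPU accesses, which is both simpler and typically faster.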