Description
Hi,
I’m currently using the setup_torch_device and configure_cuda methods. Right now, configure_cuda always sets:

```python
self.torch_device = torch.device("cuda")
```

which defaults to the first available GPU (cuda:0). On multi-GPU systems, the first GPU may already be in use, so it would be useful to be able to select a specific GPU.
Proposed:
Add an optional argument to configure_cuda, e.g. device_index, so that the device can be set explicitly:

```python
def configure_cuda(self, ort_providers, device_index=0):
    self.torch_device = torch.device(f"cuda:{device_index}")
```
This would allow both PyTorch and ONNX Runtime to use the specified GPU, instead of always defaulting to cuda:0.
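As a rough sketch of how the same index could be threaded through to both frameworks, the snippet below pins PyTorch to the requested device and passes the matching `device_id` to ONNX Runtime's CUDAExecutionProvider via provider options. The `ModelRunner` class and the exact `configure_cuda` body are illustrative assumptions, not the project's actual implementation:

```python
import torch


class ModelRunner:
    # Hypothetical container class, for illustration only.

    def configure_cuda(self, ort_providers, device_index=0):
        # Pin PyTorch to the requested GPU instead of the default cuda:0.
        self.torch_device = torch.device(f"cuda:{device_index}")
        # Give ONNX Runtime the same index through provider options so
        # both frameworks end up on the same physical GPU.
        ort_providers.insert(
            0, ("CUDAExecutionProvider", {"device_id": device_index})
        )
        return ort_providers
```

With this shape, `configure_cuda(providers, device_index=1)` would target the second GPU for both PyTorch and ONNX Runtime, while the default of 0 preserves today's behavior.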
Thank you~