Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/meta-pytorch/torchcodec/1247
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @Cemberk! Thank you for your pull request and welcome to our community.

Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Hi @Cemberk! I realize the PR is still a draft, but I thought it might be worth engaging early. Would you be able to share context on this work and the motivation behind it? Generally, we want to enable third parties to integrate with TorchCodec and extend it to more devices. We do that by enabling out-of-tree extensions: the extension lives in a different repo, so that you can have full control over it. The idea is to implement and register your own DeviceInterface. See a simple example here, and a real-world usage here for XPU support.
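As a rough sketch of what such an out-of-tree extension's Python side might look like, assuming the DeviceInterface registration happens in C++ static initializers when a compiled library is loaded (the package layout and library name below are hypothetical):

import torch

# Hypothetical __init__.py of an out-of-tree torchcodec extension.
# Importing the package loads the compiled library; the library's C++
# static initializers are assumed to register the custom DeviceInterface
# with torchcodec's core. The .so name is a placeholder for illustration.
torch.ops.load_library("libtorchcodec_custom_device.so")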
Hi @NicolasHug! The context is that I have been working on running Hugging Face Transformers unit tests on ROCm GPUs in CI. I consistently see failures because torchcodec uses the GPU by default on NVIDIA but falls back to CPU on ROCm. My intention is to enable parity, so that Transformers code importing torchcodec behaves the same on both vendors' GPUs with native support. Based on the example code, I have created a repo for the plugin here: https://github.com/Cemberk/torchcodec-rocm. Does this look correct?

Also, the example seems designed around importing torchcodec first and then importing torchcodec-xpu. In the ROCm case this would mean upstream Transformers code needs compatibility logic across the repo. Could we instead add something like the following to ops.py, so that plugins, once installed, wire into the same import statement?

from importlib.metadata import entry_points

# Auto-discover and load third-party device extensions.
for _ep in entry_points(group="torchcodec.device_extensions"):
    try:
        # Each entry point is expected to resolve to a zero-argument
        # callable that performs the extension's registration.
        _ep.load()()
    except Exception:
        # A broken or incompatible extension should not break
        # `import torchcodec` for everyone else.
        pass
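If a mechanism along these lines were adopted, the plugin side would only need to expose a zero-argument callable and declare it under that entry-point group in its packaging metadata. A hypothetical sketch (package, function, and library names are all illustrative):

# Hypothetical plugin-side counterpart (torchcodec_rocm/__init__.py),
# declared in the plugin's pyproject.toml as:
#
#   [project.entry-points."torchcodec.device_extensions"]
#   rocm = "torchcodec_rocm:register"
#
import torch

def register():
    # Invoked by the discovery loop above. Loading the compiled library
    # runs the C++ static initializers that are assumed to register the
    # ROCm DeviceInterface; the .so name is a placeholder.
    torch.ops.load_library("libtorchcodec_rocm.so")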
Thanks for the context! On plugins: yes, there's #1151, which we still need to review, but it should be doable.
This will be a nice addition for the ROCm vLLM ecosystem too. So far we are using this: https://github.com/vllm-project/vllm/blob/main/tools/install_torchcodec_rocm.sh, but we would like this integrated into the upstream torchcodec repo so that we can use prebuilt packages and speed up installation.
Utilizing the rocDecode ROCm package to mirror the GPU API for accelerated decode behavior.
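For context on the intended parity: ROCm builds of PyTorch expose HIP devices under the "cuda" device type, so the goal is that the same decoding call that is hardware-accelerated on NVIDIA GPUs would go through rocDecode on AMD GPUs instead of silently falling back to CPU. A sketch of that call (the file path is illustrative):

from torchcodec.decoders import VideoDecoder

# On NVIDIA GPUs this call decodes with hardware acceleration; with the
# ROCm plugin installed, the identical call would use rocDecode, since
# ROCm PyTorch exposes HIP devices as "cuda".
decoder = VideoDecoder("video.mp4", device="cuda")
first_frame = decoder[0]  # decoded frame as a tensor on the GPU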