
requirements.txt failure of TensorRT 10.3 when running simple_progress_monitor on Jetson Orin Nano in Conda env #4727

@aliphys

Description

I am trying to run the simple_progress_monitor.py example provided in TensorRT 10.3.

The problem is with the requirements.txt file. It selects a cuda-python version using pip-style (PEP 508) environment markers, which conda's --file option cannot parse, so neither pin can be installed. (JetPack 6.2.1 ships Python 3.10.12, for which python_version is "3.10", so under pip the first line below would in fact match; the failure comes from conda's parser, not from a version gap.)

cuda-python==12.2.0; python_version <= "3.10"
cuda-python==12.5.0; python_version >= "3.11"
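These two lines are standard PEP 508 environment markers. For CPython 3.10.12, python_version evaluates to just "3.10", so pip would match the first line. An illustrative check (assuming the packaging library, which pip uses internally for this grammar, is installed):

```python
# Evaluate the two environment markers as pip would for CPython 3.10.12,
# where python_version is reported as "3.10".
from packaging.markers import Marker

env = {"python_version": "3.10"}  # what pip sees on Python 3.10.12
print(Marker('python_version <= "3.10"').evaluate(env))  # True  -> cuda-python==12.2.0 applies
print(Marker('python_version >= "3.11"').evaluate(env))  # False -> cuda-python==12.5.0 is skipped
```

Conda's --file option, by contrast, expects conda MatchSpec syntax and has no notion of these markers, which is what produces the InvalidMatchSpec error below.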

This is the error that is shown:

(simple_tensorrt) jetson@jetson-desktop:/usr/src/tensorrt/samples/python/simple_progress_monitor$ conda install --yes --file requirements.txt
2 channel Terms of Service accepted

InvalidMatchSpec: Invalid spec 'cuda-python ==12.2.0;python_version<="3.10"': Invalid version '12.2.0;python_version<="3.10"': invalid character(s)

Additionally, requirements.txt contains a line to install pywin32, which is unparseable by conda on the Ubuntu 22.04 system that the Jetson Orin Nano runs on.

pywin32; platform_system == "Windows"

CondaValueError: could not parse 'pywin32; platform_system == "Windows"' in: requirements.txt
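The pywin32 line is likewise a valid PEP 508 requirement that pip would simply skip on a non-Windows system; it is conda's parser, not the file, that rejects it. A small illustration (again assuming the packaging library is installed):

```python
# Parse the pywin32 line as pip would; on Linux its marker evaluates to
# False, so pip skips the package instead of raising a parse error.
from packaging.requirements import Requirement

req = Requirement('pywin32; platform_system == "Windows"')
print(req.name)                                           # pywin32
print(req.marker.evaluate({"platform_system": "Linux"}))  # False
```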

Given that:

  • TensorRT support of the Jetson ecosystem is a major product feature
  • Jetson Orin Nano and TensorRT products are both developed by NVIDIA
  • Neither the Jetson Orin Nano nor TensorRT 10.3 are retired
  • JetPack 6.2.1 is the latest version officially supported by NVIDIA

I would appreciate guidance on how to run a simple TensorRT example on the Jetson Orin Nano in a smooth manner. Thanks!

Environment

TensorRT Version: 10.3.0

NVIDIA GPU: Jetson Orin Nano (Ampere)

NVIDIA Driver Version: 540.4.0 (Jetpack 6.2.1)

CUDA Version: 12.6

CUDNN Version:

Operating System: Ubuntu 22.04

Python Version (if applicable): 3.10.12

Tensorflow Version (if applicable):

PyTorch Version (if applicable):

Baremetal or Container (if so, version): Miniconda


jetson@jetson-desktop:/usr/src/tensorrt/samples/python/simple_progress_monitor$ dpkg -l | grep tensorrt
ii  nv-tensorrt-local-tegra-repo-ubuntu2204-10.3.0-cuda-12.5 1.0-1                                       arm64        nv-tensorrt-local-tegra repository configuration files
ii  nvidia-tensorrt                                          6.2.1+b38                                   arm64        NVIDIA TensorRT Meta Package
ii  nvidia-tensorrt-dev                                      6.2.1+b38                                   arm64        NVIDIA TensorRT dev Meta Package
ii  tensorrt                                                 10.3.0.30-1+cuda12.5                        arm64        Meta package for TensorRT
ii  tensorrt-libs                                            10.3.0.30-1+cuda12.5                        arm64        Meta package for TensorRT runtime libraries


jetson@jetson-desktop:/usr/src/tensorrt/samples/python/simple_progress_monitor$ sudo apt-cache show nvidia-jetpack
[sudo] password for jetson: 
Package: nvidia-jetpack
Source: nvidia-jetpack (6.2.1)
Version: 6.2.1+b38
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.2.1+b38), nvidia-jetpack-dev (= 6.2.1+b38)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.2.1+b38_arm64.deb
Size: 29300
SHA256: dd9cb893fbe7f80d2c2348b268f17c8140b18b9dbb674fa8d79facfaa2050c53
SHA1: dc630f213f9afcb6f67c65234df7ad5c019edb9c
MD5sum: 9c8dc61bdab2b816dcc7cd253bcf6482
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Source: nvidia-jetpack (6.2)
Version: 6.2+b77
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.2+b77), nvidia-jetpack-dev (= 6.2+b77)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.2+b77_arm64.deb
Size: 29298
SHA256: 70553d4b5a802057f9436677ef8ce255db386fd3b5d24ff2c0a8ec0e485c59cd
SHA1: 9deab64d12eef0e788471e05856c84bf2a0cf6e6
MD5sum: 4db65dc36434fe1f84176843384aee23
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Source: nvidia-jetpack (6.1)
Version: 6.1+b123
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.1+b123), nvidia-jetpack-dev (= 6.1+b123)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.1+b123_arm64.deb
Size: 29312
SHA256: b6475a6108aeabc5b16af7c102162b7c46c36361239fef6293535d05ee2c2929
SHA1: f0984a6272c8f3a70ae14cb2ca6716b8c1a09543
MD5sum: a167745e1d88a8d7597454c8003fa9a4
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Relevant Files

Model link: N/A

Steps To Reproduce

  1. Make sure that the Jetson Orin Nano is running JetPack 6.2.1.
  2. Install Miniconda.
  3. Create a new environment: conda create --name simple_tensorrt
  4. Activate it and set Python to 3.10.12 (same as the base installation): conda activate simple_tensorrt && conda install python=3.10.12
  5. Go to the location of the example: cd /usr/src/tensorrt/samples/python/simple_progress_monitor
  6. Attempt to install the packages from the requirements.txt file: conda install --yes --file requirements.txt
  7. Comment out the cuda-python and pywin32 lines quoted above so that the remaining packages can be installed.
  8. cuda-python 12.2.0 is not present on the default Anaconda channel and has to be installed separately from the NVIDIA channel: conda install nvidia::cuda-python==12.2.0
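The manual workaround in steps 7 and 8 could possibly be avoided: pip, unlike conda, evaluates PEP 508 markers, so inside the activated conda env the unmodified file may install with python -m pip install -r requirements.txt. The sketch below (illustrative only; assumes python3 and the packaging library are available, and uses a hypothetical /tmp demo file rather than the real requirements.txt) recreates the problematic lines and shows that pip-style parsing accepts them:

```shell
# Recreate the problematic requirement lines in a demo file, then parse
# them the way pip would; the markers evaluate cleanly instead of
# producing a conda parse error.
cat > /tmp/requirements_demo.txt <<'EOF'
cuda-python==12.2.0; python_version <= "3.10"
cuda-python==12.5.0; python_version >= "3.11"
pywin32; platform_system == "Windows"
EOF

python3 - <<'EOF'
from packaging.requirements import Requirement

with open("/tmp/requirements_demo.txt") as f:
    for line in f:
        req = Requirement(line.strip())
        # marker.evaluate() is True when the line applies to this machine
        print(f"{req.name}: applies={req.marker.evaluate()}")
EOF
```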

Commands or scripts: Mentioned in steps to reproduce section above.

Have you tried the latest release?:

Newer TensorRT releases and JetPack 6.2.1 do not play well together, so I am sticking to the official stable versions bundled with JetPack. Dependency hell is sadly a challenge of the Jetson ecosystem that I am trying to avoid.

Attach the captured .json and .bin files from TensorRT's API Capture tool if you're on an x86_64 Unix system

N/A

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):

N/A

Metadata

Assignees: No one assigned

Labels: Module:Samples (Issues when using TensorRT samples under the samples/ directory, including usage with trtexec)
