-
With JetPack versions earlier than 5.0, all of the Jetson containers assumed that the needed packages (CUDA, TensorRT, etc.) were passed through (via symlinks) from the host OS. Post-5.0, NVIDIA switched over to including CUDA, TensorRT, etc. in the container images themselves, so you don't have to install them into your OS image if you aren't using them natively. That makes the containers much larger, of course, but keeps your base OS smaller.
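If you want to convince yourself of this, here is a minimal sanity check you could run inside the container; it's a sketch, assuming a JetPack 5.x image that bundles the TensorRT Python bindings (e.g. l4t-jetpack) and a container started with `--runtime nvidia`, and it doesn't rely on the host OS providing any of these libraries:

```python
# Run *inside* a JetPack 5.x container started with the NVIDIA runtime,
# e.g. `docker run --rm --runtime nvidia <image> python3 check.py`.
# Assumes the image bundles CUDA and TensorRT, as described above.
import ctypes

import tensorrt as trt  # Python bindings shipped in the l4t-jetpack images

# TensorRT version bundled in the container (e.g. 8.5.2 for JetPack 5.1)
print("TensorRT:", trt.__version__)

# The CUDA driver library is mounted into the container by the NVIDIA
# container runtime; loading it confirms the GPU is reachable.
cuda = ctypes.CDLL("libcuda.so.1")
assert cuda.cuInit(0) == 0, "CUDA driver initialization failed"

count = ctypes.c_int()
assert cuda.cuDeviceGetCount(ctypes.byref(count)) == 0
print("CUDA devices:", count.value)
```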
-
Oh, good to know! So basically, I can use a Yocto base image without CUDA or TensorRT installed, choose for example a TensorRT 8.5.2 container (the version that ships with JetPack 5.1), and it will work? In other words, starting from JetPack 5.0 the NVIDIA containers talk to the GPU directly, without having to go through the CUDA and TensorRT from the Yocto base image?
-
Hello,
I am a bit confused about how to run a TensorRT inference model on a Yocto base image. I want to use the versions from JetPack 5.1, and I saw that you updated meta-tegra to JetPack 5.1. My question is: if I run an NVIDIA container on top of the Yocto base image, for example https://registry.hub.docker.com/r/nvidiajetson/l4t-ros2-foxy-pytorch or https://gitlab.com/nvidia/container-images/l4t-jetpack/-/tree/master/, will these containers symlink their CUDA and TensorRT versions to the CUDA and TensorRT available in the Yocto base image?
A related question: if I just use the NVIDIA container runtime on top of the Yocto base image, can I run CUDA and TensorRT without enabling CUDA and TensorRT in the Yocto base image, so that the libraries are only available at runtime, inside the container?
Many thanks!