Wow, great examples. Are you thinking about providing inference examples based on ONNX-packaged models as well? Compared to Python pickle-based packaged models, ONNX-packaged models can run on top of many potentially much faster runtimes. I'm not sure whether NVIDIA-specific hardware acceleration is supported already, but according to the ONNX website, NVIDIA acceleration in general seems to be supported, or at least planned.
We accelerated using NVIDIA TensorRT with torch2trt, an easy-to-use PyTorch-to-TensorRT converter. For TensorRT, we first convert the PyTorch model to ONNX and then to TensorRT.
All the packages are accelerated for NVIDIA Jetson hardware.
Hi @ak-nv, we’ve been working quite hard on ROS 1 and ROS 2 packages for conda. These packages are cross-platform, and we can use cuda / cudnn etc. from the conda-forge channel. NVIDIA is actually quite active in the conda-forge community, which our effort is based on. I am wondering whether there is any interest from NVIDIA in also making the conda-package route attractive for Jetson? We already have ARM64 packages for ROS Noetic, and for Foxy we just need to turn it on. Would be happy to give you a demo / chat about this. PS: Here is a link with our recent updates: Cross-platform conda packages for ROS | by Wolf Vollprecht | robostack | Feb, 2021 | Medium
Great, thanks. I was having trouble building anything that used OpenCV in the Docker container, but I think it was just that the libopencv-dev package was missing.
Isaac ROS is available now at github.com/NVIDIA-ISAAC-ROS. Clone the repositories you need into your ROS workspace to build from source with colcon alongside your other ROS2 packages.
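A sketch of that clone-and-build workflow, assuming a sourced ROS2 Foxy environment; the workspace path and the two repository names (isaac_ros_common, isaac_ros_apriltag, both from the github.com/NVIDIA-ISAAC-ROS organization) are examples, so substitute the packages you actually need:

```shell
# Create a workspace and clone the Isaac ROS repositories into its src folder.
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_apriltag.git

# Build from source with colcon alongside any other ROS2 packages,
# then source the overlay so the new packages are on your path.
cd ~/ros2_ws
colcon build --symlink-install
source install/setup.bash
```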
Our latest release includes hardware-accelerated ROS2 Foxy packages for image processing and deep learning, tested on Jetson AGX Xavier with JetPack 4.6:
- Stereo visual inertial odometry (60 fps at 720p)
- DNN model inference for custom and pre-trained DNNs, with included examples for DOPE 3D pose estimation and U-Net semantic image segmentation (pre-trained PeopleSemSegNet, 25 fps at 544p)
- AprilTag detection (52 fps at 1080p)
- Image pre-processing (lens distortion correction, color space conversion, scaling)
- Stereo depth estimation (disparity and point cloud)