Isaac ROS March update: vision-based navigation

These are not on our roadmap for Isaac ROS.

The performance of the stereo visual odometry package is best in its class.

It’s been submitted to KITTI’s benchmark with accuracy and performance results here; it runs at 0.007s per frame on Jetson AGX, with a translation error of 0.94% and a rotation error of 0.0019 deg/m. It is the fastest stereo camera solution submitted to KITTI, with the highest real-time accuracy. In practical terms, that’s 80fps on Jetson AGX at 720p on ROS 2 Foxy.

We test against multiple standard visual odometry sequences, and have deployed this on our Carter2 robot to generate maps, localize, and transfer those maps to other robots for localization.

This is an early-access release of our work in progress, as part of a continuous improvement process. It will continue to improve as we get more mileage on our robots and feedback from the industry.

There is a good writeup of the VSLAM package here.

In summary, it does both. We use it as an additional vision-based localization source to improve on the robustness of planar LIDAR. The current release can construct a map of landmarks and a pose graph, refine it over time as the environment changes, save the map, and reload it on start with a prior pose to guide the initial localization. The plan is to remove the need for the prior pose on start in a release targeted for the end of this year.
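
As a rough illustration of that save/reload flow, here is a minimal sketch of loading a saved map with a prior pose on startup. The interface package, action name, field names, and map path below are assumptions for illustration only; check the isaac_ros_visual_slam documentation for the actual API in your release.

```python
# Hypothetical sketch: reload a saved VSLAM map with a prior pose on startup.
# Action/interface names below are assumptions, not the confirmed API.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node

from isaac_ros_visual_slam_interfaces.action import LoadMapAndLocalize  # assumed


class MapLoader(Node):
    def __init__(self):
        super().__init__('map_loader')
        self._client = ActionClient(
            self, LoadMapAndLocalize, 'visual_slam/load_map_and_localize')

    def load(self, map_url: str, x: float, y: float, z: float):
        goal = LoadMapAndLocalize.Goal()
        goal.map_url = map_url
        # Prior pose hint to guide the initial localization (field name assumed).
        goal.localize_near_point.x = x
        goal.localize_near_point.y = y
        goal.localize_near_point.z = z
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)


def main():
    rclpy.init()
    node = MapLoader()
    future = node.load('/maps/warehouse_map', 0.0, 0.0, 0.0)  # hypothetical path
    rclpy.spin_until_future_complete(node, future)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```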

There is good detail on what nvblox does here.

As this is an early release, we need to work on several of the things mentioned above, including improving performance to run in real time on our Jetson platform and scaling to large spaces. As with VSLAM, we are working on vision-based solutions to improve on existing LIDAR solutions.

The release includes a costmap plugin for Nav2, and builds a reconstructed map in a 3D voxel grid.
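
To make the Nav2 hookup concrete, here is a hedged launch sketch that brings up the nvblox reconstruction node alongside standard Nav2, pointed at a params file that adds the costmap layer. The nvblox package, executable, parameter, and plugin class names are assumptions for illustration; the nav2_bringup include and its params_file argument are standard Nav2.

```python
# Hypothetical launch sketch: Nav2 bringup with an nvblox costmap layer.
# nvblox names are assumptions; consult the isaac_ros_nvblox docs.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    # Params file (hypothetical package/path) whose costmap section would add:
    #   plugins: ["nvblox_layer", "inflation_layer"]
    #   nvblox_layer:
    #     plugin: "nvblox::NvbloxCostmapLayer"   # assumed plugin class
    params_file = os.path.join(
        get_package_share_directory('my_robot_bringup'),  # hypothetical
        'params', 'nav2_nvblox.yaml')

    # nvblox 3D voxel-grid reconstruction node (assumed package/executable).
    nvblox_node = Node(
        package='nvblox_ros',
        executable='nvblox_node',
        parameters=[{'voxel_size': 0.05}],  # assumed parameter name
    )

    # Standard Nav2 bringup, pointed at the params file above.
    nav2_launch = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(
            get_package_share_directory('nav2_bringup'),
            'launch', 'navigation_launch.py')),
        launch_arguments={'params_file': params_file}.items(),
    )

    return LaunchDescription([nvblox_node, nav2_launch])
```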

This is not integrated into Nav2. It provides a package enabling a hardware-accelerated pipeline, from camera through detection, for developers with their own trained networks. We anticipate we will need to plug this into Nav2 as a hint to improve behaviour around obstacles, as discussed in the thread.
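
For a flavor of what that pipeline looks like, below is a hedged sketch composing an image encoder and a TensorRT inference stage in a single process using ROS 2 composable nodes. The Isaac ROS package and plugin names, parameters, topic remappings, and model paths are assumptions for illustration; substitute the names from the package documentation and your own trained network.

```python
# Hypothetical sketch: hardware-accelerated camera-to-detection pipeline
# built from ROS 2 composable nodes. Isaac ROS names below are assumptions.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    # Encode camera images into the tensor format the network expects.
    encoder = ComposableNode(
        package='isaac_ros_dnn_encoders',                                # assumed
        plugin='nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode',  # assumed
        parameters=[{'network_image_width': 640,
                     'network_image_height': 368}],
        remappings=[('image', '/camera/image_rect_color')],
    )

    # Run the developer's own trained network with TensorRT.
    inference = ComposableNode(
        package='isaac_ros_tensor_rt',                                   # assumed
        plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',         # assumed
        parameters=[{'model_file_path': '/models/detector.onnx',   # your model
                     'engine_file_path': '/models/detector.plan'}],
    )

    # One process, so GPU-accelerated stages hand off without serialization.
    container = ComposableNodeContainer(
        name='detection_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[encoder, inference],
    )
    return LaunchDescription([container])
```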

We are open to collaborating where we can, where it aligns with our vision-based approach to navigation.

Thanks for all the probing questions and thoughts.
