Isaac ROS March update, vision-based navigation

Isaac ROS, which includes hardware-accelerated ROS2 Foxy packages for AI perception and image processing, adds updates for vision-based navigation:

  • SVIO has been upgraded to VSLAM as a visual odometry source for Nav2, and can save and load its feature maps for localization.
  • nvblox (early access) provides a parallelized compute implementation of voxblox for 3D scene reconstruction and temporal cost maps for Nav2
  • DetectNet to detect and classify obstacles, as an input to improve navigation behaviour depending on the type of obstacle
  • bug fixes

(animation: 3D scene reconstruction & Nav2 local cost map with nvblox)

This Isaac ROS update is available for download and is part of our commitment to provide features and hardware acceleration for autonomous robots.

Clone the repositories you need into your ROS workspace to build from source with colcon alongside your other ROS2 packages. This release has been tested on the Jetson AGX Xavier with JetPack 4.6.1.
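
As a rough example, a from-source setup might look like the following (the workspace path and repository selection here are just an illustration; adjust to the packages you need):

```bash
# Example only: clone the Isaac ROS repositories you need into an existing ROS2 workspace.
cd ~/workspaces/isaac_ros-dev/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam.git

# Resolve dependencies, then build alongside your other ROS2 packages with colcon.
cd ..
rosdep install --from-paths src --ignore-src -r -y
colcon build --symlink-install
source install/setup.bash
```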

The next release will be in June, transitioning to ROS2 Humble and JetPack 5.0, and adding support for Jetson AGX Orin.

8 Likes

Great work @ggrigor and team. Any plans to open source NvBlast, NvCloth and the RTX GI SDK? The robotics community would benefit tremendously from being able to leverage those technologies more openly.

1 Like

This is particularly interesting, mind if I probe a little? Also, I love that gif, I’m a big fan of visual demonstrations!

How well does this work and has it been tested / deployed in a real application? VSLAM has always been ‘not quite there yet’ for commercial use in my testing of existing methods, in terms of long-term localization support over time, changing environments, obstacles, etc. Do you have a pure-localization mode or some long-term continuous mapping to deal with changes for real-world deployments?

I ask these questions because I had intended to do a VSLAM demo with Nav2 with @amerzlyakov and after evaluating the state of the art and openly available techniques, we ultimately passed because nothing was robust enough to put our stamp of approval / support for – but we haven’t looked at this (yet).

If this work is “ready” for prime time, I’d love to have a discussion around this for documentation / integration on navigation.ros.org.

Can you talk a bit about the roadmap here? How well / does this support large scale spaces? Where do you see this work integrating into Nav2 (costmap replacement, costmap plugin, etc). Just on a local costmap for height maps / dense representations or globally for full spaces?

I’m definitely open to real discussions on redesigning or otherwise reconsidering the environmental modeling in Nav2 – this may be a good place to chat if you / your team are putting resources behind this and thinking in a modern way. Really, I’ve been waiting to work on environmental modeling work in Nav2 until I had some help but I have many detailed thoughts about it.

Is this integrated anywhere? We actually had a dynamic obstacle pipeline we started in Nav2 that is on hold due to a lack of detector integrations. This may be a useful area of collaboration. I’d be interested in potentially upstreaming any detection / segmentation consuming costmap layers for general use.


In general, it might be beneficial for the Nav2 community or maintainers to sync up with you / your plans if you’re going to be developing add-ons for Nav2 / related to make sure we’re executing well on any synergies that may exist. There may be places we can collaborate.

2 Likes

These are not on our roadmap for Isaac ROS.

The performance of the stereo visual odometry package is best in its class.

It’s been submitted to KITTI’s benchmark with accuracy & performance results here; it runs at 0.007s per frame on Jetson AGX, with a translation error of 0.94% and a rotation error of 0.0019 deg/m. It is the fastest stereo camera solution submitted to KITTI, with the highest real-time accuracy. In practical terms that’s 80fps on Jetson AGX at 720p on Foxy.

We test against multiple standard visual odometry sequences, and have deployed this to our Carter2 robot to generate maps, localize, and transfer the map to other robots to localize.

This is an early access release of our work in progress, as part of a continuous improvement process. It will continue to improve as we get more mileage on our robots and feedback from the industry.

There is a good writeup of the VSLAM package here.

In summary it does both. We use it as an additional vision-based localization source, to improve robustness over planar LIDAR alone. The current release can construct a map of landmarks & a pose graph, refine it over time as the environment changes, save this map, and reload the map on start with a prior pose to guide the initial localization. The plan is to remove the need for the prior pose on start in a release targeted for the end of this year.
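
As a rough sketch of that save/load workflow from the command line (the service names and the Trigger type below are placeholders for illustration only, not the package’s published API; the actual interfaces are documented in the isaac_ros_visual_slam repository):

```bash
# Hypothetical sketch: save the landmark map after a mapping run, then reload it on the
# next start. The service names and the Trigger type are illustrative placeholders only.
ros2 service call /visual_slam/save_map std_srvs/srv/Trigger "{}"
ros2 service call /visual_slam/load_map std_srvs/srv/Trigger "{}"
# On start, a prior pose (e.g. the last known robot position) is then supplied to the node,
# for example as a parameter or an initial-pose topic, to seed localization.
```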

There is good detail on what nvblox does here.

With this as an early release we need to work on several of the things mentioned above, including improving performance to run in real time on our Jetson platform and scaling to large spaces. As with VSLAM, we are working on vision-based solutions to improve on existing LIDAR solutions.

The release includes a cost-map plugin for Nav2, and builds a reconstructed map in a 3D voxel grid.
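
Once Nav2 is up with the plugin configured, a quick sanity check that the layer is loaded and a costmap is being produced might look like this (standard Nav2 node and topic names; the exact plugin class name comes from the nvblox package’s example parameters):

```bash
# List the layer plugins the Nav2 local costmap actually loaded (standard Nav2 node names).
ros2 param get /local_costmap/local_costmap plugins

# Confirm the local costmap occupancy grid is being published at a reasonable rate.
ros2 topic hz /local_costmap/costmap
```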

This is not integrated into Nav2. It provides a package to enable a hardware accelerated pipeline from camera through detection for developers with their trained networks. We anticipate we will need to plug this into Nav2 as a hint to improve behaviour for obstacles as discussed in the thread.

We are open to collaborating where we can and where it aligns with our vision-based approach to navigation.

Thanks for all the probes and thoughts.

1 Like

Nice job with the performance of the previous version, very impressive. I got the VIO package working on my Jetson NX without Docker and outside the official JetPack, on 20.04 + Noetic.

I integrated it into my SLAM setup, and it worked nicely in the tests I did with a D435 as a passive stereo camera.

I couldn’t get the NN part working; I opened a GitHub issue, and maybe it was fixed in this new version. I would love to try this new update, but sadly I needed to sell my Xavier NX. I saved a backup though, so hopefully in the future I will be able to get one again.

This is the drone I had planned for Elbrus; now it has an RPi4 and ORB-SLAM.

Thanks a lot for sharing this great work. It looks amazing to me, and I hope I can come back to it some day.

I really think it is as good as you described.

3 Likes

Benchmarks are great, but has this been used anywhere for a practical deployment for a modest period of time without issues? That’s a more useful metric for quality and prime-time readiness than short-term dataset benchmarks. Particularly w.r.t. localization: localizing in environments long term with changes, low-feature parts of the environment, etc.

It is great to see some metrics on a standardized system though! I look forward to hearing more in the future about how well this fares for practical applications. If it works well, I’d be happy to have a tutorial or stronger integration with Nav2 to give users a VSLAM option out of the box.

How well does this scale now? Can it reasonably handle an apartment, a bodega, a grocery store, a warehouse, etc.? What scale of space can this handle today?

We are ramping up in real environments, from our current facility (100s of sqm) to our new facility (~100K sqm), to identify and resolve practical deployment issues.


Testing in simulation is crucial to our work; simulation allows testing of variations and parameters we either don’t have access to, or that are unsafe or too costly to perform in the real world. Resimulation of recorded sequences provides real-sensor, open-loop test coverage.

Releasing the package allows issues to be uncovered so we can improve its function, as we do not have physical access to every type of environment it can be of value for. You’re welcome to try it, and we would appreciate input from your experience to further improve it.

VSLAM can predict pose when visual odometry lacks the key visual features needed to localize in the environment, making it more robust to these conditions.

With your prior experience of using VSLAM and finding existing solutions lacking, could you share the concrete issues, so we can test against them?

It’s been tested in smaller environments from 10s of sqm to 1000s of sqm, and we are ramping up to larger spaces, just as we are for VSLAM.

These questions are informative and we appreciate the thoughts.

2 Likes

Very cool stuff Gordon.

Was nvblox developed by the lab at ETH Zurich?

nvblox improves on the ETH Zurich work on voxblox, parallelizing the computation with CUDA.

Thanks.

Totally understand :+1:. Once you have some confidence in these spaces, I think it would be valuable for you to reach out and we can work on the best way to get this integrated / documented with Nav2. We’ve wanted a VSLAM demo for some time now and given the niche nature of it, I don’t have an issue if this is bound to a specific hardware manufacturer’s APIs. We will still continue to offer non-vendor specific 2D lidar SLAM and localization, so anything more is just icing on the cake!

Certainly. A situation I often think about is hallways with lots of 90 degree turns. Imagine a situation with long hallways that we assume have “sufficient” features, but corners between them which have no features. How does the VSLAM make sure it builds a globally consistent map after turning right/left 90 degrees and moving a few meters while navigating before it finds new features to start tracking? Typically, some odometry fusion from wheel encoders / IMU is valuable to deal with this kind of situation.

There are a host of others, but that’s an illustrative example I think about since it ties in a few issues (how to deal with non-trivial motions between feature rich areas; building a map with ‘similar’ structures of straight halls which are actually separated by a feature-less area).

As I did with the Jetson NX and the previous version, I got the new version working on my laptop with an RTX 3060.
To be honest, without deep testing, it performs really well. But to keep being honest, if we set aside the Docker installation (which I didn’t try), the native installation is quite broken and difficult to get working, with some quite strange things that give the impression that NVIDIA doesn’t really want it to be widely adopted.
On both platforms, NX and PC, it wasn’t possible to get the AI-related nodes working. To be honest, I don’t think anyone could get them working natively, NVIDIA staff included, at least not with what is uploaded on GitHub; maybe with something else. It is just about impossible. If someone did, it was with some hacking, as I did, not by following the guides or using the uploaded files.
I won’t dig into that further, but the issue is there, and I recognise that the VIO part looks superior to anything else I tested, especially in not losing the track, and in speed.
I’m a hobbyist; maybe professionals will have a different experience.

2 Likes

The AI-related nodes (TensorRT and Triton) require a specific setup of libraries and versions which we can provide through a Docker image. Getting that combination exactly right without Docker is quite difficult (especially for Triton) and we do not generally run natively, no, you’re right. We’d like to make the experience better for those running without Docker though. If you could file GitHub Issues against the isaac_ros_dnn_inference repository, we can take a deeper look into the troubles you faced.
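
For reference, the intended path is the development container provided by isaac_ros_common; roughly (script location as it appears in that repository; check its README for the exact, current steps):

```bash
# Launch the Isaac ROS development container, which ships the matching library versions
# (CUDA, TensorRT, Triton) expected by the AI-related nodes.
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common
./scripts/run_dev.sh

# Inside the container, build and run the workspace as usual.
colcon build --symlink-install && source install/setup.bash
```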

1 Like

I filed some issues a few days ago, not in a very professional way, quite vague really, but I have some constraints there.
I also just noted that nvblox and its Nav2 integration won’t work either; it looks like it is hardcoded to work only with Isaac Sim.
That leaves, in my case, only Elbrus working in a native installation (x64, 20.04, Foxy, D435i).
I would love to do the tests you asked for and file more detailed issues, but it conflicts with an occasional activity I have as a tester of robotics products/software.

I totally understand the current state of the software, and at the same time I recognise that the performance (speed, precision, reliability) is far beyond any other VIO software I have tested; I really think nothing is even close. I don’t know about drift, ground truth, or loop-closure performance (and I have no real interest in finding out at the moment), but it just won’t lose the track in lighting conditions where others would stop working. Keeping the track alive is, for me, the first and most important feature of VIO software, and it is without a doubt the best in the two setups I tried.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.