Hi all! Your Friendly Neighborhood Navigator here!
It's been a busy few weeks on navigation.ros.org, with a new GPS tutorial in collaboration with Kiwibot and now a brand-spanking-new VIO tutorial in collaboration with Stereolabs!
https://navigation.ros.org/tutorials/docs/integrating_vio.html
In this tutorial, we go over how to set up VIO / VSLAM for use in a robot's odometry estimation to augment existing IMUs or wheel encoders. This is especially important for legged robots, omni robots (particularly those using mecanum wheels), and outdoor robots, which traditionally have poor wheel odometry due to their motion models or environments.
Visual Inertial Odometry (VIO) or Visual SLAM (VSLAM) can help augment your odometry with another sensing modality to more accurately estimate a robot’s motion over time. This makes your autonomy system more reliable and gives you the ability to rely on odometry for localized movements (e.g. docking or interfacing with external hardware).
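To make the "augment your odometry" idea concrete, here's a minimal sketch of how VIO is commonly fused with wheel odometry and an IMU via a robot_localization EKF config. The topic names (`/wheel/odometry`, `/zed/zed_node/odom`, `/imu/data`) are assumptions for illustration; use whatever your drivers actually publish, and see the tutorial for the full, tested configuration.

```yaml
# Sketch of an EKF fusion config (robot_localization ekf_node).
# Each *_config is the standard 15-element boolean vector:
# [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom

    # Wheel odometry: fuse planar velocities only (topic name assumed)
    odom0: /wheel/odometry
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]

    # VIO from the camera: fuse linear and yaw velocities (topic name assumed)
    odom1: /zed/zed_node/odom
    odom1_config: [false, false, false,
                   false, false, false,
                   true,  true,  true,
                   false, false, true,
                   false, false, false]

    # IMU: fuse orientation rates and accelerations (topic name assumed)
    imu0: /imu/data
    imu0_config: [false, false, false,
                  false, false, false,
                  false, false, false,
                  true,  true,  true,
                  true,  true,  true]
```

Fusing velocities rather than absolute poses from each source keeps the filter from fighting over conflicting position estimates; the EKF integrates the fused velocities into a single smooth odometry output.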
In this tutorial, we show how to set it up using Stereolabs' ZED X cameras, though the instructions are portable to other solutions. However, we highly recommend the Stereolabs solution as an integrated sensing-compute-hardware combo which is highly optimized and does not require finicky tuning to get good out-of-the-box results. You get good results immediately, saving you the time of testing every camera combo against various open-source implementations of dubious quality.
While VSLAM for global localization in large spaces is still an active area of research that we wish to eventually support, using it for local pose estimation and odometry is well established and can be used in your robots today to improve your odometry with ease!
Interested in learning more about Stereolabs' relationship with ROS and Open Navigation? Check out our newest blog: Stereolabs - Sponsor Introductions P.III — Open Navigation LLC
Happy visual slamming,
Steve