Announcing stable release of Slam Toolbox

Hi!

Over the last two years or so, a pet project of mine has finally become ready for prime time and to see some use. Slam Toolbox is a set of tools and capabilities for 2D planar SLAM. This project can do most everything any other available SLAM library can, free or paid, and more. This includes:

  • Ordinary point-and-shoot 2D SLAM that mobile robotics folks expect (start, map, save pgm file; a quick sketch follows this list)
  • life-long mapping (start, serialize, wait any time, restart anywhere, continue refining)
  • an optimization-based localization mode (start, serialize, restart anywhere in localization mode with an optimization-based localizer)
  • synchronous and asynchronous modes
  • kinematic map merging (with an elastic graph manipulation merging technique in the works)
  • plugin-based optimization solvers with a new optimized Google Ceres based plugin
  • RVIZ plugin for interacting with the tools
  • graph manipulation tools in RVIZ to manipulate nodes and connections during mapping
  • Map serialization and lossless data storage
  • … more but those are the highlights
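For the point-and-shoot case in the first bullet, the basic workflow is roughly the following (the launch file name is the package’s default sync launch; map_saver from map_server is the generic ROS way to write the pgm, and the map name is just an example):

  $ roslaunch slam_toolbox online_sync.launch    # start mapping with a 2D laser publishing on /scan
  $ rosrun map_server map_saver -f my_map        # save the current map as my_map.pgm / my_map.yaml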

For running on live production robots, I recommend using the snap, slam-toolbox; it has optimizations in it that make it about 10x faster. You need the deb/source install for the other developer-level tools that don’t need to be on the robot (rviz plugins, etc.).
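Concretely, the two install paths look roughly like this (the apt package name here assumes a Melodic install):

  $ sudo snap install slam-toolbox               # optimized build for the robot itself
  $ sudo apt install ros-melodic-slam-toolbox    # deb install, includes the developer tools (rviz plugins, etc.)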

This package has been benchmarked mapping buildings at 5x+ realtime up to about 30,000 sq.ft. and 3x realtime up to about 60,000 sq.ft. The largest area I’m aware of it being used on was a 145,000 sq.ft. building in synchronous mode (i.e. processing all scans, regardless of lag), and much larger spaces in asynchronous mode.

I’d love to see what people think. Features of this have been running live on dozens of robots worldwide. I’ll be the first to admit it needs some refactoring, but the capabilities are there. Take a look, give it a star, and I look forward to the issue tickets and feature requests!

Steve Macenski, Open Source Robotics Engineering Lead @ Samsung Research America


Awesome! Can you help us understand either qualitatively or quantitatively:

  1. The types of sensors it expects and accepts
  2. The sensor quality it expects and accepts
  3. The typical compute power needed for things such as 145,000 sq.ft. realtime mapping.

It looks terrific, and I am excited to try it on a Ubiquity Robotics Magni when our 3-D TOF sensor is ready.

One last thing: you didn’t include a link to the repo. I presume this is the right one: https://github.com/SteveMacenski/slam_toolbox


Hi Dave,

First, thanks for pointing out the missing link! There’s always something missing when you make an announcement…

It assumes the “vanilla” mobile base setup with a 2D laser scanner. The testing I have done has been with SICK TiM and Hokuyo lidars and RPlidars, but there’s no reason you couldn’t use a lower-cost sensor and a coarser-resolution map. In general, it expects a 2D laser scanner broadcasting a sensor_msgs/LaserScan message.
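If you want a quick sanity check that your lidar driver is publishing what the toolbox expects, something like this works (assuming the default /scan topic name):

  $ rostopic info /scan    # should report type sensor_msgs/LaserScan
  $ rostopic hz /scan      # confirm scans are actually arriving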

On the computational side, I haven’t tried it with anything too underpowered; I think my weakest robot is a 6th-gen i5 and I haven’t had a problem. It’s a good question whether this would work on something like a Raspberry Pi, and the answer is: I don’t know, but I’m open to finding out. I haven’t done anything insanely above the ordinary for 2D laser-based SLAM, so you should be fine using this if you are able to use other 2D SLAM packages like Karto, GMapping, Cartographer, etc. on your platform. You definitely won’t get 145,000 sq.ft. realtime on that type of platform, however. That metric was measured on a 7th-gen i7 mobile NUC processor.

Nice tool, we’ve been looking to work with something like this. Anyone interested in using an OPS241-B short-range radar sensor to make it work with the toolbox? It provides range information to one or more detected objects in its field of view. Detection range is 0.1 to 20 m. Contact me if interested.

This is awesome work @smac, thanks for sharing!

Are there any tutorials on this? It is really hard to use…


@smac Hello, can we use solid-state LiDARs, say the Benewake CE30-C or -D, given that it only covers a 132-degree FOV?

I don’t see a reason why not. Most of my lidars only cover 270 degrees. I’d be interested in your experience with that. Do you have a bag file I could play with to see if it works out and what adjustments I’d need to make?

Please ask any specific questions on ROS Answers and I’ll try to get to them. In general, I think my readme provides a wealth of information and configuration assistance.


Hi, sorry for the question being too general, and thank you for the reply! I’m not an expert in this area, so the questions might be trivial.

My environment: ROS Melodic.

  1. Inconvenience in configuration changes
    If I install the toolbox using apt like this,
    $ apt install ros-melodic-slam-toolbox
    accessing the yaml files in the installed package was inconvenient, so I’ve compiled the sources in my workspace instead.
    Q1. Is there an easier way to change configurations with the default installation? Accessing the buried yaml files was a pain for me.

  2. tf tree confusion
    I understand the documentation says to read REP 105, so I’ve read it, but I am still confused. The documentation lists a subscribed transform: “a valid transform from your configured odom_frame to base_frame”.
    Q2. Does this mean that slam_toolbox does not broadcast the odom to base_frame transform?
    Q3. I’m trying to localize using laser scan data only. Do I need additional odometry, for example, wheel encoder data?
    When I publish only laser scan data to /scan in the base_footprint frame and launch the default online_sync.launch, nothing happens; when I broadcast an additional tf (parent: odom, child: base_footprint), the map appears only the first time and does not change after that.

  3. No execution example
    Of course, there are a lot of people who are familiar with the SLAM-related nodes. For newcomers like me, clear instructions with command-line descriptions are a big help.
    Maybe use an example lidar such as an RPlidar and explain the configuration changes… This may not be necessary for advanced users, but a more detailed example would save new users a lot of time.

I’m hoping to use your amazing work. Thank you in advance for your reply!

Please ask on ROS Answers.

But it sounds like your issues are related to an unfamiliarity with ROS / ROS standards rather than the package itself.
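That said, one rough way to approach your first question and to debug the tf tree (the config directory layout and paths here are assumptions; check what your installed package actually provides):

  # locate the package installed by apt and its default yaml files
  $ rospack find slam_toolbox
  $ ls $(rospack find slam_toolbox)/config

  # copy a config into your own package, edit the copy, and point your own launch file at it
  $ cp $(rospack find slam_toolbox)/config/*.yaml ~/catkin_ws/src/my_robot/config/

  # verify an odom -> base frame transform actually exists before launching SLAM
  $ rosrun tf view_frames    # writes frames.pdf showing the current tf tree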


Hi, thank you for your reply. I’ve attached the bag file link; can you please have a look, give me feedback, and share some pictures? I really appreciate your help.

Unfortunately, without TF odometry and a TF tree with the sensor and robot frames, there’s not much I can do with just a raw data feed. See REP 105 for the standards relating to odometry and SLAM in ROS.
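For reference, a bag I could use would need the transforms recorded alongside the scan, roughly (assuming the default topic names):

  $ rosbag record /scan /tf /tf_static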

Thanks for your reply. Where is REP 105? Can you please share the link?

@ibrahimqazi, here it is.
https://www.ros.org/reps/rep-0105.html

@ibrahimqazi
A polite request: a lot of the time, a simple Google search will turn up most of this information, leading to faster development of the modules.

Thank you so much for your help

Got it 🙂

Can you elaborate on the statements below?
“ROS drop in replacement to gmapping, cartographer, karto, hector, etc”
“This was created in response to inadequate mapping and localization quality from GMapping, Karto, Cartographer and AMCL”
Source: slam_toolbox - ROS Wiki
Update: I just saw the comment below. Why is it there?

from @smac.