
Announcing stable release of Slam Toolbox


Over the last two years or so, a pet project of mine has finally become ready for prime time and to get some use. Slam Toolbox is a set of tools and capabilities for 2D planar SLAM. This project can do most everything any other available SLAM library, free or paid, can do, and more. This includes:

  • Ordinary point-and-shoot 2D SLAM mobile robotics folks expect (start, map, save PGM file)
  • Life-long mapping (start, serialize, wait any amount of time, restart anywhere, continue refining)
  • An optimization-based localization mode (start, serialize, restart anywhere in localization mode)
  • Synchronous and asynchronous modes
  • Kinematic map merging (with an elastic graph manipulation merging technique in the works)
  • Plugin-based optimization solvers, with a new optimized Google Ceres-based plugin
  • An RVIZ plugin for interacting with the tools
  • Graph manipulation tools in RVIZ for manipulating nodes and connections during mapping
  • Map serialization and lossless data storage
  • … more, but those are the highlights
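To give a rough intuition for the kinematic map-merging feature in the list above: merging kinematically means bringing a second map into the first map's frame by a rigid-body transform. This is only a conceptual sketch in plain Python (the function name and numbers are illustrative, not Slam Toolbox's API):

```python
import math

def transform_cell(x, y, dx, dy, theta):
    """Apply a 2D rigid-body transform (rotate by theta, then translate
    by dx, dy) to a map coordinate, as a kinematic merge would do to
    express a cell of map B in map A's frame. Illustrative only."""
    xr = x * math.cos(theta) - y * math.sin(theta) + dx
    yr = x * math.sin(theta) + y * math.cos(theta) + dy
    return xr, yr

# Example: a point 1 m ahead in map B, with B rotated 90 degrees and
# offset (2, 0) relative to map A, lands at roughly (2, 1) in map A.
print(transform_cell(1.0, 0.0, 2.0, 0.0, math.pi / 2))
```

The elastic graph manipulation technique mentioned as in-the-works would go further than this, deforming the pose graph rather than applying a single fixed transform.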

For running on live production robots, I recommend using the snap, slam-toolbox; it has optimizations in it that make it about 10x faster. You need the deb/source install for the other developer-level tools that don't need to be on the robot (RVIZ plugins, etc.).

This package has been benchmarked building maps at 5x+ real time up to about 30,000 sq.ft. and at 3x real time up to about 60,000 sq.ft. The largest area I'm aware of it being used on was a 145,000 sq.ft. building in synchronous mode (i.e., processing all scans, regardless of lag), and much larger spaces in asynchronous mode.

I’d love to see what people think. Features of this have been running live on dozens of robots worldwide. I’ll be the first to admit it needs some refactoring, but the capabilities are there. Take a look, star it, and I look forward to the issue tickets and feature requests!

Steve Macenski, Open Source Robotics Engineering Lead @ Samsung Research America


Awesome! Can you help us understand either qualitatively or quantitatively:

  1. The types of sensor it expects and the types of sensor it accepts
  2. The sensor quality it expects and accepts
  3. The typical compute power needed for things such as 145,000 sq.ft. real-time mapping

It looks terrific, and I am excited to try it on a Ubiquity Robotics Magni when our 3-D TOF sensor is ready.

One last thing: you didn’t include a link to the repo. I presume this is the right one:

Hi Dave,

First, thanks for mentioning to include a link! There’s always something missing when you make an announcement…

It assumes the “vanilla” mobile base setup with a 2D laser scanner. The testing I have done has been with SICK TiM, Hokuyo, and RPLidar units, but there’s no reason you couldn’t use a lower-cost sensor and a coarser-resolution map. In general, it expects a 2D laser scanner broadcasting a sensor_msgs/LaserScan message.
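For anyone wiring up a custom lidar: the sensor_msgs/LaserScan message is essentially a fixed angular sweep plus one range reading per beam. Here is a plain-Python mirror of its core fields (this is a sketch for illustration, not the actual ROS message class):

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class LaserScanSketch:
    """Mirrors the core fields of sensor_msgs/LaserScan that a 2D lidar
    driver fills in; the field names match the real ROS message."""
    angle_min: float        # start angle of the sweep [rad]
    angle_max: float        # end angle of the sweep [rad]
    angle_increment: float  # angular distance between beams [rad]
    range_min: float        # minimum valid range [m]
    range_max: float        # maximum valid range [m]
    ranges: List[float] = field(default_factory=list)  # one reading per beam

# A hypothetical 90-degree FOV scanner at 0.5-degree resolution:
n_beams = round((math.pi / 2) / math.radians(0.5)) + 1
scan = LaserScanSketch(angle_min=-math.pi / 4, angle_max=math.pi / 4,
                       angle_increment=math.radians(0.5),
                       range_min=0.05, range_max=10.0,
                       ranges=[5.0] * n_beams)
print(len(scan.ranges))  # 181
```

The point of the coarser-resolution remark above is visible here: a cheaper sensor just yields a larger angle_increment and fewer entries in ranges, which the SLAM front end consumes the same way.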

On the computational side, I haven’t tried it with anything too underpowered; I think my weakest robot is a 6th-gen i5 and I haven’t had a problem. It’s a good question whether this would work on something like a Raspberry Pi, and the answer is: I don’t know, but I’m open to finding out. I haven’t done anything insanely above the ordinary for 2D laser-based SLAM, so you should be fine to use this if you are able to use other 2D SLAM packages like Karto, GMapping, or Cartographer on your platform. You definitely won’t get 145,000 sq.ft. in real time on that type of platform, however. That metric was using a 7th-gen i7 mobile NUC processor.

Nice tool, we’ve been looking to work with something like this. Anyone interested in using an OPS241-B short-range radar sensor to make it work with the toolbox? It provides range information for one or more detected objects in its field of view. Detection range is 0.1 to 20 m. Contact me if interested.

This is awesome work @smac, thanks for sharing!