
Announcing LaMa: An alternative localization and mapping package

Dear ROS users,

We would like to announce the release of the IRIS LaMa (Localization and Mapping) package.
It includes a framework for 3D volumetric grids (for mapping), a localization algorithm based on scan matching, and two SLAM solutions (an Online SLAM and a Particle Filter SLAM).

The main feature is efficiency. You can even run the Particle Filter SLAM on a Raspberry Pi.

We provide ROS integration with the iris_lama_ros package.

Feel free to try it and provide any feedback.

9 Likes

Very nice. I’ll definitely take the localization out for a spin.

You should also give SLAM a chance :slight_smile:

Looking forward to testing it!

Hi @eupedrosa,
Congratulations on the release!
I am wondering if you have found any differences comparing IRIS LaMa localization to the amcl implementation that’s already in ROS. Same for mapping, compared with the popular ones out there.
I am really curious to know.

1 Like

Yes, I did compare my solutions with popular ones found in ROS. In the README file you can find a few papers where I compare LaMa’s algorithms with solutions such as AMCL and GMapping. But here are my selling points:

  • LaMa Localization vs AMCL: In general both provide good accuracy, but (by default) AMCL does not use all the laser data, in order to compensate for the particle filter’s overhead, and that can result in some errors. Scan matching can be 5x faster or more. I still use AMCL in applications where information is reduced and noisy.

  • LaMa SLAM vs GMapping: I think GMapping is a wonderful piece of technology, but very slow. I remember, back in early 2012, that using GMapping online was difficult. LaMa PF SLAM is kinda like a fast GMapping, or an even faster GMapping if you activate multi-threading. LaMa Online SLAM is the turbo version; it can generate the Intel map in 5 seconds. Here is the result:

  • LaMa SLAM vs Others: I used the SLAM benchmark to compare with other SLAM solutions, and we did well :slight_smile:. I believe Cartographer also used the same benchmark.

  • LaMa Sparse-Dense Mapping (SDM) vs OctoMap: OctoMap is another top reference in robotics. I only developed SDM because OctoMap’s main focus is occupancy grids and I needed more flexibility. The inner structure of SDM is model agnostic and provides the same features for any type of grid map. Those features include Copy-on-Write and Online Data Compression.
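To illustrate the Copy-on-Write idea behind SDM, here is a toy sketch in Python (not the actual C++ implementation; the patch size, class names, and reference-counting scheme are all made up for illustration): the map is split into patches, a copy of the map shares patches with the original, and a patch is only duplicated when one of the copies writes to it.

```python
class Patch:
    """A block of grid cells, shared between map copies via a refcount."""
    def __init__(self, cells=None):
        self.cells = dict(cells or {})
        self.refs = 1


class CowGrid:
    """Toy sparse grid with copy-on-write patches."""
    PATCH = 16  # cells per patch side (arbitrary for this sketch)

    def __init__(self):
        self.patches = {}

    def copy(self):
        # Cheap copy: share every patch and bump its reference count.
        new = CowGrid()
        new.patches = dict(self.patches)
        for p in new.patches.values():
            p.refs += 1
        return new

    def _key(self, x, y):
        return (x // self.PATCH, y // self.PATCH)

    def get(self, x, y, default=0.0):
        p = self.patches.get(self._key(x, y))
        return p.cells.get((x, y), default) if p else default

    def set(self, x, y, value):
        key = self._key(x, y)
        p = self.patches.get(key)
        if p is None:
            p = Patch()
            self.patches[key] = p
        elif p.refs > 1:
            # Copy-on-write: clone the shared patch before mutating it.
            p.refs -= 1
            p = Patch(p.cells)
            self.patches[key] = p
        p.cells[(x, y)] = value
```

Writing to a copy only clones the one patch being touched, while all untouched patches stay shared. As I understand it, the real SDM additionally applies online compression to patches, which this sketch does not show.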

4 Likes

Thanks! BTW that map looks awesome!

Both mapping and localization work pretty well, but I am particularly happy with the latter.

Congratulations on the awesome work!!!

1 Like

Would you mind sharing a video of your tests? A video is worth a thousand images :sweat_smile:

1 Like

Looks very interesting!
Did I miss the link to the benchmark results somewhere?
A quick but comprehensive overview of the performance, in terms of both speed and accuracy compared with the other methods, would also help a lot.

No, you did not miss the link. Some of the benchmarks are in published articles.
But I can provide a short summary of the results.

Accuracy

I used this SLAM benchmark software (like others have). It provides mean translation and rotation errors.
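As I understand it, that benchmark compares relative displacements between poses against manually verified ground-truth relations, and reports the mean ± standard deviation of the error. A rough sketch of that metric (simplified to 2D poses and consecutive pose pairs; not the benchmark's actual code):

```python
import math

def relative_pose(a, b):
    """Displacement of pose b = (x, y, theta) expressed in the frame of pose a."""
    ax, ay, at = a
    bx, by, bt = b
    dx, dy = bx - ax, by - ay
    c, s = math.cos(-at), math.sin(-at)
    return (c * dx - s * dy, s * dx + c * dy,
            math.atan2(math.sin(bt - at), math.cos(bt - at)))

def translational_errors(estimated, ground_truth):
    """Mean and std-dev of translational error between estimated and
    ground-truth relative displacements over consecutive pose pairs."""
    errors = []
    for i in range(len(estimated) - 1):
        ex, ey, _ = relative_pose(estimated[i], estimated[i + 1])
        gx, gy, _ = relative_pose(ground_truth[i], ground_truth[i + 1])
        errors.append(math.hypot(ex - gx, ey - gy))
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    return mean, math.sqrt(var)
```

Because the metric is built from relative displacements rather than absolute poses, a single bad loop closure does not dominate the whole score.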

Here is a short version of the errors:

| Trans. error (cm) | LaMa Online SLAM | Cartographer | GMapping |
|---|---|---|---|
| ACES | 4.2 ± 4.7 | 3.7 ± 4.2 | 6.0 ± 4.9 |
| Intel | 2.0 ± 1.9 | 2.2 ± 2.3 | 7.0 ± 9.3 |
| CSAIL | 3.0 ± 2.7 | 3.1 ± 3.5 | 4.9 ± 4.9 |
| Fr079 | 3.9 ± 3.0 | 4.5 ± 3.5 | 6.1 ± 4.5 |

| Rot. error (deg) | LaMa Online SLAM | Cartographer | GMapping |
|---|---|---|---|
| ACES | 0.4 ± 0.6 | 0.3 ± 0.4 | 1.2 ± 1.3 |
| Intel | 0.2 ± 0.3 | 0.4 ± 1.3 | 3.0 ± 5.3 |
| CSAIL | 0.4 ± 1.0 | 0.3 ± 0.3 | 0.6 ± 1.2 |
| Fr079 | 0.4 ± 0.5 | 0.5 ± 0.7 | 0.6 ± 0.6 |

Speed

The following table shows how long each solution takes to process a given dataset:

| Dataset | Duration (s) | LaMa Online SLAM | Cartographer | GMapping |
|---|---|---|---|---|
| ACES | 1366 | 3 | 41 | 313 |
| Intel | 2691 | 5 | 179 | 915 |
| CSAIL | 424 | 3 | 35 | 697 |
| Fr079 | 1061 | 4 | 62 | 813 |
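For a sense of scale, dividing each dataset’s duration by the processing time gives a rough real-time factor (how many seconds of sensor data are processed per second of computation). A quick calculation using the LaMa Online SLAM column above:

```python
# Real-time factor = dataset duration / processing time,
# using the LaMa Online SLAM numbers from the table above.
datasets = {
    "ACES":  (1366, 3),
    "Intel": (2691, 5),
    "CSAIL": (424, 3),
    "Fr079": (1061, 4),
}
for name, (duration, exec_time) in datasets.items():
    print(f"{name}: ~{duration / exec_time:.0f}x real time")
```

ACES, for instance, comes out at roughly 455x real time. Keep in mind the caveat below about the numbers not all coming from the same machine.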

:warning: The values presented here were not obtained using the same computer, so it may not be a fair comparison. Nonetheless, the difference in scale is obvious.
The Cartographer values are taken from the author’s paper.

Conclusion

Accuracy is on par with known (and established) solutions such as GMapping, with very good performance. I omitted the Particle Filter SLAM, but I can say (with confidence) that it also offers good results.

You should give it a try!

But GMapping uses a Particle Filter, right?

Yes, GMapping uses a Particle Filter.
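For readers following along: in a Rao-Blackwellized particle filter SLAM like GMapping, each particle carries its own map hypothesis, which is where most of the cost comes from. A core piece of any such filter is the resampling step; here is a minimal sketch of low-variance (systematic) resampling in Python (illustrative only, not taken from either codebase):

```python
import random

def low_variance_resample(particles, weights):
    """Systematic (low-variance) resampling: draw one random offset,
    then pick particles at evenly spaced points along the cumulative
    weight distribution. Preserves diversity better than naive
    independent sampling."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    r = random.uniform(0, step)
    c, i, out = weights[0], 0, []
    for m in range(n):
        u = r + m * step
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out
```

In a particle filter SLAM, every surviving particle’s map must also be updated after resampling, which is why per-particle cost dominates and why multi-threading across particles (as in LaMa PF SLAM) helps.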

@eupedrosa congrats on the packages, and thanks for sharing them! :slight_smile:

What exact .rosbag files have you used as inputs for the SLAM benchmarking?

I’d like to know specifically which computers each of those tests were run on. I’m not sure you could say the scale is obvious if it’s a 5th-generation laptop vs an 8th-generation desktop CPU, even with a fair comparison of settings.

1 Like

Thank you!
For the benchmark I used the raw log files available from the SLAM benchmark dataset. These are CARMEN .clf log files (if I am not mistaken). To use these logs with ROS, I created a small program to convert .clf to .rosbag.
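For anyone wanting to reproduce the conversion: as I recall, the laser lines of an old-style CARMEN log look roughly like `FLASER n r_1 ... r_n x y theta odom_x odom_y odom_theta ts host log_ts`. A minimal parser sketch (hypothetical, not the actual converter; the exact field layout should be double-checked against the CARMEN documentation):

```python
def parse_flaser(line):
    """Parse a CARMEN FLASER log line into ranges, laser pose and timestamp.

    Assumed layout (old-style CARMEN log):
      FLASER n r_1 ... r_n x y theta odom_x odom_y odom_theta ts host log_ts
    """
    tokens = line.split()
    if not tokens or tokens[0] != "FLASER":
        return None
    n = int(tokens[1])
    ranges = [float(t) for t in tokens[2:2 + n]]
    x, y, theta = (float(t) for t in tokens[2 + n:5 + n])
    timestamp = float(tokens[8 + n])
    return {"ranges": ranges, "pose": (x, y, theta), "stamp": timestamp}
```

From there, each parsed record can be written as a `sensor_msgs/LaserScan` (plus an odometry transform) into a bag with the rosbag API.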

I took the time to redo the tests and you are correct, the scale is not obvious.

The original GMapping values were taken from an HP ProLiant with two Intel Xeon E5-2640 0 CPUs @ 2.50GHz (Max Turbo Frequency 3.00 GHz) running Ubuntu 16.04 LTS.

Here are some new values, all taken from the same computer: a ThinkPad L480 with an Intel i7-8550U CPU @ 1.80GHz (Max Turbo Frequency 4.00 GHz) running Ubuntu 18.04.3 LTS.

| exec. time (s) | LaMa Online SLAM | LaMa PF SLAM | LaMa PF SLAM (4 threads) | GMapping |
|---|---|---|---|---|
| ACES | 3 | 34 | 11 | 77 |
| Intel | 5 | 34 | 12 | 182 |
| CSAIL | 3 | 48 | 15 | 140 |
| Fr079 | 4 | 62 | 25 | 410 |

Maybe I should also take the time to test Cartographer.

1 Like

I like the direction this thread is taking…

A lot of us care about the quality of localization, and it is hard to figure out the pros and cons of the different algorithms. Karto is another one I would add to the list.

It would be very nice if we figured out a way to make these benchmarks easily reproducible, maybe with a Docker image and a GitHub repository where people can add their own algorithms to the test.

If we dream big, we may even hook up a continuous integration machine and get updated about new results!!! :star_struck:

Something that I consider valuable is the amount of tuning that a certain algorithm needs.

With Cartographer, I always have the feeling that I have to play with hard-to-understand parameters to make it work. I know that there are parameters that EVENTUALLY will give me an amazing map, but it can be frustrating.

Something I appreciate about iris_lama is that it just worked, at least with my dataset.

Benchmarking different SLAM algorithms (for map/localization quality and speed) is not a straightforward task. Benchmarks such as the SLAM benchmark try to be objective (and kinda succeed), but the final result can (and most likely will) be influenced by the parameters of the algorithm. This is most evident if I am the one trying to do the benchmark without fully understanding what the parameters do. And, just like @facontidavide said, this can be frustrating.

Nonetheless, I do believe that benchmarking is necessary. It creates a healthy competition that can result in improvements. For example, the KITTI Vision Benchmark Suite has a leaderboard where the authors of a solution submit their code with proper parametrization for evaluation. Maybe something like this could exist for SLAM? I usually have to search through papers to find this kind of information.

I think that everybody likes things that just work :smiley:
The response of a system to changes in parameters was something that I discussed quite often with my colleagues.

Hi,

I’ve been playing with this today and wanted to share my very preliminary results.

I can independently verify that it’s much lighter weight than other optimizer-based SLAM options I’ve seen recently. I’m seeing the CPU usage grow, but it stays pretty consistent over short trajectories, at around 10% CPU on a 6th-gen i7. Over the same trajectories with my package I’m seeing less consistent usage, generally hovering around 30% – both with more or less the same memory utilization.

I’d say, though, that the rasterized map image output isn’t as good as slam_toolbox’s, and I’m not seeing it accomplish loop closures as responsively. That may not be a big deal for many users. For the datasets it works with, it works pretty well and is a reasonable option to keep on the table. For the datasets it doesn’t work with, I have no idea what’s going on. See below: the same robot, on the same day, in the same environment; two datasets were taken, one works fine, the other does this:

It worked for about 10-20 updates and then just started blowing up, with no warnings or errors thrown. I’m also going to have to figure out why LaMa has so much lower CPU usage than Slam Toolbox; it looks like it uses many of the same techniques, so the difference may lie in the dependency libraries, since I use Ceres as my LM solver and a bunch of outside libraries so I can swap them out as technology trends change – though I’m sure you also get a really nice speedup from the distance field work.

Overall I think this is a pretty good option, but it needs to expose more of the parameters, plus documentation and hardening – which in SLAM isn’t the hard stuff.

If there’s any interest in writing and maintaining a ROS2 port of this work long term, I’d support it as a genuine option for us on the ROS2 Navigation Working Group/TSC to consider for the “default option” in ROS2. I think it’s well written and enables a number of applications on lower-power machines, though whether it can scale from small examples to 200,000+ sqft facilities remains to be evaluated.

Edit: I didn’t evaluate the localization stuff.
Edit2: I was thinking about those numbers, which seemed high, and remembered that I didn’t build in release mode, so they are going to be higher than you’d see in production.

3 Likes