
Announcing LaMa: An alternative localization and mapping package


I’ve been playing with this today and wanted to share my very preliminary results.

I can independently verify that it’s much lighter weight than other optimizer-based SLAM options I’ve seen recently. I’m seeing the CPU usage grow, but it stays pretty consistent over short trajectories at around 10% CPU on a 6th-gen i7. Over the same trajectories with my package I’m seeing less consistent usage, generally hovering around 30% – both with more or less the same memory utilization.

I’d say, though, that the rasterized map output isn’t as good as slam_toolbox’s, and I’m not seeing it accomplish loop closures as responsively. That may not be a big deal for many users. For the datasets it works with, it works well enough to keep on the table as a reasonable option. For the datasets it doesn’t work with, I have no idea what’s going on. See below: same robot, same day, same environment. Two datasets were taken; one works fine, the other does this:

It worked for about 10-20 updates and then just started blowing up, with no warnings or errors thrown. I’m also going to have to figure out why LaMa has such a CPU drop relative to slam_toolbox. It looks like it uses many of the same techniques, so the difference may lie in the dependency libraries, since I use Ceres as my LM solver plus a bunch of outside libraries so I can swap in new technology trends – though I’m sure you also get a really nice speedup from the distance field work.
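For intuition on that distance-field speedup: once the field is precomputed, scoring a candidate scan pose is one array lookup per point instead of a nearest-neighbor search. A toy sketch (not LaMa’s actual code; scipy’s distance transform stands in for the real map structure):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy occupancy grid: 1 = free, 0 = occupied (a single wall).
grid = np.ones((100, 100))
grid[:, 60] = 0  # vertical wall at column 60

# Precompute the distance field once per map update.
# Each cell holds the distance (in cells) to the nearest obstacle.
dist = distance_transform_edt(grid)

def scan_cost(points):
    """Sum of distances from scan points to the nearest obstacle.
    Each point is a single O(1) array lookup -- no nearest-neighbor
    search -- which is where the speedup comes from."""
    rows = np.clip(points[:, 0].astype(int), 0, grid.shape[0] - 1)
    cols = np.clip(points[:, 1].astype(int), 0, grid.shape[1] - 1)
    return dist[rows, cols].sum()

# A scan lying exactly on the wall has zero cost...
on_wall = np.array([[10, 60], [50, 60], [90, 60]])
# ...while the same scan shifted 5 cells off the wall does not.
shifted = on_wall + np.array([0, 5])
print(scan_cost(on_wall), scan_cost(shifted))  # 0.0 15.0
```

An optimizer can then descend on that cost directly, since the field gives a smooth error surface away from obstacles.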

Overall I think this is a pretty good option, but it needs to expose more of the parameters, plus documentation and hardening – which in SLAM isn’t the hard stuff.

If there’s any interest in writing and maintaining a ROS2 port of this work long term, I’d support it as a genuine option for us on the ROS2 Navigation Working Group/TSC to consider for the “default option” in ROS2. I think it’s well written and enables a number of applications on lower-power machines, though whether it can scale from small examples to 200,000+ sqft facilities remains to be evaluated.

Edit: I didn’t evaluate the localization stuff.
Edit 2: I was thinking about those numbers, which seemed high, and remembered that I didn’t build in release mode, so they’re going to be higher than you’d see in production.


Would it be possible to have access to the dataset where the mapping just started blowing up? I would like to investigate what happened.

The distance field does provide a nice speed up :smiley:

Those 3 things are on my TODO list.
Your dataset could be of great help for hardening.

I have interest in maintaining LaMa for as long as I can. If you believe that LaMa is an option for ROS2 Navigation you have my support. However, I have to say that I have little experience with ROS2 – maybe it is an opportunity to start using it in our robots.

I would also like to evaluate LaMa against large environments; unfortunately, I do not have access to datasets of that kind. And how much is 200,000+ sqft in meters? :wink:

I wonder if this interesting discussion should be moved to Github… (issues).

I also believe that the speed and simplicity of LaMa make it a good candidate for the ROS2 “default”.


Unfortunately, these are datasets I can’t (yet) release. I’m working on building a critical mass of them to release in a larger database under some license, but I can’t release incremental datasets, or this becomes really challenging from a legal perspective.

for as long as I can

Is there some triggering event after which that would no longer be the case?

evaluate against large environments

I’m working behind the scenes on your behalf to make that happen. Stay tuned.

Is there an ETA for those datasets?

The only trigger that I can think of is professional incompatibility of some kind. But this is very unlikely to happen. I work in (academic/industrial) robotic projects where localization and mapping is a topic of interest. Therefore, there is an interest in maintaining LaMa.

LaMa was not developed with large scale in mind. But I hope that at least the mapping framework (SDM) will scale nicely.

Unfortunately, no on the ETA. It’s a side project that isn’t a priority right now.

I’m curious: I saw another talk at IROS using distance maps for localization, and it was clearly extremely susceptible to moving obstacles in the mapping phase, since the distance-function features immediately change if anything moves in free space. I was wondering if your approach has the same issue – that could explain what I’m seeing, if things moving in areas it’s mapping really mess up feature matching.
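To illustrate the effect I mean, here’s a toy example (not tied to either implementation) showing how an obstacle appearing in free space perturbs the distance field far beyond its own footprint:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Static map: one wall. 1 = free, 0 = occupied.
static = np.ones((50, 50))
static[:, 40] = 0
d_static = distance_transform_edt(static)

# Same map with a "person" standing in open space during mapping.
moving = static.copy()
moving[20:23, 10:13] = 0
d_moving = distance_transform_edt(moving)

# The field changes not just at the person: every free cell whose
# nearest obstacle is now the person gets a different value, so a
# matcher scoring against this field sees different "features" over
# a wide region of free space.
changed = np.count_nonzero(d_static != d_moving)
print(changed, "cells changed out of", static.size)
```

Note that adding an obstacle can only decrease distance values, but it does so across every cell closer to the intruder than to the static geometry, which is a large area in open spaces.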

I work in (academic/industrial) robotic projects where localization and mapping is a topic of interest. Therefore, there is an interest in maintaining LaMa.

Good to know. I’d also be curious to chat more offline about what your future plans look like, but I don’t want to derail this discussion.

Well the PF things would immediately get thrown out for large scale, I’m more interested in the optimization things for large scale.

I hope that, even with low priority, those datasets will go public in the near future. In my opinion, datasets for testing SLAM are not abundant. We have the classics; they are nice but old.

Possible answer:

One major blocker is that people keep asking me for ground-truth files, which automatically makes it much higher effort than just dumping a bunch of data files. I figured any data was better than none, but from my discussions, ground truth appears to be something folks want, even if they could compute their own with another technique to compare against.

The goal of the datasets was to give a variety of spaces across multiple industries that people might not normally have access to, both due to physical access and physical robot resource restrictions (malls, a grocery store, a Best Buy, a city intersection, a Jiffy Lube, etc.), and to have real-world issues represented, like wheel slippage, getting stuck, mirrors and windows, etc. If I have to add ground truth to that, it makes things much harder. Even without ground truth, I only have about 20 unique datasets; I’d need a few more before it’s more than a side project. Right now I’m collecting raw sensor data from laser, IMU, and odometry, as well as the TF transformations. I thought about cameras and such, but that gives away a little more about my platform than I’m comfortable with, and it makes the bag files much larger.
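For anyone assembling a similar dataset, the recording itself is a small launch fragment; the topic names below are illustrative assumptions for a typical setup, not the actual topics from my platform:

```xml
<!-- Sketch only: substitute your robot's laser, IMU, and odometry topics. -->
<launch>
  <node pkg="rosbag" type="record" name="dataset_record" output="screen"
        args="-O /tmp/slam_dataset.bag /scan /imu/data /odom /tf /tf_static"/>
</launch>
```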


Continuing on the ticket, now that we’ve narrowed down the line of discussion.

@eupedrosa will this work normally on Raspberry Pi 3B?

Yes @parzival, it will work on a Raspberry Pi 3B. I have a turtlebot2 controlled by 2 Pi’s (3B+), one for processing and the other just for reading data from sensors. On one of them I run slam2d_ros without a problem. For reference, it processes the Intel dataset, which spans 44 minutes, in 50 seconds. The pf_slam2d_ros will also work, but I would recommend using multi-threading and data compression.
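For reference, something like the following is how I’d expect those two recommendations to look in a launch file – the parameter names here are my reading of the iris_lama_ros options and should be checked against its documentation:

```xml
<!-- Sketch only: verify parameter names against the iris_lama_ros README. -->
<launch>
  <node pkg="iris_lama_ros" type="pf_slam2d_ros" name="pf_slam2d_ros" output="screen">
    <param name="threads" value="4"/>            <!-- multi-threaded particle updates -->
    <param name="use_compression" value="true"/> <!-- compress per-particle maps -->
  </node>
</launch>
```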

The localization loc2d_ros also works fine.


I’ve recently been playing quite a bit with iris_lama and have created this blog post summarizing my experience:

Since I worked with slam_toolbox quite a bit before, I made a video comparison running on the same data with mostly default settings:

When @eupedrosa said that the CPU consumption was minimal he wasn’t kidding. I’ll definitely keep iris_lama as one of the options going further with my experiments.

Coming next: I’m about to set up an RTK system. I hope I’ll be able to test my robot outside to create a medium-to-large dataset with proper ground truth information.

If you have any tips on what else I could test I’d be super grateful for them!


Very nice. I also compared the two and, in general, there isn’t a big difference for my (relatively small) map.

Even in your video, we can appreciate the fact that slam_toolbox is a graph SLAM, which might be better for some complex cases where a fancy loop closure is needed.

Nevertheless, I am a big fan of iris_lama :star_struck:


I wonder: have you tested how good the T265 IMU is compared to the Pixhawk autopilot IMU?

Are you still using the T265 for odometry source in this setup?

In a podcast some time ago, I heard that gmapping and cartographer maxed out between 50,000 and 100,000 square feet of internal space. What is the maximum space LaMa is capable of mapping?

Nice. :+1: :smiley:
It may be apples to oranges but at least variety exists. (I like apples and oranges)

If I am not mistaken, with the default parameters and a resolution of 0.05 m, it can address an area of approximately 4,000 km². (I’m sorry, but it is hard for me to think in square feet.) I believe the system will run out of memory before reaching the theoretical limit of the map structure.
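For anyone else who thinks in metric, a quick sanity check of those numbers (taking the ~4,000 km² figure above at face value):

```python
# Convert the facility sizes discussed above into metric units and
# compare them with the quoted theoretical map limit.
SQFT_TO_M2 = 0.09290304  # 1 square foot in square meters

facility_sqft = 200_000
facility_m2 = facility_sqft * SQFT_TO_M2
print(f"{facility_sqft} sqft = {facility_m2:.0f} m^2")  # ~18,581 m^2

theoretical_km2 = 4_000  # quoted limit, km^2
theoretical_m2 = theoretical_km2 * 1_000_000
print(f"limit / facility = {theoretical_m2 / facility_m2:.0f}x")
```

So a 200,000 sqft facility is under 19,000 m², several orders of magnitude below the theoretical limit; as noted above, memory is the more likely bound in practice.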

I haven’t used the T265 for quite a while now. I need to re-enable it in my setup and do some follow-up experiments with the newest releases. The issue with the T265 IMU is that it lacks a magnetometer, and I’d rather have one.

I have been working with IMUs lately and I’m considering creating an IMU benchtest so that I can easily compare different solutions. If you have any ideas on how to pull it off, feel free to let me know! Right now I’m considering using a motor+encoder setup to rotate some IMUs at a predefined rate, move them to predefined positions, and check how they cope.
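A minimal sketch of that comparison, with synthetic gyro samples (true rate + bias + noise) standing in for a real IMU and the commanded motor rate playing the encoder’s role:

```python
import numpy as np

# Sketch of the proposed benchtest: command a known rotation with the
# motor+encoder rig, integrate the IMU's z gyro, and compare the two.
rng = np.random.default_rng(0)

rate_hz = 100.0                 # IMU sample rate
duration_s = 10.0
true_rate = np.deg2rad(36.0)    # encoder says: 360 deg over 10 s
bias = np.deg2rad(0.5)          # constant gyro bias, a typical MEMS flaw
noise_std = np.deg2rad(0.2)     # per-sample gyro noise

n = int(rate_hz * duration_s)
gyro_z = true_rate + bias + rng.normal(0.0, noise_std, n)

# Dead-reckoned yaw from the IMU vs ground truth from the encoder.
imu_yaw = np.sum(gyro_z) / rate_hz
encoder_yaw = true_rate * duration_s
drift_deg = np.rad2deg(imu_yaw - encoder_yaw)
print(f"yaw drift after {duration_s:.0f} s: {drift_deg:.2f} deg")
```

With these numbers the bias dominates (about 5° after 10 s), which is the kind of figure a bench test like this would let you compare across IMUs.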

I don’t have lots of experience with IMUs.

But I think a simple test would be how the orientation drifts over a certain distance of travel. I frequently saw drift in the pitch/roll direction, as depicted in method 3 here, using the T265 IMU optical frame. I have only used this on a TurtleBot3, not a quadcopter.

I have yet to test the Pixhawk, but I would guess it’s more stable. As for odometry and localization, wheel encoders + lidar tend to drift over time, which is why I used the T265.

Can you share the code for converting CLF datafiles to rosbag?

IMO a Pixhawk is overkill for a mobile platform like this – if you launch mavros, you end up with probably 20+ topics you won’t care about. I’ll be testing a Phidgets IMU quite soon; I’ll let you know if it’s a better option.