
Announcing LaMa: An alternative localization and mapping package

Unfortunately, there is no ETA. It's a side project that isn't a priority right now.

I'm curious: I saw another talk at IROS that used distance maps for localization, and it was clearly extremely susceptible to moving obstacles during the mapping phase, since the distance-function features change immediately if anything moves in free space. I was wondering whether your approach has the same issue. That could explain what I'm seeing: if things move in areas it is mapping, feature matching really breaks down.

I work on (academic and industrial) robotics projects where localization and mapping are topics of interest, so there is an interest in maintaining LaMa.

Good to know. I’d also be curious to chat more offline about what your future plans look like, but I don’t want to derail this discussion.

Well, the PF variants would immediately get thrown out at large scale; I'm more interested in the optimization-based variants for large-scale use.


I hope that, even with low priority, those datasets will go public in the near future. In my opinion, datasets for testing SLAM are not abundant. We have the classics; they are nice but old.



One major blocker is that people keep asking me for ground-truth files, which automatically makes it much higher effort than just dumping a bunch of data files. I figured any data was better than none, but from my discussions, ground truth appears to be something folks want, even if they could compute their own with another technique to compare against.

The goal of the datasets was to cover a variety of spaces across multiple industries that people might not normally have access to, due to both physical-access and robot-resource restrictions (malls, a grocery store, a Best Buy, a city intersection, a Jiffy Lube, etc.), and to represent real-world issues like wheel slippage, getting stuck, mirrors and windows, etc. If I have to add ground truth to that, it makes things much harder. Even without ground truth, I only have about 20 unique datasets; I'd need a few more before it's more than a side project. Right now I'm collecting raw sensor data from the laser, IMU, and odometry, as well as the TF transformations. I thought about cameras and such, but that gives away a little more about my platform than I'm comfortable with, and it makes the bag files much larger.
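For anyone wanting to collect a similar set of topics, a recording command along these lines should work. The topic names here are assumptions (typical ROS defaults), not taken from the post; substitute whatever your drivers actually publish:

```shell
# Record laser, IMU, odometry, and the TF tree into a compressed bag.
# /scan, /imu/data, and /odom are common defaults -- adjust to your setup.
rosbag record --lz4 -O dataset.bag /scan /imu/data /odom /tf /tf_static
```

The `--lz4` flag trades a little CPU for noticeably smaller bag files, which matters when a single run can span an hour of laser data.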


Let's continue on the ticket now that we've narrowed down the line of discussion.


@eupedrosa will this work normally on Raspberry Pi 3B?

Yes @parzival, it will work on a Raspberry Pi 3B. I have a TurtleBot 2 controlled by two Pis (3B+), one for processing and the other just for reading data from the sensors. On one of them I run slam2d_ros without a problem. For reference, it processes the Intel dataset, which spans 44 minutes, in 50 seconds. pf_slam2d_ros will also work, but I would recommend enabling multi-threading and data compression.

The localization loc2d_ros also works fine.


I’ve been recently playing quite a bit with iris_lama and have created this blog post summarizing my experience:

Since I had worked with slam_toolbox quite a bit before, I made a video comparison running on the same data with mostly default settings:

When @eupedrosa said that the CPU consumption was minimal, he wasn't kidding. I'll definitely keep iris_lama as one of the options going forward in my experiments.

Coming next: I'm about to set up an RTK system. I hope I'll be able to test my robot outside to create a medium-to-large dataset with proper ground-truth information.

If you have any tips on what else I could test I’d be super grateful for them!


Very nice. I also compared the two and, in general, there isn’t a big difference for my (relatively small) map.

Even in your video we can appreciate that slam_toolbox is a graph-based SLAM, which might be better in some complex cases where sophisticated loop closure is needed.

Nevertheless, I am a big fan of iris_lama :star_struck:


I wonder, have you tested how good the T265 IMU is compared to the Pixhawk autopilot's IMU?

Are you still using the T265 for odometry source in this setup?

In a podcast some time ago, I heard that gmapping and cartographer maxed out between 50,000 and 100,000 square feet of internal space. What is the maximum space LaMa is capable of mapping?

Nice. :+1: :smiley:
It may be apples to oranges but at least variety exists. (I like apples and oranges)

If I am not mistaken, with the default parameters and a resolution of 0.05 m, it can address an area of approximately 4,000 km². (I'm sorry, but it is hard for me to think in square feet.) I believe the system will run out of memory before reaching the theoretical limit of the map structure.
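To put those numbers side by side, a quick back-of-the-envelope conversion. The 4,000 km² figure is the one quoted above; everything else is just unit arithmetic:

```python
# Convert the gmapping/cartographer figure (50k-100k sq ft) to metric
# and compare it with the ~4,000 km^2 theoretical limit quoted for LaMa.
SQFT_PER_M2 = 10.7639  # square feet per square meter

carto_max_m2 = 100_000 / SQFT_PER_M2    # ~9,290 m^2
lama_limit_m2 = 4_000 * 1e6             # 4,000 km^2 expressed in m^2

resolution = 0.05                       # meters per grid cell
cells = lama_limit_m2 / resolution**2   # ~1.6e12 cells at the limit
side_km = (lama_limit_m2 ** 0.5) / 1000 # ~63.2 km per side if square

print(round(carto_max_m2))  # ~9290
print(round(side_km, 1))    # ~63.2
```

So the quoted ceiling for the older packages is on the order of 0.01 km², which makes the claim that memory runs out long before the map structure's addressing limit very plausible.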

I haven't used the T265 for quite a while now. I need to re-enable it in my setup and do some follow-up experiments with the newest releases. The issue with the T265 IMU is that it lacks a magnetometer, and I'd rather have one.

I have been working with IMUs lately and I'm considering creating an IMU benchtest so that I can easily compare different solutions. If you have any ideas on how to pull it off, feel free to let me know! Right now I'm considering using a motor + encoder setup to rotate some IMUs at a predefined speed and move them to predefined positions, then check how they cope with it.
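One way such a benchtest could score an IMU is to integrate the gyro's z-rate over a commanded rotation and compare it against the yaw the motor encoder reports. A minimal sketch, with entirely hypothetical names (this is not from any particular IMU driver):

```python
def yaw_drift(gyro_z_rates, dt, encoder_yaw):
    """Integrate gyro z-axis rates (rad/s) sampled every dt seconds
    and return the difference from the encoder-derived yaw (rad)."""
    integrated_yaw = sum(w * dt for w in gyro_z_rates)
    return integrated_yaw - encoder_yaw

# An ideal gyro spun at 0.5 rad/s for 2 s should land on 1.0 rad,
# so its drift should be ~0:
drift = yaw_drift([0.5] * 200, 0.01, 1.0)
```

Running the same rotation profile through several IMUs and comparing their drift values would give a single, easy-to-rank number per sensor.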

I don’t have lots of experience with IMUs.

But I think a simple test would be how the orientation drifts over a certain distance of travel. I frequently saw drift in the pitch/roll directions, as depicted in method 3 here, using the T265 IMU optical frame. I have only used this on a TurtleBot3, not a quadcopter.

I have yet to test the Pixhawk, but I would guess it's more stable. As for odometry and localization, wheel encoders + lidar tend to drift over time, which is why I used the T265.

Can you share the code for converting CLF data files to rosbag?

IMO the Pixhawk is overkill for a mobile platform like this; if you launch mavros, you end up with probably 20+ topics that you won't care about. I'll be testing a Phidgets IMU quite soon and will let you know if it's a better option.

Hi, I get an error when building iris_lama_ros:

Could not find a package configuration file provided by “tf_conversions”
with any of the following names:


Add the installation prefix of “tf_conversions” to CMAKE_PREFIX_PATH or set
“tf_conversions_DIR” to a directory containing one of the above files. If
“tf_conversions” provides a separate development package or SDK, be sure it
has been installed.

@rosdevil7, just run this command in your bash if you are using ROS Melodic (or change it to match your ROS distro):
sudo apt install ros-melodic-tf-conversions

@Zakaria It works! Thanks a lot.

I wonder how you evaluated the results of gmapping and cartographer with SLAM benchmarking.
The input to the SLAM benchmark is a logfile, right? How did you get it?


Yes, SLAM Benchmarking requires a (CARMEN) logfile. I got one by generating a logfile while doing SLAM: after each update, I would write an FLASER entry to the file. This was done directly in the source code.
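For reference, a minimal sketch of writing such an entry. The field order follows the CARMEN log format as I understand it (FLASER, reading count, the range readings, the laser pose, the odometry pose, then timestamp, host, and logger timestamp); double-check against the CARMEN documentation before relying on it:

```python
def flaser_line(ranges, laser_pose, odom_pose, stamp, host="robot"):
    """Format one CARMEN FLASER log entry as a single line.

    ranges     : list of range readings in meters
    laser_pose : (x, y, theta) of the laser in the world frame
    odom_pose  : (x, y, theta) from odometry
    stamp      : timestamp in seconds
    """
    fields = ["FLASER", str(len(ranges))]
    fields += ["%.3f" % r for r in ranges]
    fields += ["%.6f" % v for v in (*laser_pose, *odom_pose)]
    fields += ["%.6f" % stamp, host, "%.6f" % stamp]
    return " ".join(fields)

line = flaser_line([1.0, 2.5], (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 100.0)
```

Appending one such line after every SLAM update, with the current corrected pose as the laser pose, yields a logfile the benchmark tooling can consume directly.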

Note that I only did this for my solution, not for GMapping and Cartographer. For those, I just used the benchmark results reported in these papers:

Holz, D. and Behnke, S. (2010). 
Sancta Simplicitas - On the Efficiency and Achievable Results of SLAM Using ICP-based Incremental Registration. 
In Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), pages 1380–1387, Alaska, USA.

Hess, W., Kohler, D., Rapp, H., and Andor, D. (2016). 
Real-Time Loop Closure in 2D LIDAR SLAM. 
In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1271–1278, Stockholm, Sweden.

The advantage of the SLAM benchmark is that results provided by others can be compared directly.