@rosdevil7 just run this command in your shell if you are using ROS Melodic, or change it to match your ROS distro:
sudo apt install ros-melodic-tf-conversions
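If your ROS environment is sourced, the ROS_DISTRO variable fills in the right distro name automatically:
sudo apt install ros-$ROS_DISTRO-tf-conversions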
@Zakaria It works! Thanks a lot.
Hello!
I wonder how you evaluated the results of GMapping and Cartographer with SLAM benchmarking.
The input to SLAM benchmarking is a logfile, right? How did you get it?
Yes, SLAM Benchmarking requires a (CARMEN) logfile. I got one by generating a logfile while doing SLAM. After each update, I would write an FLASER tag to the file. This was done directly in the source code.
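For reference, here is a minimal sketch of that logging step. This is illustrative, not LaMa's actual code (write_flaser and the hostname placeholder are mine); the field order follows the CARMEN FLASER convention:

#include <cstdio>
#include <vector>

// Append one CARMEN FLASER entry after each SLAM update. Field order:
// FLASER num_readings r1..rn x y theta odom_x odom_y odom_theta
// timestamp hostname logger_timestamp
void write_flaser(std::FILE* log, const std::vector<double>& ranges,
                  double x, double y, double theta,   // estimated laser pose
                  double ox, double oy, double ot,    // raw odometry pose
                  double timestamp)
{
    std::fprintf(log, "FLASER %zu", ranges.size());
    for (double r : ranges)
        std::fprintf(log, " %f", r);
    // "host" is a placeholder for the hostname field.
    std::fprintf(log, " %f %f %f %f %f %f %f host %f\n",
                 x, y, theta, ox, oy, ot, timestamp, timestamp);
}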
Note that I only did this for my solution, not GMapping and Cartographer. For those I just used the benchmark results reported in these papers:
Holz, D. and Behnke, S. (2010).
Sancta Simplicitas - On the Efficiency and Achievable Results of SLAM Using ICP-based Incremental Registration.
In Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), pages 1380–1387, Alaska, USA.
Hess, W., Kohler, D., Rapp, H., and Andor, D. (2016).
Real-Time Loop Closure in 2D LIDAR SLAM.
In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1271–1278, Stockholm, Sweden.
The advantage of the SLAM benchmark is that the results provided by others can be directly compared.
Just curious, is it doing sensor fusion and filtering? How many sensor nodes feed through?
I think that mapping is sensor fusion, and the likelihood field does some filtering. But you may be talking about another kind of sensor fusion.
Only one sensor node feeds through, usually. It does accept multiple sensors publishing to the "input" topic.
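To give an idea of what I mean by filtering, here is a textbook-style sketch of a likelihood field model (in the spirit of Probabilistic Robotics), not LaMa's actual code; the Grid struct, the precomputed distance map, and the sigma value are assumptions:

#include <cmath>
#include <cstddef>
#include <vector>

// Occupancy grid with a precomputed distance-to-nearest-obstacle map.
struct Grid {
    int width, height;
    double resolution;            // meters per cell
    std::vector<double> dist;     // row-major distance map

    double distAt(double wx, double wy) const {
        int cx = static_cast<int>(wx / resolution);
        int cy = static_cast<int>(wy / resolution);
        if (cx < 0 || cy < 0 || cx >= width || cy >= height)
            return 1e3;           // outside the map: negligible weight
        return dist[cy * width + cx];
    }
};

// Score a scan against the map: each beam endpoint contributes a
// Gaussian likelihood of its distance to the nearest obstacle, so
// spurious readings (dynamic objects, noise) are effectively filtered
// out because they land far from anything mapped.
double scanLogLikelihood(const Grid& grid,
                         const std::vector<double>& ranges,
                         const std::vector<double>& bearings,
                         double px, double py, double ptheta,
                         double sigma = 0.05)
{
    double logl = 0.0;
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        double ex = px + ranges[i] * std::cos(ptheta + bearings[i]);
        double ey = py + ranges[i] * std::sin(ptheta + bearings[i]);
        double d  = grid.distAt(ex, ey);
        logl -= (d * d) / (2.0 * sigma * sigma);
    }
    return logl;
}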
I should have explained a bit more; the intent of using a localization package is not navigation or mapping. I have a sensor stack with around 7 sensors: GNSS, several IMUs (one in a tracking camera, one in a wind sensor, and a standalone IMU), and other sensors that measure various properties of the air.
I was looking for an alternative to robot_localization where I could feed all the position- and orientation-related sensors into the node and get a synchronized positional output (the sensors run at different frequencies, ranging from 5 Hz to 200 Hz).
Do you think I can use LaMa for this? The fact that it can run even on a Raspberry Pi, thanks to its computationally inexpensive algorithm, got my attention.
Sorry, I guess that LaMa is not what you want.
The localization algorithm is based on occupancy grids, so it must be fed with range data alone. It is not a multi-modal algorithm.