Hi friends!
Today, all active ROS 2 distributions finally ship the brand-new mola_lidar_odometry package, featuring 3D LiDAR odometry & mapping from the most “famous” datasets (KITTI, MulRan, …), your own ROS bags, or a live robot. It can be run via a CLI, a custom GUI, or from RViz or Foxglove.
That is just one package from an “ecosystem” of C++ libraries, ROS packages, and CLI and GUI tools, part of an attempt to “change the mindset” towards using view-based maps (dubbed simple-maps in our framework). These can then be used to generate metric maps of different types, filter them, etc., with pipelines defined as YAML files (no coding needed at all!). The core insight is that the same “simple map” should be used to generate different metric representations depending on the desired robotic task: navigation, manipulation, perception, and so on.
The LiDAR odometry algorithm itself can also be fully reconfigured via a YAML file, and extended via a plugin mechanism to define new metric map types, raw sensory-data matching algorithms, etc.
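To make the idea more concrete, here is a minimal, purely illustrative sketch of what such a “simple-map to metric map” pipeline could look like in YAML. All key and stage names below are hypothetical, invented just to convey the structure; they are not the actual MOLA schema, so please refer to the official documentation linked below for real, working pipeline files:

```yaml
# ILLUSTRATIVE SKETCH ONLY: hypothetical keys and stage names, not the real
# MOLA/mp2p_icp schema. The point is that one recorded simple-map can feed
# several task-specific metric map outputs, all declared in YAML.
pipeline:
  input: warehouse.simplemap          # view-based map recorded during mapping
  generators:
    - type: point_cloud               # hypothetical stage: 3D points for localization
      params:
        voxel_filter_resolution: 0.10 # [m]
  filters:
    - type: remove_dynamic_objects    # hypothetical stage: drop moving obstacles
  outputs:
    - type: occupancy_grid_2d         # hypothetical stage: 2D grid for navigation
      params:
        resolution: 0.05              # [m]
```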
This video summarizes some results and features of the project:
Interested users should probably head to:
- The official project documentation website, which includes a step-by-step tutorial for building your first 3D map: MOLA — MOLA v1.1.2 documentation
- The ArXiV preprint with all the conceptual details and experimental benchmarks: [2407.20465] A flexible framework for accurate LiDAR odometry, map manipulation, and localization
- The “core” MOLA GitHub repo: GitHub - MOLAorg/mola: A Modular Optimization framework for Localization and mApping (MOLA)
I’m eager to see what feedback the ROS community gives about all these tools. Feel free to open issues and/or pull requests in the project repos, and share your successes (or issues!) trying it with your own robotic projects!
Of course, it’s an ongoing project and new features are planned for the near term, so stay tuned for more news.
JL