Hi,
Would it be possible to have access to the dataset where the mapping just started blowing up? I would like to investigate what happened.
The distance field does provide a nice speed-up.
Those 3 things are in my TODO list.
Your dataset could be of great help for hardening.
I have interest in maintaining LaMa for as long as I can. If you believe that LaMa is an option for ROS2 Navigation, you have my support. However, I have to say that I have little experience with ROS2; maybe it is an opportunity to start using it in our robots.
I would also like to evaluate LaMa against large environments; unfortunately, I do not have access to datasets of that kind. And how much is 200,000+ sqft in meters?
Unfortunately these are datasets I can't (yet) release. I'm working on building a critical mass of them to release in a larger database under some license, but I can't release incremental datasets or this becomes really challenging from a legal perspective.
for as long as I can
Is there some triggering event where that would no longer be the case?
evaluate against large environments
I'm working behind the scenes on your behalf to make that happen. Stay tuned.
The only trigger that I can think of is professional incompatibility of some kind. But this is very unlikely to happen. I work in (academic/industrial) robotic projects where localization and mapping is a topic of interest. Therefore, there is an interest in maintaining LaMa.
LaMa was not developed with large scale in mind, but I hope at least the mapping framework (SDM) will scale nicely.
Unfortunately, no on the ETA. It's a side project that isn't a priority right now.
I'm curious: I saw another talk at IROS using distance maps for localization, and it was clearly extremely susceptible to moving obstacles in the mapping phase, since the distance-function features immediately change if anything moves in free space. I was wondering if your approach has that same issue; that could explain what I'm seeing, if things moving in the areas it is mapping really mess up feature matching.
I work in (academic/industrial) robotic projects where localization and mapping is a topic of interest. Therefore, there is an interest in maintaining LaMa.
Good to know. I'd also be curious to chat more offline about what your future plans look like, but I don't want to derail this discussion.
Well, the PF things would immediately get thrown out for large scale; I'm more interested in the optimization things for large scale.
I hope that, even with low priority, those datasets will go public in the near future. In my opinion, datasets for testing SLAM are not abundant. We have the classics; they are nice but old.
One major blocker is that people keep asking me for ground-truth files, and that automatically makes it much higher effort than just dumping a bunch of data files. I figured any data was better than none, but from my discussions ground truth appears to be something folks want, even if they can compute their own based on another technique to compare against.
The goal of the datasets was to give a variety of spaces across multiple industries that people might not normally have access to, both from physical-access and physical-robot resource restrictions (malls, a grocery store, a Best Buy, a city intersection, a Jiffy Lube, etc.), and to have real-world issues represented, like wheel slippage, getting stuck, mirrors and windows, etc. If I have to add ground truth to that, it makes things much harder. Even without ground truth, I only have about 20 unique datasets; I'd need a bit more before it's more than a side project. Right now I'm collecting raw sensor data from laser, IMU, and odometry, as well as the TF transformations. I thought about cameras and such, but that gives a little more away about my platform than I'm comfortable with, and it makes the bag files much larger.
Anyhow.
Continuing on the ticket now that we've narrowed down the line of discussion.
Yes @parzival, it will work on a Raspberry Pi 3B. I have a TurtleBot 2 being controlled by two Pis (3B+), one for processing and the other just for reading data from sensors. On one of them I run slam2d_ros without a problem. For reference, it processes the Intel dataset, which has a span of 44 minutes, in 50 seconds. The pf_slam2d_ros will also work, but I would recommend using multi-threading and data compression.
Since I worked with slam_toolbox quite a bit before, I made a video comparison running on the same data with mostly default settings:
When @eupedrosa said that the CPU consumption was minimal, he wasn't kidding. I'll definitely keep iris_lama as one of the options going forward with my experiments.
Coming next: I'm about to set up an RTK system. Hopefully I'll be able to test my robot outside to create a medium-large dataset with proper ground-truth information.
If you have any tips on what else I could test, I'd be super grateful for them!
Very nice. I also compared the two and, in general, there isn't a big difference for my (relatively small) map.
Even in your video we can appreciate the fact that slam_toolbox is a graph SLAM, which might be better for some complex cases where a fancy loop closure is needed.
In a podcast some time ago, I heard that gmapping and cartographer maxed out between 50,000 and 100,000 square feet of internal space. What is the maximum space LaMa is capable of mapping?
Nice.
It may be apples to oranges but at least variety exists. (I like apples and oranges)
If I am not mistaken, with the default parameters and a resolution of 0.05 m it can address an area of approximately 4,000 km². (I'm sorry, but it is hard for me to think in square feet.) I believe that the system will run out of memory before reaching the theoretical limit of the map structure.
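For reference, a quick back-of-the-envelope conversion between the square-feet figures quoted in this thread and metric units (just arithmetic, using the numbers already mentioned above):

```python
SQFT_PER_M2 = 10.7639  # square feet per square metre

# The 200,000+ sqft figure mentioned earlier:
print(200_000 / SQFT_PER_M2)  # roughly 18,600 m^2

# gmapping / cartographer ceiling quoted above: 50,000 - 100,000 sqft
print(50_000 / SQFT_PER_M2)   # roughly 4,600 m^2
print(100_000 / SQFT_PER_M2)  # roughly 9,300 m^2

# LaMa's quoted theoretical limit: ~4,000 km^2, expressed in sqft
print(4_000 * 1_000_000 * SQFT_PER_M2)  # roughly 4.3e10 sqft
```

So even the largest spaces discussed here are several orders of magnitude below the theoretical limit of the map structure.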
I haven't used the T265 for quite a while now. I need to re-enable it in my setup and do some follow-up experiments with the newest releases. The issue with the T265 IMU is that it lacks a magnetometer, and I'd rather have one.
I have been working with IMUs lately, and I'm considering creating an IMU bench test so that I can easily compare different solutions. If you have any ideas on how to pull it off, feel free to let me know! Right now I'm considering using a motor+encoder setup to rotate some IMUs at a predefined speed, move them to predefined positions, and check how they cope with it.
But I think a simple test would be how the orientation drifts over a certain distance of travel. I frequently saw drifts in the pitch/roll direction, as depicted in method 3 here, using the T265 IMU optical frame. I have only used this on a TurtleBot 3, not a quadcopter.
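As a sketch of that "drift over distance travelled" test, one could log the estimated orientation at the start and end of a straight run and report heading error per metre. This is only an illustration (not code from LaMa or the T265 driver); it assumes quaternions in the ROS (x, y, z, w) convention:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract yaw (rotation about Z) from a unit quaternion."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def drift_per_meter(start_quat, end_quat, distance_m):
    """Absolute heading drift in degrees per metre between two poses.

    start_quat / end_quat are (x, y, z, w) tuples; distance_m is the
    distance travelled between the two pose samples.
    """
    d = yaw_from_quaternion(*end_quat) - yaw_from_quaternion(*start_quat)
    d = math.atan2(math.sin(d), math.cos(d))  # wrap to [-pi, pi]
    return math.degrees(abs(d)) / distance_m
```

The same idea applies to pitch/roll drift; only the angle extraction changes. Running this on identical odometry segments from different IMUs would give a simple, comparable number.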
I have yet to test the Pixhawk, but I would guess it's more stable. As for odometry and localization, wheel encoders + lidar tend to drift over time, which is why I used the T265.
IMO the Pixhawk is overkill for a mobile platform like this; if you launch mavros you end up with probably 20+ topics that you won't care about. I'll be testing a Phidgets IMU quite soon and will let you know if it's a better option.
Could not find a package configuration file provided by "tf_conversions"
with any of the following names:
Add the installation prefix of "tf_conversions" to CMAKE_PREFIX_PATH or set
"tf_conversions_DIR" to a directory containing one of the above files. If
"tf_conversions" provides a separate development package or SDK, be sure it
has been installed.
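That CMake error usually just means the tf_conversions ROS package is not installed on the system. Assuming a standard apt-based ROS installation on Ubuntu (the package name follows the usual ros-&lt;distro&gt;-&lt;package&gt; Debian naming convention), something like this should fix it:

```shell
# Install tf_conversions for the active ROS distro (e.g. melodic, noetic);
# ROS_DISTRO is normally set by sourcing /opt/ros/<distro>/setup.bash.
sudo apt-get install "ros-${ROS_DISTRO}-tf-conversions"
```

If you build tf_conversions from source instead, make sure the workspace containing it is sourced (or its install prefix is on CMAKE_PREFIX_PATH) before re-running catkin/cmake.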