I’m using an LDS-01 LIDAR to create a 2D occupancy grid. This is accomplished with gmapping and it works reasonably well. The LIDAR gives good precision, but the information is collected in a single 2D plane: it cannot detect obstacles higher or lower than the LIDAR itself, so the generated map is not realistic. The robot keeps hitting tables, chairs, fans, etc.
I bought an Intel R200 depth camera to solve this problem, but I haven’t found a working setup yet.
Since gmapping is a laser-based solution, I tried to convert the R200 depth data to laser scan data using the depthimage_to_laserscan package. The obstacles show up correctly in RViz, but it looks like gmapping does not support multiple laser sources.
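For context, here is roughly what that conversion does: take one row of the depth image and turn each column into a bearing/range pair using the camera intrinsics. This is a simplified sketch (the topic names, the 16UC1 millimeter encoding, and the choice of the middle row are my assumptions; the real package is C++, handles more edge cases, and I am glossing over the sign convention between the optical frame and REP-103):

```python
#!/usr/bin/env python
# Simplified depth-image-row to LaserScan conversion (pinhole model).
import rospy
import numpy as np
from sensor_msgs.msg import Image, CameraInfo, LaserScan

class DepthRowToScan(object):
    def __init__(self):
        self.info = None
        rospy.Subscriber('camera/depth/camera_info', CameraInfo, self.info_cb)
        rospy.Subscriber('camera/depth/image_raw', Image, self.depth_cb)
        self.pub = rospy.Publisher('scan_r200', LaserScan, queue_size=1)

    def info_cb(self, msg):
        self.info = msg

    def depth_cb(self, img):
        if self.info is None:
            return
        fx, cx = self.info.K[0], self.info.K[2]
        # Assumes 16UC1 depth in millimeters; take the middle image row.
        depth = np.frombuffer(img.data, dtype=np.uint16).reshape(img.height, img.width)
        z = depth[img.height // 2].astype(np.float32) / 1000.0  # meters
        u = np.arange(img.width)
        angles = np.arctan2(u - cx, fx)                   # bearing per column
        ranges = z * np.sqrt(1.0 + ((u - cx) / fx) ** 2)  # depth -> euclidean range
        scan = LaserScan()
        scan.header = img.header
        scan.angle_min, scan.angle_max = float(angles[0]), float(angles[-1])
        scan.angle_increment = (scan.angle_max - scan.angle_min) / (img.width - 1)
        scan.range_min, scan.range_max = 0.5, 4.0         # rough R200 limits
        scan.ranges = ranges.tolist()                     # zeros fall below range_min
        self.pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('depth_row_to_scan')
    DepthRowToScan()
    rospy.spin()
```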
Then I tried to use the ira_laser_tools package to merge the two laser sources… but it looks like the laser scan data generated by depthimage_to_laserscan is not compatible with the laser scan data the LIDAR provides. This is the error ira_laser_tools reports:
burger@burger-UP-CHT01:~$ roslaunch ira_laser_tools laserscan_multi_merger.launch
… logging to /home/burger/.ros/log/4979d2f4-d902-11e8-a0f5-00c0caa51893/roslaunch-burger-UP-CHT01-5052.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
process[laserscan_multi_merger-1]: started with pid [5075]
[ INFO] [1540546555.340892157]: Subscribing to topics 2
/scan /scan_r200
[ WARN] [1540546555.411538397]: Transformer::setExtrapolationLimit is deprecated and does not do anything
[pcl::concatenatePointCloud] Number of fields in cloud1 (5) != Number of fields in cloud2 (4)
(the line above repeats for every merge attempt)
Is there any other solution to fuse the depth information from the R200 with the LIDAR data and use it with gmapping SLAM?
A bi-dimensional map with some extra knowledge: if you only have a 2D map and your robot is hitting tables and chairs… well, include those tables and chairs in your map. If a table is static, just draw it on the map; the navigation system will take care of not hitting it. Also, if the robot can pass under the table, it is as if there were no table at all, so sometimes it also makes sense to keep the robot short. Chairs are a little trickier.
Think of the map you build with gmapping as a hyper-static map: nothing in it is expected to move. Keep only walls, tables that do not move, and things that will stay in place. If you include chairs and then move them around, you make amcl’s job harder.
The navigation stack is awesome: you have your map with static obstacles, so use it only for localization! The navigation stack can take several laser inputs and dynamically draw objects that move, appear, and disappear. If you have reliable positioning, it will not be hard to add those obstacles and avoid them.
Finally… focusing on ira_laser_tools: I would just look at the fields of cloud1 and cloud2. They are probably not the same kind of message. Could you please run rostopic info name_of_topic on both of them and tell us which message type they use?
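A guess on my part: the merger converts each LaserScan to a PointCloud2 before concatenating, and that conversion adds an intensity channel only when the scan actually carries intensities, which would produce exactly a 5-vs-4 field mismatch. A quick way to check (topic names assumed from your log):

```python
#!/usr/bin/env python
# Print whether each LaserScan carries an intensities array; a mismatch
# here would explain the 5-vs-4 field error after conversion to PointCloud2.
import rospy
from sensor_msgs.msg import LaserScan

def report(topic):
    def cb(scan):
        rospy.loginfo('%s: %d ranges, %d intensities',
                      topic, len(scan.ranges), len(scan.intensities))
    rospy.Subscriber(topic, LaserScan, cb)

rospy.init_node('scan_field_check')
report('/scan')       # LIDAR
report('/scan_r200')  # depth camera conversion
rospy.spin()
```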
The general problem is still unsolved; you are not going to find a turn-key solution.
I think a key insight I got from reading is that SLAM is a very different problem from collision avoidance, and you need a different software pipeline for each.
It makes sense to divide obstacles into two classes, “dynamic” and “static”. Anything that can be moved is “dynamic”, and you treat it as if it were another vehicle: they are all moving objects sharing your environment.
It is funny that indoors LIDAR works best for static obstacles (like walls), while outdoors LIDAR is used to track dynamic obstacles (like other cars).
You might use a single 3D depth camera for both SLAM and collision avoidance, but you would have to split the data and send it to different pipelines, along the lines of the sketch below.
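Roughly like this (the topic names and the height band are made up, and it assumes the cloud has already been transformed into a frame where z points up):

```python
#!/usr/bin/env python
# Split one depth-camera cloud into two pipelines: a thin horizontal
# slice for the SLAM side, and the full cloud for collision avoidance.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

SLICE_MIN, SLICE_MAX = 0.15, 0.25  # height band (m) fed to the SLAM side

def cb(cloud):
    # Forward the full cloud untouched to the obstacle-avoidance pipeline.
    obstacle_pub.publish(cloud)
    # Keep only points inside the height band for the SLAM pipeline.
    points = [p for p in pc2.read_points(cloud, field_names=('x', 'y', 'z'))
              if SLICE_MIN <= p[2] <= SLICE_MAX]
    slam_pub.publish(pc2.create_cloud_xyz32(cloud.header, points))

rospy.init_node('cloud_splitter')
obstacle_pub = rospy.Publisher('cloud_obstacles', PointCloud2, queue_size=1)
slam_pub = rospy.Publisher('cloud_slam_slice', PointCloud2, queue_size=1)
rospy.Subscriber('camera/depth/points', PointCloud2, cb)
rospy.spin()
```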
For SLAM you would prefer the sensor to sit at about human eye level, where the indoor space is more stable and it can see over most furniture. And, as I said, collision avoidance is best treated separately from SLAM.
It’s better to ask this kind of question over at https://answers.ros.org/. If you encounter problems while following the tutorial, create a question over there and post a link here; I’ll try to answer.
I know that I can manually edit the 2D map, but that won’t be an easy task for larger maps, which is why I’m looking for a way to do it automatically using the existing R200 camera.
The navigation stack works great; I have no complaints about it. It handles multiple laser scanners and point clouds just fine. But I would still like to build a more realistic map, one that prevents the robot from navigating under a table when the robot is taller than the table, and similar situations.
The R200 camera cannot detect obstacles closer than 0.5 m, which makes it useless for the navigation stack.
I’ll give you an example. When I try to navigate from point A to point B with a desk in between, the robot gets closer to the desk; once it is within 0.5 m, the R200 camera can no longer see it, so the robot keeps trying to drive under the desk and ends up hitting it.
Finally… focusing on ira_laser_tools: I would just look at the fields of cloud1 and cloud2. They are probably not the same kind of message. Could you please run rostopic info name_of_topic on both of them and tell us which message type they use?
Both of these topics are of type sensor_msgs/LaserScan.
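Since both are already sensor_msgs/LaserScan, my guess is that one of them fills the intensities array while the other leaves it empty, so the converted clouds end up with a different set of fields. A possible workaround (a sketch with assumed topic names) is to republish the camera scan with a padded intensities array and feed the padded topic to the merger:

```python
#!/usr/bin/env python
# Republish a LaserScan with a zero-filled intensities array so that both
# scans convert to PointCloud2 with the same fields before merging.
import rospy
from sensor_msgs.msg import LaserScan

def cb(scan):
    if not scan.intensities:
        scan.intensities = [0.0] * len(scan.ranges)
    pub.publish(scan)

rospy.init_node('scan_intensity_pad')
pub = rospy.Publisher('scan_r200_padded', LaserScan, queue_size=1)
rospy.Subscriber('scan_r200', LaserScan, cb)
rospy.spin()
```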
@Chris_Albertson
I also tried using Cartographer SLAM to build the map, and it does support multiple laser scanner sources, so with this approach I can collect information from both the LIDAR and the R200 camera. But the map it generates cannot be used effectively by the navigation stack. I’m looking for a way to convert the map generated by Cartographer SLAM into something the navigation stack can use.
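From what I can tell, cartographer_ros can publish its map as a nav_msgs/OccupancyGrid (through its occupancy grid node), and map_server’s map_saver can then write it in the format the navigation stack expects (rosrun map_server map_saver -f mymap). A rough Python equivalent of that saving step, just to show what ends up on disk (file names are placeholders):

```python
#!/usr/bin/env python
# Grab one OccupancyGrid from /map and write the .pgm/.yaml pair that
# map_server expects (roughly what map_saver does).
import rospy
from nav_msgs.msg import OccupancyGrid

rospy.init_node('grid_saver')
grid = rospy.wait_for_message('/map', OccupancyGrid)

with open('mymap.pgm', 'wb') as f:
    f.write(b'P5\n%d %d\n255\n' % (grid.info.width, grid.info.height))
    for y in range(grid.info.height - 1, -1, -1):      # PGM rows go top-down
        for x in range(grid.info.width):
            cell = grid.data[y * grid.info.width + x]  # -1 unknown, 0..100 occupied
            if cell == -1:
                f.write(b'\xcd')  # 205 = unknown
            elif cell < 50:
                f.write(b'\xfe')  # 254 = free
            else:
                f.write(b'\x00')  # 0 = occupied

with open('mymap.yaml', 'w') as f:
    o = grid.info.origin.position
    f.write('image: mymap.pgm\nresolution: %f\n' % grid.info.resolution)
    f.write('origin: [%f, %f, 0.0]\n' % (o.x, o.y))
    f.write('negate: 0\noccupied_thresh: 0.65\nfree_thresh: 0.196\n')
```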
@Martin_Guenther
I’m already using both sensors in the navigation stack, but I still need a solution for using both sensors to create a more realistic map.