We take two maps as input: an occupancy grid map and an elevation map (DEM). We then build several layers, one per robot yaw, marking a cell as occupied in a layer whenever that position and yaw would be dangerous for the robot. Finally, we run an A* extended to search in this 3D space (x, y, theta).
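For anyone trying to picture the search step: below is a minimal sketch of A* over an (x, y, theta) state space with one occupancy layer per yaw. This is my own illustration, not the authors' code; the layer layout (`layers[theta][y][x]`, 1 = unsafe at that yaw) and the rotation cost are assumptions.

```python
import heapq
import math

def astar_xytheta(layers, start, goal):
    """A* over (x, y, theta) states.

    layers: sequence indexed by theta, each a 2D grid where
    grid[y][x] == 1 means the cell is unsafe at that yaw
    (hypothetical layout, one layer per discretized yaw).
    start, goal: (x, y, theta) tuples.
    """
    n_theta = len(layers)
    rows, cols = len(layers[0]), len(layers[0][0])

    def free(x, y, t):
        return 0 <= x < cols and 0 <= y < rows and layers[t][y][x] == 0

    def h(s):
        # Euclidean distance in (x, y); theta ignored, so the
        # heuristic stays admissible.
        return math.hypot(goal[0] - s[0], goal[1] - s[1])

    # Moves: translate one cell within a layer, or rotate to an
    # adjacent yaw layer in place.
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0),
             (0, 0, 1), (0, 0, -1)]

    open_set = [(h(start), 0.0, start)]
    g = {start: 0.0}
    parent = {}
    while open_set:
        _, gc, s = heapq.heappop(open_set)
        if s == goal:
            path = [s]
            while s in parent:
                s = parent[s]
                path.append(s)
            return path[::-1]
        if gc > g.get(s, math.inf):
            continue  # stale heap entry
        x, y, t = s
        for dx, dy, dt in moves:
            nx, ny, nt = x + dx, y + dy, (t + dt) % n_theta
            if not free(nx, ny, nt):
                continue
            ns = (nx, ny, nt)
            cost = gc + (1.0 if dt == 0 else 0.5)  # rotation assumed cheaper
            if cost < g.get(ns, math.inf):
                g[ns] = cost
                parent[ns] = s
                heapq.heappush(open_set, (cost + h(ns), cost, ns))
    return None  # no safe path
```

The point of the per-yaw layers is visible in a tiny example: if a narrow passage is blocked at yaw 0 but free at yaw 1, the planner finds a path that rotates, traverses, and rotates back.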
I’m not affiliated at all with the authors, but just wanted to point out that there is additional information regarding @smac’s question in this video, specifically at the following time stamps:
11:20 - AgroPP - A* path planning, occupancy grid map, one layer per robot yaw
22:08 - related question and more detailed answer during the Q&A
From what I understand, you need preexisting maps to run this planner, right? The node subscribes to /map, which is a 2D occupancy grid, but you also need to set the altitude_map parameter, which I imagine is how the node accesses the previously created DEM so it can plan in 3D. Can the occupancy grid map and elevation map be created using your PC2GD algorithm? Could this also be done with a stereo camera and e.g. RTAB-Map? The docs seem to mention only 2D/3D LiDAR. Also, what kind of controller do you then use to execute the planned path?
@fbnsantos A great project, I really enjoy it. I wanted to ask some questions about the vineyard detector.
If you or anyone else could explain how exactly the hLBP by color is computed, that would be great. I tried looking at the code base, but it doesn't have any comments and it's quite hard to understand from there.
I really need a robust texture descriptor for another project and this would be great to use.