I wanted to get some community feedback on how we handle path planning going forward.
At the moment, we replan at a given rate, say 1 Hz by default. This has the benefit of a refined path at a regular rate. It has the drawback that if you’re planning to a goal reachable via multiple valid routes of similar cost but opposing directions, you can end up oscillating between them (imagine a robot being asked to plan to the other side of a desk from the center: both directions are equally valid).
I think this method has some issues: namely, the oscillation above, and the CPU impact of constant replanning on heavier, feasibility-aware path generators (more below).
So please look over this ticket or comment here to give your thoughts. I am currently considering the options below based on the following criteria:
- I’d like to be able to run “better” planners: ones that search, sample, and optimize their way to a feasible path based on a robot’s characteristics (differential, omni, ackermann, other-weird-things, legged, etc). These types of planners aren’t going to run in <100 ms like NavFn, and constantly replanning would be a high CPU cost for minimal gain if the paths only change slightly or the total cost is similar.
- I’d like to consider the behavior of multi-robot systems. Constant replanning by individual agents makes the distributed planning problem much harder, since you can’t know where each robot is going to be without yourself replanning at 1 Hz too. Having a defined plan that you stick to until there’s a good reason not to helps with this (obviously not a problem if you’re planning all robots together in the cloud).
- I’d like to make sure that we can replan paths “immediately” as required, rather than waiting for the next replanning cycle or a recovery action to take hold. I define “immediately” as a reasonable timeframe, not exceeding 100 ms after a problem is detectable.
In that direction, my suggested paths forward:
A) Start with a plan and the cost of that plan. At some rate (let’s say 10 Hz, why not), we walk the plan and check whether it has been invalidated by lethal obstacles entering it; if so, trigger a replan. Otherwise, check the path’s cost: if it’s probabilistically similar, by some common metric, to the original cost and the last check, we stick with it; else, replan. This allows us to replan only when actually required, because something has substantially changed in our understanding of the environment or in the environment itself.
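The check loop in option A could be sketched roughly as follows. This is a minimal illustration, not the actual Nav2 API: the function names, the dict-based costmap, and the 10% cost-similarity threshold are all assumptions made up for the example.

```python
LETHAL_COST = 254        # assumed lethal threshold, as in a ROS costmap
COST_SIMILARITY = 1.10   # assumed metric: replan if cost grew by >10%

def path_cost(path, costmap):
    """Sum the costmap cost along the path; None if any pose is lethal."""
    total = 0
    for xy in path:
        c = costmap.get(xy, 0)
        if c >= LETHAL_COST:
            return None          # a lethal obstacle entered the path
        total += c
    return total

def should_replan(path, costmap, last_cost):
    """Option A check, run at ~10 Hz: (replan?, updated cost)."""
    cost = path_cost(path, costmap)
    if cost is None:
        return True, last_cost   # path invalidated: trigger replan
    if last_cost and cost > last_cost * COST_SIMILARITY:
        return True, cost        # cost substantially changed: replan
    return False, cost           # environment similar enough: stick with it
```

The point is that the 10 Hz loop is cheap (a pass over the existing path), so an expensive planner only runs when one of the two triggers actually fires.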
B) Replan constantly at some rate (which you may set to 0, if you like) and also replan as the immediate first recovery behavior in the behavior tree. This would allow you to keep the replanning rate, or turn it off and replan only in failure modes.
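Option B’s trigger logic is simpler, roughly the following. Again a hypothetical sketch, not any existing API; the names are invented for illustration.

```python
def should_replan_b(now, last_replan, rate_hz, follow_failed):
    """Option B: replan on failure, else on a fixed timer (0 disables it)."""
    if follow_failed:
        return True          # replan as the first recovery behavior
    if rate_hz <= 0:
        return False         # periodic replanning turned off
    return (now - last_replan) >= 1.0 / rate_hz
```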
I prefer option A, but I wanted feedback from stakeholders on what they think is the right choice, or whether to leave things how they are. Option B is similar to what the navigation stack in ROS 1 enabled in its FSM. The reason I think it may be less ideal is that with heavier planners that take 1–2 seconds to generate a path, you pause 1–2 seconds after a failure, rather than continuing because a rising path cost could have triggered a replan while the path was still valid but less ideal. Also, constant replanning doesn’t help with my bullet #2, because we’re still replanning at a fairly consistent rate. It does, however, replan paths “immediately” on failure, so it meets criterion #3.
Other relevant conversations:
Edit: tl;dr, I would like to change the way we recompute paths, but since I think that’s a pretty substantial change to how the system currently works, it’s worth a public discussion and potentially raising issues, new ideas, and consensus.