Where does your equivalent of this route graph planning live for me to take a look?
The API of the planner can be found here while the implementation is spread throughout these source files. A few notes about this implementation:
Its key feature is the ability to plan motions around moving obstacles
It also reports “traffic dependencies”, e.g. “The robot needs to wait at this waypoint until [other robot X | moving obstacle Y] has moved out of the way”
It can infer based on graph information when certain events need to happen, like opening doors or summoning elevators (although the types of events that are currently supported are hard-coded in the static type system)
As you can imagine, it’s intended for finding motion plans suitable for AGVs, so it generates trajectories for robots that drive in straight lines and come to a complete stop each time they need to turn. However, these plans are still suitable for providing a high-level breadcrumb trail to AMRs, which can then deviate from the straight lines as they see fit.
I’m working on a new composable/modular path planning framework that will not have any of the described limitations, but it’s not quite ready for display yet.
How do you plan to accomplish that with the clearance nodes?
It’s not significantly different from the route graph approach.
A free space motion plan is generated that accounts for moving obstacles (note that other robots are considered moving obstacles for this motion planner).
The motion plan also accounts for static occupancy information provided by sensor data.
The planner identifies “traffic dependencies” within the plan that it generates. From these “traffic dependencies” we can infer checkpoints / clearance nodes along the intended path.
When sensors detect new obstacles that mandate a significant change in the motion plan, then the costmap is updated and the motion planner is re-run. (We can have parameters for buffer space between the robots to minimize the overall multi-robot system’s sensitivity to deviations from the original plan)
When a robot replans, it informs its “dependents” (other robots whose motion plans have a “traffic dependency” on this robot) so that they can consider replanning.
Everything I’ve described above is already being done in Open-RMF. The only new thing would be doing it over an occupancy grid instead of on a route graph.
In the proposal above, I assume that would notionally be handled outside of the Route Server (as your diagram previously suggested)
Correct, I imagine it being some kind of sibling node or sibling plugin.
An aside on design: is this something folks at OR- err- Intrinsic would be interested in collaborating with me on?
As you can imagine I’m not currently in a position to make any firm commitments. The transition from OSRC to Intrinsic is very fresh and I’m still on PTO for a couple more weeks, so I don’t know for certain right now what I’ll be staffed to in January. That being said, I am personally very enthusiastic about collaborating on this, and I will certainly do so if my staffing assignment gives me the opportunity to.
The way we do this now is to iterate through the path’s poses, beginning at the start of the path, until we reach the current position of the robot. Any edges we iterate past, we trim. Any nodes/waypoints we iterate past, we report through feedback. There are probably more optimal ways of doing this. The reason we don’t simply check for proximity is: 1) if we checked for proximity to all nodes every tick, that would not only be computationally intensive for large graphs, but it could also erroneously report having passed a node further along in the graph, especially if there are loops, etc.; 2) if we checked for proximity only to what we think is the next upcoming node, there would be a risk of missing that node (for example, if the local planner deviates enough from the global plan), which would leave us “stuck” waiting forever to pass it.
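For illustration, here is a minimal sketch of that pruning loop. The function and parameter names are mine, not the actual route server API:

```python
import math

def trim_passed(path, node_indices, robot_xy, tol=0.25):
    """Iterate the path's poses from the start until we reach the robot's
    current position. Edges iterated past are trimmed; nodes iterated past
    are reported as feedback.

    path:         list of (x, y) poses along the path, start to goal
    node_indices: sorted indices into `path` that correspond to graph nodes
    robot_xy:     (x, y) current robot position
    tol:          distance (m) at which a path pose counts as "reached"

    Returns (trimmed_path, passed_node_indices).
    """
    cut = 0
    for i, (px, py) in enumerate(path):
        cut = i
        if math.hypot(px - robot_xy[0], py - robot_xy[1]) <= tol:
            break  # reached the robot's current position along the path
    passed = [n for n in node_indices if n < cut]
    return path[cut:], passed
```

Stopping at the *first* pose within tolerance is what keeps a later pass through the same area (a loop in the graph) from being matched prematurely, which is the failure mode of the naive all-nodes proximity check.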
If my understanding of the situation is not totally off, perhaps this feature could also aid in enabling the functionality that @grey is hoping for without having to incorporate “rmf-esque” features into the Nav2 stack.
If we expand the idea to include the ability to specify a sequence of nodes the robot must pass through, even if they are not necessarily consecutive (kind of like the route-server’s parallel to NavigateThroughPoses), perhaps an RMF could implement their own “event handler” type thing one layer above what Nav2 provides. The way I see this working with the example that @grey gave is that this abstracted layer could send the robot through the “checkpoints” up until the first blocked checkpoint (this assumes the checkpoints coincide with graph nodes). This could look like:
If that checkpoint becomes clear while the robot is moving, this layer could then preempt the current goal and send an updated list through that previously-blocked checkpoint up to the next blocked checkpoint:
This is a good feature and point of discussion. Let me follow up with you on this after the new year when I have a plan of attack together and we can have an online discussion about how you do that now and what you’d want from it in Nav2 to see how we can massage it in. This might be a great place for you to collaborate with us as well while we’re building the prototype to develop such a feature!
Yes, fully agreed, this would be great. One issue that we do run into with RMF is inferring the intention of the robot when it starts to deviate from the straight-line paths that we originally intended. The kind of progress updates that you’re describing would be a huge help for reducing fragile assumptions and inferences that currently happen at higher levels in the stack.
I think at first glance this appears to be a viable alternative, but real-world asynchrony issues would make it less smooth than the checkpoint approach that I’ve proposed. I’ve sketched a scenario where a piece-wise path command strategy could have an undesirable backtracking outcome. This is a scenario that we’ve run into when experimenting with different Open-RMF integration strategies.
Admittedly this can be avoided if the new command is always produced inside the same process as the controller server so that there’s no delay between state update and the command that’s being issued, but I’m not sure if that’s what you meant or if that’s even desirable.
The difference with the checkpoint proposal is the controller would know the entire intended path ahead of time so the changes in checkpoints being open can be smoothly incorporated into the outgoing control signal no matter when the information about the checkpoints arrives.
I had not considered this. In the beginning of my previous post I explained how we iterate from the beginning of the path to the pose of the robot to determine which waypoints we already passed, so our system would simply mark that waypoint as having been passed and continue forward in the scenario you described. But does Open-RMF enforce that each waypoint in the path be passed through? Or is there another reason the robot backtracks? (Maybe it has to do with the way certain controllers behave? I have only used RPP, which just directs the robot to the closest pose in the path rather than bringing it to the first pose.) In any of these cases, the only solution I can think of off the top of my head would be for the client to check for close proximity to the upcoming node before sending a new batch.
I’m starting to think about the specifics of a design and wanted to touch base on this point. The clearance logic is an area where I feel the relationships are still murky. While I haven’t formalized the other topics here yet, I can see a path forward for them. I would like to avoid, if at all possible, making the core APIs of the task servers rely on direct communication between each other without using the behavior tree as a structure for passing information deterministically. The speed limits are the (for now) single exception to that rule, largely due to convenience and the fact that it doesn’t modify the core nature of what the task is trying to do (just adjusting a parameter, like one might adjust the maximum speed as a dynamic parameter).
That, to me, precludes the idea of having the clearance information directed at the controller server directly from the Route Server or Event Handler. I think this is better served as a node in the Behavior Tree. When the Route is computed and returned to the behavior tree to communicate to the controller server, we can insert a call to the event handler to process the route request in the BT and store the clearance information in the BT. Then as we execute the task, the feedback updates from the Route Server can be used in conjunction with that clearance information stored in the BT to pause the robot as needed by canceling / pausing / allowing the controller server to stop gracefully.
Or, the route server itself could find the clearance nodes; I’m not very particular about where that logic happens. But I am particular that the Route Server / Event Handler should not be talking to the Controller Server without going through the behavior tree, to ensure deterministic/synchronous communication of key elements. The behavior tree seems to me like the right place for navigation logic like “pause at these various points for more information”. That logic can still be made via a service/action call to a server, implemented as a BT node (as is typical for us), so that logic can still even live in the Route Server / Event Handler, just coordinated for execution by the BT. I don’t see any other place where that would belong naturally. The only other option I see is if the Route Server / Event Handler had a back-room relationship with the controller server to perform some logic that the behavior tree would then be completely unaware of. That seems like a big problem to me, especially if folks customize their BT for their tasks. Imagine the robot is paused at a clearance point for a while waiting for the right of way, and the BT has recovery logic for when the robot is stuck/paused/not moving. The system is going to start thrashing by re-requesting a route, then trying to move, then executing recovery behaviors, then not moving, and so forth. The BT needs to know what’s up at the very least – if not be directly the one ordering the pause.
I’m a pretty good ways through the initial functionality (minus live monitoring stuff, that’s for another phase of development) and revisiting another topic that came up.
On creating a path from the initial / final node to the start / goal poses: How important is it to you that this happens within the Route Server? I could use some feedback here.
If it were to happen instead in the behavior tree, then we could leverage the Planner Server and have that logic pushed up (always good!) with the ability to leverage any/all of the algorithms in the Planner Server that someone may have established for an application.
If we were to do it in the Route Server itself, we could have a very simple straight-line planner (but that would be pretty “dumb”, so to speak) or hardcode a planner instance within the route server (NavFn, probably) to make those connections. If we were to consider adding a plugin interface to allow arbitrary Planner plugins, that’s a bridge far enough, I think, that we should instead use the BT to request those paths from the Planner Server outright, so we don’t have to have multiple costmap instances running concurrently – which is a very large resource drain. I’m also not sure what proportion of users would find the freespace connection useful; that’s also something I’d like to hear feedback on. I’m not sure if this is “everyone wants this” or “some niche applications want it”.
Certainly if that was the plan, I would offer a BT XML which has this all set up for you to have an easy starting point and example. Please let me know!
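For reference, the “very simple straight-line planner” option really could be as small as this – a sketch assuming 2D (x, y) poses, with my own names, and no costmap awareness (hence “dumb”):

```python
import math

def straight_line_connector(start, goal, resolution=0.05):
    """Interpolate poses at a fixed resolution along the straight line from
    a freespace pose to a route node (or vice versa). No collision checking:
    this is the trivial connector, not a real planner."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, int(dist / resolution))
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(n + 1)]
```

The contrast with requesting the path from the Planner Server via the BT is that the latter gets obstacle awareness and any configured algorithm for free, at the cost of the BT plumbing.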
On our particular application (outdoors) it makes more sense to plan between the poses returned by the route server using the BT and the planner server there. Our robots navigate in very large environments which wouldn’t be practically represented on a single big costmap, so we instead use a rolling global costmap and only plan through the goals that fit in that costmap as the robot moves.
I imagine that for planning the path between the poses returned by the route server you would need to fit them all on a single costmap, which could be slow on multi-square-kilometer costmaps; or have the costmap “roll” through the poses, which is what the BT-planner server would do anyway, with the advantage of having real-time sensor data that is not present if only a static layer is used at route time.
In conclusion, from my perspective I would say “some niche applications want it”.
So I’m looking for some feedback on my idea for this to see what others think. I want to find the most robust and general way to find when we achieve a route node. I’m currently thinking:
We maintain the state of the last passed node, the next node, and the current edge
At 100 Hz, we get the current robot pose, check whether we’re within a generous radial window around the next node in the route, and set a flag if so
If the flag is set, we start comparing the current pose’s distance to the node against the last iteration’s
To evaluate when we actually achieve the node, do one of: (1) using the line segment created by the last time step’s pose and the current pose, check whether the node’s closest location on the segment projects orthogonally between the 2 poses, meaning that along the vector of travel we have passed the point on the segment nearest the node; (2) look at the next node and the vector of the edge that connects it to the next-next node; orthogonal to that is the line to cross to be considered “passed” – basically, using the distance vector, we check when the dot product of the next-node-to-next-next-node vector and the distance vector from the robot pose switches sign; (3) simpler yet, if this distance goes from decreasing (driving towards) to increasing (driving away), we trigger. (1) and (3) might be subject to error from state estimation jitter or localization jumps, so perhaps 100 Hz is aggressive, but I’d like the system to be highly responsive. (2) I think would be subject to errors as well, but is not as sensitive except when really close to triggering anyway.
Additionally, if the current time step’s pose is outside of the radius after the flag is set, we consider the node achieved (if the robot just kisses the outside radius)
This doesn’t involve iterating over the route, as your concern #1 describes. If the window to start tracking the change is sufficiently generous, then it wouldn’t cause issue #2 either. The additional refinement stage is what allows us to get a more fine-tuned check than just the radial window, even if the robot deviates from the path a bit. The other option is, as @Mitchell_Adams suggested, waiting until we’ve cleared the node to say we’ve achieved it, but then we’ll always be delayed past the actual node itself.
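To make option (2) concrete, here’s a rough sketch of the sign check under the generous radial window. The names and 2D-point representation are my own; this is not the actual Nav2 implementation:

```python
import math

def node_achieved(robot, node, next_node, radius=0.5):
    """Option (2): once inside a generous radial window around `node`,
    declare the node achieved when the dot product of (robot -> node) with
    the direction of the upcoming edge (node -> next_node) goes non-positive,
    i.e. the robot has crossed the line through `node` orthogonal to the
    next edge."""
    dx, dy = node[0] - robot[0], node[1] - robot[1]
    if math.hypot(dx, dy) > radius:
        return False  # not yet in the tracking window
    ex, ey = next_node[0] - node[0], next_node[1] - node[1]
    # positive while approaching, flips sign once the node is passed
    return dx * ex + dy * ey <= 0.0
```

The radius here is only a pre-condition for running the check at all, so the triggering itself is independent of the window size.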
@grey how does Open-RMF approach this? I can think of another half-dozen geometric solutions, but maybe you can shed some light on what works in your experience and what already exists.
Thanks for throwing out your ideas! Here are my thoughts on the three methods you suggested.
This is a cool idea. We would just have to be sure that there are no cases where this would fail to detect. For example, if a robot were to come just short of the node and then do a sharp rotation in place back the other way to pursue the next node, is there a chance it could miss it? Maybe I’m overthinking it, but I want to try to throw any edge cases out there for consideration.
Rather than having the checkpoint line be orthogonal to the next edge, I think it might work better to have that line lie right in between the vector tangent to the end of the current edge and the vector tangent to the beginning of the next edge. If I’m imagining it correctly, the orthogonal-to-next-edge method would only work if the angle between the current edge and the next is greater than 180 degrees. Here is a diagram to compensate for any poor explaining.
The one pitfall I can see with this still is if the robot does a 360-degree, or nearly 360-degree turn in transitioning from one edge to the next (for example, if a robot goes from node A to node B, and back to node A).
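If it helps the discussion, here’s how I’d sketch the bisector variant (2D points, my own names). Note that the A-to-B-and-back-to-A case degenerates, since the two tangents cancel out, which is exactly the pitfall above:

```python
import math

def bisector_crossed(robot, prev_node, node, next_node):
    """Crossing test using the bisector of the incoming edge direction
    (prev_node -> node) and the outgoing edge direction (node -> next_node),
    instead of a line orthogonal to the next edge alone. The node counts as
    achieved once the robot's offset from `node` projects positively onto
    the bisector direction."""
    def unit(ax, ay):
        n = math.hypot(ax, ay)
        return (ax / n, ay / n)
    ix, iy = unit(node[0] - prev_node[0], node[1] - prev_node[1])
    ox, oy = unit(next_node[0] - node[0], next_node[1] - node[1])
    bx, by = ix + ox, iy + oy  # bisector; (0, 0) if the edges exactly reverse
    rx, ry = robot[0] - node[0], robot[1] - node[1]
    return rx * bx + ry * by >= 0.0
```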
This might be my favorite idea of the three. Like you said, it would be susceptible to jitter and such, but maybe we could filter that out.
All of these ideas assume an edge only passes through the radius at the end of the edge, so something like this would fail
But any wrap around situation that would cause that would be strange for an edge, so I think that is a fair assumption.
(2) is what I ended up implementing, though testing starts next week. Option (1) has some conceptual issues I found after posting. (3) I think is the lowest-quality idea; it has a number of edge cases I can define, but the question becomes “does it matter?” I’d rather do something more exact. Still subject to change or new ideas, but it’s just where I’ve started so I can keep moving.
I don’t quite understand. If you’re approaching Node 1, I’m currently taking the unit distance vector from the robot to Node 1 and dotting it with the unit vector for the direction from Node 1 to Node 2 as the “future” movement direction. Since that is only positive when there is a projection between the vectors (e.g. on the same side of the waypoint axes, with the X axis aligned to Node 1-Node 2), this handles the 360 turnaround perfectly fine. You’d achieve the node the first time you pass it, which would set the next node and make the robot back out and head towards the node’s parent again.
Those are the exact same points. The start of Edge 2 in a route is the same as the end of Edge 1 for a connected route.
It would trigger after the robot went into and then back out of the radial circle in this case, either because it left or because it’s passed the Y axis (if we imagine the node is connected to an edge moving straight up as the X axis). There’s no reason to think that the robot would whip around like that for any particular task. But I suppose if the robot had the ability to have dynamic behavior and an application required near-exact node achievement, that could be conceptually possible, but very, very niche. A better question, though, is how you’d even do that technically. If you have a route / path saying “this node is achieved, move on” when we leave the radial element, why would the robot continue to swing back around? It would start navigating along the path to the next node over the next edge. So I don’t think that situation is technically possible. However, if a user wanted to have that behavior, you’re right that we’d have to change the node achievement logic to make it possible. I generally like to solve problems that actually exist, and I can’t see a reason why this situation would: you want extremely dynamic behavior from the robot, allowing it to deviate from the path enough to find yourself in that situation, while also absolutely requiring near-exact path-following behavior for node achievement, necessitating whipping around.
Edit: But, if you did require that situation and you had a controller that reliably achieves the node exactly (or to a high degree of precision), you could simply make the radius of that circle very small – and then all of that concern goes away! So I don’t think there’s actually an issue at all. Just make the radius appropriately small if the robot will need to come back around.
Essentially the case I was trying to communicate is the following:
In this case, the dot product would be negative. If the robot were to proceed past Node 1, the dot product would go positive, and everything would be fine. But what would stop a local planner from filleting across to the next edge without ever technically going past Node 1? Does the route server somehow enforce that the robot literally moves over each node?
Another edge case that comes to mind is if the angle between the two vectors is close to a right angle, the dot product might flicker between positive and negative as soon as it enters the radius.
Ultimately, I do like the idea, just trying to fill in a couple of gaps in my understanding of it.
I totally agree with your response to my other concern. No need to chase a problem that will likely never occur, and even if it does occur, simply reducing the radius would probably solve it.
Ah… Yes. That is a potential issue. It would basically trigger when in the radius of the node on its approach instead of waiting to have crossed an imaginary line. It would be no worse than just radius checking then - which is not great but probably “fine” for some applications. But having that level of implicit differing behavior makes that solution probably not work. Something different it is!
Maybe I got caught up in the language, but I think I understand what you mean. You’re saying here that we take the line to bisect the vectors of node_last -> node_approaching and node_approaching -> node_next_after_approach? I think that’s definitely worth trying and is probably a good solution to resolve the issues you mention above.
The only obvious concern I have with that is if the local trajectory planner tries to shortcut the path and the radius is not set “sufficiently” large for the allowable dynamic behavior (if not just exact path following, or even so with a sharp low-angle turn). The robot wouldn’t get into the radius to allow the vector refinement to trigger – but I don’t think that’s any different from my previously suggested solution. This is the same as your last “filleting” figure.
Nothing, so if a robot can have behavior where it’s allowed to deviate (which is most of them, except Pure Pursuit), then the radius that enables the refinement check to trigger needs to be sufficiently large to capture that. This is why I’m referring to this as a generous window. If in doubt, make it bigger, because at the end of the day it’s not all that important to make it smaller: the triggering isn’t based on the window size, that’s just a pre-condition to checking the vectors, which gives a better estimate of when the node is truly “passed”. The last node is the only one which will use the radius alone, because there is no future vector – but that’s a special case for which it would be easy to have a parameter with a more restricted window.
I think that’s what we’re discussing the definition of
From the implementation I just wrote, it looks like the only edge case to concern yourself with is the first-node case, where there is no previous edge to bisect.
For consistency of having arbitrary start positions relative to graph start edges, I’m leaning towards doing the same radius-only check for start nodes as we do for end nodes, so that the dot-product sign checks don’t “sometimes” happen and sometimes not (using the future edge vector instead of the bisector), depending on whether the robot is considered already in the spatial domain of the edge when entering the radius. With that said, I can appreciate the inverse argument that it would be nice to have, if possible.
What do you think? I’m easily swayed on this question. I figure it’s rather unlikely for there to be really important 10-20cm-really-matters operations occurring at the very first or last nodes in a route. More likely than not, it’s speed-limitation related, if anything at all. And these would all be triggered slightly earlier, which I figure is actually a benefit for getting a “warm start” on tracking the route from a freespace plan.
@grey or anyone involved in OpenRMF: if I wanted to make a few plugin demos on OpenRMF for triggering some action at a node (opening a door, calling an elevator, etc.), do you have some example interfaces (action, service, topic) for calling that in a standardized way I can offer? Certainly I can make up my own message, but I figure why not try to use what may already exist.
I’m just now buttoning up the Operations and Operations Manager code, which will trigger actions at various state changes and when nodes or edges request them, and I’m looking to support out of the box quite a number of them which I expect will be popular. Various replanning mechanics (frequency, when blocked, when a service calls) and actions (adjusting speed limits, opening doors, calling elevators or other pieces of infrastructure) are mostly what I have in mind. I need a few more to fully cover the factory code cases – particularly some examples that would be embedded in the graph, like opening a door when the robot arrives in front of it, or something else contextual to a particular node which you wouldn’t want processed at any given status change or on regular evaluation.
As a greater project update, the code is mostly in place at this point, just the last few pieces like new plugins, functional testing, and unit test coverage. It may be some time before the tutorials and behavior tree plugins are made to fully integrate into Nav2, so we’re looking at a mid-March release most likely. The details matter and take some time to get right.
Apologies for the long delay; the past month has been a whirlwind.
I don’t have a strong opinion on whether the clearance logic is handled in the BT or in the controller as long as the behavioral outcomes meet the requirements. It sounds like you have a lot of reasons for preferring the logic to be in the BT, so I have no objection to that. It’s not as obvious to me how the clearance logic would be implemented within the BT rather than being implemented inside the controller itself (will we automatically generate a BT node for each closed waypoint?), but I’ll keep thinking, reading, and listening to get a handle on how it should work.
This is what I originally envisioned, but I do see how that clashes with the principle that the behavior tree should be in charge of all behavioral logic.
The part where I get confused about incorporating the clearance behaviors into the BT is precisely the matter of behavior customization. My current impression is that putting the clearance logic into the BT implies we need to be able to auto-generate the BT because the sequence of clearance nodes can only be generated at runtime and will depend on the route taken. But in my mind that conflicts with the ability of users to specify custom behavior trees. It’s unclear to me how to resolve that conflict, but maybe I just need to learn more about how exactly BTs are used in nav2. At the moment I imagine there would have to be two layers of BT: A higher-level BT handling the clearance (similar to the existing Waypoint Follower module) and a lower-level one handling the lower-level behavior details.
As with other details, I don’t have a strong opinion as long as the behavioral outcomes are good. One challenge I imagine with putting it into the behavior tree is: How do you make the decision about where and how the robot “merges” back into the predefined routes? I’ve created a picture to illustrate my concern:
Whatever module is making the decision about how to merge from free space back onto the predefined routes will need substantial information about both the cost map and the route graph at the same time. In the illustration, if we take the naive approach of finding the edge/node closest (by Euclidean distance) to the robot and merging onto that point, then we end up with the “Unacceptable” solution.
I think if the BT somehow decides this behavior then one of two options are needed:
The BT needs to have access to the full route graph
The BT needs to be able to query for “route elements within a distance of X from point P”
With option (2) the BT could take an iterative approach where it finds the shortest path to the closest element then sees what the real distance is to reach that element through free space. Then it can query for other elements of the graph that are within that real distance and calculate the real distance to those elements. Repeat until there’s no possibility that any unexamined route elements could be preferable. That should allow the BT to find the “acceptable” solutions in the illustration, but I don’t think it will be able to get what I’ve labeled as the optimal solution.
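A sketch of that iterative approach, with `query_within` and `real_distance` as hypothetical interfaces (nothing that exists in Nav2 today). It finds the “acceptable” solution from the illustration, not the optimal one:

```python
def best_merge_element(robot, query_within, real_distance):
    """Option (2) merge search.

    query_within(p, x)  -> [(element, euclidean_dist)] within distance x of p
    real_distance(p, e) -> free-space path length from p to element e

    Seed with the Euclidean-nearest element, then examine every element whose
    straight-line distance is within the best free-space distance found so
    far: no element farther than that (as the crow flies) can beat it, since
    a free-space path is never shorter than the straight line.
    """
    seed = query_within(robot, float("inf"))  # real impl: spatial index/kNN
    if not seed:
        return None
    nearest, _ = min(seed, key=lambda pair: pair[1])
    best, best_cost = nearest, real_distance(robot, nearest)
    for e, _euclid in query_within(robot, best_cost):
        d = real_distance(robot, e)
        if d < best_cost:
            best, best_cost = e, d
    return best
```

In the “Unacceptable” scenario, the Euclidean-nearest element sits behind a wall, so its free-space distance is large; the loop then discovers a farther element with a shorter real path and merges there instead.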
To get the optimal solution I think we’d need to be able to query “route elements whose distance from P plus cost to goal G are less than X” which can be done but gets quite complicated.
So I think the separation of responsibilities is possible but will require some tricky interfaces.
I think connecting the routes to/from freespace will be very close to 100% on “everyone wants this” in the long run. In many cases, even where predefined routes are being used, for the “last mile” (more literally, the last ~5-10 meters) it is very desirable for the mobile robot to navigate nicely to and from free space to do whatever interactions it needs to do. There are certainly highly controlled facilities that do not need that last mile flexibility, but those facilities go to great lengths to design their workflows to not need it. They would be able to save an enormous amount of time and cost if they don’t have to worry about having such tight control of their traffic flow.
Is it a necessary requirement that a single method works for every possible control scheme? Maybe I’m being too naive with this suggestion, but if the controller is in the Pure Pursuit family, couldn’t we say that a route node has been “achieved” once it has been pruned out of the controller’s window of consideration? I would think if we have the controller give feedback on its decision-making then we’d have a very easy way of implementing this.
If we allow the node tracking method to be a plugin then we could choose different methods based on what controller is being used, and then leverage the information that each controller is able to provide.
For Open-RMF it depends on what kind of information each platform is able to provide. The most generic and “reliable” solution has been to issue out one waypoint command at a time and wait until the robot declares it has arrived before issuing the next waypoint command. This is not desirable for the obvious unnecessary delays that it creates, which is a big part of why I’d love to have better integration between Open-RMF and the nav stack.
You can find Open-RMF’s door messages here. The typical procedure is:
Robot arrives in front of door
DoorRequest message is sent out with the robot’s ID as the requester_id, the target door’s name as door_name, and MODE_OPEN for the requested_mode.
The robot should wait while listening for incoming DoorState messages.
When the DoorState for the target door is MODE_OPEN then proceed through the door.
After arriving on the other side of the door, send another request, but with MODE_CLOSED.
Wait until the robot’s requester_id is no longer present in the SupervisorHeartbeat message before navigating away from the door.
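Strung together, the procedure looks roughly like this. The callbacks are hypothetical stand-ins for the real pub/sub plumbing, and while the mode constants mirror the `DoorMode` names described above, the numeric values here are illustrative only:

```python
def door_traversal(robot_id, door_name,
                   send_request, door_mode, heartbeat_ids, drive_through):
    """Sketch of the Open-RMF door procedure described above.

    send_request(requester_id, door_name, mode) -> publish a DoorRequest
    door_mode(door_name) -> latest DoorState mode observed for the door
    heartbeat_ids()      -> requester_ids present in SupervisorHeartbeat
    drive_through()      -> command the robot through the doorway
    """
    MODE_CLOSED, MODE_OPEN = 0, 2  # illustrative values only

    send_request(robot_id, door_name, MODE_OPEN)
    while door_mode(door_name) != MODE_OPEN:
        pass  # in practice: spin/sleep while listening for DoorState

    drive_through()  # robot is now on the other side

    send_request(robot_id, door_name, MODE_CLOSED)
    while robot_id in heartbeat_ids():
        pass  # wait until the supervisor releases our session
```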
Nope. I’ll get back to this in a few weeks when I’m doing the more detailed work on integrating the server with the behavior tree, but this is a non-issue. We can store information on the blackboard (e.g. clearance nodes), and we’ll have state information from the Route Server’s feedback to know what edge we’re on and what nodes we’re approaching or achieving. All the pieces are there; they just need to be assembled. We should easily be able to make a BT node that cross-references these and then tells the controller server to chill out and wait a moment until told to continue (the “told to continue” bit is the most complicated, but I think we have the puzzle pieces for that already).
I figure it’s on the route server and/or the request coming in from the user to decide which route points to start/stop at. From the freespace planner / BT perspective, they’re just given a route and making that last-few-meters connection. They don’t know nor care.
To your point, that is a real issue for the route planner. I have not addressed the case where the nearest graph node is not an appropriate selection given physical infrastructure. However, the interface does allow you to plan via specific start/goal IDs rather than from euclidean coordinates, so in a situation where you would like to plan from A->B through a maze-like environment of physical barriers, having the application select the specific nodes it wants to use could be a solution. I imagine in any real application of route planning, users already know which nodes are the appropriate selections from semantic information and wouldn’t be using freespace points to begin with.
But this is a problem likely worth additional consideration, as a non-zero number of people will want that out of the box. As you mention, a solution could be to use the kNNs (which I already find) and determine which has the shortest real-world path. But that assumes a costmap exists, which is not a good assumption for many route planning use cases. However, if you do have one, it’s a reasonable and hopefully sufficiently fast solution since the paths should be short. If you don’t have one, I don’t know how it would be possible. If we did that, though, we’d get the connecting start->A and B->goal freespace plans “for free” as an output. This sounds like a good additional contribution after the initial V1 release, but I’ll make a note of it.
Discussion on the PP comment; I think the comment below makes it inert, but it’s worth glancing through if interested.
That’s not really an appropriate use of the PP or any other controller’s internal state handling: we can’t know why a path was pruned, what that pruning represents, or whether it’s even a good proxy for achievement. For example, while the PP in Nav2 isn’t built this way, it’s conceptually very possible that you might prune the path after you set a particular node as the lookahead point. If you did that, then you’d be “achieving” nodes a lookahead_distance away from the actual node of interest every time. That’s even how I initially implemented it.
It makes a couple of assumptions that I don’t believe are generally true:
It assumes we’re talking directly through the behavior tree to the controller server over a path type we can define keypoints in. What if instead we are taking the sparse route in, say, GPS coordinates and feeding it into the Waypoint Follower? That’s now 2-4 more levels of indirection that need to support it, if it’s possible at all.
It assumes that the controller server is being used in the first place and that we have knowledge of or control over the algorithms it contains. Nav2 is designed so any server can be drop-in replaceable as long as it fills in our Action interface (or is completely replaced along with the appropriate BT nodes). It’s not uncommon for folks to take just a few Nav2 servers for use in their larger system. Tying a high-level route planner to a low-level controller seems not great.
It assumes the controller algorithm is pruning the path. While most are right now, it’s not a good assumption that all will, or that there’s data to be communicated back (or that pruning means the same thing to us in this situation as it does to the controller).
Certainly we could make the node tracking a plugin, but I don’t see why that would be necessary at the moment. The discussion above has, I think, come together into a good, general, accurate, and robust solution to the node tracking problem. If it turns out that doesn’t work for everyone and it cannot be patched up to do so, we can add a new interface, no problemo. But from 0 deg to 360 deg and from 1 mm to N km in distance, this seems to work well and more accurately than other alternatives. If someone finds holes I can’t plug, I’m happy to open it up.
My thesis is that robots should basically never be stopped except when charging or waiting for a human to do something with their slow, feeble hands. That’s a lot of stopping with Open-RMF; I’m kind of surprised that method can support large fleets?
Thanks, will play around with these for unit testing and probably a plugin or two for the Operations (the name given to the live actions to perform at key node or edge changes).
Edit: Why are none of the door actions services? It seems unnatural to publish to a topic to open a door and then have to sort through a subscription of data to see if one message contains the result you’re looking for. Given that ROS 2 will drop messages on unreliable networks, that seems like an odd choice in case your request or the subsequent response gets lost (at least with a service you’d know, or time out).
Edit 2: I’ve decided to create a generic base class for calling services to perform actions, so users can populate request fields as they desire with the operation’s metadata and system state (and process the results), rather than trying to have one particular implementation. I did make a PoC one for std_srvs/Trigger as a generic demonstration and for use in unit testing, but I expect everyone to create their own for their own particular triggering actions like opening doors and such. I figure the EventHandler that Open-RMF describes could expose the open-door, call-elevator, or other items in a service fashion? Implementing one of these plugins should be <20 lines from that point using the RouteOperationClient<SrvT> class. Will work on the collision monitor re-routing logic tomorrow.
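As a rough illustration of that pattern (this is not the actual C++ RouteOperationClient<SrvT> implementation; the Python names, the dict-based service stand-in, and the metadata keys here are all hypothetical), the base-class-plus-fill-request split might look like:

```python
# Illustrative analogue of the "generic service-calling operation" idea: the
# base class owns the call-and-wait plumbing, subclasses only fill in the
# request from the operation's metadata and system state. All names are
# hypothetical; a real version would wrap a ROS 2 service client.

class RouteOperationClient:
    """Generic 'call a service when a route operation triggers' base class."""

    def __init__(self, call_service):
        # call_service stands in for a typed service client's synchronous call
        self._call = call_service

    def fill_request(self, metadata, state):
        """Subclasses populate a request from operation metadata and state."""
        raise NotImplementedError

    def process(self, metadata, state):
        request = self.fill_request(metadata, state)
        response = self._call(request)
        return response.get("success", False)

class OpenDoorOperation(RouteOperationClient):
    # e.g. triggered at a node whose metadata names the door to open
    def fill_request(self, metadata, state):
        return {"door_name": metadata["door_name"],
                "requester_id": state["robot_id"]}
```

The point of the split is that a concrete operation like OpenDoorOperation stays tiny, which is roughly the “<20 lines” claim above.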
This brings up another case I think we ought to consider. If the robot’s initial position is somewhere between the start node and the second node (but still close to the edge), I think most people would just want the robot to merge into the closest point of the edge, and that’s how most controllers would handle this situation. But if the robot never enters the radius of the first node in this case, we would never detect it as having passed the first node.
Also noted, also sounds like a great contribution after the initial work is completed. I’d like to get the broad strokes done first
I thought about the complementary issue last night and think it’s actually rather easy to implement, if a costmap or similar structure is provided:
We check whether we have a world model we can use. If not, skip. Many situations might not have one, as the Route Server may be used to fully replace a costmap model. But many situations may still have one, so let’s use it.
If we do, use the kNNs to find the nearest N points to the robot, rather than just the nearest one as we do today. This is a mere settings change from 1 to N.
For the nearest neighbor, perform a Line of Sight check to make sure there’s a collision-free path from the robot position to the node. That doesn’t handle every situation, but it’s a nice pre-test so we can skip the rest when the problem is trivially solvable, as it likely is in a large number of cases.
If we cannot see the nearest node via LoS, we perform a Breadth First Search from the robot pose until we find one of the NNs. This lets one planning run expand to all of the candidate nodes, and since this is likely a small region it should be quick work. Select the first node found as the starting node.
We already have utilities for LoS checks in Nav2, and a BFS algorithm would be extremely derivative of other work in Nav2, so overall that feature could be implemented, fully tested, in a day or two. That would make sure we select a point that is close both distance-wise and practically speaking, even in a maze of physical infrastructure.
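A minimal sketch of that selection logic, assuming a boolean occupancy grid and candidate nodes already sorted nearest-first (both assumptions for illustration, not the actual Nav2 utilities):

```python
# Sketch of the start-node selection above: try line-of-sight to the nearest
# kNN candidate first, then fall back to a breadth-first search that stops at
# the first candidate reached. Grid cells are True/1 when occupied.
from collections import deque

def line_of_sight(grid, a, b):
    """Bresenham walk from cell a to cell b; True if no occupied cell."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        if grid[y0][x0]:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def select_start_node(grid, robot, candidates):
    """candidates: kNN node cells, nearest first (the 1 -> N settings change)."""
    if line_of_sight(grid, robot, candidates[0]):
        return candidates[0]  # trivial case: nearest node is directly reachable
    targets, seen, queue = set(candidates), {robot}, deque([robot])
    while queue:  # one BFS expansion covers all candidates at once
        x, y = queue.popleft()
        if (x, y) in targets:
            return (x, y)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                    and not grid[ny][nx] and (nx, ny) not in seen:
                seen.add((nx, ny)); queue.append((nx, ny))
    return None  # no candidate reachable from the robot pose
```

Because the BFS terminates at whichever candidate it reaches first, a nearer-in-euclidean-terms node hidden behind a barrier loses to a farther node that is practically reachable, which is exactly the maze case above.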
For the second problem you bring up, we could have a post-planning process that prunes the route if we’re already past the first node. Probably much easier to implement (“hey robot pose, are you between the two? Prune the first.”). I acknowledge that the “iterate through the path up to the point closest to the robot pose” method would handle this quite naturally, but that comes at the expense of more ongoing compute in the main loops, instead of a single check up front on each rerouting request.
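That one-shot “are you between the two?” check can be a projection onto the first edge. This sketch is illustrative only; the lateral-distance tolerance is a made-up parameter, not anything from the actual implementation:

```python
# Post-planning prune: project the robot pose onto the first edge and drop
# the first node if the projection lands strictly between the two endpoints.
# max_lateral (how far off the edge still counts as "on" it) is illustrative.

def maybe_prune_first(route, robot, max_lateral=1.0):
    """route: list of (x, y) nodes. Returns route, possibly without node 0."""
    (ax, ay), (bx, by) = route[0], route[1]
    ex, ey = bx - ax, by - ay                  # edge vector A -> B
    rx, ry = robot[0] - ax, robot[1] - ay      # robot pose relative to A
    length_sq = ex * ex + ey * ey
    t = (rx * ex + ry * ey) / length_sq        # projection parameter along edge
    # perpendicular distance from the edge line, to confirm we're near it
    lateral = abs(rx * ey - ry * ex) / length_sq ** 0.5
    if 0.0 < t < 1.0 and lateral <= max_lateral:
        return route[1:]                       # already between nodes 0 and 1
    return route
```

Since it runs once per rerouting request rather than every control loop, it keeps the steady-state tracking loop free of extra work.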
Keep’m coming! Tell me all the issues I’ve overlooked so far in my initial development process! These are great. Always better to fix’m before users have to experience’m. These kinds of things I’d find while doing my initial testing of the full system which I haven’t yet gotten to (been trying to get all the spanning set of plugins written and tested first so I can use them all in system testing).