I haven’t used the navigation stack, but I understood David Lu to say at the last ROSCon that it doesn’t support Ackermann-steered vehicles. That is an important use case.
I agree with @Jeremie: decoupling move_base into separate modules coordinated by a state machine would greatly improve its flexibility.
Regarding features, a nice little thing to have, IMO, would be the ability to pass the goal tolerance (xy and yaw) in the goal message to move_base.
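A per-goal tolerance could be as simple as extra fields on the goal plus a check in the controller. A minimal sketch in plain Python (all field names hypothetical, not an existing move_base interface):

```python
import math
from dataclasses import dataclass

@dataclass
class NavGoal:
    # Hypothetical goal message carrying its own tolerances,
    # instead of a single stack-wide parameter.
    x: float
    y: float
    yaw: float
    xy_tolerance: float = 0.25   # metres
    yaw_tolerance: float = 0.1   # radians

def goal_reached(goal, x, y, yaw):
    """True when the current pose is within this goal's own tolerances."""
    dist = math.hypot(goal.x - x, goal.y - y)
    yaw_err = abs(math.atan2(math.sin(goal.yaw - yaw),
                             math.cos(goal.yaw - yaw)))
    return dist <= goal.xy_tolerance and yaw_err <= goal.yaw_tolerance
```

The point is only that the tolerance travels with the goal, so a "dock precisely" goal and a "roughly reach the hallway" goal can coexist without reconfiguring the planner.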
Ok, let me see…
Things to keep about the current navigation stack:
- All the dynamic reconfigure for sure.
- All the representations in rviz (necessary for tuning as well).
- I would keep the algorithms that are currently implemented in ROS1, if possible with the same parameters. This would make the transition smooth.
Things that I will change:
Please correct me if I am wrong: right now, if you want to use your own planner (local or global), you have to use the C++ API. The plugin class provided there is extremely convenient for minimizing the flow of information and speeding up the full navigation stack.
However, ROS2 should be more efficient at handling messages, right? Is it fast enough to support using a node for each planner? Such a node would receive all of its information through messages; is that too much overhead?
If it is fast enough, we should study the possibility of detaching nav_core from the planners.
Maybe the community would be more comfortable with a publish/subscribe paradigm. It is just a thought, but this would make everything even more modular, right?
Is the navigation stack too dependent on the position estimate, or is that just my impression? If the robot does not start at its expected position, everything goes really crazy. The local planner and the local costmap could be more independent of the positioning system, right? Or is my navigation just badly configured?
Things that I will add:
A simple system that tracks common problems, e.g.: "Hey, I am waiting for a map and it is not arriving!" or "Hey, I am sending cmd_vel commands but the position does not change!" Things like that.
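Such a health monitor could be a small set of timed expectations: "after X seconds I should have seen a map", "if I am commanding velocity, odometry should be arriving". A rough stdlib-only sketch (all names hypothetical, not a real ROS API):

```python
import time

class NavWatchdog:
    """Tracks simple expectations and reports human-readable problems."""

    def __init__(self, map_timeout=5.0):
        self.start = time.monotonic()
        self.map_timeout = map_timeout
        self.map_received = False
        self.last_cmd_vel = None      # (timestamp, commanded linear speed)
        self.last_odom_pose = None

    # These would be subscriber callbacks in a real node.
    def on_map(self):
        self.map_received = True

    def on_cmd_vel(self, linear):
        self.last_cmd_vel = (time.monotonic(), linear)

    def on_odom(self, x, y):
        self.last_odom_pose = (x, y)

    def problems(self):
        """Return a list of plain-language complaints, empty if healthy."""
        msgs = []
        if (not self.map_received
                and time.monotonic() - self.start > self.map_timeout):
            msgs.append("I am waiting for a map and it is not arriving!")
        if (self.last_cmd_vel and self.last_cmd_vel[1] > 0.0
                and self.last_odom_pose is None):
            msgs.append("I am sending cmd_vel but the position does not change!")
        return msgs
```

A periodic timer could call `problems()` and publish the results on a diagnostics topic.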
A simple undocking algorithm. I do not want my robot to move backward unless a recovery behavior specifies so, but a tiny algorithm at the beginning that moves the robot backward x meters could be handy. Moreover, this could be tied to a simple boolean topic on which a sensor publishes whether the robot is at the docking station. If the robot is at the dock, execute the undocking before moving; if not, just move as usual.
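The undock-then-navigate logic amounts to a tiny two-state machine in front of normal navigation. A sketch (all names hypothetical; `at_dock` would come from the boolean sensor topic described above):

```python
from enum import Enum, auto

class NavState(Enum):
    IDLE = auto()
    UNDOCKING = auto()
    NAVIGATING = auto()

class DockAwareNavigator:
    """Backs out of the dock before normal navigation starts.

    send_backward / send_goal are hypothetical callbacks that
    command the base; at_dock is fed from a boolean sensor topic.
    """

    def __init__(self, send_backward, send_goal, undock_distance=0.3):
        self.send_backward = send_backward
        self.send_goal = send_goal
        self.undock_distance = undock_distance
        self.state = NavState.IDLE
        self._pending = None

    def start(self, goal, at_dock):
        self._pending = goal
        if at_dock:
            self.state = NavState.UNDOCKING
            self.send_backward(self.undock_distance)  # straight back x metres
        else:
            self.state = NavState.NAVIGATING
            self.send_goal(goal)

    def undock_done(self):
        # Called when the backward motion completes.
        self.state = NavState.NAVIGATING
        self.send_goal(self._pending)
```

Because the undock step is just a state in front of the goal dispatch, it never interferes with navigation when the dock sensor reports false.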
+1 to the idea of specifying the goal tolerance.
I’m not an expert navigation researcher but I am a user and an experimenter.
For me, the biggest desire is for navstack2 to be a navigation framework built using ROS2 principles rather than a monolithic navigation solution, as it is in ROS1. This means defining the separate components (local planner, global planner, local map provider, global map provider, planner supporters such as costmap providers, rescuers, path follower, and so on), defining the interfaces between them, and specifying those as ROS2 messages, services and actions. We should be able to say “a node (or set of nodes) that provides a global planner compatible with navstack2 should publish/subscribe/use these topics, services and actions and provide these parameters, at a minimum”. The key thing here becomes defining the APIs between the different parts of the navigation stack.
Navstack2 should take advantage of the capabilities of ROS 2 to make things nodes but then keep them in the same process so that message passing is nearly cost-free.
It should also be possible to specify via a configuration file what the global planner is, what the local planner is, and so on. The goal is to have not just a bunch of nodes with defined interfaces, but more of a framework where it is simple to build a complete navigation stack without feeling like you are plugging things together manually.
If we do this, then we can achieve the goal of allowing different planners to be plugged in, etc. that is frequently stated in this thread.
It will also ensure that it is inherently easy to introspect the internal navigation process, because we can intercept all the messages flying around between the different parts.
Building on this, navstack2 should then provide a default configuration that provides an equivalent or better navigation functionality to the ROS1 navigation stack.
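As a concrete illustration of such an interface contract, a hypothetical action definition for a global planner component might look like the following (every name here is invented for illustration; nothing is an existing navstack2 interface):

```
# ComputePath.action (hypothetical)
# Goal: the pose we want a plan to
geometry_msgs/PoseStamped goal
---
# Result: the planned path, empty on failure
nav_msgs/Path path
---
# Feedback: progress information an executive can introspect
float32 percent_complete
```

Any node implementing this action, whatever algorithm it runs internally, would then be a drop-in global planner for the framework.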
As someone who has a strong desire to participate but is something like 1000% over-committed in time, I would also like to see a central place where I can comment on design decisions and implementation choices made when I have time, without losing track of where everything is. Please make a github project so we can make issues and discuss them, if you haven’t already.
I think this is too application-dependent. But it should be really easy to make things like this the default behaviour for your robot in navstack2.
wow, very detailed, and right, it’s what I meant…
All, thanks for the input so far, keep it coming!
A few comments.
- I plan to create a github.com repo for this (hopefully under the github.com/ros2 namespace, working on that), and will start capturing the wish list items and requirements, as well as documenting design decisions, etc. This thread is just the primer for that.
- The many different types of maps requested tell me that we need a more abstract map data type, one that can represent the many kinds of maps that might be inputs to the system. I’m not sure yet exactly what that will be, but to me it illustrates that we need to decouple the map type as much as possible from the rest of the system.
- I also agree on the use of ROS2 nodes for the low-level plug-ins like global and local planning. That will improve decoupling, make it easier to replace those components with other algorithms, and also ease the debugging effort, as pointed out before. We can do this using shared-memory pointer passing so that the performance overhead is small.
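One way to decouple the map type is a minimal abstract interface that planners query, hiding the concrete representation (occupancy grid, mesh, voxel layer, ...) behind it. A sketch in plain Python (hypothetical API, just to show the shape of the abstraction):

```python
from abc import ABC, abstractmethod

class CostQueryableMap(ABC):
    """Minimal map abstraction: planners ask for cost at a point,
    never for the underlying grid/mesh/voxel structure."""

    @abstractmethod
    def cost_at(self, x, y):
        """Traversal cost at world point (x, y); inf if blocked."""

class OccupancyGridMap(CostQueryableMap):
    """One concrete backend: the classic 2D occupancy grid."""

    def __init__(self, grid, resolution):
        self.grid = grid              # 2D list of occupancy in [0, 100]
        self.resolution = resolution  # metres per cell

    def cost_at(self, x, y):
        i, j = int(y / self.resolution), int(x / self.resolution)
        if not (0 <= i < len(self.grid) and 0 <= j < len(self.grid[0])):
            return float('inf')       # outside the map: treat as blocked
        occ = self.grid[i][j]
        return float('inf') if occ >= 100 else float(occ)
```

A mesh- or elevation-based backend would implement the same `cost_at` contract, and planners written against `CostQueryableMap` would not need to change.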
Keep the feedback coming!
To enable autonomous navigation, the robot has to sense the environment around it in order to build its map and avoid collisions. In other words, the sensor data fed into any algorithm is important.
There are now various sensor solutions serving this purpose, for example sonar, lidar, millimeter-wave radar, and vision. They suit different preferences, and fusing them is common in real autonomous navigation. ROS2 navigation could consider:
- general flexibility to use their output smoothly in the different phases of autonomous navigation
- how to support their combination, or sensor fusion, when working with the ROS2 navigation stack
- openness to upcoming trends and innovations; for example, vision-based approaches may not require building a map before navigating in the future.
Late to the party, but here’s my $0.02. Generally, +1 to all of Geoff Biggs’ comments. I’d like to see:
- More pluggability in the elements of the nav stack, and the ability to hot-swap implementations.
- A common API that will allow people to add their own underlying representations.
- The ability to handle time. I’ve been toying with the idea of a map that changes throughout the day (as corridors become congested, and such), so the ability to integrate this into the system would be important to me as a specific use case.
- The ability to run more than one algorithm at a time, compare the results, and mux them. This is important for localization, where it can be used to compare the performance of two algorithms, or to use different algorithms at different times.
- Factoring things up as finely as possible. As Geoff pointed out, this should be more lightweight in ROS2.
- Ability to develop in Python or C++.
- Use floats or doubles as the underlying representation, not some fixed-point hack. Actually, it might be nice to use arbitrary underlying representations (of probabilities).
- Some more modern algorithms from the literature.
- Being able to swap maps in and out of core seamlessly, so that I don’t have to keep all of campus in memory at the same time.
- Multiple floors in a building. Hybrids to let me get from one building to another.
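The localization mux idea above can be sketched very simply: run several estimators in parallel, compare a confidence measure, and forward one estimate (names hypothetical; this is just the selection step, not a full localization system):

```python
def mux_pose(estimates):
    """Pick the pose from the most confident estimator.

    estimates: dict mapping algorithm name to a tuple of
    (pose, covariance_trace), where a lower trace means a
    tighter (more confident) estimate.
    Returns (winning_name, pose).
    """
    name, (pose, _) = min(estimates.items(), key=lambda kv: kv[1][1])
    return name, pose
```

For example, `mux_pose({"amcl": ((1.0, 2.0, 0.0), 0.4), "slam": ((1.1, 2.0, 0.0), 0.2)})` would select the "slam" estimate. A real mux would also log both estimates so the two algorithms can be compared offline.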
I’d be interested in having our group here at Oregon State help with some of this, depending on where it goes.
Not really a navigation stack user myself, so just passing by, but I was surprised move_base_flex wasn’t / isn’t mentioned more. Only @Jeremie mentioned it once earlier in this thread.
Move Base Flex (MBF) is a backwards-compatible replacement for move_base. MBF can use existing plugins for move_base, and provides an enhanced version of the planner, controller and recovery plugin ROS interfaces. It exposes action servers for planning, controlling and recovering, providing detailed information of the current state and the plugin’s feedback. An external executive logic can use MBF and its actions to perform smart and flexible navigation strategies. Furthermore, MBF enables the use of other map representations, e.g. meshes or grid_maps. This package is a meta-package and refers to the Move Base Flex stack packages. The abstract core of MBF, without any binding to a map representation, is represented by mbf_abstract_nav and mbf_abstract_core. For navigation on costmaps, see mbf_costmap_nav and mbf_costmap_core.
Would seem to be a good idea to get some input from its maintainers (@spuetz et al.).
Yes, I was overjoyed when I saw move_base_flex announced at ROSCon last year. I consider it a major step in the direction I want to see the navigation stack go. I haven’t had time to try it out myself yet, but I agree with @gavanderhoorn that any effort to develop a new navstack for ROS2 should consider it the biggest input into the design.
We’ve created a repo for the ROS2 Navigation project here:
We’re going to collect all design inputs here, starting with high-level use cases and requirements:
Please submit your use cases and requirements via pull requests so we can have design discussions there.
A bit late, but I figured I would chime in (having been a maintainer of the navigation stack for going on 5 years now).
First, I concur with the several comments about better modularity. I’d almost suggest that the ROS2 navigation stack shouldn’t include any planners in the main repo (as is the case with MoveIt). There have been several newer (and probably better) planners developed – but users assume they should use only the “default” planners. At the same time, maintaining a code base that includes “all the planners” is just not feasible. Having better modularity, and having things like local and global planners exist in other repos, makes it far easier to have more maintainers involved, and for development to proceed quicker (if you don’t like planner X, go write and release planner Y).
While splitting those things out to other repos, I would suggest providing some basic/core code to build planners on. At some point a Willow Garage intern started to refactor base_local_planner in that direction – but it was never really finished.
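Such basic/core code could be little more than a shared base class plus common utilities, with every concrete planner living in its own repo. A rough Python sketch of the shape (names hypothetical; ROS1's nav_core base classes are the C++ analogue):

```python
from abc import ABC, abstractmethod

class GlobalPlanner(ABC):
    """Core contract every out-of-tree global planner would implement."""

    @abstractmethod
    def make_plan(self, start, goal, cost_at):
        """Return a list of (x, y) waypoints from start to goal.

        cost_at(x, y) is supplied by the framework, keeping the
        planner decoupled from any concrete map representation.
        """

class StraightLinePlanner(GlobalPlanner):
    """Trivial reference implementation, useful for testing the interface."""

    def make_plan(self, start, goal, cost_at, steps=10):
        sx, sy = start
        gx, gy = goal
        path = [(sx + (gx - sx) * i / steps,
                 sy + (gy - sy) * i / steps) for i in range(steps + 1)]
        # Reject the plan if any waypoint is blocked.
        if any(cost_at(x, y) == float('inf') for x, y in path):
            return []
        return path
```

With the base class and its tests in the core repo, "planner Y" can be written, released, and maintained entirely outside the main navigation stack.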
On the subject of 2d/3d – I think there is a fine balance to walk here. While most research is pushing more in the 3d direction, commercialization tends to push towards cheaper/smaller processing power – and some of the optimization in terms of 2d/2.5d in the navigation stack is important here. While a full 3d mode is awesome, requiring the whole system to always act as 6DoF pose + 3d terrain may make it unusable on smaller platforms like Turtlebot.
With regards to not being monolithic – I think this will be a serious challenge. One of the things that ROS1 does a really poor job of is synchronizing the operations in multiple nodes. I’m not sure how much ROS2 really helps in that regard.
But here’s my most important feedback: we need better testing. One of the reasons we have a hard time merging things in the current navigation stack is that there is just almost NO test code (similar issues with MoveIt). I have spent an enormous amount of time physically testing code in simulation or on real robots to try and be sure something contributed works – only to find out that it actually breaks some particular feature that someone was using. If you’re going to largely overhaul/rewrite things – do it in a test-driven way, and make sure those tests are meaningful so that the system can actually be maintained.
Has anyone looked at what other ROS2 dependencies are missing? AFAIK, there is no equivalent of actionlib yet (which is probably a pre-req to actually building most robot applications in ROS2). I’m also not sure the status of things like parameter management or dynamic reconfigure (highly required for people to actually tune a navigation setup in a reasonable amount of time).
I’m quite late to the party. Sincere apologies.
A simple way to switch from 2D (which can be the default) to nD would be awesome. By n, I mean 2.5 and 3 (and the community might come along with weird n = 1, 1.5 or 4). It might not be sensible to support 3D always, but flipping a switch to turn 2D off and 2.5D or 3D on would be a godsend.
It might not be possible, but I would like an approach similar to how ros_control allows addition and removal of resources and then selects the control system based on free resources. This would allow people to swap between different types of maps (two different 2D maps for different conditions, or one 2D map for an exhibition area and a 2.5D map for the rest of the building) and thus different kinds of planners and localization (or SLAM) modules. During the swap, either both navigation stacks would have to work together during the transition period and reach a consensus, or the robot would have to come to a standstill.
The inspiration for this comes from node lifetimes in ROS2.
Also, on a separate note, I expect the navigation stack will need matrix multiplication somewhere or other. Can Eigen or Boost math libraries be used (or a compile-time option provided to choose the preferred one) instead of rolling a new math library?
Thanks for the input, I was actually going to reach out to you directly and ask if you want to participate. I can see your point that maybe putting the planners in a separate repo might be a good idea for modularity. I also see your point on the difficulty of supporting 3D Navigation. We’ll need to discuss that one more. I’m trying to start first with use cases, and there are both 3D and 2D+elevation ones that need to be discussed.
I really hope you’ll participate and contribute, your expertise is welcome!
@tyagikunal - thanks for the input. I agree we need to look at the math libraries. I believe Eigen is already being used in the ROS Nav stack, but we’ll look at all options.
Great input. Let’s start collecting some of this into the repo. https://github.com/ros-planning/navigation2
Also, we welcome the help from OSU, I’ll follow up with you on that one directly.
ROS2 helps significantly in that regard. I don’t want to say it’s a magic bullet, but synchronising the execution of multiple nodes that make up a single navstack should be easy in ROS2.
Navigating in Lineworld is hard.
We have been using parts of the nav stack for quite a while now; however, we were using our own state machines instead of the default move_base one. Recently we switched to https://github.com/magazino/move_base_flex so we could implement our own high-level logic and switch planners at run time, e.g. a docking planner, wall-following planner, line-following planner, or the default dwa_planner. But I think the best thing about move_base_flex is the action interface. This also allows users to implement planners in Python that are not dependent on the costmap_2d representation. We have been using https://networkx.github.io/ for example to draw graphs in rviz and plan routes along those graphs, following the resulting paths with a line follower using LQR or simple PD controllers.
I am also involved in the RoboCup@Home competition, where we often have to navigate to areas instead of poses. Therefore, we expressed our goals with goal constraints instead of poses ( https://github.com/tue-robotics/cb_base_navigation ). This allowed sending goals like ‘in front of the table’, ‘close to the sofa’, or ‘in the living room’.
Long story short, our wish list:
- More rich goal definition (volumes or constraints) on geometric level
- Action interface for planners / controllers and recovery on geometric level that is independent of planning representation (enables implementing planners / controllers in other languages + own world models + custom top level behavior)
- Action definition for topological planning that relates to geometric plans / goals
- A playground package with a simple simulator that provides an overview of the various planner and controller combinations that people can try.
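Goals like "in front of the table" reduce to accepting any pose that satisfies a predicate, rather than matching one exact pose. A minimal sketch of such a constraint goal (hypothetical classes; cb_base_navigation expresses this with constraint strings over a world model):

```python
import math

class RegionGoal:
    """Goal defined as 'within some radius of a landmark',
    not as a single target pose."""

    def __init__(self, cx, cy, radius):
        self.cx, self.cy, self.radius = cx, cy, radius

    def satisfied(self, x, y):
        """True if the pose (x, y) fulfils the goal constraint."""
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

# "Close to the sofa": any pose within 1 m of the sofa's centre.
near_sofa = RegionGoal(cx=4.0, cy=2.0, radius=1.0)
```

A planner given such a goal is free to pick whichever satisfying pose is cheapest to reach, which is exactly what makes constraint goals more flexible than pose goals.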
I’d also like to vote for the ability to develop the different parts of the navigation stack in Python.
In general, thank you all for thinking about these very important and interesting improvements for the next version!
Several people have mentioned move_base_flex. For another take on the ActionLib interfaces, you might check out http://wiki.ros.org/flexible_navigation . This slightly pre-dates move_base_flex and was designed to work with FlexBE, but shares many of the same motivations to separate the planning and logical control.