ROS2 Navigation - Input requested

Hello ROS and ROS2 users and developers! I have been in discussion with David Lu and have reached out to OSRF to start an effort to develop the ROS2 Navigation stack. We are currently investigating the needs for the next-generation Navigation stack in ROS2.

I’d like to invite you to reply with things that:

  1. You like about the current ROS Navigation stack and would like to see kept the same (or equivalent functionality)
  2. Things you think can be improved on and would like to see done differently, i.e. your wish list

I have my own wish list, but I don’t want to seed the discussion; I want to hear other users’ thoughts.

Also, if you are an active community member and want to be involved in the development effort, please let us know that also. We have a small team right now, but desire to do this in the open with community involvement.

11 Likes

How can one get involved in the development?

Happy to see this kickoff. :slight_smile:

Based on my experience using the ROS Navigation Stack, I would like to add the items below to the wish list:

  1. Make the plugin mechanism more flexible, especially for recovery plugins.
  2. Support multi-threading and heterogeneous computing.
  3. Adopt AI gene (e.g. reinforcement learning) into path planning and collision avoidance.
  4. Support more map types.
  5. Support 3D path planning and collision avoidance (CA).

Thanks for the feedback; a couple of questions.

  1. I think you mean AI Gym, right? If so, are you aware of the gym-gazebo project? I have been using it for some RL work in this space. I believe it could be ported to ROS2 fairly easily also.
  2. What map types are you thinking of in particular? Google Maps? Non-2D occupancy grid maps?
  3. 3D is on my wish list too, for applications like drones.

Peter, I sent you an email; we can talk offline.

This is a rather ambitious list, and perhaps some of the navigation tasks applicable to these map types may be too domain-specific, but I’ll just float some far-out ideas here:

  • Semantically oriented Maps
    • Indexed Points of Interest
      • rendezvous points
      • moving goals
    • Labeled region boundaries
      • property borders
      • tolls or crossing costs
      • exclusion zones
    • Annotated Affordances
      • Doors, Elevators, Appliances, Chargers
      • Departments, Faculties
  • Vector Maps
    • Floor plans or 3D scale models
      • Google maps/earth
      • Architectural blueprints
    • Roadways maps
      • Turning lanes, intersections, crosswalks, etc
      • Congestion, Traffic density
  • Geo Maps
    • Topographic
      • Elevation and grade
      • Underwater terrain
    • Weather
      • wind and tide velocities
      • Dynamic time series forecasts
    • Approximate at scale

I suppose I’d like to see navigation planners that could interoperate with map formats that are more memory-efficient, compressible, dynamic, and human-relatable, i.e. less purely metric-based than voxels or occupancy grids. I’d also like to see ROS navigation planners generalize beyond the classic 2.5D mobile robot on a planar workspace, or perhaps appropriate other environment data in a map as navigational heuristics, e.g. for packbots, quadrotors, and ROVs that climb, fly or swim in 6DoF.

3 Likes

Any dedicated Discourse/Slack or other means of getting involved?

One of the main frustrations I experienced with the ROS nav stack is the lack of flexibility (e.g. move_base’s inner state machine), which eventually got partially addressed by the community later on - e.g. move_base_flex lets you use the state machine of your choice under the hood.
Similarly, as it has been mentioned already, the possibility to use other types of map representation would be awesome.

1 Like

I share a similar desire to extend the typical map layers into more semantic meanings. I would add WiFi, BLE beacons, and other RF landmarks to the annotated affordances idea mentioned by @ruffsl.

I haven’t used the navigation stack, but I understood David Lu to say at the last ROSCon that it doesn’t support Ackermann-steered vehicles. That is an important use case.

2 Likes

I agree with @Jeremie; decoupling move_base into separate modules in a state machine would greatly improve its flexibility.

Regarding features, a nice little thing to have IMO would be the ability to pass the goal tolerance (xy and yaw) in the goal message to move_base.

1 Like

Ok, let me see…

Things to keep about the current navigation stack:

  • All the dynamic reconfigure for sure.
  • All the representations in rviz (necessary for tuning as well)
  • I will keep the algorithms that are currently implemented in ROS1, if possible with the same parameters. This will make the transition smooth.

Things that I will change:
Please correct me if I am wrong. Right now, if you want to use your own planner (local or global), you have to use the C++ API. The plugin classes provided are extremely convenient for minimizing the flow of information and speeding up the full Navigation Stack.
However, ROS2 should be more efficient at handling messages, right? Is it fast enough to support the idea of using a node for each planner? Such a node could receive all the information through messages, or is that too much?
If that is the case, we should study the possibility of detaching nav_core from the planners.
Maybe the community would be more comfortable with a publish/subscribe paradigm. It’s just a thought, but this would make everything even more modular, right?
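
For reference, the “C++ API” in question is essentially the nav_core plugin interface from ROS1. A paraphrased sketch (from memory, not copied verbatim) of the global planner side looks roughly like this:

```cpp
// Paraphrased sketch of ROS1's nav_core global planner plugin interface --
// the in-process C++ API a custom planner has to implement today.
#include <string>
#include <vector>
#include <geometry_msgs/PoseStamped.h>
#include <costmap_2d/costmap_2d_ros.h>

namespace nav_core
{
class BaseGlobalPlanner
{
public:
  // Called once with the planner's name and a handle to the costmap.
  virtual void initialize(std::string name,
                          costmap_2d::Costmap2DROS * costmap_ros) = 0;

  // Fill 'plan' with poses from 'start' to 'goal'; return true on success.
  virtual bool makePlan(const geometry_msgs::PoseStamped & start,
                        const geometry_msgs::PoseStamped & goal,
                        std::vector<geometry_msgs::PoseStamped> & plan) = 0;

  virtual ~BaseGlobalPlanner() {}
};
}  // namespace nav_core
```

A node-per-planner design would replace this in-process interface with topics or actions carrying the same start/goal/plan information.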

Is the navigation stack too dependent on the pose estimate, or is that just my impression? If the robot does not start in its expected position, everything goes really crazy. The local planner and the local costmap could be more independent of the positioning system, right? Is this a bad configuration on my side, or is it just like that?

Things that I will add:
A simple system that tracks common problems. E.g.: Hey, I am waiting for a map and it is not coming! Hey, I am sending cmd_vel commands but the position does not change! Things like that.
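
A minimal sketch of what such a check could look like (the “map” topic and the 5-second timeout are just placeholders; none of this is existing nav stack code):

```cpp
// Hypothetical watchdog node: warn if no map has arrived within a timeout.
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <nav_msgs/msg/occupancy_grid.hpp>

class MapWatchdog : public rclcpp::Node
{
public:
  MapWatchdog() : Node("map_watchdog")
  {
    // Remember whether any map message has ever arrived.
    sub_ = create_subscription<nav_msgs::msg::OccupancyGrid>(
      "map", 1,
      [this](nav_msgs::msg::OccupancyGrid::SharedPtr) {map_received_ = true;});

    // Periodically complain until a map shows up.
    timer_ = create_wall_timer(std::chrono::seconds(5), [this]() {
        if (!map_received_) {
          RCLCPP_WARN(get_logger(), "Still waiting for a map and it is not coming!");
        }
      });
  }

private:
  bool map_received_{false};
  rclcpp::Subscription<nav_msgs::msg::OccupancyGrid>::SharedPtr sub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MapWatchdog>());
  rclcpp::shutdown();
  return 0;
}
```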

A simple undocking algorithm; I do not want my robot to move backward unless a recovery behavior says so. A tiny algorithm at the beginning that moves the robot backward x meters could be handy. Moreover, this could be tied to a simple boolean topic where a sensor publishes whether the robot is at the docking station or not. If the robot is at the dock, you execute the undocking before moving; if not, just move as usual.
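
A rough sketch of that idea, assuming a hypothetical "docked" Bool topic, a placeholder backup distance, and open-loop timing just to keep it short:

```cpp
// Hypothetical undocking node: when the dock sensor reports "docked", back
// up a fixed distance before normal navigation takes over.
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/bool.hpp>
#include <geometry_msgs/msg/twist.hpp>

class Undocker : public rclcpp::Node
{
public:
  Undocker() : Node("undocker")
  {
    cmd_pub_ = create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);

    // Assumes the dock sensor reports false once the robot has left the dock.
    dock_sub_ = create_subscription<std_msgs::msg::Bool>(
      "docked", 1, [this](std_msgs::msg::Bool::SharedPtr msg) {
        if (msg->data && remaining_ <= 0.0) {
          remaining_ = undock_distance_;  // start a new undock manoeuvre
        }
      });

    // 10 Hz control loop: back up until the requested distance is covered.
    timer_ = create_wall_timer(std::chrono::milliseconds(100), [this]() {
        if (remaining_ <= 0.0) {
          return;  // not undocking
        }
        geometry_msgs::msg::Twist cmd;
        cmd.linear.x = -speed_;
        cmd_pub_->publish(cmd);
        remaining_ -= speed_ * 0.1;  // open loop: 0.1 s per tick
      });
  }

private:
  double remaining_{0.0};
  const double undock_distance_{0.3};  // "x meters" to back up (placeholder)
  const double speed_{0.1};            // m/s
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr cmd_pub_;
  rclcpp::Subscription<std_msgs::msg::Bool>::SharedPtr dock_sub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<Undocker>());
  rclcpp::shutdown();
  return 0;
}
```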

+1 to the idea of specifying the goal tolerance.

I’m not an expert navigation researcher but I am a user and an experimenter.

For me, the biggest desire is for navstack2 to be a navigation framework built using ROS2 principles rather than a monolithic navigation solution, as it is in ROS1. This means defining the separate components (local planner, global planner, local map provider, global map provider, planner supporters such as costmap providers, recovery behaviours, path follower, and so on), defining the interfaces between them, and specifying those as ROS2 messages, services and actions. We should be able to say “a node (or set of nodes) that provides a global planner compatible with navstack2 should publish/subscribe/use these topics, services and actions and provide these parameters, at a minimum”. The key thing here becomes defining the APIs between the different parts of the navigation stack.

Navstack2 should take advantage of the capabilities of ROS 2 to make things nodes but then keep them in the same process so that message passing is nearly cost-free.
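
For illustration, a minimal sketch (node names made up, and the intra-process option is an assumption about how this would be configured) of composing several components into one process with a single executor:

```cpp
// Sketch: several navigation components composed in one process, so that
// messages between them never leave the process.
#include <memory>
#include <rclcpp/rclcpp.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);

  // Intra-process communication lets in-process publishers hand messages
  // to in-process subscribers without serialization.
  auto options = rclcpp::NodeOptions().use_intra_process_comms(true);

  // Placeholders for real component nodes (global planner, local planner,
  // costmap provider, ...).
  auto global_planner = std::make_shared<rclcpp::Node>("global_planner", options);
  auto local_planner = std::make_shared<rclcpp::Node>("local_planner", options);
  auto costmap = std::make_shared<rclcpp::Node>("costmap_provider", options);

  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(global_planner);
  exec.add_node(local_planner);
  exec.add_node(costmap);
  exec.spin();  // all three components run in this one process

  rclcpp::shutdown();
  return 0;
}
```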

It should also be possible to specify via a configuration file what the global planner is, what the local planner is, and so on. This is to have it not just be a bunch of nodes with defined interfaces, but be more of a framework where it is simple to build a complete navigation stack without feeling like you are plugging things together manually.

If we do this, then we can achieve the goal, frequently stated in this thread, of allowing different planners to be plugged in, etc.

It will also ensure that it is inherently easy to introspect the internal navigation process, because we can intercept all the messages flying around between the different parts.

Building on this, navstack2 should then provide a default configuration that provides an equivalent or better navigation functionality to the ROS1 navigation stack.

As someone who has a strong desire to participate but is something like 1000% over-committed in time, I would also like to see a central place where I can comment on design decisions and implementation choices made when I have time, without losing track of where everything is. Please make a github project so we can make issues and discuss them, if you haven’t already.

I think this is too application-dependent. But it should be really easy to make things like this the default behaviour for your robot in navstack2.

2 Likes

wow, very detailed, and right, it’s what I meant… :grinning:

All, thanks for the input so far, keep it coming!

A few comments.

  1. I plan to create a github.com repo for this (hopefully under the github.com/ros2 namespace, working on that), and will start capturing the wish list items and requirements, as well as documenting design decisions, etc. This thread is just the primer for that.
  2. The many different types of maps requested suggest to me that we need a more abstract data type for maps, one that can represent the many different kinds of maps that might be inputs to the system. I’m not sure what that will be exactly yet, but to me this illustrates that we need to decouple the map type as much as possible from the rest of the system.
  3. I also agree on the use of ROS2 nodes for the low level plug-ins like global and local planning. That will improve decoupling and make it easier to replace those components with other algorithms, and will also ease the debug effort as pointed out before. We can do this using shared memory pointer passing so that the performance overhead is small.

Keep the feedback coming!

2 Likes

To enable autonomous navigation, you have to allow the robot to sense the environment around it, create its map, and perform collision avoidance. In other words, sensor data input to any algorithm is important.
There are various sensor solutions serving this purpose, for example sonar, lidar, millimeter-wave radar, vision, and so on. They serve different preferences, and in real autonomous navigation it is possible to fuse them. I think ROS2 navigation could consider:

  1. General flexibility to smoothly use their output in the different phases of autonomous navigation
  2. How to accommodate their combination, or sensor fusion, while engaging with the ROS2 navigation stack
  3. Consideration for upcoming trends and innovations; for example, vision-based approaches may in the future not require building a map first in order to navigate.

1 Like

Late to the party, but here’s my $0.02. Generally, +1 to all of Geoff Biggs’ comments. I’d like to see:

  • More pluggability in the elements of the nav stack, and the ability to hot-swap implementations.
  • A common API that will allow people to add their own underlying representations.
  • The ability to handle time. I’ve been toying with the idea of a map that changes throughout the day (as corridors become congested, and such), so the ability to integrate this into the system would be important to me as a specific use case.
  • The ability to run more than one algorithm at a time, compare the results and mux them. This is important for localization, where it can be used to compare the performance of two algorithms, or to use different algorithms at different times.
  • Factoring things up as finely as possible. As Geoff pointed out, this should be more lightweight in ROS2
  • Ability to develop in Python or C++.
  • Use floats or doubles as the underlying representation, not some fixed-point hack. Actually, it might be nice to use arbitrary underlying representations (of probabilities).
  • Some more modern algorithms from the literature.
  • Being able to swap maps in and out of core seamlessly, so that I don’t have to keep all of campus in memory at the same time.
  • Multiple floors in a building. Hybrids to let me get from one building to another.

I’d be interested in having our group here at Oregon State help with some of this, depending on where it goes.

cheers

– Bill

2 Likes

Not really a navigation stack user myself, so just passing by, but I was surprised move_base_flex wasn’t / isn’t mentioned more. Only @Jeremie mentioned it once earlier in this thread.

From the presentation at ROSCon and the docs it seems it’s gone into the direction that @gbiggs, @wdsmart and some others sketch:

Move Base Flex (MBF) is a backwards-compatible replacement for move_base. MBF can use existing plugins for move_base, and provides an enhanced version of the planner, controller and recovery plugin ROS interfaces. It exposes action servers for planning, controlling and recovering, providing detailed information of the current state and the plugin’s feedback. An external executive logic can use MBF and its actions to perform smart and flexible navigation strategies. Furthermore, MBF enables the use of other map representations, e.g. meshes or grid_map. This package is a meta package and refers to the Move Base Flex stack packages. The abstract core of MBF – without any binding to a map representation – is represented by the mbf_abstract_nav and the mbf_abstract_core. For navigation on costmaps see mbf_costmap_nav and mbf_costmap_core.

Would seem to be a good idea to get some input from its maintainers (@spuetz et al.).

1 Like

Yes, I was overjoyed when I saw move_base_flex announced at ROSCon last year. I consider it a major step in the direction I want to see the navigation stack go. I haven’t had time to try it out myself yet, but I agree with @gavanderhoorn that any effort to develop a new navstack for ROS2 should consider it the biggest input into design.

Hi all,
We’ve created a repo for the ROS2 Navigation project here:

We’re going to collect all design inputs here, starting with high-level use cases and requirements:
https://github.com/ros-planning/navigation2/tree/master/design

Please submit your use cases and requirements via pull requests so we can have design discussions there.

Thanks,
Matt

1 Like

A bit late, but I figured I would chime in (having been a maintainer of the navigation stack for going on 5 years now).

First, I concur with the several comments about better modularity. I’d almost suggest that the ROS2 navigation stack shouldn’t include any planners in the main repo (as is the case with MoveIt). There have been several newer (and probably better) planners developed – but users assume they should use only the “default” planners. At the same time, maintaining a code base that includes “all the planners” is just not feasible. Having better modularity, and having things like local and global planners exist in other repos, makes it far easier to have more maintainers involved, and for development to proceed quicker (if you don’t like planner X, go write and release planner Y).

While splitting those things out to other repos, I would suggest providing some basic/core code to build planners on. At some point a Willow Garage intern started to refactor base_local_planner in that direction – but it was never really finished.

On the subject of 2d/3d – I think there is a fine balance to walk here. While most research is pushing more in the 3d direction, commercialization tends to push towards cheaper/smaller processing power – and some of the optimization in terms of 2d/2.5d in the navigation stack is important here. While a full 3d mode is awesome, requiring the whole system to always act as 6DoF pose + 3d terrain may make it unusable on smaller platforms like Turtlebot.

With regards to not being monolithic – I think this will be a serious challenge. One of the things that ROS1 does a really poor job of is synchronizing the operations in multiple nodes. I’m not sure how much ROS2 really helps in that regard.

But here’s my most important feedback: we need better testing. One of the reasons we have a hard time merging things in the current navigation stack is that there is just almost NO test code (similar issues with MoveIt). I have spent an enormous amount of time physically testing code in simulation or on real robots to try and be sure something contributed works – only to find out that it actually breaks some particular feature that someone was using. If you’re going to largely overhaul/rewrite things – do it in a test-driven way, and make sure those tests are meaningful so that the system can actually be maintained.
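
To make that concrete, here is a made-up illustration (not existing nav stack code) of the kind of small, focused unit test being asked for, using a hypothetical straight-line plan helper and gtest:

```cpp
// Hypothetical example of a meaningful planner unit test.
#include <gtest/gtest.h>
#include <vector>

struct Pose2D {double x; double y;};

// Hypothetical helper a planner package might expose.
std::vector<Pose2D> straightLinePlan(Pose2D start, Pose2D goal, int steps)
{
  std::vector<Pose2D> plan;
  for (int i = 0; i <= steps; ++i) {
    double t = static_cast<double>(i) / steps;
    plan.push_back({start.x + t * (goal.x - start.x),
                    start.y + t * (goal.y - start.y)});
  }
  return plan;
}

TEST(StraightLinePlan, StartsAndEndsAtRequestedPoses)
{
  auto plan = straightLinePlan({0.0, 0.0}, {1.0, 2.0}, 10);
  ASSERT_EQ(plan.size(), 11u);
  EXPECT_DOUBLE_EQ(plan.front().x, 0.0);
  EXPECT_DOUBLE_EQ(plan.front().y, 0.0);
  EXPECT_DOUBLE_EQ(plan.back().x, 1.0);
  EXPECT_DOUBLE_EQ(plan.back().y, 2.0);
}

int main(int argc, char ** argv)
{
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```

Tests at this level are cheap to run in CI and catch exactly the kind of regressions that currently take hours to find on a real robot or in simulation.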

Has anyone looked at what other ROS2 dependencies are missing? AFAIK, there is no equivalent of actionlib yet (which is probably a prerequisite to actually building most robot applications in ROS2). I’m also not sure of the status of things like parameter management or dynamic reconfigure (highly required for people to actually tune a navigation setup in a reasonable amount of time).

-Fergs

7 Likes