[Nav2] Path Smoother Server, Plugins, Tutorial, New Architectural Diagram, Oh My!

Hi all, it's your friendly neighborhood navigator here.

I’m proud to introduce to you a new Task Server in the Nav2 framework: the Smoother Server. The goal of this server is to take in a path, a costmap, and other pertinent information, and smooth a path generated by any number of planning algorithms so that the controller has a smoother path to track. This leads to better external behavior and smoother turns, and removes artifacts that can occur with infeasible planners such as NavFn.

How do you use this, you ask? I’m glad you asked :slight_smile: I even quickly jotted down a tutorial on how to modify your existing behavior trees to use the new smoother server: Adding a Smoother to a BT — Navigation 2 1.0.0 documentation.
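For a concrete picture before you open the tutorial, the change to an existing behavior tree is roughly this: insert a smoothing step between path computation and path following. The node and port names below are a sketch; verify them against the linked tutorial for your Nav2 version.

```xml
<!-- Sketch only: smoothing inserted between planning and path following.
     Check the linked tutorial for the exact node/port names. -->
<Sequence name="NavigateWithSmoothing">
  <ComputePathToPose goal="{goal}" path="{path}" planner_id="GridBased"/>
  <SmoothPath unsmoothed_path="{path}" smoothed_path="{path}" smoother_id="simple_smoother"/>
  <FollowPath path="{path}" controller_id="FollowPath"/>
</Sequence>
```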

For now, we have a simple smoother available by default within the nav2_smoother package, and shortly we will be merging a refined version of the original optimization-based Smac Planner Ceres smoother, which will respect smoothness, curvature, cost, and distance. It’s a bit more computationally expensive, but when paired with the Smac Planner, it yields impressive results.
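To give a rough idea of what configuring the server looks like, here is a hypothetical parameter snippet. The plugin name, parameter names, and values are illustrative; check the nav2_smoother documentation for the actual defaults.

```yaml
# Illustrative configuration sketch for the Smoother Server
smoother_server:
  ros__parameters:
    smoother_plugins: ["simple_smoother"]      # name referenced by the BT's smoother_id
    simple_smoother:
      plugin: "nav2_smoother::SimpleSmoother"  # illustrative plugin name
      tolerance: 1.0e-10                       # termination criterion for smoothing iterations
      max_its: 1000                            # cap on iterations per smoothing request
```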

To boot, we’ve had some really great work going on in Nav2 for some time, and I think it’s high time for a refresh of our architectural diagrams to represent some of the newer capabilities in Nav2 and better encapsulate all the things you can do with it. So I present to you below our new system-level diagram, which has been updated as of now on our website, navigation.ros.org (including interface types).

You might notice a couple of new things here. Obviously the Smoother Server, but also a few new optional servers that will be merged into the stack in the coming week(s). In particular:

  • Velocity Smoother: Takes commanded velocities and ensures they respect kinematic constraints up to jerk before they are sent to the robot base. It can also be used to up-sample velocities: set it to a far higher rate than your controller runs at, and it will interpolate at the requested frequency.
  • Safety Monitor: A zone-based and/or velocity-based system for ensuring a robot’s commanded velocity will not imminently cause a collision, using raw input sensors to bypass the latency and smoothness requirements of the higher-level navigation stack. This is particularly useful in situations where people may jump in front of the robot, or in assisted teleoperation, to ensure a remote driver does not cause a collision.
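To make the Velocity Smoother’s interpolation idea concrete, here is a minimal, hypothetical sketch (my own simplification, not Nav2’s actual implementation): step the current velocity toward the controller’s commanded target at a high rate while clamping the per-tick change to the acceleration limit.

```python
# Hypothetical 1-D sketch of velocity smoothing with an acceleration limit.
# Function and parameter names are illustrative, not the Nav2 API.

def smooth_step(v_current: float, v_target: float, max_accel: float, dt: float) -> float:
    """Move v_current toward v_target without exceeding max_accel (m/s^2)."""
    max_delta = max_accel * dt          # largest feasible change this timestep
    delta = v_target - v_current
    # Clamp the requested change to the kinematically feasible range
    if delta > max_delta:
        delta = max_delta
    elif delta < -max_delta:
        delta = -max_delta
    return v_current + delta

# The controller commands 0.5 m/s while the robot is at rest; smoothing at
# 100 Hz with a 1.0 m/s^2 limit moves the output 0.01 m/s per tick.
v = 0.0
for _ in range(10):
    v = smooth_step(v, 0.5, max_accel=1.0, dt=0.01)
print(round(v, 3))  # 0.1
```

Running this at a much higher rate than the controller is exactly the up-sampling described above: the smoother fills in feasible intermediate commands between controller updates.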

More will be released on these two features over time, but I’m really happy to see this diagram grow over time. I can’t wait to see what we can add next time we give it a face lift!

Happy smoothing,



Oh! I also buried the lede: we’ve also renamed the Recovery Server to the Behavior Server and are working to add additional non-recovery, generalized behaviors that are useful for navigation systems. The refactor helps us think more broadly about the opportunities and primitives we can add here to make robots smarter, more efficient, and more capable.

If you have any thoughts on new behaviors of utility, please comment below!


Super nice to see the project advancing with new features so fast!
We are super interested in the safety monitor, as we already have some stuff like this internally. We will try to take the time to follow the development and maybe contribute to this specific feature.

1 Like

I suspected as much :wink: Not just with you guys, but every company I’ve worked with or professional robot system I’ve had the pleasure of sticking my nose in has had some custom-rolled version of what I’m proposing and what @amerzlyakov is actually implementing.

It’s always seemed bizarre to me that no one open sourced it; it’s not exactly rocket science, but it is a really necessary feature for safety. If you have a safety sensor (e.g. from SICK), it will often have this kind of feature built in. When paired with their controller blocks, that gives you a closed-loop, hard real-time solution, but that’s not realistic for many robot systems. Safety hardware is expensive and overkill for many uses.

So this work proposes doing the exact same thing, just at the CPU and navigation level. It’s not hard-real-time certified and whatnot, but it’s functional, and something is better than nothing!

The idea is to have 2 major operating modes:

  • Polygons: Establish a set of arbitrary polygons; if there are N direct measurements from sensor sources inside one of these areas, it triggers an action. Examples: Stop now! if too close, or Slow down by 50% if there are workers nearby. You can set up as many polygons with as many custom settings as you like.
  • Velocities: Using the footprint of the robot, it projects a velocity command forward in time N seconds, and if it would collide with any of the sensor source measurements, it scales back the velocity command so that it is always at least N seconds from collision. The result is that you could try to run your robot into a wall at full speed, and it would slow down as it gets close, and keep slowing down until it’s at a crawl just in front of it. This is nice for general safety, but also for assisted teleop / joysticking.

Really nice update over all - looking forward to playing with it!

Regarding the “Safety Monitor” I share the point of view of @mikeferguson from Collision safety node · Issue #1899 · ros-planning/navigation2 · GitHub
While the functionality is actually very nice it remains just that - functional, not safety related. Therefore, I would suggest renaming it to remove any specific reference to the word “safety” so there’s no confusion for future users.

As to your question of why companies (that often have in-house versions of this) haven’t open sourced their implementations, I would hazard a guess that it’s related to my statement above. With the exception of robots that are a combination of small, lightweight, and slow, and can rely on inherent/passive safety, almost every robot company that I know of implements actual safety sensors with the necessary PL/SIL (Performance Level / Safety Integrity Level).
Any time you see, e.g., 3D cameras or other non-safety-rated sensors used on a robot for, e.g., avoiding obstacles in 3D (driving under a table because it looks clear to the robot’s safety laser scanner), the companies often explicitly state in their instruction manuals that this is NOT safety-related functionality. And even though a company could be nice and release some non-safety-related piece of code with a license that removes their liability, I don’t think many see the benefit, and in the worst case it could be bad PR even if there are no liability concerns.

While I am not an expert on US regulations, in the EU the Machinery Directive has a list of categories of machinery for which the certification procedure is much more involved, and specifically on this list are “Protective devices designed to detect the presence of persons” and “Logic units to ensure safety functions”, which is why almost no one bothers to go any direction other than buying certified equipment. Both the US (e.g. ANSI/RIA R15.08-1-2020) and the EU (e.g. ISO 3691-4:2020) have standards for mobile robots that address what PL/SIL is required for different safety-related parts of the control system (SRP/CS), such as motor controllers, brakes, bumpers, safety sensors, and so on.

Just to reiterate: I don’t want to badmouth the functionality, because it is actually very useful for many users ranging from academia to start-ups to businesses, but I would just be careful with the safety wording. One nice use I could see for this functionality is that you can tune the “virtual bumper” (again, this might be a bad name, because in the safety world sensor-based safety zones are sometimes called virtual bumpers :slight_smile: ) to be slightly larger than your real safety scanner warning / stop zones.
If you have ever manually driven a large robot so close to a wall that its safety scanners make it stop, you know how irritating it is to get it back out, and you berate yourself for not driving more carefully! :warning:

Sorry for the long post, but everyone who works with safety gets tired of being asked about the difference between safety and non-safety-related functionality, and one way to help with this in the future is in how you name and talk about things. This is especially relevant as Nav2 becomes more and more ubiquitous!


I don’t want to get too into this, since it’s not exactly on topic, but it’s a worthwhile discussion point.

We do not make any reference to safety standards. I find it difficult to believe that professionals worried about such a thing could confuse a CPU-side, ROS-based node provided by a navigation system, which itself does not claim any compliance, with a system that would meet any particular standard :wink: Reading those safety standards, it’s clear from even a glance that this is not what is meant by them. Further, if you were using a safety sensor, it often comes with such capability built in, with wires for triggering behaviors if the zones are breached, so this piece of software would be redundant.

Some I know do, and many I know do not. Even among those that do use safety lidars, many don’t use the safety-zone or other built-in functionality; they’re mostly just looking for long-range 2D lidars that happen to come with it. You wouldn’t want your Roomba to have a $1,500 SICK lidar, but I’d hope you’d want your autonomous combine to have multiple of them and use the safety features :slight_smile:

The point is to offer something that adds an additional layer of protection against collisions for any and all users, to lower the barrier to entry even further than Nav2 already has. It really depends a lot on the environment and practical need.

My intuition is that you come at this from the perspective of a company that needs to worry about such safety elements, so it’s top of mind and you have detailed knowledge of the safety / regulatory requirements. I don’t feel this is actually a point of confusion for most folks, and for folks like yourself who are acquainted with safety standards, I think you’d agree that one glance would tell you this is not a vendorized solution to those problems. I believe that during the certification process you also need to document compliance, and it would clearly come up in that process that this is not an appropriate solution (for some or all requirements).

I absolutely appreciate the discussion point, and I’m not philosophically opposed to another name, but I think Collision Monitor is descriptive and apt for what it is accomplishing. If there is another word for Collision you think would be a suitable alternative, I’d be happy to hear it!

Actually, maybe that’s the solution itself. I’ve been lazily calling it “Safety Monitor” or “Collision Monitor” and sometimes even “Collision Safety Monitor”.

Maybe just Collision Monitor would be the solution? Just stick with that, K.I.S.S. :slight_smile: Thoughts?


I think we are in agreement, and all the stuff in my previous post was not necessarily aimed at you, but was also meant to help other people understand the broader topic :slight_smile: As I said, I think the functionality will be very handy in a lot of situations, and I just want to avoid the specific word “safety” being mentioned there. You call it Collision Monitor in your last post, and I already like it better than the Safety Monitor from the architecture diagram :+1:

As you said, you don’t make any references to safety standards or anything, but in my work with academia and startups, people often don’t get this separation between safety and non-safety. E.g. some think that you can rely on costmap layers / masks (catch-all terminology from nav1 / nav2) such as keep-out zones and use them for safety functionality to keep a mobile robot away from stairs, which is absolutely not the case.
So by avoiding the mention of “safety” in any node names, I think we are doing everyone a favor, and also potentially avoiding bad PR for Nav2 when people wonder why their forbidden zone didn’t stop their robot from drowning itself :smiley: https://ichef.bbci.co.uk/news/976/cpsprodpb/13D1A/production/_96987118_robo2.jpg

Edit: Didn’t see your last reply before I posted. I also don’t want to completely derail the discussion but just wanted to see if it was possible with a name change at this stage before everyone becomes used to the functionality!


I’ll rename it on Monday in the diagrams and have @amerzlyakov also rename the package accordingly. Thanks for bringing it up!


Great to see this.

Good name; it better represents what it does, detecting collisions, and avoids “safety” in the name.

The collision constraints being monitored would also need to be usable for local planning.

@smac, in the architectural diagram, how are the constraints from collisions used as inputs to both the Collision Monitor and the Controller Server? And how are they tracked at run time by both as the dynamics or operation of the robot change?