I’m Sampath. I have experience with autonomous navigation in indoor environments using ROS 2 Navigation (nav2). I’m looking for out-of-the-box packages similar to nav2 that can be used for autonomous navigation, specifically for outdoor robots.
I’ve seen discussions about Polymath Robotics using nav2 on their outdoor robots. Could anyone recommend resources or packages for getting started with autonomous outdoor navigation in ROS 2? Any guidance or references to relevant documentation would be greatly appreciated.
Nav2 has been shown to work for outdoor environments, but the level of complexity required depends on your setup, environment, sensing, and application.
For example:
Localization: You could be using GPS, in which case there's a great Nav2 tutorial for GPS Navigation, provided by Kiwibot, who use Nav2 outdoors today [1]. Alternatively, you could be in a GPS-denied area and need 3D lidar SLAM or V-SLAM, in which case you'll need to evaluate the options appropriate for your situation.
Perception: You could be in a broadly urban environment, where perception of navigable regions is comparatively simple, or in a forested or hilly area with brush and other vegetation, which makes it quite difficult.
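On the localization point above, a minimal sketch of the core idea (not the Kiwibot tutorial's actual code): a GPS fix has to be projected into a local Cartesian frame before Nav2 can plan in it. In a real system, robot_localization's navsat_transform_node handles this; the equirectangular approximation and datum below are illustrative assumptions.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; fine for short-range approximation

def gps_to_local(lat, lon, datum_lat, datum_lon):
    """Project a GPS fix to (x_east, y_north) meters relative to a datum.

    Uses a simple equirectangular approximation, which is reasonable for
    the small areas a mobile robot covers.
    """
    d_lat = math.radians(lat - datum_lat)
    d_lon = math.radians(lon - datum_lon)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(datum_lat))
    y = EARTH_RADIUS_M * d_lat
    return x, y
```

For example, a point 0.001° north of the datum comes out at roughly 111 m in y, which matches the familiar "one degree of latitude is about 111 km" rule of thumb.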
So, you need to evaluate which portions of the problem differ due to your application environment and sensing. I don't think "outdoor" is one space that can be totally binned together: urban vs. natural vs. agricultural settings (and more) all have different perception and/or localization needs, even before talking about higher-level behaviors. Broadly speaking, Nav2 should work fine for an outdoor environment, but you will need to use GPS or bring-your-own localization appropriate for your application. Additionally, you'll need to pre-process sensor data and/or use AI models to distinguish obstacles from navigable space in the cost grid generation that the planners and controllers operate within.
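As an illustration of that pre-processing step, here is a deliberately simple sketch of removing navigable ground from a point cloud before it feeds costmap generation, so only true obstacles mark cells. The flat-ground assumption and the height band are made-up values; real outdoor pipelines typically fit a ground plane or use terrain estimation instead.

```python
def remove_ground(points, ground_z=0.0, tolerance=0.15):
    """Filter a point cloud down to likely obstacles.

    points: iterable of (x, y, z) tuples in the robot's frame.
    Assumes locally flat ground near height ground_z; anything within
    `tolerance` meters of it (or below) is treated as drivable and dropped.
    """
    return [p for p in points if p[2] > ground_z + tolerance]
```

Everything at or near ground height is discarded, so a grass surface no longer appears as a wall of lethal cost in the 2D costmap.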
The overall Nav2 framework is well suited to doing these things easily, with plugin-based perception, planning, and control algorithms, and without tying you to any particular localization modality. But for any specific application, some customization will of course be required to get the behavior you want or need from your robot system. Rather than replacing a default planner with a new one to optimize your behavior, you'll more likely want to replace the default perception algorithms, which make 2D assumptions (or pre-process the data going into them to remove the navigable ground), for your environment.
I hope that helps! In addition, there are other technologies you might want to be aware of, like grid maps, terrain estimation, and semantic segmentation, if you haven't run into them before; they are likely useful or solve some portion of the perception problem for you.
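To make the terrain-estimation idea concrete, here is a hedged sketch of one small piece of it: computing per-cell slope from a 2D elevation grid by central differences and thresholding it into a traversability mask. The grid resolution and maximum slope are illustrative assumptions, not values from any particular package.

```python
import math

def slope_mask(elevation, resolution=0.5, max_slope_rad=0.35):
    """Mark which cells of an elevation grid are traversable by slope.

    elevation: 2D list of heights in meters, one value per grid cell.
    resolution: cell size in meters (assumed value for illustration).
    max_slope_rad: steepest slope the robot can climb (assumed ~20 deg).
    Returns a same-shaped 2D list of booleans (True = traversable).
    """
    rows, cols = len(elevation), len(elevation[0])
    mask = [[True] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Central differences, clamped at the grid edges.
            dzdx = (elevation[r][min(c + 1, cols - 1)]
                    - elevation[r][max(c - 1, 0)]) / (2 * resolution)
            dzdy = (elevation[min(r + 1, rows - 1)][c]
                    - elevation[max(r - 1, 0)][c]) / (2 * resolution)
            slope = math.atan(math.hypot(dzdx, dzdy))
            mask[r][c] = slope <= max_slope_rad
    return mask
```

A flat grid comes back fully traversable, while a steep ramp is rejected; in a real stack you'd feed such a mask into costmap generation alongside the obstacle layer.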
Not imminently, but if resources were available (i.e., professional engineering time contributions, financial interest to hire or justify large blocks of priority time, etc.), it could be moved to the top of the queue and progress made in the immediate time horizon. It's of course on my long-term roadmap, but there are other things ahead of it.
Much of open source is pushed forward either by (1) someone with the need or interest, or (2) an organization providing financial support to enable something they need. In the absence of that, it's worked on when it reaches the top of the queue.