
Interesting topics for student assignments

We have some opportunities for students to work on navigation topics. As a research group, we are currently transitioning from ROS to ROS 2, and our current projects offer several challenges that students could work on. However, I am interested in which topics or challenges are interesting from the ROS 2 / Navigation 2 community's perspective, i.e., which topics should be tackled?

2 Likes

The nav2_navfn_planner README (https://github.com/ros-planning/navigation2/tree/master/nav2_navfn_planner) mentions these issues:

  • Implement additional planners based on optimal control, potential field or other graph search algorithms that require transformation of the world model to other representations (topological, tree map, etc.) to confirm sufficient generalization. Issue #225
  • Implement planners for non-holonomic robots. Issue #225

Although the issue is closed, it might be worth picking up. It is interesting for us because we have additional constraints on planning (combining free navigation and line following in a single plan, depending on extra information in the map). A minimal sketch of one such graph-search planner follows below.
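For anyone picking this up, here is a minimal, self-contained sketch of the graph-search side of that issue: plain A* over an 8-connected occupancy grid. It is deliberately not tied to the Nav2 planner plugin API (whose signatures vary between releases); `grid`, `start`, and `goal` are hypothetical inputs.

```python
import heapq
import itertools
import math

def astar(grid, start, goal):
    """Plain A* on an 8-connected occupancy grid.

    grid:  2D list/array, 0 = free, nonzero = occupied
    start, goal: (row, col) tuples
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda a: math.hypot(a[0] - goal[0], a[1] - goal[1])
    tie = itertools.count()  # tiebreaker so the heap never compares parent entries
    open_set = [(h(start), next(tie), 0.0, start, None)]
    parents, g_cost = {}, {start: 0.0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in parents:
            continue                        # already expanded via a cheaper route
        parents[cell] = parent
        if cell == goal:                    # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nbr = (cell[0] + dr, cell[1] + dc)
                if not (0 <= nbr[0] < rows and 0 <= nbr[1] < cols):
                    continue
                if grid[nbr[0]][nbr[1]]:    # occupied cell
                    continue
                ng = g + math.hypot(dr, dc)
                if ng < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = ng
                    heapq.heappush(open_set, (ng + h(nbr), next(tie), ng, nbr, cell))
    return None
```

Extending this toward non-holonomic robots essentially means searching over (x, y, heading) states with feasible motion primitives instead of plain grid moves, which is the direction Hybrid-A* takes.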

  • Constraint-based goal definitions, e.g. ‘Go to within 0.5m of this pose (while looking towards …)’
  • Scaling velocity based on visibility, e.g. slow down when there could be monsters lurking behind a corner, but go fast in clear and open space
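A rough sketch of that second point, assuming the controller caps speed so the robot can always stop within the distance it can currently see to be free (e.g., the shortest relevant lidar ray along the path); the function name and parameter values are made up for illustration:

```python
import math

def clearance_limited_speed(clear_dist, v_max=1.0, a_brake=0.5, margin=0.2):
    """Cap speed so the robot can stop within its visible free distance.

    clear_dist: visible free distance along the path [m]
    v_max:      nominal top speed [m/s]
    a_brake:    comfortable deceleration [m/s^2]
    margin:     safety buffer subtracted from the clearance [m]
    """
    usable = max(0.0, clear_dist - margin)
    # v^2 = 2*a*d -> the fastest speed from which we can still stop in `usable`
    return min(v_max, math.sqrt(2.0 * a_brake * usable))

# Around a blind corner the visible clearance shrinks, so the speed cap drops:
for d in (0.3, 1.0, 3.0, 10.0):
    print(f"clearance {d:4.1f} m -> speed cap {clearance_limited_speed(d):.2f} m/s")
```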
2 Likes

@Loy: the second point especially is very interesting. But I think the path planning should already take that into account. Driving in the center of a road to maximize the distance to corners would let the robot use different safety fields, so its overall speed would be higher.

1 Like

True, but in some applications you want to drive on the right side of the hallway to allow for passing, etc. It is very application-dependent.

Another for the list:

  • combination with MoveIt!: where to drive to in order to pick up and deliver that object (say, a bottle of beer)

@Wilco what did you have in mind? I’d be more than happy to identify a few projects in different areas that could help navigation2 / good demos / additional capabilities in the ecosystem.

Right now we have a few sub-working-groups within the navigation working group on a few topics (and we're always adding more as there's interest and people wanting to do it):

  • Dynamic obstacle integration - radars, AI, costmaps, and planning
  • GPS integrations - using GPS for positioning, fusing GPS with other positioning systems, navigation2 using this information in potentially massive environments that can’t be realistically mapped
  • New algorithms - planners, controllers, recoveries, behavior tree plugins. In the works: Lazy Theta*, Hybrid-A* (could use help from expertise in non-linear optimization), CiLQR
  • Environmental modeling (my bread and butter) - working on completely replacing costmap_2d with a new environmental representation that's flexible enough to work with cost grids, elevation maps, and 3D cost grids (+ sensor-processing algorithms for the above)
  • Visual and 3D SLAM integrations - costmap plugins to use the outputs of these for navigation, as well as formal integrations and support for VSLAM and 3D SLAM vendors (+ making new versions of these where applicable)
  • Testing framework - creating a realistic test framework for stress-testing applications and navigation systems, e.g. not just going from A to B, but intentionally creating obstacles and triggering edge cases to ensure proper functioning. Also increasing unit and integration test coverage (currently at 71% with a goal of 80% by end of 2020)
  • Semantic navigation - using semantic information to aid navigation or complete some goal. Creating standards around formats for ROS semantic information and tooling to work with it (GUI to annotate common features, tools for serving this information and using it in a process like a TF buffer, etc)
  • Localization - replacing AMCL completely with new methods
  • Architecture - (mostly just me, but…) architecting all of the above and designing these to be as flexible, maintainable, and testable as possible. Continuing to explore new capabilities and directions to move the mobile robotics ecosystem forward into the state of the art, and create new variations of academic interest.

I think step 1 is to figure out what your goal is: to produce something high-quality to share with the community, to create technology demonstrations for students to learn how to work with these technologies, or to help with community efforts already underway. I'd always love more people working on demos and integrations; official support / documentation for working with GPS/VSLAM/3D SLAM/T265/etc. has definitely lower knowledge requirements but very high long-term impact.

All of these projects are around the common theme of modernizing the ROS mobile robot ecosystem. I want the ROS mobile robot ecosystem, with Navigation2 at the center, to be the unquestionably most complete and production-ready navigation system in the world, bar none. This is why I'm spending so much time and effort on environmental modeling and planning in particular; these are the key to unlocking the traditional “downfalls” of navigation in ROS: requiring planar environments and circular differential/omnidirectional robot platforms. I want the future of Navigation to be complete: Ackermann robots on hills, legged robots in a forest, etc.

Edit: cough and if anyone likes writing papers, so do I, so get involved and help with something paper worthy cough

4 Likes

At the TechUnited RoboCup@Home team, we've been using a 3D semantic world model (https://github.com/tue-robotics/ed_tutorials) for navigation. Objects have a 3D shape, a type, etc. Together with the constraints I mentioned earlier, it allows us to specify goals like “in room X AND in front of table Y AND within 0.5 m of object Z AND looking at object Z”.

MoveIt! already has a 3D planning scene, though; using something like that, or exposing it for navigation, could be a first step.
Setting up a 3D scene graph with annotated objects that is usable for manipulation, navigation, localisation, object classification, etc., and that is updated by, e.g., an external node calling a service, would be great to have.
E.g., build a map and semantic 3D world model with https://github.com/MIT-SPARK/Kimera-Semantics and keep that model updated while the robot does its task. :heart_eyes:
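To make the constraint-goal idea concrete, here is a hedged sketch that expresses such a goal as a conjunction of predicates over candidate (x, y, yaw) poses; a goal sampler would then keep only candidates that satisfy all of them. All names and numbers are illustrative, not an existing API:

```python
import math

# A goal is a conjunction of predicates over a candidate pose p = (x, y, yaw).
def within(point, radius):
    return lambda p: math.hypot(p[0] - point[0], p[1] - point[1]) <= radius

def looking_at(point, tol=math.radians(15)):
    def pred(p):
        bearing = math.atan2(point[1] - p[1], point[0] - p[0])
        err = (bearing - p[2] + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
        return abs(err) <= tol
    return pred

def in_room(xmin, ymin, xmax, ymax):
    return lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

def satisfies(pose, constraints):
    return all(c(pose) for c in constraints)

# "In room X AND within 0.5 m of object Z AND looking at object Z"
obj_z = (2.0, 3.0)
goal = [in_room(0.0, 0.0, 5.0, 5.0), within(obj_z, 0.5), looking_at(obj_z)]

# Filter sampled candidate poses; a real planner would sample around obj_z.
candidates = [(1.8, 2.9, 0.3), (1.7, 2.8, 0.58), (4.0, 4.0, 1.0)]
print([p for p in candidates if satisfies(p, goal)])
```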

OK /me, take a deep breath and calm down…

@smac At the moment, I am still making up my mind :slight_smile: So I am open to suggestions and ideas. As I am currently defining a master's assignment, this could be in the area of developing something new and contributing it to the community. Bachelor assignments are more in the direction of applying existing techniques, building demonstrators, tuning those solutions, and creating documentation.

The current assignment is part of a project on industrial navigation in which we test the applicability of ROS 2 and Navigation 2 for industrial (custom) robots and vehicles. Based on my current understanding, there is a gap in the area of adding manual constraints to navigation. For example, most companies have different areas in their buildings. In some areas, robots are only allowed on a specified track; if there is an obstacle, they wait and “honk”. In other areas, robots can navigate freely: they can take the shortest path, go around obstacles, etc. As far as we know, there is no solution for that yet. I think this involves both environmental modelling (adding user constraints to existing maps) and algorithms that take these constraints into account.
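One possible (hypothetical) way to express that gap: overlay a manual cost mask on the static map, so that inside “track-only” zones every cell off the allowed track becomes effectively lethal, while free-navigation zones are untouched. A grid planner then follows the track in restricted areas and plans freely elsewhere; whether the robot waits and “honks” or detours around an obstacle could be switched per zone at the behavior level. A toy numpy sketch, with all names invented for illustration:

```python
import numpy as np

FREE, LETHAL = 0, 254          # cost convention similar to costmap_2d

def apply_zone_mask(costmap, zone_mask, track_mask, off_track_cost=LETHAL):
    """Overlay manual navigation constraints on a cost grid.

    costmap:    2D uint8 array of base costs from the static map
    zone_mask:  boolean array, True where 'track-only' rules apply
    track_mask: boolean array, True on cells belonging to the allowed track
    Inside a track-only zone, every cell off the track becomes (near-)lethal,
    so a grid planner follows the track there and plans freely elsewhere.
    """
    out = costmap.copy()
    restricted = zone_mask & ~track_mask
    out[restricted] = np.maximum(out[restricted], off_track_cost)
    return out

# Toy 2 x 8 corridor: the right half is a track-only zone, track on row 0 only.
costmap = np.zeros((2, 8), dtype=np.uint8)
zone = np.zeros((2, 8), dtype=bool); zone[:, 4:] = True
track = np.zeros((2, 8), dtype=bool); track[0, :] = True
print(apply_zone_mask(costmap, zone, track))
```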

Moreover, we have already worked on precision docking. The robot uses the navigation stack to move to an area where it can “see” its goal with the lidar and/or 3D camera. The last part is then performed directly on the sensor data, no longer using the navigation stack; in this way, we could achieve higher precision. However, it might be interesting to integrate this behaviour into the whole stack.
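For reference, the last-meter behaviour described here is often just a pose-servoing loop on the detected dock. A minimal sketch, assuming a (hypothetical) detector that returns the dock position in the robot frame; gains and limits are illustrative:

```python
import math

def docking_cmd(dock_x, dock_y,
                k_lin=0.5, k_ang=1.5, v_max=0.15, w_max=0.5, stop_dist=0.02):
    """One step of a proportional final-approach controller.

    (dock_x, dock_y): dock position in the robot frame, e.g. from a lidar or
    3D-camera detector (a hypothetical perception interface).
    Returns (v, w): forward velocity [m/s] and yaw rate [rad/s].
    """
    dist = math.hypot(dock_x, dock_y)
    if dist < stop_dist:
        return 0.0, 0.0                        # close enough: docked
    heading_err = math.atan2(dock_y, dock_x)   # bearing to the dock
    # Only creep forward when roughly facing the dock; always correct heading.
    v = min(v_max, k_lin * dist) if abs(heading_err) < 0.5 else 0.0
    w = max(-w_max, min(w_max, k_ang * heading_err))
    return v, w

# Example: dock detected 0.4 m ahead and 0.05 m to the left of the robot
print(docking_cmd(0.4, 0.05))   # -> slow forward motion with a small left turn
```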