Interesting topics for student assignments

We have some opportunities for students to work on navigation topics. As a research group, we are currently transitioning from ROS to ROS 2. Our current projects offer some challenges that students can work on. However, I am interested in which topics or challenges are interesting from the ROS 2 / Navigation 2 community's perspective, i.e. which topics should be tackled?

https://github.com/ros-planning/navigation2/tree/master/nav2_navfn_planner mentions issues:

  • Implement additional planners based on optimal control, potential field or other graph search algorithms that require transformation of the world model to other representations (topological, tree map, etc.) to confirm sufficient generalization. Issue #225
  • Implement planners for non-holonomic robots. Issue #225

Although the issue is closed, it might be interesting to pick it up. For us it is interesting because we have more constraints on planning (combine free navigation and line following in one plan, depending on extra information in the map).

  • Constraint-based goal definitions, e.g. ‘Go to within 0.5m of this pose (while looking towards …)’
  • Scaling velocity based on visibility, e.g. slow down when there could be monsters lurking behind a corner, but go fast in clear and open space
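A constraint-based goal like the first bullet could be expressed as a predicate over candidate poses. A minimal 2D sketch (the function name, signature, and thresholds are illustrative assumptions, not an existing Nav2 API):

```python
import math

def satisfies_goal_constraints(robot_x, robot_y, robot_yaw,
                               target_x, target_y,
                               max_dist=0.5, max_heading_err=0.2):
    """Check whether a candidate robot pose satisfies a constraint-based
    goal of the form 'be within max_dist of the target while looking at it'."""
    dx, dy = target_x - robot_x, target_y - robot_y
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    # Heading error between where the robot looks and the bearing to the
    # target, wrapped to [-pi, pi]
    heading_err = math.atan2(math.sin(bearing - robot_yaw),
                             math.cos(bearing - robot_yaw))
    return dist <= max_dist and abs(heading_err) <= max_heading_err
```

A planner could then sample poses and accept any that satisfies the predicate, rather than being handed one exact goal pose.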

@Loy: especially the second point is very interesting. But I think the path planning should already take that into account. Driving in the center of a road to maximize the distance to corners could allow the robot to use different safety fields, so that its overall speed would be higher.

True, but in some applications, you want to drive on the right side of the hallway to allow for passing etc. Very application-dependent.

Another for the list:

  • combination with MoveIt!: where to drive to in order to pick up and deliver that bottle of beer

@Wilco what did you have in mind? I’d be more than happy to identify a few projects in different areas that could help navigation2 / good demos / additional capabilities in the ecosystem.

Right now we have a few sub-working-groups within the navigation working group on a few topics (and always adding more as there’s interest and people wanting to do it):

  • Dynamic obstacle integration - radars, AI, costmaps, and planning
  • GPS integrations - using GPS for positioning, fusing GPS with other positioning systems, navigation2 using this information in potentially massive environments that can’t be realistically mapped
  • New algorithms - planners, controllers, recoveries, behavior tree plugins. In the works: Lazy Theta*, Hybrid-A* (could use help from expertise in non-linear optimization), CiLQR
  • Environmental modeling (my bread and butter) - working on completely replacing costmap_2d with a new environmental representation that is flexible enough to support cost grids, elevation maps, and 3D cost grids (+ sensor processing algorithms for the above).
  • Visual and 3D SLAM integrations - costmap plugins to use the outputs of these for navigation, as well as formal integrations and support for VSLAM and 3D SLAM vendors (+ making new versions of these if applicable).
  • Testing framework - creating a realistic test framework for stress testing applications and navigation systems. E.g. not just going from A->B, but intentionally creating obstacles and triggering edge cases to ensure proper functioning. Also increasing unit and integration test coverage (currently at 71% with a goal of 80% by end of 2020)
  • Semantic navigation - using semantic information to aid navigation or complete some goal. Creating standards around formats for ROS semantic information and tooling to work with it (GUI to annotate common features, tools for serving this information and using it in a process like a TF buffer, etc)
  • Localization - replacing AMCL completely with new methods
  • Architecture - (mostly just me, but…) architecting all of the above and designing these to be as flexible, maintainable, and testable as possible. Continuing to explore new capabilities and directions to move the mobile robotics ecosystem forward into the state of the art, and create new variations of academic interest.

I think step 1 is to figure out what your goal is: to produce something high quality to share with the community, to create technology demonstrations for students to learn how to work with these technologies, or to help with community efforts already underway. I’d always love more people working on demos and integrations; official support / documentation for working with GPS/VSLAM/3D SLAM/T265/etc. has definitely lower knowledge requirements but very high long-term impact.

All of these projects are around the common theme of modernizing the ROS mobile robot ecosystem. I want the ROS mobile robot ecosystem, with Navigation2 at its center, to be unquestionably the most complete and production-ready navigation system in the world, bar none. This is why I’m spending so much time and effort especially on environmental modeling and planning, as these are the key to unlocking the traditional “downfalls” of navigation in ROS: requiring planar environments with circular differential/omnidirectional robot platforms. I want the future of Navigation to be complete: Ackermann robots on hills, legged robots in a forest, etc.

Edit: cough and if anyone likes writing papers, so do I, so get involved and help with something paper worthy cough

At the TechUnited RoboCup@Home team, we’ve been using a 3D semantic world model (https://github.com/tue-robotics/ed_tutorials) for navigation. Objects have a 3D shape, type, etc. Together with the constraints I mentioned earlier, it allows us to specify goals like “in room X AND in front of table Y AND within 0.5m of object Z AND looking at object Z”.

MoveIt! already has a 3D planning scene though, using something like that or exposing that for navigation could be a first step.
Setting up a 3D scene graph with annotated objects that is usable for both manipulation, navigation, localisation, object classification etc, updated by eg. an external node calling a service would be great to have.
E.g. build a map and semantic 3D world model from https://github.com/MIT-SPARK/Kimera-Semantics and keep that model updated while the robot does its task. :heart_eyes:

OK /me, take a deep breath and calm down…

@smac At the moment, I am trying to make up my mind :slight_smile: So, I am open to suggestions and ideas. As I am currently defining a master’s assignment, this could be in the area of developing something new and contributing it to the community. Bachelor assignments are more in the direction of applying existing techniques, making demonstrators, tuning those solutions, and creating documentation.

The current assignment is part of a project on industrial navigation in which we test the applicability of ROS 2 and Navigation 2 for industrial (custom) robots and vehicles. Based on my current understanding, there is a gap in the area of adding manual constraints to navigation. For example, most companies have different areas in their buildings. In some areas, robots are only allowed on a specified track. If there is an obstacle, they wait and “honk”. In other areas, robots can navigate freely: they can take the shortest path, go around obstacles, etc. As far as we know, there is no solution for that yet. I think this involves both environmental modelling (adding user constraints to existing maps) and algorithms that take these constraints into account.
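One way to encode such per-area rules is a mask grid aligned with the occupancy map, in the spirit of costmap filter masks; the cell values and helper below are assumptions for illustration, not an existing Nav2 format:

```python
import numpy as np

# Zone semantics (illustrative): 0 = free navigation, 100 = track-only
# (robot must stay on the marked line and wait/"honk" at obstacles),
# 255 = keepout.
FREE, TRACK_ONLY, KEEPOUT = 0, 100, 255

def make_zone_mask(height, width, zones):
    """Build a zone mask the same size as the map grid.
    zones: list of ((row0, col0, row1, col1), value) rectangles."""
    mask = np.full((height, width), FREE, dtype=np.uint8)
    for (r0, c0, r1, c1), value in zones:
        mask[r0:r1, c0:c1] = value
    return mask

# Example: a track-only corridor on the left, a keepout block in the middle
mask = make_zone_mask(100, 100, [((0, 0, 100, 20), TRACK_ONLY),
                                 ((40, 40, 60, 60), KEEPOUT)])
```

A planner/controller could then query the mask at the robot's current cell to decide whether free planning is allowed or the track-following behaviour must be used.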

Moreover, we have already worked on precision docking. The robot uses the navigation stack to move to an area where it can “see” its goal with the lidar and/or 3D camera. The last part is then performed directly on the sensor data, no longer using the navigation stack. In this way, we could achieve higher precision. However, it might be interesting to integrate this behaviour into the whole stack.

Currently, we still use Nav2 to position the robot in front of the docking station, and we have a separate node that drives the robot towards (and over) the docking station. This works as long as the initial position at the start of the docking procedure is not too bad. In our group, we have a student working on enhancing the precision docking and integrating it into the Nav2 stack. I was wondering where you would solve the precision docking part in Nav2. Would it be a “precision docking controller” that no longer uses the map or planners, but listens directly to the sensor data? I think it would be better to still have the planners in place, so that we use the sensor data to estimate the pose of the docking station (which is dynamic in our case) as well as possible and use the whole stack to reach the goal. What are your ideas on this topic?

For the precision docking, we want < 1 cm accuracy. As we use quite accurate lidar scanners, we can “see” the docking station (in our case a crate) rather accurately. However, this measurement is in the lidar / base / robot reference frame. As the localization error from AMCL is > 1 cm, we can’t use it to localize the docking station in the map frame. Our plan is now to have a precision controller and a precision planner that do not work in the map frame but in the base / robot frame. As we know the pose of the docking station relative to the robot, we plan and control in the same frame and we don’t have issues with the accuracy of AMCL. Does this make sense? Are we missing something here?
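The frame bookkeeping described above boils down to a single SE(2) composition: express the detected dock pose in the robot's own frame and never touch the map frame. A minimal sketch, where the frame names and mounting offset are illustrative assumptions:

```python
import math

def se2_compose(ax, ay, ath, bx, by, bth):
    """Compose two SE(2) poses: pose B, expressed in frame A, mapped
    into A's parent frame. Yaw is wrapped to [-pi, pi]."""
    cx = ax + bx * math.cos(ath) - by * math.sin(ath)
    cy = ay + bx * math.sin(ath) + by * math.cos(ath)
    cth = math.atan2(math.sin(ath + bth), math.cos(ath + bth))
    return cx, cy, cth

# Dock (crate) pose as detected in the lidar frame, e.g. from scan
# matching, and the static lidar mounting pose in the robot base frame.
dock_in_lidar = (1.8, 0.1, math.pi)   # x, y, yaw (illustrative)
lidar_in_base = (0.25, 0.0, 0.0)      # lidar mounted 25 cm forward (assumed)

# Dock pose in the base frame: everything stays relative to the robot,
# so AMCL's map-frame error never enters the docking loop.
dock_in_base = se2_compose(*lidar_in_base, *dock_in_lidar)
```

In a ROS 2 system this composition would typically be done by tf2 (lookup from the lidar frame to `base_link`), but the principle is the same.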

Hi!
From our experience with lidar-based precise docking, all the points you raise are correct. As you said, for precise docking you have to forget AMCL, since its accuracy is not enough. A good approach is to localize the robot with respect to the docking reference marker, and then execute the planner on that reference, so not in the robot frame but in the marker frame.
Best!

Sounds similar to what the Kobuki used to do, but then with lidar instead of IR.

That’s a pretty typical way in my experience. Get to the general area at a heading where you can “see” the dock with whatever docking sensor is of interest to you, then using direct sensor detections to finely maneuver into it. It probably shouldn’t use maps, but it could use planners, depending on the distance from the dock and what planners you mean. You should definitely dock in odometry frame, so you shouldn’t need to worry about AMCL / localization at all.

I would put this into either a controller server controller or a behavior server behavior (e.g. a recovery server recovery, but soon to be renamed and rescoped to more general behaviors; if you wanted to add it into Nav2, that would be the place). They would have access to essentially the same data in both, but the behavior would allow you to have a separate BT node and .action definition, which could be useful as a non-path-follower controller. It also semantically makes more sense to think of it as a “behavior” than purely a “controller”, even if it is one. Maybe a new BT sub-tree to take care of any additional logic, if not all handled internally to the docking action.
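As a rough sketch of what such a docking sub-tree might look like in Nav2's behavior tree XML (the `DetectDock` and `Dock` nodes are hypothetical placeholders for the new behavior, not existing plugins; `ComputePathToPose` and `FollowPath` are existing Nav2 BT nodes):

```xml
<!-- Hypothetical docking sub-tree; node names for the docking behavior
     are assumptions, not part of Nav2 today. -->
<BehaviorTree ID="DockRobot">
  <Sequence name="dock_sequence">
    <!-- Use the regular stack to reach a staging pose where the dock is visible -->
    <ComputePathToPose goal="{staging_pose}" path="{path}" planner_id="GridBased"/>
    <FollowPath path="{path}" controller_id="FollowPath"/>
    <!-- Hypothetical behavior-server actions: detect the dock, then drive in
         on direct sensor feedback in the odometry frame -->
    <DetectDock dock_pose="{dock_pose}"/>
    <Dock dock_pose="{dock_pose}" max_speed="0.1"/>
  </Sequence>
</BehaviorTree>
```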

I could see the docking action either just detecting the dock and setting a goal for an odometry-framed kinematically feasible planner, or driving towards that goal itself if it’s a simple backup/move-forward maneuver with minor angular corrections from the initial pose. KISS works well here unless you need something non-trivially more complex to get into a dock.
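The "drive towards the goal with minor angular corrections" option really can be as small as a proportional controller; gains, thresholds, and the function signature below are illustrative assumptions:

```python
import math

def docking_cmd(dock_x, dock_y, stop_dist=0.05, v_max=0.1, k_ang=1.5):
    """Minimal docking controller sketch: drive toward the dock pose,
    expressed in the robot's own frame, correcting heading proportionally
    to the bearing error. Returns (linear_v, angular_w)."""
    dist = math.hypot(dock_x, dock_y)
    if dist <= stop_dist:
        return 0.0, 0.0                    # close enough: stop
    bearing = math.atan2(dock_y, dock_x)   # bearing error in robot frame
    v = min(v_max, dist)                   # slow down on final approach
    w = k_ang * bearing
    return v, w
```

In a ROS 2 node this would run at the control rate, publishing the result as a `geometry_msgs/msg/Twist` from freshly detected dock poses.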

There was a recent project implementing an action based autodock capability using fiducials in noetic. It might be a good start to port to Nav2.

Nice! It’s not quite as general as we’d like to be in Nav2. We’d want it to work with any type of dock detection method, not just fiducials. It also doesn’t appear to do any more than the initial detection of the dock, rather than continuously detecting and refining the approach to the dock (if in sensor visibility). The TF way of transferring pose information could be used for other detection methods as well, but it’s not clear to me if the 3-fiducials method used to correlate would work with just a straight random forest detector or similar.

I wish I knew you were working on something like this, it would have been nice if we could have chatted in the design phase to make sure it fit all of our collective needs! It’s probably 80% of what we need!

@smac, the visual fiducial-based approach that Tully linked continuously uses and refines the approach to the dock so long as it can see the marker(s). It progresses through multiple states along the way, with the initial “look” at the large fiducials being used to calculate an “approach point” that is intended to be orthogonal to the dock. Anecdotally, we found this to be important for diff-drive robots; spinning and driving towards this approach point seems a bit ugly/hacky at first, but we found that it helps the docking reliability to ensure that the final “guided approach” starts basically orthogonal to the dock.

Once the robot has driven to the orthogonal “approach point,” the algorithm spins back towards the dock and approaches it head-on while continually using marker sightings. It continually refines the dock position estimate using TF frame estimates of the “small” fiducial, which is intended to be placed at the “eye level” of the robot’s camera so that it can be seen for as long as possible during the final approach to the dock.

The “marker observer” functionality is provided by a separate node, which is continually publishing TF frames of the detected features (aruco_detect in the example simulation provided in the repo). The autodock node itself just subscribes to TF frames and steers based on them.
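The "approach point" computation described above (a pose a fixed distance in front of the dock, orthogonal to its face) can be sketched in a few lines; the standoff distance and the convention that the dock's yaw points outward along its normal are assumptions:

```python
import math

def approach_point(dock_x, dock_y, dock_yaw, standoff=1.0):
    """Compute an approach pose a fixed standoff in front of the dock,
    along the dock's outward normal, facing back toward the dock."""
    ax = dock_x + standoff * math.cos(dock_yaw)
    ay = dock_y + standoff * math.sin(dock_yaw)
    # Face back toward the dock for the final head-on approach,
    # wrapped to [-pi, pi]
    a_yaw = math.atan2(math.sin(dock_yaw + math.pi),
                       math.cos(dock_yaw + math.pi))
    return ax, ay, a_yaw
```

Driving to this pose first is what makes the final guided approach start roughly orthogonal to the dock, which the post above found important for diff-drive reliability.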

The Future Work section of the fiducial-based approach states:

filter occupancy of charging station on costmap.

Is this about removing the docking station from the costmap to prevent “collision” avoidance?

Hi, I am the author of this autodock package. Yes, this future work on costmap filtering is meant to stop the costmap from treating the charging station as an obstacle, so that collision avoidance does not prevent the robot from approaching it.

On top of that, this solution is tailored to fiducial markers. There is definitely room for improvement: refurbishing the TF detection and logic portion to make it generic enough to support different kinds of reference markers.