"Deterministic" navigation in ROS

We are working on a research project in which we are investigating the ROS Navigation Stack in an industrial setting (https://www.saxion.nl/onderzoek/smart-industry/mechatronica/next-generation-navigation). The partners in the project want autonomy, but only up to a certain level: for most areas in a plant, the behaviour of the robot must be predictable.

We have been looking into commercial solutions such as Navitec and BlueBotics, which provide graphical tools to guide the navigation behaviour. In certain areas, the robot is free to navigate; in other areas, it is only allowed to follow a virtual line. The tools resemble vector-drawing applications: you can draw routes and areas on top of a map, and the planners use this additional information to come up with appropriate paths.

We are wondering if there are open source tools that can do these kinds of things, but we have not found any yet. If they do not exist, it might be interesting to develop such a tool in the ROS ecosystem.

It’s very early days still, but we just opened up our in-progress “traffic editor” for this type of thing. Documentation is currently non-existent, publicly-viewable examples are not there yet, etc., but it’s coming along.

You can specify a floorplan image, draw lanes on it, trace the walls if you want, and then export the path data to YAML and/or a simulation model for Gazebo. The GUI is built using QtWidgets in C++ and saves all the data to YAML. The exporters are Python scripts.

Again, it’s just a work-in-progress in a public repo, not a polished product. But I think this style of robot operations is becoming a common use case in many domains, so hopefully the editor can become useful.

This is just an editor for these paths; it doesn’t touch the problem of actually following them with real robots.

Cheers

1 Like

Thank you. Your description is close to what I was looking for, so I will definitely have a closer look at your repository. We want to plan our routes on top of the maps generated by GMapping or Cartographer, but essentially those are also just bitmaps. We want the following flow: the robot drives around and builds a map; then the operator draws routes/paths, restricted areas, free-to-navigate areas, speed zones, etc. on top of that map; and finally, the robot uses that data as input to the planner.
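
Just to make that flow concrete, the kind of overlay data we have in mind could look roughly like the sketch below (all field names and values are made up for illustration; this is not an existing format):

```python
# Illustrative sketch only: hypothetical field names for the kind of overlay
# an operator might draw on top of a GMapping/Cartographer occupancy grid.
import yaml

overlay = {
    "map": {"image": "plant_floor.pgm", "resolution": 0.05, "origin": [0.0, 0.0, 0.0]},
    "routes": [
        # a fixed virtual line the robot must follow, as a polyline in map coordinates (metres)
        {"name": "aisle_3", "waypoints": [[1.0, 2.0], [8.5, 2.0], [8.5, 6.0]], "bidirectional": False},
    ],
    "zones": [
        {"name": "assembly_cell", "type": "restricted", "polygon": [[10, 0], [14, 0], [14, 4], [10, 4]]},
        {"name": "warehouse", "type": "free_navigation", "polygon": [[0, 8], [20, 8], [20, 20], [0, 20]]},
        {"name": "crossing_A", "type": "speed_limit", "max_speed": 0.3,
         "polygon": [[5, 5], [7, 5], [7, 7], [5, 7]]},
    ],
}

with open("overlay.yaml", "w") as f:
    yaml.safe_dump(overlay, f)
```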

I think it is logical to separate the editor from the functionality of the planner. The interesting part is to have a common data format for the routes in YAML/XML/etc. Do you use a standard for storing the routes/paths? Currently, we have a group of students working on a planner that follows predefined paths.

PS: I tried to build your code, but I got some errors. I will have a look at it tomorrow.

1 Like

Currently the annotations are just stored in a YAML format of our own dreaming. There are Python “generators” in the repo which process that YAML into other “output formats” such as Gazebo worlds (XML) or a “simpler” YAML format that is just the navigation data (lanes, etc.), intended to be consumed by nav stacks. We can create generators for any other formats or navstacks; I wasn’t aware of standards for this, but if there are, we can certainly convert the data to whatever format is desired. It’s just Python :slight_smile:
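
To give a flavour of what such a generator can look like, here is a minimal sketch (the key names are invented for this example and are not the actual traffic-editor schema):

```python
# Sketch of a "generator": read the editor's YAML and emit a simpler,
# navstack-oriented YAML with just the lane data.
# All key names here are assumptions, not the real traffic-editor format.
import sys
import yaml

def generate_nav_lanes(editor_yaml_path, output_path):
    with open(editor_yaml_path) as f:
        project = yaml.safe_load(f)

    lanes = []
    for lane in project.get("lanes", []):
        lanes.append({
            "start": lane["start"],        # [x, y] in the map frame
            "end": lane["end"],            # [x, y] in the map frame
            "bidirectional": lane.get("bidirectional", True),
        })

    with open(output_path, "w") as f:
        yaml.safe_dump({"lanes": lanes}, f)

if __name__ == "__main__":
    generate_nav_lanes(sys.argv[1], sys.argv[2])
```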

I added a GitHub Action to the repo which now does an automatic build on Ubuntu 18.04 on every commit, so if the badge is green on the repo, it should build (at least on Ubuntu 18.04). Please create an issue ticket with details about your platform if you’re seeing any build errors. Cheers!

1 Like

The build errors were my fault… I was first trying it out on an Ubuntu 16.04 machine. On 18.04, the project builds without problems.

This application is really similar to what we have in mind, and we are now playing around with it. It really helps in discussing requirements with colleagues, and we are sharpening our user requirements as a result. I will get in touch about next steps; I think it would be nice to join efforts to further develop the tool.

Web-based GUI plz.

Preferably as composable widgets.

(It would make it easier to integrate the tool into application-specific UIs.)

Yeah, maybe rev 2 :smile:

Currently we’re running this tool “offline” to create map files and export various products (simulation models, navstack configs, etc.) in the local filesystem, so it would seem that a web-based approach would make things a fair bit more complex. But maybe that’s just because I’m a dinosaur.

There is probably room for both “offline” and “online” editors in the ecosystem eventually.

You, like me, are a dinosaur.

All the young mammals (including at my company) are running around building beautiful-looking UIs that run locally but are accessed through a web browser.

I don’t pretend to know how it works (OK, I do a little bit but just because I’m curious), but I do know that they seem to whip up new UIs in a matter of hours using nothing but libraries with weird names they found on GitHub.

I certainly agree that there is room for both. I’m more interested in the (system) interfaces side… the data formats taken in and spat out, any online communication channels, etc.

Dinosaur++;

I am also fine with an offline tool, and personally I have the most experience in that direction (Qt and Java). However, I agree that it would be nice to have web-based tools to edit the maps. I saw a demo of the MiR 100 robot and it looked like they already had a web-based UI for editing the map and for fleet management, but when I look at their website I can’t find anything about it.

We also came up with two other features that might be interesting for this tool. One is support for .pgm files, because that is the format ROS uses to store maps. The other is support for curves, because our robots need smooth trajectories without sharp turns. This could also be solved at the planner level, but I think it is better to do it in the editor. I did some experiments in Java with curves, using four control points to define cubic Bézier curves.
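
For reference, sampling a cubic Bézier from four control points only takes a few lines of code; below is a minimal sketch of the idea (in Python here rather than Java):

```python
# Minimal cubic Bézier sampler: four control points -> a smooth polyline.
def cubic_bezier(p0, p1, p2, p3, num_samples=20):
    points = []
    for i in range(num_samples + 1):
        t = i / num_samples
        s = 1.0 - t
        # Bernstein form of a cubic Bézier curve
        x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
        y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
        points.append((x, y))
    return points

# Example: a smooth 90-degree corner instead of a sharp turn
corner = cubic_bezier((0.0, 0.0), (2.0, 0.0), (3.0, 1.0), (3.0, 3.0))
```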

Would it be interesting if I integrate the curves in your traffic-editor?

I have written this exact kind of web-based application twice now for different companies, but both times in proprietary formats.
What you’re looking for, if you don’t want to write your own custom UI/system, is QGIS (https://qgis.org/en/site/). At its core, what you’re trying to do is pretty standard GIS work. QGIS is designed for these kinds of spatial data workflows; it contains everything you need to digitize, manage, analyze, and ultimately serialize (e.g. into GeoJSON) your robot “traffic plan”.

Also be aware of QGIS-ROS, which I wrote to help bridge QGIS into the ROS world (https://github.com/locusrobotics/qgis_ros); the ROSCon presentation is linked in the README.

I strongly encourage trying to utilize available open source GIS tooling (that’s been in development for decades) before deciding to make a ROS-specific flavouring of a subset of these tools. These aren’t novel spatial data authoring problems we’re trying to solve.

3 Likes

Thanks @Wilco! Yes, certainly, adding curves/splines would be great, since they are actually physically realizable by robots. We started off with straight-line segments just because they are super easy, and because (in my limited experience) that seems to be what many/most companies are currently doing in their proprietary editors anyway.

Thanks @Andrew_Blakey for the feedback. Indeed, it seems that every robot company has their own internal proprietary editor. Thank you for the pointer to QGIS; it’s an interesting idea to use a GIS system for this. I guess I had been stuck in a mental rut that “GIS is for outdoor large-scale maps expressed in lat/lon,” but I see the overlap with indoor mapping now. GIS seems particularly relevant for indoor+outdoor robot operations (deliveries to loading docks, etc.). I guess the purely-indoor, large-building domain still feels “a bit different” to me, in that multi-level buildings are so much more constrained than entire city maps, so an “intentionally limited” UI subset of something like QGIS might make the tool easier to use and the workflows more straightforward. The input (in my limited experience) is often a pile of PDF floorplans provided by the building operator, rather than satellite imagery or other traditional GIS data. But I’ll definitely dive deeper into the GeoJSON RFC and QGIS to challenge those assumptions. Thanks for the pointers!

Perhaps there is enough interest in this area to create a new ROS Discourse category called “Multi-Robot Operations” or something like that. It wouldn’t necessarily be locked to a particular robot software platform (ROS1 / ROS2 / various other options) but instead about creating higher-level tools to deal with multiple robots sharing the same space, no matter what software is running on the robots themselves. I’ll create a category proposal now and anyone interested can help evolve the definition and direction. Cheers!

1 Like

If you’re going to consider GeoJSON as an option for interchange, just know that you can completely ignore long,lat in the spec and just pretend it’s x,y. I’ve been doing this on many projects for well over a decade and can say it works fine, assuming you’re using tools that don’t just assume a CRS like WGS84. There are hundreds of geodata formats, so pick what makes sense, but resist making your own. GeoJSON is great because of the countless tools and libraries already available for it.
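
For example, a lane can be stored as a plain GeoJSON LineString whose coordinates are simply x,y in metres. A quick sketch (the property names are my own, not part of any standard):

```python
# A lane as a GeoJSON Feature with plain x,y coordinates (metres in the map frame),
# ignoring the usual lon/lat interpretation. Property names are made up.
import json

lane_feature = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[1.0, 2.0], [8.5, 2.0], [8.5, 6.0]],
    },
    "properties": {"name": "aisle_3", "bidirectional": False, "max_speed": 1.0},
}

feature_collection = {"type": "FeatureCollection", "features": [lane_feature]}

with open("traffic_plan.geojson", "w") as f:
    json.dump(feature_collection, f, indent=2)
```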

For QGIS, all you need to do is set a custom CRS to a planar Cartesian system (just set everything to zeroes). Everything else just works, including the extensive suite of raster and vector tools (see my ROSCon talk for examples of QGIS showing an indoor facility with robots in real time).

Orthorectified imagery and PDF floorplans are basically the same thing when you think about it: set control points from known fiducials and begin drawing. Thinking all the way back to school, we’d just rasterize the PDF and run it through QGIS, akin to how one would an unrectified satellite/aerial image. Though often a SLAM map is used as the “ground reference” and you draw your vectors relative to it.

Yes, there is a lot to be said for a limited UI, which is probably why I keep ending up making them. It depends on what your goals, timelines, etc. are. QGIS may fill a gap in the shorter or longer term, and it’s always a phenomenal analysis and data processing tool.

Feel free to email me with any questions you’ve got with getting started.

1 Like

Thanks! Ticket added: https://github.com/osrf/traffic-editor/issues/6

The QGIS approach is definitely interesting. I think it is better to use existing tools when they are available, so I will have a look at the software to see if we can use it in our projects.

Currently, we are aiming at OpenTCS as the fleet/traffic manager, because it is open source and one of our partners already has experience with it. However, OpenTCS operates at a very abstract level: it uses a graph of the environment, and there is no direct connection between the information in OpenTCS and, for example, a planner in the ROS Navigation Stack.

Our current approach is to have one data set with maps based on sensor data (from GMapping or Cartographer) with an overlay of the paths and areas where the robot can navigate. From that data set we can export graph data to OpenTCS and use the whole structure in ROS Navigation.
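
As a sketch of that export step, the drawn routes could be flattened into a generic node/edge graph that a fleet manager such as OpenTCS can be fed; the snippet below illustrates the idea (the YAML keys are just illustrative, and the real OpenTCS plant-model format is not reproduced here):

```python
# Sketch: turn drawn route polylines into a generic node/edge graph.
# This intermediate graph could then be mapped onto a fleet manager's
# own model (e.g. OpenTCS points/paths); that mapping is not shown here.
import yaml

def routes_to_graph(overlay_yaml_path):
    with open(overlay_yaml_path) as f:
        overlay = yaml.safe_load(f)

    nodes = {}   # (x, y) -> node id
    edges = []   # (node_id_a, node_id_b)

    def node_id(xy):
        key = (round(xy[0], 3), round(xy[1], 3))
        if key not in nodes:
            nodes[key] = f"N{len(nodes)}"
        return nodes[key]

    for route in overlay.get("routes", []):
        waypoints = route["waypoints"]
        for a, b in zip(waypoints, waypoints[1:]):
            edges.append((node_id(a), node_id(b)))

    return nodes, edges
```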

It’s been over 8 years since I was involved in anything that used GIS, but I do recall that back then the GIS community was making a really big push into indoor GIS. A quick Google search turned up plenty of results so I’m guessing the community didn’t give up. I can’t remember details, unfortunately, but there were open format specifications for buildings and built spaces and things like that.

I dived into QGIS and it really has a steep learning curve. However, based on my experience, it definitely has the functionality to draw vectors on top of raster images, which is what I was looking for. I still have to look into the interoperability between QGIS projects and ROS. Thank you @Andrew_Blakey for the pointer to QGIS-ROS.

I have also been looking into standard map data formats. I encountered the IEEE Standard for Robot Map Data Representation for Navigation. Based on the name, it seems like the thing I was looking for. However, I am not sure whether to use such standards, because they are closed and we work on open source projects.

I am still interested in your view on the last part of my previous post. We are looking into data exchange between different navigation systems; specifically, we want to use OpenTCS with our robots that are running ROS. The standard is really targeted at this topic. However, it is not open and I can’t find any fleet managers implementing it.

I found a question on this topic on ROS Answers: https://answers.ros.org/question/318247/implementation-of-1873-2015-ieee-standard-for-robot-map-data-representation-for-navigation/ They came to a similar conclusion: not open and no implementations.

In our project we need functionality to exchange map data, so we can either implement it according to a standard or build our own custom solution. At the moment, I don’t know which is the better option.

Word of warning: below is a textwall that may be interesting to some, but may not answer the poster’s question. Read at your own peril.

I did some research as part of my Master’s thesis where I created a map using JOSM (the Java editor for OpenStreetMap) with indoor mapping plugins, and then hosted the database (map nodes and relations) locally (it could also be hosted online). A simple Python node can then be used to query this database using the Overpass API. I put in information like different floors, hallways, door colours, and which sensors could be used in which area. It was all kind of experimental, but technically possible. It gave me a multi-layered map containing the low-level x,y occupancy information, a low-level topological map, and a high-level topological map, all the way up to a high-level semantic map. I never got to the point of actually using the map for navigation though :stuck_out_tongue: (the mapping effort was luckily enough to graduate on).
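
For anyone curious, the query side really is small; something along these lines works (a sketch, assuming a locally hosted Overpass endpoint and my own made-up tags):

```python
# Sketch of querying a locally hosted OSM/Overpass database for indoor map data.
# The endpoint URL and the "indoor"/"level" tag values are assumptions from my own model.
import requests

OVERPASS_URL = "http://localhost:12345/api/interpreter"  # hypothetical local endpoint

query = """
[out:json];
way["indoor"="corridor"]["level"="1"];
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()
for way in response.json().get("elements", []):
    print(way["id"], way.get("tags", {}))
```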

The advantage of creating such a map is that the robot does not have to think a lot: all of the information about traffic rules and about which methods of localization and navigation to use in which area is embedded in the map. The obvious disadvantage is the insane mapping effort required to get such a map. Additionally, there is no standard way to do this that I am aware of; I came up with my own model, my own hierarchy, and my own key-value combinations. I just wanted to share my experience and point out that OSM can be used for indoor mapping as well, and that OSM has a pretty big open source community, so lots of tools and plugins are available.

1 Like

Another student from Germany was working on the same project at the time.
This could be an interesting read for you (summary of a couple of slides):
https://www.researchgate.net/publication/333507583_Semantic_Mapping_extension_for_OpenStreetMap_applied_to_indoor_robot_navigation

1 Like

There’s nothing to stop you using a closed standard to design open-source software, except in rare situations (such as the AUTOSAR specifications) where the license explicitly forbids it.

The catch is that only people who have access to the standard will understand your design decisions.

2 Likes