Application dependent; it varies with the stage of development, but it's usually some combination of the following on a Foxy stack:
We deploy our installs as Docker containers built from a Foxy base image, with internally owned layers and then customer layers on top. Each container has a startup script that launches either a baked-in launch file or, if it exists, a volumed-in workspace (including a fixed-path launch file) to support field-testing modifications. Per-machine configuration is usually handled with either volumed-in config files or environment variables.
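As a rough sketch of that kind of startup script (all paths here are made up for illustration), the container entrypoint can prefer a volumed-in workspace and fall back to the baked-in launch file:

```python
# Hypothetical container entrypoint: prefer a volumed-in workspace launch file
# (for field testing) and fall back to the baked-in one. Paths are invented.
import os

OVERLAY_LAUNCH = "/overlay_ws/launch/main.launch.py"   # volumed in, fixed path
BAKED_IN_LAUNCH = "/opt/app/launch/main.launch.py"     # shipped in the image


def pick_launch_file():
    """Return the launch file the container should start."""
    if os.path.exists(OVERLAY_LAUNCH):
        return OVERLAY_LAUNCH
    return BAKED_IN_LAUNCH


if __name__ == "__main__":
    # A real entrypoint would exec something like:
    #   os.execvp("ros2", ["ros2", "launch", pick_launch_file()])
    print(pick_launch_file())
```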
Our simulators pull together a number of containers as above using a docker-compose file, to try to keep everything (software versions, configs, etc.) the same between the simulation and the real machines. We have a single “IS_SIMULATOR” environment variable to enable/disable some settings/nodes as needed.
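The flag-based toggling described above can be sketched in plain Python (node names here are made up, not the actual setup):

```python
# Sketch of gating nodes on the IS_SIMULATOR flag; node names are hypothetical.
def nodes_to_launch(environ):
    """Return the set of nodes to start for this environment."""
    nodes = ["controller", "state_publisher"]      # always launched
    if environ.get("IS_SIMULATOR", "0") == "1":
        nodes.append("sim_clock_bridge")           # simulation-only (made up)
    else:
        nodes.append("hardware_driver")            # real-machine-only (made up)
    return nodes
```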
Fortunately, in ROS 2, ros2 launch is much more expressive, since you can write launch files in Python. This allows quite complex conditional logic if you want it, whether based on launch arguments, configuration files, or environment variables. For our team, however, we mostly use multiple docker-compose files run with a single command to create composable systems for different situations.
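To illustrate the kind of conditional logic possible (a sketch, not anyone's actual setup; it needs a ROS 2 installation to run, and the demo nodes are just placeholders), a launch file can switch nodes on a launch argument defaulted from an environment variable:

```python
# Hypothetical ROS 2 (Foxy-era) launch file sketch.
import os

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition, UnlessCondition
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    use_sim = LaunchConfiguration("use_sim")
    return LaunchDescription([
        # Default the argument from an environment variable such as IS_SIMULATOR
        DeclareLaunchArgument(
            "use_sim",
            default_value=os.environ.get("IS_SIMULATOR", "false")),
        Node(package="demo_nodes_cpp", executable="talker",
             condition=UnlessCondition(use_sim)),   # real-robot path
        Node(package="demo_nodes_cpp", executable="listener",
             condition=IfCondition(use_sim)),       # simulation path
    ])
```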
@aposhian I agree
ros2 launch is quite powerful, but it can help you shoot yourself in the foot (or at least introduce code smells) by letting you introduce and maintain more than just configuration information in the configuration file.
How do you manage to keep pure configuration info in the launch file separate from application (and other) logic that isn't related to configuration?
For ROS1, I’ve grown fond of using a more generic launch-file approach by taking advantage of directory structure and parameters.
In our use case, we have several robots that all have different hardware stacks, but we want them all to use the same software stack (to the extent that's possible). We also need it to be as simple to use as possible, since we have a lot of users who are either new to ROS or who shouldn't have to worry about what's happening under the hood.
So, a single top-level launch file handles everything, but takes a parameter specifying which robot is being controlled. This parameter is fed directly into the path of an <include> tag, so the command-line parameter determines which launch files get launched. This works with a directory structure that looks something like this:
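A hypothetical layout and top-level file in that spirit (all package and file names here are invented, not the poster's actual tree) might look like:

```xml
<!-- Hypothetical layout (names invented):
  my_bringup/launch/
  ├── main.launch
  ├── slam.launch
  └── robots/
      ├── robot_a.launch
      └── robot_b.launch
-->
<launch>
  <arg name="robot" />
  <!-- The command-line arg selects which robot-specific file is included -->
  <include file="$(find my_bringup)/launch/robots/$(arg robot).launch" />
  <include file="$(find my_bringup)/launch/slam.launch" />
</launch>
```

Invoked as, e.g., `roslaunch my_bringup main.launch robot:=robot_a`.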
So the main.launch pulls in slam/mapping/etc., but is also able to bring in robot-specific launch files. The downside to this approach is that it raises the bar of entry for maintenance, but overall it has worked well for us.
I like to use catmux for that purpose. (Disclaimer: I’m the author, so I’m definitely biased.)
The basic idea behind catmux is to script multiple launch files running in different terminals. It also offers a parameter system, so the startup of individual launch files (or any other commands) can easily be activated, deactivated, or modified.
One other benefit of catmux is that, since you are essentially scripting the characters typed into the shell, running ROS nodes across multiple machines is easy without any additional mechanisms such as machine tags.
As you can distribute your whole application over as many shells as you like, it is very easy to restart individual launch files or rosrun commands, which is quite handy in research-style development. Also, when the ROS application is in the system's autostart, I like being able to ssh to the robot machine and attach to the running tmux session to inspect the system or make runtime changes.
We use catmux quite extensively inside our research lab and our usual workflow is to create a tmux config inside an application launch package and add a script to this package containing something like
catmux_create_session $(rospack find my_application_launch_pkg)/etc/catmux_session.yaml
Then, the whole system can be started using something like
rosrun my_application_launch_pkg my_start_script.sh
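For a sense of what such a session config can contain, here is a rough sketch from memory (the exact schema may differ; please check the catmux README, and all window/command names here are invented):

```yaml
# Hypothetical catmux_session.yaml sketch
common:
  before_commands:
    - source ~/catkin_ws/devel/setup.bash
parameters:
  show_rviz: true            # can be overridden at startup
windows:
  - name: roscore
    splits:
      - commands:
          - roscore
  - name: rviz
    if: show_rviz            # window only created when the parameter is set
    splits:
      - commands:
          - rosrun rviz rviz
```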
Note: as catmux has become a standalone Python package, it is not necessarily connected to catkin or ROS in general; it can be used to start up any system that requires running multiple commands in a couple of shells.
At Magazino we use systemd_ros (GitHub - magazino/systemd_ros: Systemd support for ROS) to generate individual systemd services from a launch file. Together with some patches to ros_comm (check out the repo for documentation), we get a clean and reliable way of launching our robots where all the typical log output is fed into systemd-journald.
We deliberately chose the generator approach so that we can continue using roslaunch for development and rely on roslaunch-check and all the helpful integrations a developer likes to have. For production, however, we found roslaunch too limiting. A clear advantage of systemd is that it has many knobs for tweaking the system (e.g. systemd.resource-control) and ways to declare relations between different services, which roslaunch simply does not offer.
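As an illustration of those knobs (a hand-written sketch, not actual systemd_ros output; service names and the ExecStart path are made up), a per-node unit can declare dependencies and resource limits that roslaunch has no equivalent for:

```ini
# Hypothetical per-node unit; a generator would produce units in this spirit.
[Unit]
Description=ROS node: navigation
# Ordering/dependency relations roslaunch cannot express:
After=roscore.service
Requires=roscore.service

[Service]
# Placeholder command; a generated ExecStart would come from the launch file.
ExecStart=/opt/my_robot/bin/run_navigation.sh
Restart=on-failure
# Knobs from systemd.resource-control:
CPUQuota=50%
MemoryMax=1G

[Install]
WantedBy=multi-user.target
```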
Coming back to the original question: similar to @cst0, we also have one top-level launch file which takes arguments to condition which nodes, or rather which subsystems, are launched. Separating hardware-related parts from general subsystems allows us to manage different generations of our hardware.
That looks like a handy tool. It deserves at least a ROSCon lightning talk if not a short talk.
Wow, great answers so far! I did not expect to see such a variety of creative solutions in such a short amount of time.
I’ve been working on tools to analyse ROS code for a while, but it seems that limiting these tools to launch files won’t cut it. There’s more to do.
In any case, I see that the parameterized top-level launch file approach might be more common than I thought, which is probably a good thing.
Keep it going!
Regarding ROS2, I would recommend looking into nav2-bringup
It also features a top-level launch file and lots of conditions for multiple setups (tests, single- and multi-robot) while staying mostly ROS 2-native with respect to the launch system.
ROS 2 is really powerful for this, but the lack of widespread examples and documentation is dimming this star…
The amount of creative solutions is probably explained by the general frustration with ROS launching mechanisms.
In general, I use single, robot-specific launch scripts, which are wrapped in systemd units when deployed. I avoid script parameters and never use them to launch nodes conditionally; IMO that makes the scripts more manageable. Common launch-script parts are extracted and moved to higher levels in the package dependency tree. Multiple launch scripts are occasionally used for testing.
Another useful tool not mentioned so far is environment variables, which are handy for managing node startup in general, for example launching nodes under valgrind, controlling node crash handling, etc.; see e.g. ccws/bringup.launch at master · asherikov/ccws · GitHub.
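For example (a sketch with made-up package and node names, in the spirit of the linked bringup.launch), roslaunch's `$(optenv ...)` substitution lets an environment variable inject a debugger prefix:

```xml
<launch>
  <!-- Run normally, or e.g.:
       LAUNCH_PREFIX="valgrind --leak-check=full" roslaunch my_pkg bringup.launch -->
  <node pkg="my_pkg" type="my_node" name="my_node"
        launch-prefix="$(optenv LAUNCH_PREFIX)" />
</launch>
```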
ROS2 launch, I believe, is an unfortunate step back compared to modern unit-based service managers like systemd. I'm inclined towards the Magazino approach, but I would prefer to completely replace launch scripts with unit files.
There’s also a multiverse of different Docker-based solutions where ros(2) launch is used in its simplest form, while most of the job is handled by a launch system managing the Docker containers. This way you don't even need your entire software stack to be buildable in the same environment, which is often desirable when you have different dependency, OS, or ROS distro needs.
Any reference examples in the wild?
And remember, when you are frustrated you can always launch them out the window… xD
I work on projects that typically use ROS in two different operating styles: 1) as the execution environment for real-time mobile robot control, and 2) as the execution environment for constructive time-based simulations which may or may not involve robotics at all (mostly unmanned systems).
In the first case, we typically build a single launch file that contains everything that may be needed to operate the robot. We typically use a global manager that publishes states (and sometimes sub-states), and each node may come alive or go dormant depending on the state. Here ‘alive’ and ‘dormant’ mean that the ROS node is active but may or may not do anything depending on the state. To start everything, we set up a startup script to invoke the ROS launch file at boot, and that’s about it: powering up the bot automatically starts everything.
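The "alive but dormant" pattern described above can be sketched as follows (state names and the manager interface are invented for illustration):

```python
# Minimal sketch of a node that stays up but only works in its active states.
class ManagedNode:
    """Node gated on states published by a global manager (names made up)."""

    def __init__(self, active_states):
        self.active_states = set(active_states)
        self.state = "IDLE"            # last state heard from the manager

    def on_state(self, state):
        # In ROS this would be a subscriber callback on the manager's state topic.
        self.state = state

    def tick(self):
        # Called from the node's main loop or a timer.
        if self.state not in self.active_states:
            return None                # dormant: running, but doing nothing
        return "did work in " + self.state
```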
In the second case, where there is more variability in input parameters, we build launch files that take some command-line arguments and pass them on to nodes as a means to adapt to the required options. We try to avoid implementing any functionality or smarts in the launch file itself; even with ROS 2, where this is very tempting, we avoid it as a rule. A while ago we experimented with a simple GUI that let us specify all our options (execution frequency, visualization, initial position, etc.) and then generated the launch file, but for us the complexity of maintaining yet another GUI didn't add up. Each node has its own config file, and we can specify these on the launch command line (mostly with ROS 2), but we try very hard to keep the launch file itself a single file that rarely changes.
My team uses a combination of Docker Compose files and ROS launch. The core modules of the system are launched inside a single container using roslaunch, but hardware-specific nodes and configuration variables are managed via separate containers launched alongside the core container. This is mainly a way to maintain separate dependency trees for different hardware configurations.
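That split could look roughly like this (a hypothetical docker-compose sketch; image names, devices, and variables are all invented):

```yaml
# Hypothetical compose file: one core container plus a hardware-specific
# sidecar container with its own dependency tree.
services:
  core:
    image: example/ros-core:latest        # roslaunches the core modules
    environment:
      - ROBOT_NAME=${ROBOT_NAME}
  lidar_driver:
    image: example/lidar-driver:latest    # hardware-specific node
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0         # device passthrough for this hardware
```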
All I want for Christmas is for someone to take this thread and write a summary of “real-world” launch examples in the ROS 2 docs.
I know this problem well. What I've done is write a Python CLI script with commands to declare what model of robot this is (because we have multiples of each model), what the name of this individual robot is (because their network names are derived from that), and a few other declarations; the script then sets up ROS_MASTER_URI and ROS_IP correctly. I've been constructing a hierarchical structure of launch files with appropriate parameterization, so I think I have that well in hand.
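The derivation step could be sketched like this (the naming convention here is invented, not the poster's actual scheme):

```python
# Hypothetical sketch: derive ROS 1 network variables from the robot's
# declared model and name. The hostname convention is made up.
def ros_env(model, name, master_port=11311):
    host = f"{model}-{name}.local"    # invented network-name convention
    return {
        "ROS_MASTER_URI": f"http://{host}:{master_port}",
        # ROS_IP wants a literal address; a real script would resolve the
        # hostname, or set ROS_HOSTNAME instead as done here.
        "ROS_HOSTNAME": host,
    }
```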
However, where I am not there yet is the “best” or “right” way to have a launch file launched automatically when the robot boots up, so that we don’t have to ssh in to run the bringup. I am posting a separate question about this to see what we can learn.
Longtime lurker here, but this can easily be solved with a systemd service. I use Ansible to manage our fleet of robots, install dependencies, and keep things running at the expected configurations. This example sets up the services your ROS nodes need as dependencies, sets environment variables, and runs pre-flight commands before launching our ROS node. Note the Jinja templating inside the curly braces; it allows us to use the same service template whether we are prototyping on Ubuntu or on our custom operating systems built with Yocto. Here is a generic version of our template for the systemd service:
Looks like you forgot a paste?
We use docker-compose files that start launch files, separating the launch files as much as we can to keep things modular, with one container per launch file; we then pass YAML files for common configuration, and environment variables or CLI arguments for the more specific bits.
It’s quite verbose, but it makes it easy to pinpoint the problem when one pops up, and it ensures that not everything fails at the same time. The only pitfall we’ve run into so far is the accumulation of logs in each container under ~/.ros; we still need to figure out how to disable them.
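One possible mitigation (an untested sketch; `ROS_LOG_DIR` is a real ros_comm environment variable, but the service and image names are invented) is to point the log directory at a tmpfs mount so logs never accumulate in the container filesystem:

```yaml
services:
  navigation:
    image: example/navigation:latest
    environment:
      - ROS_LOG_DIR=/tmp/ros_log   # redirect ~/.ros/log elsewhere
    tmpfs:
      - /tmp/ros_log               # RAM-backed, discarded with the container
```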