Hi all, my team just released an alternative continuous integration setup for ROS 2 using GitHub Actions. The action can currently be used to build and test a ROS 2 project on any ROS 2 distribution. It works by running a Docker container based on the official ROS 2 Docker images. More information about this continuous integration setup can be found here.
Nice work.
Seeing as you already mention it’s an alternative: what made you create this? What are the main advantages over (for instance) ros-tooling/action-ros-ci or ros-industrial/industrial_ci?
(note: this is not criticism, I’m merely interested to know when users should use ros-2-ci instead of the two I mentioned earlier)
This comes at a convenient time because I was looking into setting up CI for a suite of privately-developed packages.
I find that declaring dependencies in ROS is complex, and often a package builds even when it is sloppy with these declarations. Using `catkin_make_isolated` and `catkin_tools` helps with dependency hygiene between ROS packages, but I don’t know of a way to keep packages honest about their system dependencies (like Boost).
Here is the CI system I had sketched:

- Discover all of the `package.xml` files and build a dependency tree of the packages.
- Do a topological sort based on `build_depends` directives to find the correct build order.
- For each package (following the build order), create a fresh Docker container.
- Install the `.deb` files for any packages that were built earlier and are build dependencies of this package, and generate a `rosdep.yaml` file mapping the ROS package names to their Debian package names. Then run `rosdep` to install the remaining system packages into the container.
- Use `bloom-generate` to generate Debian package files, then use these to compile and produce the `.deb` package. (Instructions.) Copy it out of the container into an artifacts directory.
If you want to run tests, you would additionally have to compile each package with testing enabled, then install `test_depends`, and then run the test suite.
Do any of the existing CI solutions do all these steps?
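For what it’s worth, the first two steps (discovery and topological sort) are easy to prototype. Here is a minimal sketch in Python; the helper names are mine, not from any existing CI tool:

```python
import xml.etree.ElementTree as ET
from graphlib import TopologicalSorter
from pathlib import Path


def discover_packages(workspace):
    """Map package name -> set of build dependency names, from all package.xml files."""
    packages = {}
    for manifest in Path(workspace).rglob("package.xml"):
        root = ET.parse(manifest).getroot()
        name = root.findtext("name")
        deps = {el.text.strip() for el in root.iter("build_depend")}
        # <depend> implies a build dependency as well
        deps |= {el.text.strip() for el in root.iter("depend")}
        packages[name] = deps
    return packages


def build_order(packages):
    """Topologically sort packages; ignore dependencies outside the workspace."""
    graph = {name: deps & packages.keys() for name, deps in packages.items()}
    return list(TopologicalSorter(graph).static_order())
```

The per-package container, `rosdep`, and `bloom-generate` steps would then be driven from the resulting order.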
I find the best way to deal with this is to test in a container with just the base OS and ROS installed, and use `rosdep` to install system dependencies inside the container. That will catch any you have missed.
> I find the best way to deal with this is to test in a container with just the base OS and ROS installed, and use `rosdep` to install system dependencies inside the container.
You can just use the standard `ros-tooling/action-ros-ci` along with the provided ROS Docker images to do just that. No extra steps needed.
I just set it up over on this repo for anyone who wants to see it in action. (No pun intended!)
The case I want to consider is that your workspace contains two packages, `foo` and `bar`. Both of them actually depend on, say, `libjpeg`, and will fail to build without it. But while `foo` declares it with `<depend>libjpeg</depend>`, `bar` mistakenly does not.
If you simply `rosdep install` with both packages in the workspace, then `libjpeg` will be installed due to the declared dependency in `foo`, and then `bar` will successfully build because `libjpeg` is present, obscuring the issue.
Building each package in a separate container in the way I outlined would catch the mistake in `bar`. I’m not sure if anyone is going this far, though. It’s a lot of work.
I don’t know whether it counts as an advantage over those CI systems, but the reason my team created this is that we needed a CI matching the workflow we usually follow when developing a ROS 2 package. Usually, we develop on a standard Ubuntu system with the latest LTS. That system already exists as a Docker image, as in the link I mentioned previously. But it comes at a cost: there is still no Docker image for Windows, macOS, or the other OSes currently supported by ROS 2.
Another thing is dependency management. I know rosdep exists and handles dependency management for a ROS 2 package, but we rarely use it. We prefer to define everything we need to install ourselves, by calling apt, pip, etc.
Also, when the package depends on one of our own packages that is not included in the repository, the build process will always fail. We solved that issue by introducing a custom command that can be called anywhere during the process. By providing a command in the pre-build step, we can clone the external repository so it is included in the build workspace.
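As an illustration only (the repository URL and path here are placeholders, not from the original post), such a pre-build command could be as simple as a clone into the workspace:

```shell
# Hypothetical pre-build hook: fetch a private dependency into the
# workspace's src/ directory so it is built alongside the main package.
git clone https://github.com/example-org/private_pkg.git src/private_pkg
```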
In short, by using this, the build and test workflow is more customizable to our needs and resembles the workflow usually followed when building and testing ROS 2 on a standard system (Ubuntu LTS).
If you really want to build each package in a separate container, you should consider using a functional package manager like GNU Guix. Each Guix package is built in its own isolated environment where only explicit inputs to the build function for that package are visible.
It should be relatively straightforward to write an importer from ROS packages to Guix packages.
Then you could do all sorts of cool things, like easily create a container for a ROS package with only the necessary dependencies included and nothing more to minimize container size. Or you could use it to verify the binaries provided by some server really correspond to the source code for safety or security reasons.
We considered doing this in the standard devel jobs built into build.ros.org and build.ros2.org. However, the extra overhead of this approach was not worth the longer build times. For reference, for most packages the environment setup takes almost as long as, or longer than, the actual tests. If you have N packages, it takes approximately N times longer to test each package in isolation. In addition, you still have to build all M dependencies of package n from source (because they’re from the same checkout). Thus you are generally increasing your build time from N to something like the triangular number \frac{N^2 + N}{2}, depending on the dependency structure.
As you can see, this costs a lot of resources for every build, for a class of problem that occurs infrequently and is really quick and easy to fix. It certainly could be enabled, but it costs a lot of compute time. If you release as binary packages, they are built in full isolation and will catch anything that’s undeclared.
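Spelling out that count: in the worst case (a linear dependency chain), isolating package n means building its n-1 in-workspace dependencies plus the package itself, so the total number of package builds across the whole workspace is

```latex
% Worst case: a linear chain p_1 <- p_2 <- ... <- p_N.
% Testing package n in isolation rebuilds n-1 dependencies plus itself:
\sum_{n=1}^{N} n = \frac{N(N+1)}{2} = \frac{N^2 + N}{2}
```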
This sometimes causes first releases to fail, but it is usually fixed with one quick patch release. After that the maintainers are usually a little more conscious of the dependencies, and rarely is there a follow-up problem. Incurring ~N times the cost, plus a much more complex build that takes longer, has a lot of drawbacks compared to the likelihood that any specific change causes a masked dependency.
We thought about implementing your approach, but then realized that it is exactly what the buildfarm does when you make a binary release. It could be extended to do prereleases, but that too is very complex: you would need the ability to stage and cache multiple different versions of packages in repositories, and suddenly you’re creating full repositories for each individual build so it can be self-consistent but also isolated.
How do you share these dependencies with other developers or users who may want to leverage your package in another deployment or system?
I think you’ll find that the following workflow is quite standard for building and testing ROS 2 systems.

- Configure your workspace for the rosdistro of choice. (Unnecessary if you’re building everything from source.)
- Check out the source repos you’re interested in developing using a `.repos` or `.rosinstall` file (maybe generated by `rosinstall_generator` or manually curated).
- Use `rosdep` to install all declared dependencies.
- Build your workspace.
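Concretely, that workflow is usually just a handful of commands (the distro, workspace path, and `.repos` filename below are placeholders; adjust them to your setup):

```shell
# Standard source-build workflow for a ROS 2 workspace (sketch).
# Assumes ROS 2 plus the vcstool, rosdep, and colcon tools are installed.
source /opt/ros/humble/setup.bash               # pick your distro
mkdir -p ~/ws/src && cd ~/ws
vcs import src < my_project.repos               # or a .rosinstall file
rosdep update
rosdep install --from-paths src --ignore-src -y # install declared dependencies
colcon build
colcon test && colcon test-result --verbose
```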
A maintained package in any rosdistro can generally be expected to work using this workflow, so users don’t have to worry about researching any system dependencies. The existing CI systems follow it too, so developers can be more confident that it will work for them as well.
Could I suggest you make this very clear in the description of the action in the Marketplace?
As @tfoote already hints at, that doesn’t seem like a standard workflow, and might lead users to conclude they’ve done things according to accepted/best practices if/when they get things to work after using your action.
Again: this is not criticism. Different workflows can all lead to successful results.
Slightly off-topic, but could you give an example of how you make sure dependencies are installed? Do you run individual `apt install ..` and `pip install ..` commands? Or perhaps a wrapper `.bash` or `.bat` script?
I noticed this in the description of the action on the Marketplace page:
> Although, The CI process is guaranteed to be fast (approximately 3-5 minutes, depends on each package) as almost every component of the ROS 2 has already been installed inside the Docker’s image.
The problem @rgov describes is related to this: if your builds start with a Docker image which contains “almost every component” already, it becomes harder to detect that your own packages are not declaring all of their dependencies.
This is why the buildfarm (but also `ros-tooling/action-ros-ci` and `industrial_ci`) start with almost empty Docker images, such as `ubuntu:focal` or `debian:buster`. Any dependency not declared by the packages-to-be-built will not be present during the build, and will cause it to fail.
In a research project I’m involved in (robust-rosin/robust), we found that such build-configuration errors make up a large part of the bug population in ROS and ROS 2. Being able to detect them early (as @rgov wants to do, and as various CI tools support, see below) avoids running into them later.
At least `industrial_ci` supports this.
It allows you to hook into all steps of the build process; one of those hooks would allow you to clone additional repositories into underlays, or even build additional source packages after your main build has finished (to build separate test packages, for instance, in a downstream/overlay workspace).
And this is, I believe, exactly what `ros-tooling/action-ros-ci` and `industrial_ci` do.
Same workflow for ROS 1 workspaces.
Perhaps @ipa-mdl can comment, but with `industrial_ci`, you could set this up by pruning the set of packages to be built before `industrial_ci` runs `rosdep` to install all dependencies.
You’d get something like this (all performed by `industrial_ci`; these are not separate steps you need to do yourself, or script):

- set up the base build environment
- check out a copy of the repository to build
- prune `foo`, so we’re left with `bar`
- run `rosdep install ..`
- start the build

If `bar` does not `build_depend` on `libjpeg`, the build now fails.
The build with just `foo` will succeed, as it does declare the `build_depend`.