Why does the ROS2 distro need to be pinned to a specific Ubuntu LTS?

Hello folks,

Long-time ROS1 user here (although I haven’t actively used it in the last 2 years or so), and I was wondering why ROS2 releases need to be pinned to a specific Ubuntu LTS. I see in REP-2000 that macOS, Windows, and Debian get the latest ROS2 versions regardless of the specific OS version (except for really old versions), and I am wondering why Ubuntu gets a rougher deal.

So, my main question is: what Ubuntu-specific code makes newer ROS releases incompatible with previous Ubuntu LTS versions?

On a related track, I see some discussion on this forum about distributing ROS packages through a Conda channel (#11257). Is there a serious effort to do this? It would be incredibly beneficial to both developers and maintainers, since:

  1. Conda is (mostly) cross-platform. So, building binaries for the supported platforms and uploading them to a Conda channel is much easier than uploading packages to each of the Ubuntu, Debian, Homebrew, and Chocolatey repositories/registries.
  2. Dependency management and versioning are made easier.
  3. Workspaces are more intuitive as Conda environments, as opposed to having to resort to sourcing workspace setup files. This is especially useful when a developer uses non-standard/unsupported shells, like fish and PowerShell.

Just to be clear: Debian is in the same boat as Ubuntu. We pick a particular version of Debian to target, and that is what is supported for the lifetime of the ROS distribution (or the lifetime of the Debian distribution, whichever comes first).

So that leaves Windows and macOS. You are right that in both cases they are “rolling” distributions, and hence we have to keep up with the latest changes. However, that comes at a cost, both for Open Robotics and for the users.

The cost to Open Robotics is that we constantly have to rebuild and retest older ROS distributions against those platforms. Since we don’t have unlimited resources, the time spent there is taken away from work on new distributions.

The cost to users is that if they don’t have the exact versions of the dependencies that we built against on Windows or macOS, there is a (fairly high) risk that the binary distribution won’t work when they download it.

Our Ubuntu and Debian packages have none of these problems, because they are built against a stable platform where the API and ABI of the underlying packages are generally guaranteed.

I don’t have an exhaustive list, but things like the Python version, the glibc version, the OpenCV version, and other core libraries are usually the culprits.
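
To make that a bit more concrete, here is a quick Python check of a few of those pieces on whatever machine it runs on. The cv2 import is only an assumption (it needs python3-opencv or an equivalent installed), and the versions in the comments are just what Ubuntu 22.04 happens to ship:

```python
# Print a few of the platform pieces that tie a prebuilt ROS binary to a
# particular Ubuntu LTS: the Python interpreter, glibc, and a distro library.
import platform

print("Python:", platform.python_version())  # e.g. 3.10.x on Ubuntu 22.04
print("glibc :", platform.libc_ver()[1])     # e.g. 2.35 on Ubuntu 22.04

try:
    import cv2  # assumes python3-opencv (or similar) is installed
    print("OpenCV:", cv2.__version__)        # e.g. 4.5.x from the Ubuntu repos
except ImportError:
    print("OpenCV: not installed")
```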

It’s an interesting idea, but it has pros and cons. The pro is that you are in an isolated environment, so you don’t have to worry as much about the underlying OS packages. The con is that you are in an isolated environment, so you basically can’t use any distribution packages; you have to get everything from Conda. If what you need is in there, that’s great, but if it isn’t, you still have to build from source or get your dependency packaged into Conda. And since Conda is presumably also a rolling target, it has the drawbacks I pointed out above.

I will point out, however, that in the modern world of containers, much of this is moot. You can use one of the ROS 2 docker images to run ROS 2 on any machine (I personally use Fedora as my main OS, with all of my ROS 2 work in Ubuntu containers). There are sometimes tricky bits in getting networking between docker containers working, and in getting hardware passed through into the containers. But those are usually solvable, and for many things this setup works great.
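
For completeness, here is a rough sketch of that container workflow driven from Python via the docker SDK (pip install docker); in practice it is just the equivalent `docker run`, and the image tag and command below are only examples:

```python
# Rough sketch: run a throwaway ROS 2 container and capture its output.
# Requires the docker SDK for Python (pip install docker) and a running
# Docker daemon on the host.
import docker

client = docker.from_env()

output = client.containers.run(
    "osrf/ros:humble-desktop",       # one of the official ROS 2 images
    command="ros2 doctor --report",  # any ROS 2 command works here
    network_mode="host",             # host networking helps DDS discovery on Linux
    remove=True,                     # clean up the container when it exits
)
print(output.decode())
```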


Thanks, @clalancette, for pointing that out. :smile:

Can’t this be rectified by using CI/CD infrastructure like GitHub Actions, where each ROS package (and hence each package maintainer) runs its own testing across multiple platforms? That would also translate naturally into some form of package deployment (another use case for a Conda channel to which package developers can push updated binaries).

For package versioning and for compatibility with things like Python, glibc, and OpenCV, I feel like a stronger move to Conda (or at least a Conda-like development process) could help solve all of these.

For example, for glibc issues, conda-forge and PyPA both build their binaries on CentOS. See here and here for the exact details and the rationale.

Similarly, for Boost, Python, and OpenCV, declaring compatible versions of these as dependencies, downloading the required versions, and checking for conflicting dependencies can all be left to a package manager like Conda.
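
As a sketch of what that resolution step looks like (using Python’s packaging library; the package names and version pins below are made up purely for illustration, not what ROS actually declares):

```python
# Toy illustration of the constraint checking a resolver such as Conda does:
# compare declared version ranges against what is actually installed.
# Requires the third-party "packaging" library (pip install packaging).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical constraints a ROS package might declare.
declared = {
    "opencv": SpecifierSet(">=4.5,<4.6"),
    "boost": SpecifierSet(">=1.71,<1.75"),
    "python": SpecifierSet(">=3.8,<3.11"),
}

# Hypothetical versions present in the environment.
installed = {
    "opencv": Version("4.5.4"),
    "boost": Version("1.74.0"),
    "python": Version("3.10.6"),
}

for name, spec in declared.items():
    ok = installed[name] in spec
    print(f"{name}: installed {installed[name]}, required {spec}, satisfied: {ok}")
```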

I think the point I am trying to make is that ROS is a phenomenal piece of systems code and a great tool for building infrastructure, but the package management (which most of this essentially boils down to) needs some work. And I feel it makes sense to look at more successful attempts at package management and work towards that. We are already making great moves towards something like this, for example with package.xml format 3. :smile:

Yes, that helps with pointing out the problems. But then you also need to fix the problems, and that is where most of the time is spent. Homebrew, in particular, is typically very aggressive about pushing out updates to packages, even ones with API-breaking changes.

I’d encourage you to contribute to the effort around bringing ROS 2 to the Conda packaging environment. You should be able to reach out to the folks behind it at the link you posted earlier. I can’t promise we would make any change to make Conda the default, but if there is a critical mass of users using Conda, it would bear a closer look.