Why does the ROS2 distro need to be pinned to a specific Ubuntu LTS?

Hello folks,

Long-time ROS 1 user here (although I haven’t actively used it in the last two years or so), and I was wondering why ROS 2 versions need to be pinned to a specific Ubuntu LTS. I see in REP-2000 that macOS, Windows, and Debian get the latest ROS 2 versions regardless of the OS’s specific version (except for really old versions), and I’m wondering why Ubuntu gets a rougher deal.

So, my main question is: what distro-specific code makes subsequent ROS versions incompatible with previous Ubuntu LTS releases?

On a related track, I see some discussion on this forum about distributing ROS packages through a Conda channel (#11257). Is there a serious effort to do this? It would be incredibly beneficial to both developers and maintainers, as:

  1. Conda is (mostly) cross-platform. So, building binaries for the supported platforms and uploading them to a Conda channel is much easier than uploading packages to each of the Ubuntu, Debian, Homebrew, and Chocolatey repositories/registries.
  2. Dependency management and versioning are made easier.
  3. Workspaces are more intuitive as Conda environments, as opposed to having to resort to sourcing workspace setup files (see the sketch below). This is especially useful when a developer uses non-standard/unsupported shells, like fish and PowerShell.
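
To make point 3 concrete, here is a rough sketch of the workflow difference I have in mind (the channel and package names in the Conda half are placeholders, not an existing channel):

```bash
# Current workflow: workspace activation is shell-specific
source ~/ros2_ws/install/setup.bash        # bash/zsh; other shells need their own scripts

# Hypothetical Conda workflow ('my-ros-channel' and 'ros2-desktop' are made-up names)
conda create -n ros2_env -c my-ros-channel ros2-desktop
conda activate ros2_env                    # the same command works in bash, fish, PowerShell, ...
```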

Just to be clear: Debian is in the same boat as Ubuntu. We pick a particular version of Debian to target, and that is what is supported for the lifetime of the ROS distribution (or the lifetime of the Debian distribution, whichever ends first).

So that leaves Windows and macOS. You are right that in both cases they are “rolling” distributions, and hence we have to keep up with the latest changes. However, that comes at a cost, both for Open Robotics and for the users.

The cost to Open Robotics is that we are constantly having to rebuild and retest older distributions against those platforms. Since we don’t have unlimited resources, the time spent there is taken away from work on new distributions.

The cost to users is that if they don’t have the exact versions of the underlying packages that we built against on Windows or macOS, there is a (fairly high) risk that the binary distributions won’t work when they download them.

Our Ubuntu and Debian packages have none of these problems, because they are built against a stable platform where the API and ABI of the underlying packages is generally guaranteed.

I don’t have an exhaustive list, but things like the Python version, the glibc version, the OpenCV version, and other core libraries are usually the culprits.
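
As a rough illustration (not an exhaustive check), comparing the output of something like the following between two Ubuntu LTS releases shows why a binary built against one generally can’t run on the other:

```bash
# Versions of the usual API/ABI suspects on a given Ubuntu install
python3 --version
ldd --version | head -n 1                              # glibc version
dpkg -s libopencv-dev 2>/dev/null | grep '^Version'    # assumes libopencv-dev is installed
```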

It’s an interesting idea, but it has pros and cons. The pro is that you are in an isolated environment, so you don’t have to worry as much about the underlying OS packages. The con is that you are in an isolated environment, so you basically can’t use any distribution packages; you have to get everything from Conda. If what you need is in there, that’s great, but if it isn’t, you still have to build from source or get your dependency packaged into Conda. And since Conda is presumably also a rolling target, it has the drawbacks I pointed out above.

I will point out, however, that in the modern world of containers, much of this is moot. You can use one of the ROS 2 Docker images to run ROS 2 on any machine (I personally use Fedora as my main OS, with all of my ROS 2 work in Ubuntu containers). There are sometimes tricky bits in getting networking between Docker containers working and in getting hardware passed through into the containers, but those are usually solvable, and for many things this setup works great.
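
For reference, a minimal sketch of that kind of setup on a Linux host (the image tag, X11 forwarding, and device path are only examples and will vary per system):

```bash
# Run a ROS 2 container on a non-Ubuntu host.
# --net=host keeps DDS discovery simple; the X11 lines enable GUI tools on a Linux host;
# --device is just an example of hardware passthrough; the image tag is an example too.
docker run -it --rm \
  --net=host \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
  --device=/dev/ttyUSB0 \
  osrf/ros:humble-desktop
```

(Depending on the host, you may also need something like `xhost +local:` so the container is allowed to talk to the X server.)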


Thanks, @clalancette for pointing that out. :smile:

Can’t this be rectified by using CI/CD infrastructure like GitHub Actions, where each ROS package (and hence each package maintainer) runs its own testing across multiple platforms? This translates naturally into some form of package deployment too (another use case for a Conda channel, where package developers can push updated binaries).

For package versioning, and for compatibility with things like Python, glibc, and OpenCV, I feel like a stronger move to Conda (or at least a Conda-like development process) could help solve all of these.

For example, for glibc issues, conda-forge and PyPA both use CentOS to build binaries. See here and here for exact details and the rationale.

Similarly, with Boost, Python, and OpenCV, compatible versions can be declared as dependencies, and downloading the required versions and checking for conflicting dependencies can be left to a package manager like Conda.
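
As an illustration (the channel, package names, and version ranges below are made up for the example), the constraints could simply be handed to the solver:

```bash
# Illustrative only: let the Conda solver pick mutually compatible builds of the core deps
conda create -n ros_deps -c conda-forge \
  "python=3.8.*" \
  "boost-cpp>=1.72,<1.75" \
  "opencv>=4.2,<5"
```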

I think the point I am trying to make is that ROS is a phenomenal piece of systems code and a great tool for building infrastructure, but the package management (which most of this essentially boils down to) needs some work. And I feel it makes sense to look at more successful attempts at package management and work towards that. We are already making great moves towards something like this, for example with package.xml format 3. :smile:

Yes, that helps in pointing out the problems. But then you also need to fix the problems, and that is where most of the time is spent. Homebrew, in particular, is typically very aggressive about pushing out updates to packages, even ones with API-breaking changes.

I’d encourage you to contribute to the effort around bringing ROS 2 to the Conda packaging environment. You should be able to reach out to the folks behind it at the link you posted earlier. I can’t promise we would make any change to make Conda the default, but if there is a critical mass of users using Conda, it would bear a closer look.

I realize this is an older topic, but even five years later, ROS remains tightly coupled with Ubuntu distributions. Wouldn’t it make sense for ROS to adopt a solution like Nix to address dependency issues more effectively?

While Docker is mentioned as a workaround, many ROS packages rely on GUI applications that are not exposed through web interfaces. This adds another layer of complexity when trying to run these applications in containerized environments, especially when dealing with graphical interfaces.

It seems like a more robust, platform-agnostic approach (like Nix) could better address these challenges. I’d love to hear thoughts from the community on whether such a shift is feasible or if there are alternative approaches being considered.

You might want to check out the discussion that literally just happened on adopting Nix in the ROS ecosystem.

Nix aims for a high degree of platform independence through its declarative configuration language and isolated package environments. This creates a reproducible and portable software environment. However, the Nix package manager itself introduces a layer of abstraction that can be considered a platform. This platform, with its own set of dependencies and operational characteristics, has its own advantages and disadvantages.

While solutions like Distrobox can enhance platform independence by allowing users to run different Linux distributions within a container, they do not eliminate the need for managing dependencies or the inherent complexities associated with any given package management system.
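
For example (a sketch, with the image and box name chosen arbitrarily), Distrobox gives you an Ubuntu userland on any Linux host, but the apt dependencies inside the box are still yours to manage:

```bash
# Create and enter an Ubuntu 22.04 box on any Linux host; the home directory and
# display are shared with the host by default.
distrobox create --name ros-box --image ubuntu:22.04
distrobox enter ros-box
# ...then, inside the box, follow the usual apt-based ROS 2 installation steps.
```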

Nix offers a potential solution for these challenges by providing a robust and reproducible environment. However, it’s not a universal solution and may not be the best fit for every use case. The “ultimate” solution for platform independence likely does not exist, as the specific requirements and trade-offs vary depending on the context.

I think it’s also important to see this problem from the POV of a downstream package maintainer. With the current pinning of all package versions, I can be sure that if my package built once for a specific distro, it will most probably continue to build until the distro is EOL.

This is not true for rolling platforms, where I, as a package maintainer, have no way of planning ahead along the lines of “I know a new LTS is coming, so I have to find some time to upgrade my packages before it is released.” Instead, there is the constant fear that, at any time, your packages can break because an upstream system package rolled to a new, incompatible version.

I very much prefer the stability (at the cost of getting new features more slowly).
