I have read the (now closed) topic: Call For Testing: Standards-based Python packaging with colcon. But it seems not much has happened in the linked repo since then. I have some co-workers who are asking why they need to use setup.py to use ROS, so I’m investigating for them. Should I try using this colcon-python-project Colcon extension? Or was the effort abandoned? Should I be using something else?
I also found this Poetry Colcon extension. I feel indifferent about Poetry, but could try it out. I'd appreciate hearing any thoughts about it.
Hi, I’m the author of colcon-poetry-ros. The company I work at uses it for our ROS projects. I think whether or not you should use it depends on what pain points you’re feeling with the normal setup.py/setup.cfg system. Poetry has quite a nice, modern approach to dependency management. However, if you aren’t interested in Poetry I don’t know if it’s worth introducing colcon-poetry-ros to your stack just for pyproject.toml support.
I’ve been wondering how to create a venv with the dependencies from a package.xml installed in it. Is this something that any of these tools supports? It sounds like colcon-poetry-ros goes in the opposite direction, letting the ROS build system use pyproject.toml.
This topic has been re-hashed several times, so I’ll try to keep this brief (and probably fail to do so).
Two efforts to consider here:
1. Supporting standards-based Python packages in ROS tooling (both colcon for local development and the buildfarm for released packages)
2. Switching existing packages to a standards-based build pipeline
Obviously (2) will require (1). We haven’t even started working on buildfarm support. The colcon-python-project prototype still works as far as I know, and you’re free to give it a try. There are a few sharp edges that you should be aware of:
Most notably, colcon “symlink installs” don’t entirely work. In particular, any extra “data files” from setuptools are completely missing. The Python package itself gets installed, but those data files are used for critical tasks in ROS like ament index registration and launch file installation. If they were getting copied instead of symlinked, we might be able to live with the regressed behavior, but their omission is a hard blocker to switching to the new pipeline (the sketch after this list shows the kind of data files involved).
The standards-based build APIs sometimes force us to do things that are slow. In particular, package identification (determining the name, version, and dependencies of a package) can be much slower than with the current setuptools-only code.
In addition to slow identification, some packages will take longer to “build” than they do with the setuptools-only code, due to the compression/decompression cycle that the standards-based APIs force us to perform.
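For context, the “data files” mentioned above are the entries a typical ament_python package declares through setuptools; a minimal sketch (the package name and launch file path are illustrative):

from setuptools import setup

package_name = 'my_pkg'  # illustrative name

setup(
    name=package_name,
    version='0.0.1',
    packages=[package_name],
    data_files=[
        # marker file for ament index registration
        ('share/ament_index/resource_index/packages',
         ['resource/' + package_name]),
        # the package manifest
        ('share/' + package_name, ['package.xml']),
        # launch files installed alongside the package
        ('share/' + package_name + '/launch', ['launch/demo.launch.py']),
    ],
)

Entries like these are exactly what goes missing under a symlink install with the new pipeline.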
Next steps in colcon’s standards-based development involve folding the functionality from colcon-python-project into colcon-core and hiding it behind a “feature flag” so that it’s in a disabled-by-default state. I’ve already started this process. I don’t think we’ll consider flipping it on by default until we solve the symlinking problem. Eventually the “legacy” setuptools-only Python build pipeline will get yanked out into a separate package so that colcon can continue to function on platforms we support which don’t have new enough Python packages to perform standards-based builds at all.
A fair amount of the friction here is that PyPA is moving at a breakneck pace compared to the operating system distributions. They’re removing features from setuptools that have widespread use on Ubuntu Focal, which we still need to support. It’s hard to be optimistic about the move-fast-and-break-things approach from this perspective.
Now, virtual environments are an entirely different challenge. ROS package dependencies installed using rosdep will install Python packages for the system’s default interpreter. When you activate a virtual environment, you’re changing which interpreter is invoked by the python or python3 command. Depending on how you created the virtual environment, this may isolate you from the system packages that rosdep installed. Another complicating factor is that when colcon is installed outside of the virtual environment, the colcon executable will always use the Python interpreter it was installed with, regardless of which virtual environment you’ve activated, and will then use that same interpreter to build packages, again ignoring your virtual environment.
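A small way to see that last point on your own machine (just a sketch; it assumes colcon was installed as a regular console script and is on your PATH):

# Compare the interpreter your shell/venv resolves to with the interpreter
# the colcon entry point is pinned to via its shebang line.
import shutil
import sys

print('active interpreter:', sys.executable)

colcon_script = shutil.which('colcon')
if colcon_script:
    with open(colcon_script) as f:
        print('colcon shebang:', f.readline().strip())

Run it from inside and outside an activated virtual environment: the first line changes, the shebang stays the same.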
If you simply must use a virtual environment, you can make things work if you go “all-in”. Ignore ALL python packages on your system and install EVERYTHING in a virtual environment, including your build tools and all dependencies. It’s not easy to do.
For the venv itself, --system-site-packages exposes the ROS packages and other system-wide dependencies to our project, and --allow-existing ensures that the .venv isn’t rebuilt if it already exists (possibly not relevant to this thread).
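So the creation step is something like:
uv venv --system-site-packages --allow-existing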
We then use uv to lock our dependencies with:
uv lock
and install our dependencies with:
uv sync --frozen --inexact --no-build-isolation
--frozen uses uv.lock and won’t regenerate the lock file if it differs from pyproject.toml
--inexact allows existing packages to stay in the venv; by default uv would remove them. We do this because we have multiple Python projects in the same repo sharing a venv.
--no-build-isolation - TBH I forget what this does, but some of the Python dependencies with native code wouldn’t compile without it. Oddly, our other Python project is a Cython project and it doesn’t work if this flag is specified, so YMMV on this one.
With this setup we are able to manage our dependencies independently of (but built on top of) ROS and inside a venv, while still importing the rclpy libraries. It’s given us a lot of flexibility and keeps things “standard”.
I should note that currently we launch our project directly with something like python package/src/main.py, and we haven’t integrated fully with the ROS launch system. I’m not sure if that’s possible, though I don’t think it’s a big problem either way.
I’m not sure if this is useful to anyone else, but it’s what has been working for us so far.
Back in the ROS 1 days we used rospypi/simple and didn’t even source a ROS workspace; we just ran ROS isolated in a Docker environment. It was wonderful. Sadly that’s not an option with ROS 2.
I don’t understand the underlying systems that well, but I’ve always wondered why the install folder of a ROS workspace can’t serve as a virtualenv at the same time (probably using the system interpreter and --system-site-packages, to be able to see and use all system-installed ROS packages). rosdep could install Python packages into this folder if asked, too.
Then you’d have a self-contained folder that carries everything you need and that you can easily activate by just sourcing setup.bash.
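Something like this stdlib sketch is roughly what I have in mind (just an illustration, not something colcon does today, and the path is arbitrary):

import venv

# Create a venv inside the workspace's install folder, using the system
# interpreter and with visibility of system-installed ROS packages.
builder = venv.EnvBuilder(system_site_packages=True, with_pip=True)
builder.create('install/.venv')  # illustrative location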
You can, and that’s partially what colcon-poetry-ros does. I haven’t messed with changing where rosdep installs things, though, so --system-site-packages still needs to be enabled.
FWIW, we’ve used some shenanigans in our setup.py files to just read the redundant data from the package.xml, ending up with a mostly generic setup.py in the process. I would love to switch to a pyproject.toml-based approach though, so I’ll be watching this.
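Roughly, the idea looks like this (a simplified sketch, not our exact file; real field handling needs to be more careful):

# Pull the redundant metadata out of package.xml instead of repeating it.
import xml.etree.ElementTree as ElementTree
from setuptools import setup

manifest = ElementTree.parse('package.xml').getroot()
package_name = manifest.find('name').text

setup(
    name=package_name,
    version=manifest.find('version').text,
    description=manifest.find('description').text.strip(),
    maintainer=manifest.find('maintainer').text,
    maintainer_email=manifest.find('maintainer').get('email'),
    packages=[package_name],
)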
I looked but didn’t really find anything. Are there past threads about this I’m missing and should read? (Links would be appreciated.) Or is that discussion happening elsewhere?
Reading between the lines, it seems that if I’m using colcon-python-project then I should not use --symlink-install. Is this something that could be fixed, or is it not possible to implement?
We are not currently using virtual environments but have considered going in that direction. I think we are fine with installing ROS packages as system dependencies, which our virtual environments would access, understanding that the Python version would need to be consistent.
The main thing we would want to achieve with virtual environments is supporting different versions of dependencies in different components. For example, we have had issues with different code wanting different, conflicting versions of PyTorch. We currently solve this problem with Docker.
There are circumstances where it works correctly, but I would indeed recommend avoiding the use of both until we’ve arrived at some sort of resolution.
While it is a recurring point of discussion on ROS Discourse that (installation aside) both C++ and Python ROS packages can be consumed as regular CMake and Python packages, I have noticed that many developers (especially less experienced ones) do not realize this and think that using ROS tools like colcon is compulsory for consuming them, so thanks for sharing this from the Python point of view.
I worked with a similar setup (a virtualenv with --system-site-packages to reuse apt packages, even if at the time it was managed with just pip/venv), but something that may be important to monitor over time is transitive dependencies. If one of your dependencies at some point starts depending on a version of a given Python library with a C++ implementation that is newer than the one available from apt, pip will install the newer library in the venv, ignoring the apt version. That creates an ABI issue at runtime if some other apt-installed library assumed the apt ABI and is not compatible with the newer pip-installed ABI. Fortunately such cases are relatively rare.
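A quick way to notice when this has happened is to check where the venv actually resolves such a package from (a sketch; numpy is only an illustrative example of a package with native code that also ships via apt):

import numpy

print(numpy.__version__)
# A path under your venv's site-packages (rather than /usr/lib/python3/dist-packages)
# means the apt copy is being shadowed by a pip-installed one.
print(numpy.__file__)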
Regarding --no-build-isolation: by default, modern Python tooling installs your build dependencies in a self-contained, isolated environment. If you instead want to use the build dependencies from your system (a bit like you do with normal dependencies in the venv with --system-site-packages), you indeed always need to pass --no-build-isolation.