
Building (without colcon?) and installing to system?

Hi all,
My “standard” way of deploying production-ready ROS1 packages was to install them to the system, using a script like this:

Basically, when run from any ROS1 package folder, it will build and install the package to /opt/ros/<kinetic, melodic, noetic, …>
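The original script isn't shown above, but a minimal sketch of that kind of "build from the package folder, install to the system prefix" script might look like this (the function names and the assumption of a plain CMake-based ROS 1 package are mine, not the poster's):

```shell
# Hypothetical sketch only -- the poster's actual script is not shown.
# Assumes a plain CMake-based ROS 1 package and a standard /opt/ros layout.

ros_install_prefix() {
    # Map a distro name to its system prefix, e.g. noetic -> /opt/ros/noetic
    echo "/opt/ros/${1:?usage: ros_install_prefix <distro>}"
}

system_install() {
    # Run from a package folder: configure, build, and install straight
    # into the system prefix (sudo is needed to write under /opt/ros).
    local prefix
    prefix=$(ros_install_prefix "$1") || return 1
    mkdir -p build && cd build || return 1
    cmake .. -DCMAKE_INSTALL_PREFIX="$prefix"
    make -j"$(nproc)"
    sudo make install
}

ros_install_prefix noetic   # prints /opt/ros/noetic
```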

I want to build a similar script for ROS2, so that I can just git clone a ROS package and directly install it no-questions-asked without needing to manually create a dev workspace, or having to put it into a dev workspace, but colcon doesn’t have an install option. Suggestions?


I can’t say if what you’re doing is a good idea…

But colcon always installs; there's no separate step for installing. You can control where it goes with --install-base, and you'll want to make sure you're using --merge-install, otherwise it will put each package into a separate folder of the install base path. See: build - Build Packages — colcon documentation
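For example (flag spellings per the linked colcon docs; the target distro here is an assumption):

```shell
# Assemble the invocation: --install-base picks the destination,
# --merge-install collapses per-package subfolders into one shared prefix.
INSTALL_BASE=/opt/ros/foxy   # assumed target; any path works
CMD="colcon build --merge-install --install-base $INSTALL_BASE"
echo "$CMD"   # prints: colcon build --merge-install --install-base /opt/ros/foxy
```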

If I specify /opt/ros/foxy as the install destination, won’t it overwrite the system setup.bash?
I don’t want it to create setup.* files, I just want it to install to /opt/ros/foxy/lib/, /opt/ros/foxy/share/, etc.

Personally, I feel the whole "cd ~/dev_ws/; source install/setup.bash" routine is really not a good way to deploy production ROS nodes. I find it much more stable to install everything to /opt/ros, put the launch file in /etc/robot.launch, and run the launch file as a systemd service. That feels much cleaner in production than having a bunch of colcon/catkin "crap" lying around a user's home directory; for production hardware I'd generally want home directories to be empty.
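As a concrete illustration of that deployment style, a systemd unit along these lines could run the launch file at boot (a sketch only; the unit name, distro, and paths are assumptions):

```ini
# Hypothetical /etc/systemd/system/robot.service (all names/paths assumed)
[Unit]
Description=Robot bringup
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash -c 'source /opt/ros/noetic/setup.bash && roslaunch /etc/robot.launch'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with systemctl enable robot.service would start the launch file on every boot, with no user workspace involved.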

The other reason I do this is that there are often several packages I'm working on, say pkga, pkgb, and pkgc, each with a stable version and an unstable (dev) version in progress. I want to create three different dev workspaces, dev_ws_a, dev_ws_b, dev_ws_c, that each run the stable versions of all packages /except/ the unstable version of the one package being worked on in that workspace. So for example, in dev_ws_a/src I would clone the latest dev branch of pkga but clone nothing else; the system would automatically fall back to the /opt/ros (stable) versions of pkgb and pkgc while I'm working out of dev_ws_a. And then when I want to do some work on pkgb, isolated to pkgb and using the stable version of pkga, I would switch to dev_ws_b and work there.

For this kind of setup to be clean and easy, I would want all stable versions installed to /opt/ros, otherwise the dev workspaces become a mess.
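The overlay mechanism supports exactly this: source /opt/ros first as the underlay, then the dev workspace on top, and anything not built in the workspace falls back to the system copies. A hedged sketch (the clone URL, branch, and distro are made up for illustration):

```shell
# Illustrative only -- the clone URL, branch, and distro are assumptions.
UNDERLAY=/opt/ros/foxy     # stable pkga, pkgb, pkgc installed here
OVERLAY=~/dev_ws_a         # holds only the dev branch of pkga
echo "underlay: $UNDERLAY  overlay: $OVERLAY"

# The actual workflow (not run here, since it needs a ROS install):
# mkdir -p ~/dev_ws_a/src && cd ~/dev_ws_a/src
# git clone -b dev https://example.com/pkga.git
# cd ~/dev_ws_a
# source /opt/ros/foxy/setup.bash    # underlay: stable versions
# colcon build
# source install/setup.bash          # overlay: dev pkga shadows stable pkga
```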

The way we handle this is to build debs out of our internal CI builds, then install those debs into Docker images for production releases.

Local workspace → testing local code
Merge to Develop → CI build creates package
Update package on machine image → PR complete

In this way, if we need to test field changes we can source a workspace and swap out nodes as needed; but otherwise, unless something is installed at the system level through a deb, it won't run in a default launch at all.
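For a single package, the deb-building step can be sketched with bloom, the standard tool for generating ROS debian metadata (the package name, distro, and OS here are assumptions):

```shell
# Helper mirroring the ROS deb naming convention:
# package my_pkg on distro foxy -> ros-foxy-my-pkg
deb_name() {
    echo "ros-${2}-$(echo "$1" | tr '_' '-')"
}
deb_name my_pkg foxy   # prints ros-foxy-my-pkg

# The generation itself (not run here; needs bloom and a package checkout):
# cd ~/src/my_pkg
# bloom-generate rosdebian --os-name ubuntu --os-version focal --ros-distro foxy
# fakeroot debian/rules binary      # produces ../ros-foxy-my-pkg_*.deb
# sudo apt install ../ros-foxy-my-pkg_*.deb
```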

I think so, but I also think there is a way to disable the creation of that setup file (and therefore its installation). In any case, that setup file should be the same no matter what is installed there.

I can’t find it offhand, but you could also install to a temporary directory and then copy everything but the setup file. But again, that setup file doesn’t change anyway, so overwriting it shouldn’t be an issue, I think (you’d need to verify that).
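The "install to a temporary directory, then copy everything except the setup files" idea can be demonstrated with plain shell; here the stage directory stands in for a colcon install base (with rsync you would use --exclude 'setup.*' instead of the loop):

```shell
# Toy demo: stage/ stands in for a colcon install tree; copy everything
# except the setup.* / local_setup.* files into the real prefix.
demo=$(mktemp -d)
mkdir -p "$demo/stage" "$demo/prefix"
touch "$demo/stage/setup.bash" "$demo/stage/local_setup.bash"
mkdir -p "$demo/stage/lib" "$demo/stage/share"
(
  cd "$demo/stage"
  for f in *; do
    case $f in
      setup.*|local_setup.*) ;;        # skip the environment files
      *) cp -r "$f" "$demo/prefix/" ;;
    esac
  done
)
ls "$demo/prefix"   # lib and share, but no setup files
```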


We have a single .deb that contributes this file in our buildfarm:

And then none of the other .debs for ROS packages install it. This is a hard requirement from dpkg: no two .debs may install the same file.

Perhaps @nuclearsandwich or @cottsay could explain better than me.


As far as I know, though, we don’t use colcon to build binary packages; instead we run cmake/make/make install (or something equivalent) for each package, or other logic in the case of a Python package. You could do the same, if your script handles both CMake and Python setuptools packages, or if you ensure only CMake-based packages are used. Basically, you don’t have to use colcon, but it automates a lot of the logic needed to build things in the right order and with the right dependencies sourced.
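A hedged sketch of that per-package route for a CMake-based package (only the function definition is shown; actually running it needs a sourced ROS environment and a package checkout):

```shell
# Sketch: build one CMake-based package without colcon and install it
# into a system prefix. Assumes its dependencies are already sourced.
build_and_install_cmake_pkg() {
    local prefix=${1:?usage: build_and_install_cmake_pkg <prefix>}
    mkdir -p build && cd build || return 1
    cmake .. -DCMAKE_INSTALL_PREFIX="$prefix"
    cmake --build . -- -j"$(nproc)"
    sudo cmake --install .   # CMake >= 3.15; older: sudo make install
}

# Usage (not run here): build_and_install_cmake_pkg /opt/ros/foxy
# A setuptools package would need different handling (roughly what
# colcon automates), which is why a script has to cover both cases.
```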

It may also be instructive to read this (and verify it by inspecting a local colcon install folder):

https://design.ros2.org/articles/ament.html#environment-creation

(here “ament tool” has been replaced by colcon in the intervening years)