Announcing ROS Docker Images for ARM and Debian

TL;DR: Support for both ARM and Debian with ROS is now reflected in the Official DockerHub library! :whale:

Hello everyone!

As you might have noticed, DockerHub is beginning to support architectures other than amd64 [1]. So I’ve expanded our Dockerfile maintenance infrastructure for the official ROS images to enable ARM support.

Additionally, while refactoring, support for multiple operating systems (i.e. Debian-based ROS images) has been enabled, extending to the supported ARM architectures as well. To see the listing of supported suites, distros, and architectures in the official DockerHub library, you can view the manifest for ROS here [2]:


  • New tags have been added to specify the operating system suite via an appended suffix
    • E.g. kinetic-ros-base-xenial, kinetic-ros-base-jessie
  • There are no changes to the original set of tags, as they still point to the same suite
    • E.g. kinetic <=> kinetic-ros-base <=> kinetic-ros-base-xenial
    • This is additionally true for amd64-tagged images hosted from the osrf/ros automated repo
  • The official registry will internally negotiate what arch is pulled via the manifest
    • E.g. if docker-engine host is arm64v8, docker pull ros should pull an arm64v8 image
  • Multi-architecture ROS images are also mirrored under separate docker hub organizations
    • E.g. docker pull arm64v8/ros OR docker pull arm32v7/ros:lunar
    • You may reference <arch>/ros:<tag> to specifically pull a target architecture
  • There is some build scaffolding you can follow for multi-architecture image builds for ROS
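To make the tag and repo conventions above concrete, here is a hedged sketch of the pull commands (the exact set of available tags is listed in the library manifest; running these requires a docker daemon):

```shell
# Pull the default suite; the registry resolves the architecture
# of the local docker host via the manifest
docker pull ros:kinetic

# Pull an explicit suite via the appended suffix
docker pull ros:kinetic-ros-base-xenial

# Pull a specific target architecture via the <arch>/ros:<tag> form
docker pull arm64v8/ros:kinetic-ros-base-xenial
docker pull arm32v7/ros:lunar
```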

This is all fairly new, so if you’d like to start learning more, here’s a relatively recent article on the subject [3]:

Of course, if you’d like to play around with any of the ARM images but don’t have a Raspberry Pi or other ARM-based platform lying around, you can easily emulate one via qemu-user and binfmt-support. By installing the necessary binfmt-support kernel module and qemu-user static binaries on the host, you can run commands within the ARM environment, e.g. on your amd64 workstation. This may require forthcoming patches to your Debian binfmt-support package depending upon your distribution, so should you encounter runtime issues, you may follow the instructions here [4].

For example:

$ sudo apt install qemu-user-static

$ uname -a
Linux ubuntu 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

$ docker run -it arm64v8/ros:lunar-ros-core-stretch uname -a
Unable to find image 'arm64v8/ros:lunar-ros-core-stretch' locally
lunar-ros-core-stretch: Pulling from arm64v8/ros
774bc81cd4dd: Pull complete 
Digest: sha256:dd88dce3f840cc963a61881a1da4f36f1c66214dd1b0029fa433580a4f5a142f
Status: Downloaded newer image for arm64v8/ros:lunar-ros-core-stretch
Linux a2a63cc39389 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 aarch64 GNU/Linux

$ docker run -it arm64v8/ros:lunar-ros-core-stretch cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION="9 (stretch)"
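As a sanity check on the emulation setup, you can confirm that the qemu binfmt handlers were registered with the kernel (a sketch assuming a Debian/Ubuntu host with binfmt-support and qemu-user-static installed):

```shell
# binfmt_misc handlers registered with the kernel; qemu-user-static
# should have added one entry per foreign architecture
ls /proc/sys/fs/binfmt_misc/

# Inspect a handler to confirm it is enabled and to see which
# interpreter path the kernel will invoke for aarch64 binaries
cat /proc/sys/fs/binfmt_misc/qemu-aarch64
```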

If you find issues with the images, please be sure to ticket them here [5]:

Also don’t forget to share our official repo [6] so others might discover it!




Although some i386 binaries are supplied by the ROS buildfarm, I’ve deliberately omitted that architecture for now, given:

  1. i386 binaries for docker-engine are not officially shipped or supported by Docker
  2. Current traffic for i386 ROS packages is below that for arm

Very nice!
We’ve been successfully using ARM docker images for ROS in our CI setup for over a year now, and it really makes things a lot easier.
We also included the static qemu binaries in our ARM images to make it easier to run them on our amd64 machines for testing. Is there any plan to also include qemu in the official images?


I would vote against this. Instruction translation is built into Docker CE in (recent versions of) Linux, Mac, and Windows 10 Pro. Rather than including the binaries by default and pushing the additional bloat onto users who might not need them, it’d be more important to me to keep the images small, without having to remember to remove these binaries before deployment.

Hmm, a convenient idea at first, but just like the official Ubuntu or Debian images that serve as the base images for ROS, our ROS images similarly serve as a primary starting platform for many applications, be it continuous integration, target deployment, etc. I would concur with @computermouth that including qemu in the official ROS images may be too much of a niche use case to justify the larger base image size, which would require additional bandwidth when shipping to, and storage on, resource-constrained embedded targets. Additionally, the current multi-arch generator setup is such that the Dockerfiles themselves are designed to be architecture agnostic, keeping it simple to add support for future platforms as they emerge. Necessitating platform-specific alterations would complicate maintenance a bit.

I didn’t yet reference this given some issues related to this capability on Ubuntu and Debian [1], and so instead opted to give a legacy example of mounting the files, which I know works currently. However, I suspect that once some additional issues [2,3] (thanks for starting those, by the way, @computermouth) have been patched upstream, mounting or baking qemu files inside the image should no longer be necessary. Testing with my latest apt sources still fails, but please report back, @computermouth, when those patches make it into the Debian and Ubuntu releases.

[1] Qemu instruction translation for ARM · Issue #56 · docker/for-linux · GitHub


Tis I, computermouth! The bummer is that it’s a problem with Debian and Ubuntu’s distribution of the qemu binaries (or rather, some supporting files, which I’ve backed up here: GitHub - computermouth/qemu-static-conf: Docker cross-architecture builds without putting qemu-*-static in the image).

However, it does work out of the box with the latest Fedora. So I assume it’s something coming down the pipe in the qemu distributions, and any decisions in the design of your containers should keep that in mind.


Hmm. I like the current working solution you have written up in your repo more than suggesting folks mount qemu files (as done with cross-docker [1]). It also simplifies building Dockerfiles from images of other architectures. I have updated the original post to reflect this interim solution, in anticipation of the --fix-binary flag option being added to binfmt-support for Debian.

[1] GitHub - justincormack/cross-docker: run docker containers for different architectures
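For reference, the legacy mounting workaround mentioned above bind-mounts the static qemu interpreter into the container, so the kernel’s binfmt handler can find it at the registered path (a sketch; the interpreter path may differ by distribution):

```shell
# Legacy workaround: bind-mount the static interpreter into the
# container at the path the binfmt handler expects
docker run -it \
  -v /usr/bin/qemu-aarch64-static:/usr/bin/qemu-aarch64-static \
  arm64v8/ros:lunar-ros-core-stretch uname -m
```

With the fix-binary (F) flag, the kernel instead loads the interpreter once at registration time, so neither mounting nor baking it into the image should be needed.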



arm32v7 images for kinetic and lunar have now been released. Although the upstream issue with Ubuntu’s cloud image for trusty remains [1], blocking older ROS distro tags that target 14.04 such as indigo, you can still at least use the latest LTS for ROS on your older ARM targets, e.g. a Raspberry Pi 2.

$ docker run -it arm32v7/ros uname -a
Unable to find image 'arm32v7/ros:latest' locally
latest: Pulling from arm32v7/ros
93170abd0836: Pull complete 
Digest: sha256:8cac76fded9e3393bcf6c5605e074829780d1ad58029a0b1f58fd9a3ec23862c
Status: Downloaded newer image for arm32v7/ros:latest
Linux 4ed135bbaf79 4.11.3-041103-generic #201705251233 SMP Thu May 25 16:34:52 UTC 2017 armv7l armv7l armv7l GNU/Linux

Additionally, the official docker library now natively supports manifest lists [2]! So instead of the previous docker pull trollin/ros, you should now simply be able to run docker pull ros to download the image matching your docker host’s architecture. Should you need to pull a foreign architecture, you can still do so by specifying the repo path, e.g. docker pull arm32v7/ros.
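To see which architectures a given tag provides, you can inspect its manifest list (a sketch; docker manifest was an experimental CLI feature, so it may need to be enabled depending on your docker version):

```shell
# Show the per-architecture digests behind a multi-arch tag
docker manifest inspect ros:kinetic

# Pull natively (arch negotiated via the manifest list) ...
docker pull ros
# ... or pull a foreign architecture explicitly by repo path
docker pull arm32v7/ros
```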



Alright, looks like the fix for the Ubuntu Trusty cloud image for arm32v7 has made its way through the Docker Library pipeline, so now all ROS distros should have arm32v7 images [1].

$ docker run -it arm32v7/ros:indigo-ros-core uname -a
indigo-ros-core: Pulling from arm32v7/ros
98ab4a8d51bc: Pull complete 
Digest: sha256:da0c9394f128e748724aa7d1773bd483e0414d395f3956dc62a92d6d46b41d92
Status: Downloaded newer image for arm32v7/ros:indigo-ros-core
Linux ad0a168953c8 4.13.10-041310-generic #201710270531 SMP Fri Oct 27 09:33:21 UTC 2017 armv7l armv7l armv7l GNU/Linux

Additionally, as an updated reference for the Docker multi-arch article from IBM that I linked to in the original post, here is a more recent talk on the subject by Phil Estes (IBM Cloud) and Michael Friis (Docker, Inc.) from DockerCon EU 2017 [2]:

Docker Multi-arch All the Things

In this talk, Phil and Michael will talk about how Docker was extended from x86 Linux to Windows, ARM, and IBM’s z Systems mainframe and Power platforms. They will cover the work and architecture that make it possible to run Docker on different CPU architectures and operating systems; how porting Docker to a new OS differs from porting it to new hardware; what it means for a Docker image to be multi-arch (and how multi-arch images are built and maintained); and how Docker correctly deploys and schedules apps on heterogeneous swarms. Phil and Michael will also demo some of the new features that let Docker Enterprise Edition manage swarms with both x86 Linux and Windows nodes as well as mainframes.

[1] Bug #1711735 “source.list for armhf includes trusty-security whi...” : Bugs : cloud-images


I was wondering what the update policy for these docker images is?
Also, is there any way to quickly see from which distro sync they were built?

Especially asking since the last sync to indigo and kinetic accidentally included an ABI breakage [1] in the nodelet package; if you build packages on your CI with the current docker images, they will segfault when you run them on your own up-to-date machine.

It would be nice if we could:
a) somehow include the distro sync (date?) in the image (maybe as a label or so?)
b) automatically build new images when there was a sync



Right now the Docker images are updated manually at @ruffsl discretion.

He’s been working on automating the process; right now all the images can be updated using some metadata, but it is not hooked up to the buildfarm yet. We’re planning on having a job on the farm triggered automatically after every sync to update these images; that job will also be run periodically to ensure there’s no unexpected diff in the Dockerfiles and that they still build.

Providing the date of the sync may be a bit more tricky, as the docker images get rebuilt every time the base images change, so it would not reflect the actual content of the image.

Ok, let me know if I can help with anything there…

On a side note: is there any possibility of getting the debian packages for older versions from somewhere (i.e. from before the last sync)?

Regarding the update of the docker images: the last build was apparently 24 days ago, so they should still have the previous version for indigo (kinetic and lunar received the nodelet update earlier, so they already have the new version; see nodelet_core: 1.9.13-0 in 'kinetic/distribution.yaml' [bloom] by mikaelarguedas · Pull Request #16248 · ros/rosdistro · GitHub and nodelet_core: 1.9.11-0 in 'lunar/distribution.yaml' [bloom] by mikaelarguedas · Pull Request #15631 · ros/rosdistro · GitHub).
@ruffsl do we have a way to trigger rebuilds of the images in the official docker library? Or is our only option to submit an updated set of commit hashes?

As for buildfarm integration, that’s something I’d like to see before the melodic release. The goal would be to modify superflore to add a docker entry point / “generator” and, on the buildfarm side, create jobs to run it on syncs. I’ll look into it in more detail in a few weeks and get the ball rolling.

Unfortunately no, not that I’m aware of.

In regards to controlling how ROS syncs are released into the builds of the docker images, the safeguard thus far has been pinning the installed ROS packages by version number. However, from the looks of things (the ABI break was from July, but the last version bump in the indigo Dockerfile was back in May [1]), it seems that didn’t stop it, due to implicit dependency changes in the sync?

As @marguedas mentioned, the triggers for rebuilding the official images are either merged PRs to the official library manifest (pertinent to the image:tag desired), or a rebuild of a parent base image, like the official debian or ubuntu images.

We can also ask the library maintainers directly to manually trigger a rebuild of the images, say by way of ticketing a request on the library GitHub repo. I’ve done so a number of times before.

@flixr, I think you may find it useful to know that past Official Library images are archived on the docker registry and can be retrieved. As for the longevity of such archives, I don’t yet know what it is (haven’t checked), but they seem to go back quite a ways. This can be done by pulling the image by its digest (an immutable identifier) [2]:

docker pull ros@sha256:<older_ros_sha256_here>

If you have an older image lying around, you can use the docker CLI to retrieve the digest (I haven’t found a good way to retrieve the archived list of available digests for a given repo from the docker registry as of yet, but would like to know of one [3][4]). If your CI is sensitive to ROS syncs, pinning the digest of the image in your Dockerfiles may be a way to mitigate unplanned disruptions. Just be aware that the FROM image would then be static and would not receive the latest security updates from upstream, as generally tracked tags do.

[1] Blaming docker_images/ros/indigo/ubuntu/trusty/ros-core/Dockerfile at ecf2b15a56686c6c7d3fc710c1753cfe3e5e9067 · osrf/docker_images · GitHub
[2] docker pull | Docker Docs
[4] Get image digest from remote registry via API - Open Source Registry API - Docker Community Forums
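A sketch of what that digest pinning could look like in practice (the digest is left as a placeholder; a docker daemon is assumed):

```shell
# List locally pulled ros images along with their registry digests
docker images --digests ros

# A Dockerfile could then pin its base image by that immutable
# digest instead of a tag, e.g.:
#   FROM ros@sha256:<older_ros_sha256_here>
```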

I’ve always wondered why that was/is done: earlier versions of packages aren’t retained in reprepro repositories, so wouldn’t the Docker build just fail if the package isn’t found anymore?

Also: only the top-level metapackage is pinned, right?

This was done intentionally for the official images just for that purpose.
See the context for the decision here:

Also, we now pin the version used for each ROS package. Note, however, that we are using reprepro (a tool to handle local repositories of debian packages), which, as stated in FAQ 3.1, is limited to one version per architecture. So from the Repeatability documentation referenced:

“or the build should fail outright”

Dockerfiles will fail to build until updated once a newer version of a package is released.

In the Dockerfiles for the official images, only the target application-focused packages are pinned. There are sometimes other supporting packages installed, e.g. gnupg2, but they are not necessarily pinned.
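As a hedged illustration of that pinning pattern (the version number here is made up for the example), the install step inside such a Dockerfile pins only the target metapackage:

```shell
# Pin only the target ROS metapackage by exact version (version
# shown is illustrative); supporting packages like gnupg2 are
# installed unpinned
apt-get update && apt-get install -y \
    gnupg2 \
    ros-kinetic-ros-core=1.3.2-0* \
  && rm -rf /var/lib/apt/lists/*
```

Because reprepro keeps only one version per architecture, this build fails outright once the pinned version is no longer in the repository, which is the intended repeatability safeguard.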

Ok. Makes sense.

I was actually thinking of whether the metapackages themselves do any version pinning (as in: in their Depends fields), but that doesn’t appear to be the case. The current setup appears to work because the repositories don’t retain any old packages.