Conceptual questions on how to best use Docker + ROS to ship an application

Hi all,

(This question/discussion is also at http://answers.ros.org/question/242793/conceptual-question-on-ros-docker/, but it was suggested that I post it here to gain better visibility.)

I've been reading the various resources about Docker + ROS that I found on the web, plus the video of Ruffin White's talk at ROSCon 2015… So I see great potential in Docker with ROS for use cases such as testing and continuous integration, or deployment on robots for a robot manufacturer, etc.: basically whenever what I need is to run a clean version, without adding any modification to the image/container…

But I think I'm missing a concept, or at least I'd like to discuss the usefulness of Docker for another use case…

We are working on an internal "product" that is a big collection of ROS nodes. We rely on ROS and ROS-Industrial packages, plus some third-party drivers for hardware, specific libraries, etc. We have our own continuous integration Buildbot server that generates a Debian package from the catkin install output (on different slaves to account for the Linux/ROS-distribution mix).

When we need to set up a new machine, we need to install Ubuntu, then ROS, then run our script to install all the required libraries and dependencies, and finally install our deb. Then we can start working, developing and customizing the final application for the robotic cell deployment.

So, building a Docker container to host all this looks like a great solution to provide a clean install of our product/solution, both internally and to our external partners/clients. Another option would be to provide a kind of Vagrant box or Chef cookbook to automate the correct deployment of a new machine, the execution of the external-libraries script, and the installation of our product deb… (any opinion on the most suitable strategy?)
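Something roughly like this Dockerfile is what I have in mind (the base image tag, script and deb names are just placeholders for our real artifacts):

```dockerfile
# Rough sketch only: start from an official ROS base image,
# run our existing "install all external libraries" script,
# then install the deb produced by our Buildbot.
FROM ros:indigo-ros-base

COPY install_external_libs.sh /tmp/
RUN apt-get update && /tmp/install_external_libs.sh

COPY our-product_1.0.0_amd64.deb /tmp/
RUN dpkg -i /tmp/our-product_1.0.0_amd64.deb || \
    (apt-get update && apt-get install -f -y)
```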

However, and this is the point where I'm getting lost on the usefulness of Docker for my use case, our product is NOT meant to work isolated and alone… It's a collection of ROS nodes that allows easier programming and deployment of industrial robotic cells, so it can be seen as a library to be used by our developers and clients.

So if I am a user of this product, having a Docker container would be great to get started. But then I would develop my own robotic application/installation using this product, plus extra drivers from other ROS repos, plus my own nodes and application-specific GUIs, my own config files and programs, etc…

So, to my current understanding, I could "save" all this in my local version of the Docker container… but when the official product Docker container is updated, I would then lose all my specific configuration and application, right?

Or should I create a new Docker container for my own developments, and ask my clients to learn enough Docker to be able to create their own applications?
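For example, is the expectation that every client maintains something like this themselves (image, package and launch file names are made up)?

```dockerfile
# A client's overlay image, built on top of our official product image
FROM our-company/our-product:2.3

# extra drivers from other ROS repos
RUN apt-get update && apt-get install -y ros-indigo-some-driver

# the client's own nodes, GUIs, config files and programs,
# built into a catkin workspace inside the image
COPY my_cell_app/ /catkin_ws/src/my_cell_app/
RUN /bin/bash -c "source /opt/ros/indigo/setup.bash && cd /catkin_ws && catkin_make"

CMD ["/bin/bash", "-c", "source /catkin_ws/devel/setup.bash && roslaunch my_cell_app cell.launch"]
```

If that is the idea, then I suppose their additions would survive an update of our image, since they only need to rebuild their own Dockerfile against the new base?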

As you can see, I fully get the benefits of a Docker container when the objective is to use something completely "closed", or to ship an application that is just meant to be run/executed.

But I don't get the how-to when I need to work on top of a third-party-provided container and maintain the integrity of my work…

Can someone fill out the blanks in my (mis)understanding?

Thanks a lot in advance!

Damien


(I'm not a Docker expert, so the following may be due to my lack of insight into current Docker best practices and/or the capabilities of the system.)

I think Docker can be useful in your situation, if you use a container to Dockerise your ROS API (and the pkgs and nodes that implement that API). The container(s) would contain your pkgs and their dependencies, plus a way to launch them (remote roslaunch?). Then clients can write ROS applications 'against' that container. This avoids the lengthy setup process for all your pkgs' dependencies (and their dependencies, ad infinitum) on your clients' machines, as they would only have to pull the image.
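As a very rough sketch (image, package and launch file names invented), a client could bring up your nodes with something like:

```sh
# Run the vendor's nodes from the container, sharing the host's network
# so that topics and services are reachable from nodes running on the host.
docker run -it --rm --net=host \
  your-company/your-product:latest \
  roslaunch your_product_bringup bringup.launch
```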

It may not work that well for things that are not part of the ROS API level, such as libraries and headers exported by packages, as it's not so clean (imho) to expose directories from your container to the host. Msg/srv definitions can (and should!) be put in separate pkgs anyway, so clients can install those outside the container without needing all the other complex bits.

Some difficult aspects that might reduce the return on investment here: access to hardware (possible, but sometimes involved), a complex networking setup (but Docker Compose exists), and versioning of containers vs. individual packages and nodes.
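On the networking point, a minimal docker-compose sketch (service and image names invented) could look something like this, with one container for the master and one for the vendor's nodes:

```yaml
version: '2'
services:
  master:
    image: ros:indigo-ros-core
    command: roscore
  product:
    image: your-company/your-product:latest
    environment:
      - ROS_MASTER_URI=http://master:11311
    command: roslaunch your_product_bringup bringup.launch
    depends_on:
      - master
```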

(Including a comment by @dsalle from the ROS Answers question):

But how would they write these ROS apps against my container: would they need a full desktop install of ROS on their machine, with my container only containing libraries and my own nodes? Or would they create another container with their nodes?

That’s what I alluded to with “doesn’t necessarily work that well for things not part of the ROS API”: developers will need a ROS install, but they won’t need to install all your dependencies or follow the complicated setup for those dependencies. Personally I don’t like to develop inside a container, but it can be done.
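To make that a bit more concrete (node and package names purely illustrative): with the container started with --net=host as sketched above, a developer's normal desktop ROS install can simply talk to the master it exposes:

```sh
# On the developer's machine, outside the container
export ROS_MASTER_URI=http://localhost:11311
rostopic list            # should show the topics of the containerised nodes
rosrun my_app my_node    # the client's own node, developed on the host
```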

Compare this to how you work with Baxter: it exposes a ROS-compatible interface, but you don't have access to, nor get to install, the sources for the packages running on it. There are developer packages, but those use a ROS API to communicate with the on-board computer(s). I'm not saying it's ideal, just that it's an option.

I think the main thing to remember/realise is that Docker containers are mostly meant to containerise processes that expose services, i.e. runnable things. I haven't really seen them used for distributing libraries and header files (other than as a basis for other overlaid containers). The idea is to avoid having to repeat a complicated setup and configuration process on each host where you'd like to run those services. The container already contains everything the service needs. The ROS API of your nodes is then the 'service' that you containerise. Not the sources.

I think you are making this way too complicated for the situation.

Docker (container deployment for a single-process style of management) on top of ROS packages (source deployment for an inherently multi-process style of development) on top of Debian packages (binary or source deployment) on top of OS deployment…

(If you really think containerization is useful for your situation, take a look at some of the other Linux cgroup helpers that don't have such a narrow-minded single-process philosophy as Docker.)

Would I be wrong in guessing that all your deployments are on deb-style OSes? If so, then at least one person on your team should know all there is to know about authoring Debian packages and preseeding the OS installer (Ubiquity on Ubuntu).

It should be straightforward to start with fresh hardware and have everything running at the push of a button using the Debian tools. Likewise, to start with an existing system and apt-get install all-our-stuff. If you can't do that now, then you probably need to learn a bit more about the tools that make up your foundation.
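To illustrate that last point (package names made up): all-our-stuff can simply be an empty metapackage whose control file depends on everything a fresh cell PC needs, for example:

```text
Package: all-our-stuff
Version: 1.0
Architecture: all
Maintainer: Your Team <team@example.com>
Depends: ros-indigo-ros-base, our-product, our-external-libs
Description: Pulls in everything needed on a fresh machine.
```

A new machine is then one "sudo apt-get update && sudo apt-get install all-our-stuff" away from fully provisioned, assuming your apt repository is already configured.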

I think what you want is possibly not Docker but Vagrant. It is designed for providing easy-to-deploy development environments, and it might be easier to adapt to your goal.
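As a trivial sketch (box name and provisioning scripts are placeholders), a Vagrantfile could reuse the same setup scripts you already run on bare metal:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # reuse the existing machine-setup scripts
  config.vm.provision "shell", path: "install_ros_and_deps.sh"
  config.vm.provision "shell", path: "install_our_product.sh"
end
```

vagrant up then gives every developer or client the same environment to start from.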