Development setup for multiple ROS distros?

Would anyone mind sharing how they have configured their computers to work with ROS Indigo (Ubuntu 14.04) and Kinetic (Ubuntu 16.04) or any other combination of incompatible distributions?
The obvious choice is having separate computers, or at least separate hard drives with completely separate OS installations. But it is quite annoying to have to duplicate or triplicate everything (IDE configurations, browsers, shortcuts…).
Another option would be VMs, but we’ve had some performance issues and would like to avoid them if possible.

I know next to nothing about docker, but could it be used to have different ROS installations on a single computer? I don’t know if this contradicts the “one application per docker” mantra, since I would like to log into the container from multiple terminals, check out code, edit, compile and run it from within the container, and access robots on the LAN of the host computer.

TL;DR: How do you guys handle developing comfortably for multiple ROS distributions?


+1. I’ve been wondering about the same question.

Currently I have Ubuntu 14.04 with ROS Indigo, and someone asked me if I would like to release an rviz plugin that I currently maintain into ROS using bloom. It would be nice to know the easiest way to compile/test this package against Kinetic, for example, without messing with my current system.

I’ve been using docker for a while and it works very well. It is not perfect, and software that requires graphical tools is tricky, but it allows you to have 14.04 and 16.04 on the same computer. I use tmux to handle the “one application” issue; since I’m used to using it outside docker anyway, that is not a problem.

As I said, the only thing that takes some time to get working is the graphical applications. Docker requires the same graphics drivers in the image as on the host, so it is a matter of knowing your configuration. But once that is done I’ve been able to run RViz without any issue. I actually use Docker and Gazebo together without any problem.

So if I can give you some tips from my experience:

  • Use tmux
  • Take your time to get the graphics configuration working properly; even if you don’t need it at the beginning, it can become an issue later.
  • Be careful with disk usage: docker can take up a lot of space when you use multiple images, so periodically clean up unused containers and layers.
  • Use a transparent network configuration (e.g. host networking); this allows you to work with a remote ROS node/master.
  • Use a script to start your container (especially to handle mounting the folders you want and the graphics card configuration); a sketch of such a script follows below.
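
To make that last tip concrete, here is a minimal sketch of such a launcher script. The image name (osrf/ros:kinetic-desktop-full) and the workspace path are assumptions, so adapt them; also note that xhost +local:root loosens X server access control, and you may prefer a tighter rule.

#!/bin/bash
# Example container launcher (all names and paths are placeholders).
# Allow local containers to connect to the host's X server:
xhost +local:root

# --net=host gives the container transparent access to the host's LAN
# (and thus to a remote ROS master or robots); DISPLAY plus the X11
# socket volume enables graphical tools such as rviz; the last volume
# mounts a catkin workspace so it can be edited from the host.
docker run -it --rm \
  --net=host \
  --env="DISPLAY=$DISPLAY" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  --volume="$HOME/catkin_ws:/root/catkin_ws" \
  osrf/ros:kinetic-desktop-full \
  bash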

If you have an nvidia card in your system, I’d recommend nvidia-docker. It takes care of some of the setup/configuration needed when exposing graphics hardware to containers.
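
For the record, a typical invocation then looks roughly like the following; the image name is a placeholder for one built on top of the NVIDIA base images, as the ROS wiki hardware-acceleration tutorial describes.

# Illustrative only: assumes nvidia-docker (v1) is installed and that the
# image was built with the NVIDIA OpenGL/CUDA libraries available.
nvidia-docker run -it --rm \
  --env="DISPLAY=$DISPLAY" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  my-ros-kinetic-nvidia-image \
  bash
# Inside the container, start a roscore and then rviz; rendering should
# now use the discrete GPU.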


One solution is to install both Ubuntu versions on your computer either on different disks if you have multiple or on different partitions of one disk and dual-boot between them.
To avoid the duplication of configurations you speak of, you can use the same home directory on both installs.
As your home directory contains all your user-specific configuration, there is very little duplication I can think of.
(The exceptions I can think of are Wi-Fi connections, if they are configured to be used by all users, and printers.)

I’ve used this successfully with Ubuntu 12.04, 14.04 and 16.04.
There could be issues if some software uses the same configurations files differently on different Ubuntu versions.
But I had no problems like that so far with any software I use.
I don’t use the Unity desktop, so I don’t know if it might give you problems when switching back and forth between different versions.
My gnome fallback/flashback session had no problem with that.

Nevertheless I’d recommend backing up at least your home directory, better also your root partition, especially if you need to resize your partitions!

How to share the same home directory depends a bit on your partition setup.
The default in Ubuntu is to have the home directory on the same partition as the root system.
If this is the case for you and you have Ubuntu 14.04 already installed, install Ubuntu 16.04 in a new root partition beside it (resize your current partitions to make space if needed).
Then boot into your new Ubuntu 16.04 installation and mount the 14.04 root into e.g. /mnt/root_14.04 by editing /etc/fstab or using the graphical tool Disks.
You can then either change your user’s home directory in /etc/passwd to point to /mnt/root_14.04/home/USERNAME, or bind-mount /mnt/root_14.04/home to /home in fstab.
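
For illustration, the corresponding /etc/fstab entries on the 16.04 install could look roughly like this (the UUID and the ext4 filesystem type are placeholders; use blkid to find your 14.04 root partition):

# Mount the 14.04 root partition somewhere convenient:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/root_14.04  ext4  defaults  0  2
# Bind-mount its /home over this system's /home so both installs share it:
/mnt/root_14.04/home  /home  none  bind  0  0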

After a reboot you then use the same home directory in both systems.
You will need to adjust your ~/.bashrc file to source the right ROS setup.bash depending on which Ubuntu version is running.
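
A small sketch of how that ~/.bashrc logic could look, assuming Indigo on 14.04 and Kinetic on 16.04:

# In ~/.bashrc: source the ROS install that matches the running Ubuntu release.
case "$(lsb_release -rs)" in
  14.04) source /opt/ros/indigo/setup.bash ;;
  16.04) source /opt/ros/kinetic/setup.bash ;;
esac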

Thanks Dorian, I did this in the past. I wanted to try to avoid rebooting, because sometimes, just for a small fix on a different distro, I can lose 20 minutes of work changing from one partition to the other and back and recovering all my setup (open terminals, IDEs, simulation running…).

I was wondering if other people have faced similar problems and how they solved them, especially with tools such as docker that are so popular nowadays. I believe we’re getting excellent replies and I am looking forward to getting more.

When/If I manage to get docker running with the steps described above I might write a wiki page or something for future reference.

But everyone please keep contributing to the discussion.


Another option is debootstrap + schroot. Schroot is a way to install a different Ubuntu version in parallel to your main Ubuntu version. I regularly use this on my Ubuntu Trusty (14.04) host system to change into an Ubuntu Precise (12.04) schroot, where I have ROS fuerte installed (don’t ask). But this would work for any combination of Ubuntu versions. You can mount your home directory so it’s accessible from both systems. Even graphical programs such as RViz or the Gazebo GUI work. No bootup time, no background processes or anything required. If I’m not using it, it doesn’t eat up any resources; when I want to use it, I just run schroot -c chroot:ros_precise and I’m there in one second.
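
For anyone who wants to try this, the setup is roughly the following sketch; the chroot name, target directory and mirror are placeholders (EOL releases may have to be fetched from old-releases.ubuntu.com), and the default schroot profile already bind-mounts /home, so the home directory is shared automatically.

# Install the tools and bootstrap a Precise system into a directory:
sudo apt-get install debootstrap schroot
sudo debootstrap precise /srv/chroot/ros_precise http://archive.ubuntu.com/ubuntu

# /etc/schroot/chroot.d/ros_precise -- minimal chroot definition:
[ros_precise]
description=Ubuntu 12.04 with ROS fuerte
type=directory
directory=/srv/chroot/ros_precise
users=YOUR_USERNAME
root-users=YOUR_USERNAME

# Enter it with:
schroot -c chroot:ros_precise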

Same caveats apply as with Docker (since Docker is basically a pimped up schroot): Your schroot processes share the host system’s kernel, so you cannot access a hardware device that requires a kernel which is newer than the host system’s kernel. This has never happened to me though. Also, just as with docker, you of course need the disk space to install a complete second copy of Ubuntu.


In the end I followed @Shokman’s and @gavanderhoorn’s advice: I am using the official ROS docker images, with nvidia support, following http://wiki.ros.org/docker/Tutorials/Hardware%20Acceleration.
I also managed to log into the container with a user that is not root and that shares its user id with my host’s user. I prepared a Dockerfile for that and published it here: https://github.com/v-lopez/docker_images/blob/master/README.md
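
The gist of the non-root-user trick is sketched below; this is not the published Dockerfile, just an illustration, and the base image and build-argument names are assumptions.

# Sketch: create a container user whose UID matches the host user, so
# files created in mounted volumes remain owned by you.
FROM osrf/ros:kinetic-desktop-full
ARG user=developer
ARG uid=1000
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/* && \
    useradd -m -u $uid -s /bin/bash $user && \
    echo "$user ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER $user
WORKDIR /home/$user

# Build with your own UID, e.g.:
#   docker build --build-arg uid=$(id -u) -t ros-kinetic-dev .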

Instead of tmux, I can start terminator and have as many terminals as I want inside the container, while still being able to edit files on my host computer.

Chiming in a bit late, sorry!

Yeah, I recommend using Docker as well for such a scenario. Many of my clients are still running Indigo, but I’ve moved on from Trusty, so I run a Docker container for building/running every project. One of the advantages of Docker is that since you have to write every step in a Dockerfile, you can also use that as a starting point for documenting the dependencies of your project.

However, I’ve never gotten graphics acceleration working with nvidia-docker. CUDA works just fine, but not OpenGL. In any case, it’s no big deal, because for Gazebo I run gzserver in a Docker container and gzclient on the host, which has proper graphics acceleration.
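
In case it helps others, a rough sketch of that split (assuming the container runs with host networking, that the Gazebo versions on host and container match, and the usual osrf/ros image):

# In a container (no rendering, so no GPU setup needed): run the simulation server.
docker run -it --rm --net=host osrf/ros:indigo-desktop-full gzserver

# On the host: connect the GUI client to the server's default master port
# and render locally with full acceleration.
export GAZEBO_MASTER_URI=http://localhost:11345
gzclient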

@ruffsl, is https://github.com/NVIDIA/nvidia-docker/issues/11 still relevant, or is it just a problem on my end? If it’s the former, I can add a note to http://wiki.ros.org/docker/Tutorials/Hardware%20Acceleration#Nvidia clarifying that only CUDA is supported inside a Docker container.

Oh, this thread slipped past me. Thanks for the shout out @esteve.

I’ve been using both nvidia and intel graphics for OpenGL-dependent ROS apps inside containers just fine for the past year or more. I just tested both the nvidia and intel methods as described in the ROS wiki you linked, and rviz and gazebo render just fine with both the discrete and integrated graphics cards that I have. Rviz is buttery smooth for both intel and nvidia; however, gzclient only has a decent frame rate on my 4K host screen if I either switch to my discrete nvidia card or shrink the ogre3d preview to something smaller than fullscreen. Also note that when you start gazebo for the first time in a fresh container, it’ll take a bit before you see the rendered preview, as I think gazebo is initializing (downloading?) some files on first startup, so it’s perhaps best to make a volume for that to keep them persistent between runs and save time.
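
Regarding that last point, persisting the model cache can be done with a host volume, along these lines (the in-container path assumes the container user is root; adjust otherwise, and the image name is a placeholder):

# Mount the Gazebo cache from the host so downloaded models survive
# container restarts; the other options are the usual X11/network forwarding.
docker run -it --rm \
  --net=host \
  --env="DISPLAY=$DISPLAY" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  --volume="$HOME/.gazebo:/root/.gazebo" \
  osrf/ros:kinetic-desktop-full \
  gazebo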

Here is my setup for reference:

$ uname -a
Linux dox 4.9.0-040900-generic #201612111631 SMP Sun Dec 11 21:33:00 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.2 LTS
Release:	16.04
Codename:	xenial

$ lspci -v | less
00:02.0 VGA compatible controller: Intel Corporation Skylake Integrated Graphics (rev 06) (prog-if 00 [VGA controller])
        DeviceName:  Onboard IGD
        Subsystem: Dell Skylake Integrated Graphics
        Flags: bus master, fast devsel, latency 0, IRQ 127
        Memory at db000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 70000000 (64-bit, prefetchable) [size=256M]
        I/O ports at f000 [size=64]
        [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: <access denied>
        Kernel driver in use: i915
        Kernel modules: i915
...
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)
        Subsystem: Dell GM107M [GeForce GTX 960M]
        Flags: bus master, fast devsel, latency 0, IRQ 138
        Memory at dc000000 (32-bit, non-prefetchable) [size=16M]
        Memory at b0000000 (64-bit, prefetchable) [size=256M]
        Memory at c0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at e000 [size=128]
        [virtual] Expansion ROM at dd000000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: nvidia
        Kernel modules: nvidiafb, nouveau, nvidia_375_drm, nvidia_375

$ nvidia-docker run --rm nvidia/cuda nvidia-smi
Wed Mar 15 05:38:39 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960M    Off  | 0000:01:00.0     Off |                  N/A |
| N/A   52C    P0    N/A /  N/A |   1433MiB /  2002MiB |      6%      Default |
+-------------------------------+----------------------+----------------------+

$ nvidia-docker -v
Docker version 1.13.1, build 092cba3

The latest kernel/nvidia driver/plugin are not really necessary; I recall everything working fine on stock trusty.

One thing I’d like to try to do is create a command-line tool that bootstraps a development container: mounting your source workspaces, attaching devices and display/audio unix sockets, maintaining your user permissions, attaching containers to the host’s subnets, etc. Sort of like parts of what docker-browser-box does, but more ROS focused. I wonder, if I go searching again, whether I’d find another project already doing this for general desktop applications.

If you’d like to dockerize other common CLI/GUI apps, I’d highly recommend checking out Jess Frazelle’s collection of dockerfiles for desktop applications, from Android IDEs to virtualbox, wine, vlc, spotify, htop, etc.

Yeah, nvidia-smi works fine inside a container for me; it’s just glxinfo that reports the integrated graphics card instead of the NVIDIA GPU. It must be something on my end though; I didn’t investigate much since I can just run gzclient and rviz accelerated from the host. In that case, I’d say the issue on GitHub can be closed if OpenGL works for you.

Not sure how that would work with Intel GPUs, but I have a system with multiple Nvidia GPUs and I use GPU Isolation to expose only a specific one to a specific Docker container. See nvidia-docker/GPU-isolation.
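
With nvidia-docker (v1) that comes down to setting the NV_GPU variable before the run; a minimal example, assuming the nvidia/cuda image used earlier in this thread:

# Expose only GPU 0 (an index or a GPU UUID) to this container:
NV_GPU=0 nvidia-docker run --rm nvidia/cuda nvidia-smi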

I have had a lot of success with nvidia-docker, and I have an example Dockerfile hosted on my bitbucket account that I use for gazebo development with kinetic / gazebo8 on xenial, since I’m still running trusty. It has scripts for building and running the container that work for me, so maybe that will have some clues as to how to get the configuration working on your end.

@gavanderhoorn @scpeters thanks for the pointers, I’ll give them a try!
