What environments do you use for training and courses?

Hi all,

I work at a university of applied sciences and we offer ROS courses to our students and to companies. One of the challenging issues we often run into is how to run ROS and ROS 2 on the computers of the participants. We have tried different approaches:

We have created bootable flash drives with a persistent partition, so files can be stored on the sticks. This worked for about 80% of the laptops and performance is really nice. However, the sticks reach their maximum read/write cycles quite soon, and then you get strange errors because files get corrupted. It also takes a lot of time to prepare the sticks.

We have prepared images for virtual environments such as VirtualBox and VMware. This worked for most of the laptops, but performance was not good enough to run simulations (Gazebo). Moreover, Ignition Gazebo did not work in my VirtualBox installation (I did not try VMware yet). So this is not a good option anymore.

We provided instructions on how to create a dual boot system. I like this approach, but some students don’t have the disk space to do this. Moreover, there are always some students breaking their system.

I now have a student who boots from an external hard disk. So far, this works OK. It is similar to the flash drives, but without the read/write issues.

We did not try to use containers for these courses yet.

I did try out the Construct Sim environment and this works fine for writing code and running simulations. But I think it is difficult to transfer the knowledge gained in that environment to an actual robot.

I am wondering what approaches you are using and what your experiences are.


I think the weapon of choice here is containers, for the reasons you have explained. We use them extensively for deploying, testing and visualization, and it works very well. I have not tried Gazebo in Docker much, but perhaps someone else can chime in.


Hi @peredwardsson, thank you for your answer. Can you elaborate on how you think containers should be used? We have people coming in with systems running Windows, Mac and some Linux. I have some experience with LXC on Ubuntu, but I have not worked with containers on other systems. Do I understand correctly that the same container (e.g. a Docker container) also runs on Windows?


Coming from an academic perspective: in the research labs I've worked at, we of course encounter similar difficulties, particularly when onboarding new students with ROS developer workspaces, most notably those with less of a computer science background, as is commonly the case when teaching such a multidisciplinary field as robotics.

For smaller course sizes with a sufficient budget, a simple approach is to allocate the necessary number of lab computers, the same as you would for allocating robotic hardware to groups of students. This obviously simplifies the setup for students and comes with a number of other benefits applicable to most CS courses, e.g. well-defined and controlled environments, sufficient minimum hardware requirements, ensured compatibility and smooth deployment, etc.

Obviously, this trades off, or front-loads, the system configuration work from students onto instructors, not to mention the added burden of IT administration. Additionally, this doesn't scale as well, nor is it as applicable to remote learning curriculums. Lastly, I'd argue that despite the headaches it can induce, development environment setup remains a valuable part of the learning process, as it's most likely something they'll have to repeat on their own in their careers, not to mention the added insight and valuable intuition from knowing how and what you've installed when it comes to eventually debugging projects.

Alternatively, I’ve had a lot of success in introducing colleagues to container-based workflows, as it simplifies not only project collaboration but also the onboarding process in general. That’s not to say it’s as easy or intuitive for everyone, particularly those with little experience in Linux. While you may then have to allocate lesson time to explain the basics of building and running containers, many of the same challenges you described would still apply, such as dual-booting Linux on student-owned hardware or managing virtual machines in order to host a Linux kernel for containers. I’ll note that Docker on Windows has come a long way, particularly its integration with Microsoft’s WSL, but given ROS’s heavy reliance on networking, any added layer of virtual networking can complicate things.

That being said, while introducing containers similarly front-loads some complexity at the beginning of a course, I think the rate of return in productivity and collaboration makes it worthwhile. Instructors can essentially codify the lesson environment in the same manner as they would the lesson material, e.g. checking in Dockerfiles inline with each lesson repo, rather than managing gigabytes of VM images outside revision control. For grading and evaluation, the same container environment can also be used for reproducible project demos, such as when the instructor must independently assess project deliverables, or to leverage auto-grading via Continuous Integration, as demonstrated in many recent robotics competitions.

With containers, I felt students were less worried about breaking things and spent more time exploring the ROS ecosystem with exotic ideas and dependencies. If a student ends up ‘borking’ their Debian packages when attempting to install a new deep learning library, no big deal: just respawn the container and try again. When they eventually figure it out, they can just append to the team’s project Dockerfile so the rest of the group or class can all benefit from any solved logistical issue. Most students will also be taking multiple classes at a time, so being able to isolate a course’s lesson environment from others was also helpful in avoiding dependency or system conflicts from other classes or from different projects in the same class. Container images themselves can also be shared directly via any container registry, saving students the time of compiling the world from scratch; such was the case for me personally when I was working with a lot of custom forks of PCL, OpenCV or any other large library that would cause my laptop to catch fire :fire:.

Finally, much of this complexity can be abstracted away from students using higher-level tools like rocker or ADE, e.g. automatically mounting volumes so students don’t accidentally keep their workspace in an ephemeral container, or automatically mounting hardware so the container has access to the same network interfaces and robotic peripherals that are connected to the host OS.
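To give a feel for it, a typical rocker invocation looks something like the sketch below (assuming the stock osrf/ros:humble-desktop image; adjust the flags to your hardware, e.g. drop --nvidia on machines without an NVIDIA GPU):

```shell
# Launch RViz from a ROS 2 Humble container with GPU and X11 forwarding.
# --user maps your host user into the container, --home mounts your home
# directory so workspaces survive the container's lifetime.
rocker --nvidia --x11 --user --home osrf/ros:humble-desktop rviz2
```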

A little dated now, but this goes into more detail about my thoughts on the subject:


To the best of my knowledge, the same image can run on all platforms of the same architecture, regardless of OS. That’s the beauty. :slight_smile:

Make an image (for instance, write a Dockerfile) which contains all the dependencies you need for your course: typically your ROS version of choice, various ROS packages, visualization engines, command line tools, etc. Then either package the image into a compressed archive, like a tarball, or publish the image (e.g. to Docker Hub) and let the students download it. They need to install Docker, which is available for all the platforms you mention.
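As a minimal sketch, such a course Dockerfile could look like this (the base image tag and the extra packages here are just examples; substitute whatever your course needs):

```dockerfile
# Base image with ROS 2 Humble and the desktop tools (RViz etc.)
FROM osrf/ros:humble-desktop

# Extra course dependencies (example packages; pick your own)
RUN apt-get update && apt-get install -y \
    ros-humble-navigation2 \
    ros-humble-turtlebot3-gazebo \
    && rm -rf /var/lib/apt/lists/*

# Source ROS automatically in interactive shells
RUN echo "source /opt/ros/humble/setup.bash" >> /root/.bashrc
```

Students then only need to pull (or `docker load`) the built image and start it with `docker run -it <image>`.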

I only have experience working with GUIs from Linux; I suspect it might differ between platforms, so perhaps do some research in this area.


I used Docker with Gazebo (for system tests) and it works pretty well :slight_smile:


+1 for Docker; it has good integration with Windows, Linux, and Mac, and it also helps in maintaining versions and an isolated environment.

For our autonomous robotics course, we’ve settled on using Singularity containers: courses:b3m33aro:tutorials:ros [CourseWare Wiki]. Contrary to Docker, the usage needs almost no explanation and the integration with the host system is mostly seamless (no setup required for GUIs to work except passing the --nv argument; networking needs literally no setup). Performance is close to native (including GUI apps like Gazebo).

Another nice thing about Singularity is that it is the same system used for running containers on most HPC computers, so when the students advance and start doing research on our cluster, they stay with Singularity.

We even run Singularity on the physical Turtlebots which the students use during the labs, so it’s super easy to maintain the state of the robots without any kind of centralized deployment (each robot just downloads the image into a cache and runs it).
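For reference, the day-to-day commands are minimal, which is exactly the point; a sketch (the image name here is just an example):

```shell
# Pull a SIF image straight from Docker Hub into the local cache
singularity pull docker://osrf/ros:noetic-desktop-full

# Run it with GPU passthrough; host networking and $HOME are shared by default
singularity run --nv ros_noetic-desktop-full.sif
```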

The only drawback is that Singularity is Linux-only, but as our students are anyway required to use Linux in other courses, it still seems to me that installing Linux is the best thing students can do at the beginning of their studies.


For academic purposes, as @ruffsl suggested, you can have a fixed setup of systems in your lab. What I do is this: I have purchased a couple of Raspberry Pis and monitors, and SD cards with Ubuntu and ROS installed are provided to the students. When they come, they insert their cards and run their code on the Pi. This way they have their code stored safely with them, it becomes easy to integrate with physical robots, and there is less chance of chaos.


Hi @peci1,

thank you for your answer. Your point about Linux is a good one. I also think that installing Linux is a good thing to do for all our engineering students. However, currently we don’t do that and there is a wide variety of systems.

Is the effort to set up the software the only consideration for going with Singularity? I also spoke to some colleagues who suggested considering systems like Podman. However, Docker is widely used in the build systems for ROS, and there seems to be more experience with Docker in the community. Because of that I have a preference for Docker. I do not have experience with Docker on Windows yet, but I am currently trying that out. I have not made up my mind yet…


Hi @Wilco,

We, the Robolaunch team, are working on our cloud robotics platform (which we also call Robolaunch) to address the challenges you mentioned, such as setting up a development environment, high computing resource requirements, transferring robotics applications to a physical robot, etc.

With our platform, we are aiming to reduce the barriers that exist in robotics development and to accelerate the development of various robotics use cases in a shorter time than with traditional methods, so that developers can focus on the application, not the infrastructure.

In this scope, Robolaunch provides the full technology stack to develop & simulate, deploy and manage robots at planet scale!

The main benefits and functionalities Robolaunch offers for academic purposes can be listed as follows.

  • Easy to use: just get started by adding your Git repositories. No need to install any software locally; all you need is a browser.

  • Collaboration: software development in the cloud with a cloud IDE and shared VDI. Collaboratively develop robotics projects with colleagues, team members, etc.

  • Acceleration: GPU-accelerated cloud-scale simulations for testing, designing robots or training AI systems.

  • Sim-to-real: remotely deploy robotics applications developed on the Robolaunch platform to the physical robot(s).

  • Visualization: monitor and manage your robots at run time from the Robolaunch dashboard for full control over your robots.

  • Flexibility: leverage the entire platform, or pick the components according to your needs.

We believe that Robolaunch will provide great convenience to robotics developers. This is just a small part of the benefits possible with our platform.

How to start with Robolaunch?

If you would like to try Robolaunch for your robotics projects, you can join our e-mail list here. We’ll soon be ready for our first release!

Until then, if you have any questions, feedback or suggestions, you can contact us anytime.



@Wilco I represent another start up aiming to solve your specific problem actually. I was previously in academia and I am pretty familiar with the problems you are facing with teaching ROS. My approach aims to be somewhat different from the competition by leaning heavily into WebAssembly and running all the necessary components directly in the browser. You can already see some of my core technology working over at https://rosonweb.io/

Once I finish building the platform, I plan to host it at https://learnrobotics.io/. In the meanwhile though, feel free to reach out to me since I am interested in learning more about what your exact needs are.


This is some really good work @allsey87! I look forward to following this.


Sounds great! I will definitely try this one for my next project.


Today I tried running ROS 2 Humble on Ubuntu 22.04 in WSL2 on Windows 10, and it actually worked quite nicely. I can smoothly run navigation with RViz and Gazebo on Windows. I didn’t expect it to work that well. I am not sure whether I am rendering in software or hardware mode, but it runs smoothly. The next step is running ROS 2 in Docker in WSL. If that all works, it could be a solution for both Windows and Linux users.
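For anyone wanting to reproduce this, the setup is roughly the following (a sketch; the usual ROS 2 apt repository and key configuration steps are omitted here):

```shell
# In an elevated PowerShell on Windows: install WSL2 with Ubuntu 22.04
wsl --install -d Ubuntu-22.04

# Then inside the Ubuntu shell, after adding the ROS 2 apt repository:
sudo apt update && sudo apt install ros-humble-desktop
source /opt/ros/humble/setup.bash
```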


We chose Singularity mainly for three reasons:

  1. simplicity of use for students (you only need to explain one or two commands they have to type)
  2. simplicity of distributing the images (you just upload a single file to an HTTP server or copy it onto a USB stick)
  3. Docker is hard to run on HPC infrastructure, while Singularity is the framework generally used there (and we’re a research university, so our students are expected to get in touch with HPC at some point)

Additional points:

  1. users know exactly and implicitly where the files taking gigabytes of space on their hard drives are
  2. no need to explain the difference between images and containers (there are no containers in Singularity, just executables running in an altered environment), and no need to manage running containers
  3. all processes launched from Singularity can be managed by the system process manager as if they were running on the host OS

There are, of course, cons to this approach:

  1. you generally find more people who can write good Dockerfiles than Singularity recipes
  2. the build process for Singularity is slightly less efficient for incremental development (the whole several-gigabyte image has to be compressed in the last step, which can take a minute or two)
  3. only Linux is supported
  4. it is not easy to make a snapshot of an image you’ve made changes in
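For completeness, a Singularity recipe is quite close to a Dockerfile in spirit; a minimal sketch (the base image and extra package here are just examples):

```
Bootstrap: docker
From: osrf/ros:noetic-desktop-full

%post
    apt-get update && apt-get install -y ros-noetic-navigation
    rm -rf /var/lib/apt/lists/*

%environment
    . /opt/ros/noetic/setup.sh
```

Built with `sudo singularity build course.sif course.def`, this yields the single .sif file you hand out to students.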

Hi Wilco,

applied-sciences uni teacher here, I know your pain. Also, the pain is only going to get worse now that students will run laptops with different architectures (e.g., the new Macs). That’s why we decided against running things on students’ laptops.

We used VMs last year on the OpenStack GPU cluster we have internally. On top of those VMs we ran a different container (with pre-installed components) for each lab.
Containers were running with networking in host mode, so we could connect ROS nodes across containers and robots sitting on a uni-internal subnet, giving a (nearly) seamless transition between simulation and hardware (sharing a ROS master).
A video from last year’s setup: Robotic Applications Programming - GPU VM + noVNC students environment - YouTube
As you can see, we got a pretty good frame rate and sim-to-real performance, and every student can access their work environment with a browser using noVNC.
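The host-mode networking boils down to a single Docker flag; a sketch of such a launch (the image name and master URI are placeholders):

```shell
# --net=host shares the host's network stack, so nodes inside the container
# can reach a ROS master and robots on the university subnet directly.
docker run -it --net=host \
    --env ROS_MASTER_URI=http://10.0.0.42:11311 \
    osrf/ros:noetic-desktop-full \
    bash
```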

This last semester, we converted the same VMs to a K8s cluster, gave students scripts to create/delete containers on the cluster, and managed to “share” GPUs across multiple groups of students.
K8s also meant more complicated networking, and in the end we used rosbridge + rosduct to get a (very) slow but usable connection between the robots and the cluster when doing the sim-to-real switch.
Here’s an example: Niryo Arm grasp cloud + real - YouTube

We wrote a paper about it and it should be out soon, but in the meanwhile here’s one of the repos we use: GitHub - icclab/rosdocked-irlab: Run our ROS noetic environment including the workspace and projects

Finally, if you have enough CPUs, you won’t even need GPUs, but we use them to run some neural nets (e.g., gpd, or Mask-RCNN for image segmentation).


Hi Everyone,

We informed you about robolaunch above. I would like to share some new improvements here and reiterate our interest in collaboration.

As you know, we maintain robolaunch as an open-source project, and you can deploy robolaunch directly to your servers/computers, or we can deploy it for you. Depending on your servers’/computers’ capacity, you can deploy robots scalably.

We also support hardware-in-the-loop and hybrid (cloud-powered) deployment use cases, so you can work with real robot tasks while simulating. Let me share a hybrid demo here.

Please consider the pain points below while positioning robolaunch use cases in your mind.

1 - Do you experience recurring challenges during robot development?
2 - Is installing the robot’s operating system becoming more difficult?
3 - Do you need much more collaboration during robotics development?
4 - Is your robot’s PC lacking GPU, CPU and RAM resources for your robot software?
5 - Do your robot’s batteries fail to provide enough power to run the robotics software?
6 - Is working with fleets of multiple or different types of robots your first choice?
7 - Do you need ready-to-use reference robot designs?
8 - Do you have the right infrastructure to uncover new use cases?
9 - Do you need to do predictive maintenance via digital twins?
10 - Or do you need to run sim-to-real deployments or hardware-in-the-loop tests as a top priority?

Let us know about your requirements…


Just listing another option that wasn’t mentioned so far: Multipass → “Get an instant Ubuntu VM with a single command. Multipass can launch and run virtual machines and configure them with cloud-init like a public cloud.” Available on Linux, Windows and macOS (including M1).
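In case it helps anyone evaluate it, the basic workflow is just a couple of commands (the VM name and sizing here are arbitrary choices):

```shell
# Create an Ubuntu 22.04 VM with resources suitable for running Gazebo
multipass launch 22.04 --name ros2-course --cpus 4 --memory 8G --disk 30G

# Open a shell in the VM; inside, install ROS 2 as on any Ubuntu machine
multipass shell ros2-course
```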

Hi everyone,

I successfully introduced our students/researchers to a combination of WSL2 and Docker containers (without Docker Desktop for Windows, due to its license) for visualization/simulation and direct control of research/industrial robots. I consider this a very practical way to offer ROS (and similar) to people who have no experience with (or cannot install) Linux, and yet it offers native performance (afaik), even more so nowadays, since displaying GUIs from containers no longer requires additional steps.