
Introducing robo-gym: An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots

Dear ROS Community,

Together with my colleagues, I am happy to introduce to you robo-gym: an open source toolkit for distributed reinforcement learning on real and simulated robots.

robo-gym provides a collection of reinforcement learning environments involving robotic tasks applicable in both simulation and real world robotics. Additionally, we provide the tools to facilitate the creation of new environments featuring different robots and sensors.

Main Features

  • OpenAI Gym interface for all environments
  • interchangeability of simulated and real robots, enabling a seamless transfer from training in simulation to application on the real robot
  • built-in distributed capabilities, enabling the use of distributed algorithms and distributed hardware
  • based only on open source software, so you can develop applications on your own hardware without incurring cloud service fees or software licensing costs
  • integration of two commercially available industrial robots: MiR 100 and UR 10 (more to come)
  • the provided tasks have been successfully solved both in simulation and on the real robot, using a DRL algorithm trained exclusively in the simulation environments
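Because every environment exposes the Gym interface, it is driven through the familiar reset/step loop. Here is a minimal sketch of that loop, using a stub environment in place of a real robo-gym one (the class name, observation size, and reward here are illustrative, not robo-gym's actual API):

```python
import random

class StubRobotEnv:
    """Minimal stand-in for a Gym-style robot environment (hypothetical)."""
    def __init__(self, n_joints=6):
        self.n_joints = n_joints

    def reset(self):
        # Return an initial observation (e.g. joint positions at zero).
        return [0.0] * self.n_joints

    def step(self, action):
        # Apply the action and return (observation, reward, done, info),
        # the standard Gym step signature.
        obs = [a * 0.1 for a in action]
        reward = -sum(abs(a) for a in action)
        done = False
        return obs, reward, done, {}

env = StubRobotEnv()
obs = env.reset()
for _ in range(5):
    action = [random.uniform(-1, 1) for _ in range(env.n_joints)]
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```

A real robo-gym environment plugs into this same loop, which is what makes simulated and real robots interchangeable from the agent's point of view.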

robo-gym's simulated environments are built with Gazebo and use ROS controllers, making it easy to expand the library of RL environments with new robots and sensors.

Getting Started

You can find all the documentation directly in the GitHub repo; it should be quite easy to get started and play around with the existing environments.

We also included some basic information on how to integrate new environments. If you are interested in integrating your own robot, sensor, or task, please reach out; we would be happy to support you with that!

How to contact us

If you encounter any issues with robo-gym the best way to contact us is to directly open a new issue on GitHub.

If you are interested in expanding the framework or starting a collaboration, please drop us an email at

Matteo Lucchi

Friedemann Zindler



The toolkit looks pretty nice and I'm looking forward to more examples!
After looking through the repository and the paper, I still wonder why you did not mention the OpenAI_ROS package, which is another ROS interface to OpenAI Gym. As far as I understand, it does not provide distributed training, but it looks quite extensive and similarly structured.
Since I am pretty new to this field, are there other reasons for not using this package?

Thanks a lot, Christian


Hi Christian!

Thank you for the interest and your nice words!

You are right, we probably should have mentioned this package in our comparison as well. Let me try to explain the benefits of robo-gym over it.

Premise: I haven't tried the OpenAI_ROS package myself; my knowledge is based on the documentation provided with it, so if I say something wrong about it, please correct me.

  1. To the best of my knowledge, there is no example or description of how to use models trained with the OpenAI_ROS package on real robots, so I cannot say whether this is possible. This is very important for us: we don't just want something that looks nice in simulation, we want to run the trained models on real robots and ultimately deploy them in an industrial production scenario.

  2. As you already saw, there is no integrated support for distributed training, and again this is really important for us. In our paper we used the D4PG algorithm, training on 20 simulation instances at a time, and obtained good models after a couple of hours of training. Training a non-distributed algorithm (e.g. DDPG) to solve the same task on a single robot simulation takes more than 24 hours.
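The speed-up in point 2 comes from collecting experience in many simulation instances at once. A rough sketch of that idea, using stub environments and a thread pool in place of robo-gym's actual server manager (all class and method names here are illustrative, not robo-gym's API):

```python
from concurrent.futures import ThreadPoolExecutor

class StubSimEnv:
    """Stand-in for one simulated robot environment instance (hypothetical)."""
    def __init__(self, env_id):
        self.env_id = env_id

    def collect_rollout(self, n_steps):
        # In a real setup this would run reset/step against one Gazebo
        # simulation and return transitions for a shared replay buffer.
        return [(self.env_id, step) for step in range(n_steps)]

# One worker per simulation instance, mirroring training on 20 sims at a time.
envs = [StubSimEnv(i) for i in range(20)]

with ThreadPoolExecutor(max_workers=len(envs)) as pool:
    rollouts = list(pool.map(lambda e: e.collect_rollout(10), envs))

# 20 instances x 10 steps each -> 200 transitions per collection round.
transitions = [t for rollout in rollouts for t in rollout]
```

With 20 instances feeding the learner in parallel, each wall-clock hour yields roughly 20x the experience of a single simulation, which is why the distributed setup converges in hours rather than days.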

With this I don't want to criticize the OpenAI_ROS package or say it is bad; it looks really nice and well structured. In my opinion, though, it falls into the category of frameworks that focus on the simulation world and lack a connection to the real world. In the end, it all boils down to what you want to achieve with DRL and what your final goal is.

I hope this will help you :slight_smile:

Cheers, Matteo
