
ROS Developers LIVE-Class

  • ROS Developers LIVE-Class #34: Testing different RL Algorithms with ROS and Gazebo

How to test different RL algorithms on the same robotics task and compare their results, by reusing everything, from the simulation of the robot to the task description, and changing only the RL algorithm.

In this LIVE-Class we will use the openai_ros package to achieve that.

  • ROS Developers LIVE-Class #33: Understanding dead reckoning robot navigation with ROS

Robot navigation means making robots move around autonomously, that is, by themselves (no joystick attached).
Odometry-based navigation means using only the odometry to figure out where the robot is. That is dead reckoning navigation (for example, the type of navigation that Roomba robots use).

This Live Class is about making a robot move around autonomously by sending velocity commands to its wheels and by using odometry to figure out where in space the robot is. That is called dead reckoning navigation.

By the end of this Live Class you will be able to:
▸ Understand what odometry is, how to compute it, and how to obtain it from a ROS-based robot
▸ Understand the different types of velocities a robot uses
▸ Understand the axes (frames) of a robot, and how to show them in Rviz using tf
▸ Send commands to the wheels of a ROS-based robot
▸ Move a ROS-based robot around using dead reckoning (odometry + wheel commands)
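As a rough sketch of the idea (not the class code), dead reckoning for a differential-drive robot boils down to integrating the commanded linear and angular velocities over time; the function and variable names below are illustrative:

```python
import math

def integrate_odometry(x, y, theta, v, w, dt):
    """One dead-reckoning step for a differential-drive robot.

    x, y, theta -- current pose estimate (m, m, rad)
    v, w        -- linear (m/s) and angular (rad/s) velocities
    dt          -- time step (s)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Drive straight 1 m, then turn 90 degrees in place.
pose = (0.0, 0.0, 0.0)
for _ in range(100):                      # 1 m at 0.1 m/s over 10 s
    pose = integrate_odometry(*pose, v=0.1, w=0.0, dt=0.1)
for _ in range(100):                      # rotate pi/2 over 10 s
    pose = integrate_odometry(*pose, v=0.0, w=math.pi / 20, dt=0.1)
print(pose)  # roughly (1.0, 0.0, pi/2)
```

On a real ROS robot the `v, w` would come from the `/cmd_vel` you publish and the result would be compared against the `/odom` topic the robot itself computes from its wheel encoders.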

  • ROS Developers LIVE-Class #30, #31, #32: Duckietown AI Driving Olympics Series

The Duckietown project is devoted to teaching AI and machine learning with robots. Recently they launched the AI-Do Driving Olympics competition about self-driving cars.

In this series of three Live Classes we are going to see how to use simulations to program the Duckiebots to solve the AI-Do competition and program simple self-driving cars.


EP 1

In this Live Class we’ll see:
  • How to set up a full Gazebo simulation of a Duckietown and Duckiebots with the proper ROS interface
  • How to access the sensors and actuators of the Duckiebots
  • How to create a simple navigation program for the robot

EP 2

In this Live Class we’ll see:
  • How to use OpenAI to create a robot that follows the lines. We will use Reinforcement Learning (Deep Q-learning)

EP 3

In this Live Class we’ll see:
  • How to manage traffic, avoiding other Duckiebots and handling intersections. We will use deep learning to train the robots.

  • ROS Developers LIVE-Class #29: How to use OpenAI ROS for the Virtual Maritime RobotX Challenge

Step-by-step LIVE class for preparing the Virtual Maritime RobotX Challenge!

You will learn how to use the openai_ros package to make the WAM-V sea robot of the RobotX Challenge learn to pass the first Navigation Control task.

We will see:
▸ How to create a ROS project for solving the first navigation test of the competition
▸ How to have the simulation of the challenge running
▸ How to install the openai_ros package
▸ How to create your ROS packages for training the sea robot with OpenAI algorithms.
▸ How to actually train the robot
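Whatever the robot, the actual training step follows the usual Gym loop. A minimal sketch with a toy stand-in environment (the environment and the random policy here are placeholders, not the RobotX code):

```python
import random

class ToyEnv:
    """Stand-in for an openai_ros Gym environment (goal: reach state 3)."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):           # action: 0 = stay, 1 = advance
        self.state += action
        done = self.state >= 3
        reward = 1.0 if done else -0.1
        return self.state, reward, done, {}

def train(env, episodes=5, max_steps=20):
    """Generic episode loop: reset, act, accumulate reward until done."""
    returns = []
    for _ in range(episodes):
        env.reset()
        total, done, steps = 0.0, False, 0
        while not done and steps < max_steps:
            action = random.choice([0, 1])        # placeholder policy
            _, reward, done, _ = env.step(action)
            total += reward
            steps += 1
        returns.append(total)
    return returns

returns = train(ToyEnv())
print(returns)
```

With openai_ros, the `ToyEnv` would be replaced by the registered WAM-V environment and the random policy by the learning algorithm, but the loop stays the same.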

  • ROS Developers Extra-Class: ROS meets Self-driving Cars

Autonomous driving has been a hot topic recently. Tech giants like Google, Daimler and Bosch have invested heavily in this field; however, the development process can be tricky for such a complex system.

In this ROS Extra Class, we will give you a broad overview of core components in self-driving cars and show you how you can easily begin developing self-driving cars with ROSDS (ROS Development Studio).

*View all previous classes:
*If you missed the class, you can find the ROSject files and the full code used in the class at Robot Ignite Academy:


ROS Developers LIVE-Class #35

Testing Different Robots on the Same Reinforcement Learning Task
October 23rd, 2018

This live-class is about how to make different robots learn the same task by training them with reinforcement learning, using ROS, Gazebo and openai_ros. Imagine that you want to compare the performance of different robots learning the same task: for example, how well do the Turtlebot2 and the ROSbot learn to move around a maze. We will show you how to use the openai_ros package to reuse the same RL algorithm and the same TaskEnvironment, changing only the RobotEnvironment (one for each robot). Remember that the openai_ros package already provides the RobotEnvironment for both robots, so it is just a matter of knowing where to instantiate the class for each robot.

We will see:

▸ An overview of the openai_ros package for training robots with RL using ROS and Gazebo
▸ Where the learning algorithm must be put (provided to the attendees)
▸ Where to put the TaskEnvironment (also provided to the attendees)
▸ Where the RobotEnvironments for each robot (Turtlebot2 and ROSbot) are located inside the openai_ros package
▸ Where in the whole pipeline to put the RobotEnvironment
▸ How to connect everything to make it learn and compare results between the two robots
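Conceptually, the pipeline changes in only one place: the TaskEnvironment builds on a RobotEnvironment, so swapping robots means swapping the class it builds on. A schematic of that pattern (the class names are illustrative, not the exact openai_ros API):

```python
class RobotEnvTurtlebot2:
    """Robot-specific layer: sensors and actuators of the Turtlebot2."""
    robot_name = "turtlebot2"
    def move(self):            # would publish /cmd_vel on the real robot
        return f"{self.robot_name}: velocity command sent"

class RobotEnvROSbot:
    """Robot-specific layer for the ROSbot."""
    robot_name = "rosbot"
    def move(self):
        return f"{self.robot_name}: velocity command sent"

def make_maze_task(robot_env_cls):
    """TaskEnvironment factory: the same maze task on any RobotEnvironment."""
    class MazeTaskEnv(robot_env_cls):
        def step(self, action):
            return self.move()        # task logic reuses robot primitives
    return MazeTaskEnv()

# Same task, same learning code; only the robot layer changes.
for env in (make_maze_task(RobotEnvTurtlebot2), make_maze_task(RobotEnvROSbot)):
    print(env.step(action=0))
```

Because the RL algorithm only ever talks to the TaskEnvironment interface, the comparison between the two robots is apples-to-apples.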


ROS Developers LIVE-Class #36

How to use your trained DQNN in the robot
Live streaming date: October 30th, 2018

In previous Live Classes, we learned how to train DQNNs for specific tasks using OpenAI. However, how can the trained DQNN be used in the robot once it has been finally trained?

That is the subject of this Live Class. In this class, we will train a DQNN to make a robot complete a task. Once the DQNN is trained, we will transfer it to the (simulated) robot and show how to use that network to control the robot as it performs the task it was trained for.
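Once training is finished, running the network on the robot reduces to a forward pass plus a greedy argmax over the Q-values: no exploration, no replay buffer. A dependency-free sketch with made-up weights (a real DQNN would replace the linear Q-function):

```python
def q_values(state, weights, biases):
    """Linear Q-function stand-in: one Q-value per action."""
    return [sum(w * s for w, s in zip(row, state)) + b
            for row, b in zip(weights, biases)]

def act_greedy(state, weights, biases):
    """Deployment-time policy: always pick the highest-valued action."""
    q = q_values(state, weights, biases)
    return q.index(max(q))

# Toy "trained" parameters: 2 actions, 3-dimensional state.
W = [[0.5, -0.2, 0.1],   # action 0
     [0.3,  0.8, -0.4]]  # action 1
b = [0.0, 0.1]

print(act_greedy([1.0, 1.0, 0.0], W, b))  # -> 1
```

In a ROS node, `state` would be built from the robot's sensor topics and the chosen action mapped back to a command such as a velocity message.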


ROS Developers LIVE-Class #37

How To Parallelize Search of Reinforcement Learning Hyperparameters
Live streaming date: November 13th, 2018 | 18:00 CET

In this class we are going to see that learning parameters (also known as hyperparameters) play a key role when training a robot with Reinforcement Learning. Finding the proper parameters is usually a difficult task that requires a lot of trial and error.

In this class, we are going to see how to parallelize a manual search of hyperparameters by using the Gym Computers of ROS Development Studio. Gym computers allow launching several training instances in parallel, each one training on its own simulation and set of hyperparameters. The results of each computer can be monitored in real time.

We are going to see a manual way of selecting parameters.

You will learn:

▸ How RL training depends on the training hyperparameters
▸ Why finding the proper hyperparameters can require many trials
▸ How to use Gym Computers to parallelize the hyperparameter search
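Outside ROSDS, the same pattern can be sketched with Python's standard library: one worker per hyperparameter set, each returning its score. The training function below is a dummy stand-in for a full simulation run:

```python
from itertools import product
from multiprocessing.pool import ThreadPool

def train_and_score(params):
    """Dummy stand-in for one full RL training run with these params."""
    alpha, gamma = params
    score = -(alpha - 0.1) ** 2 - (gamma - 0.95) ** 2   # fake objective
    return params, score

# Manual grid of hyperparameters to try in parallel.
grid = list(product([0.01, 0.1, 0.5],      # learning rates
                    [0.9, 0.95, 0.99]))    # discount factors

with ThreadPool(processes=3) as pool:       # 3 "Gym Computers"
    results = pool.map(train_and_score, grid)

best_params, best_score = max(results, key=lambda r: r[1])
print(best_params)   # (0.1, 0.95) for this fake objective
```

With Gym Computers, each worker would instead launch its own Gazebo simulation and training session, but the map-over-a-grid structure is the same.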



ROS Developers LIVE-Class #38

How to create a ROS Service in C++

In this class we are going to see how to use ROS services for the control of robots: when to use them and how to implement them.

You will learn:

▸ What is a ROS service
▸ How to create a ROS service using C++ (ROS service server)
▸ How to call a ROS service (ROS service client)
▸ How to use a ROS service in a practical example


ROS Developers LIVE-Class #39

How to create a ROS Action Server

In this class, we are going to see how to use C++ to build a ROS action server that makes a drone move to a certain location every time the action server is called. We will spot the differences between an action server and a ROS service.

You will learn:
▸ What a ROS Action is and how it differs from a ROS Service
▸ How to make an Action Server that receives 3D position coordinates and moves a drone to that position
▸ How to run the Action Server
▸ How to echo and publish to the various topics provided by our Action Server

ROS Developers LIVE-Class #40

Domain randomization with ROS, Gazebo and Fetch | part 1
Live streaming date: December 4, 2018 | 18:00 CET

In this class, we are going to see how to reproduce the results of the famous paper “Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World” with the Fetch robot, using ROS and a Gazebo simulation.

You will learn:
▸ What is domain randomization
▸ How to implement it in Gazebo using a world plugin
▸ How the whole pipeline works: from training the vision system to making the robot grasp the spam object
▸ How to create the dataset to train the visual system using simulation images
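At its core, domain randomization just resamples scene parameters every episode so the vision model never overfits to one particular appearance. A sketch of that sampling step (the parameter names are illustrative, not the Gazebo world-plugin API):

```python
import random

def randomize_domain(rng):
    """Sample one randomized scene configuration per training episode."""
    return {
        "table_rgb":    [rng.random() for _ in range(3)],   # table color
        "light_pose":   [rng.uniform(-1, 1), rng.uniform(-1, 1),
                         rng.uniform(2, 4)],                # light position
        "camera_noise": rng.uniform(0.0, 0.05),             # pixel noise std
        "object_xy":    [rng.uniform(0.3, 0.7),
                         rng.uniform(-0.2, 0.2)],           # target on table
    }

rng = random.Random(0)                  # seeded for reproducibility
scenes = [randomize_domain(rng) for _ in range(1000)]
# Every episode renders a different appearance of the same task.
print(len(scenes))
```

In the Gazebo version, each sampled configuration would be applied by the world plugin before capturing the training images for the vision system.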