For those interested in Reinforcement Learning, here are some recent results obtained at Erle.
Briefly:
This work presents an extension of the OpenAI Gym for robotics using the Robot Operating System (ROS) and the Gazebo simulator. It discusses the proposed software architecture and the results obtained using two Reinforcement Learning techniques: Q-Learning and Sarsa. Ultimately, this work provides a benchmarking system for robotics that allows different techniques and algorithms to be compared under the same virtual conditions.
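For context, the two techniques differ only in the temporal-difference target. A minimal tabular sketch of both updates (variable names are illustrative, not from our code):

```python
import numpy as np

# Minimal tabular TD updates; Q is an (n_states, n_actions) array,
# alpha is the learning rate and gamma the discount factor.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy: bootstrap from the greedy action in s_next.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy: bootstrap from the action actually taken in s_next.
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])
```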
Given the recent popularity of the OpenAI Gym, we’ve used it to provide a common interface for RL problems in robotics. That is, to let the AI people (who don’t necessarily know ROS) focus on the AI problems.
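To give an idea of what this looks like in practice, here is a rough sketch of the intended usage. The environment id is taken from gym-gazebo’s Turtlebot examples (any registered id works the same way), and the random agent is just a placeholder for a real learner:

```python
import gym
import gym_gazebo  # importing the package registers the Gazebo-backed environments

# The agent only ever sees the standard Gym interface;
# ROS and Gazebo are hidden behind the environment.
env = gym.make('GazeboCircuit2TurtlebotLidar-v0')

for episode in range(100):
    observation = env.reset()
    done = False
    while not done:
        # A real agent (Q-Learning, Sarsa, ...) would choose the action
        # from `observation`; a random action stands in for it here.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
env.close()
```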
Do you have code for this? I am trying to use Gazebo for reinforcement learning, to do hand-object grasping with DQN. It would be great if you could share how you bound Gazebo and OpenAI Gym.
I would strongly recommend looking into DDPG, TRPO, A3C, or NAF for a grasping task. DQN doesn’t perform well on robotics tasks because of its discrete action space and poor data efficiency.
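To make the action-space point concrete, here is a sketch in Gym terms (the dimensions are made up):

```python
import numpy as np
from gym import spaces

# What DQN can handle: a finite action set, e.g. 5 discretized motions.
discrete_actions = spaces.Discrete(5)

# What a grasping task really looks like: continuous joint commands,
# e.g. 7 torques in [-1, 1]. DDPG/TRPO-style methods act here directly.
continuous_actions = spaces.Box(low=-1.0, high=1.0, shape=(7,), dtype=np.float32)
```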
@Random-Word, that’s interesting, thanks for sharing.
Can you point out any benchmarks or results that compare DQN with these different techniques? A pointer to a paper would also do.
Hi @spk921, that sounds interesting. I built a robotic humanoid hand (Robotic Humanoid hand) and I am also interested in building AI software for hand grasping. Do you already have any code, and if so, where? Thanks
Is your code connecting ROS/Gazebo to OpenAI Gym easy to extend to other robots and tasks? I would like to use the OpenAI Gym interface with a Robotis Mini robot that I am simulating in Gazebo.
For those interested, here is a follow-up work on this topic that will be presented at ROSCon this year:
Accelerated robot training through simulation in the cloud with ROS and Gazebo

Rather than programming, training allows robots to achieve behaviors that generalize better and are capable of responding to real-world needs. However, such training requires a large amount of experimentation, which is not always feasible for a physical robot. In this work, we present robot_gym, a framework to accelerate robot training through simulation in the cloud that makes use of roboticists’ tools, simplifying the development and deployment processes on real robots. We show that, for simple tasks, simple 3DoF robots require more than 140 attempts to learn. For more complex 6DoF robots, the number of attempts increases to more than 900 for the same task. We demonstrate that, for simple tasks, our framework accelerates robot training time by more than 33% while maintaining similar levels of accuracy and repeatability.
The framework proposed in the paper, robot_gym, is built on top of gym-gazebo. There are no plans to release our particular setup (which is what’s discussed in this paper).
If you’re interested, you should be able to reproduce such a setup yourself using gym-gazebo and customize it to your needs. There are some community contributions that will facilitate the process.
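As a rough sketch of what customization involves (the module and class names below are hypothetical, not gym-gazebo’s actual ones): you write an environment class for your robot and task, typically subclassing gym-gazebo’s GazeboEnv base that handles the ROS/Gazebo plumbing, and register it with Gym:

```python
from gym.envs.registration import register

# Hypothetical entry point: my_package/my_env.py defines MyRobotEnv,
# an environment class for the new robot and task.
register(
    id='GazeboMyRobotMyTask-v0',
    entry_point='my_package.my_env:MyRobotEnv',
)

# Once registered, it behaves like any other Gym environment:
# env = gym.make('GazeboMyRobotMyTask-v0')
```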