
Why is robotics simulation hard?


Figure 1. Sad Rover Explodes in Gazebo Simulation

Hello All,

I’d like to start a discussion on what you see as the biggest challenges the industry faces in adopting and scaling the use of simulation. We think simulation is extremely valuable, yet it is not widely adopted as a standard practice in robotics development, testing, and validation. We want to gather some community wisdom to understand why that is and how we can help, either by improving Gazebo or the AWS RoboMaker service (which offers Gazebo on managed AWS infrastructure).

To get the conversation going, here are a few things we’ve heard are hard in simulation: building URDF models, building Gazebo plugins, building Gazebo worlds, building realistic agent behaviors (e.g. humans, goats), and measuring and analyzing data from simulation. What’s your take? Have you used Gazebo simulation in any project? Please share your GIFs :slight_smile:

If you would like to help the community with quantitative data, you can fill out a short survey below:

We will share the results of the survey with the community.

Regards,
Cam

Hi Cam,

Just for your information, there is already some published work surveying robotics simulation. Here is an example: survey on robotic simulation

Debugging Gazebo stuff would be way easier if I could visualize the forces acting on each link/joint at each step (and, even better, their sources).
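Until something like that exists, a partial workaround is to dump the numbers instead of drawing them. A minimal sketch (untested, assuming Gazebo classic’s C++ API; the class name is mine) of a model plugin that enables joint feedback and logs every joint’s wrench each physics step:

```cpp
#include <functional>

#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo
{
// Logs the wrench acting on every joint of the model once per physics step.
class JointWrenchLogger : public ModelPlugin
{
public:
  void Load(physics::ModelPtr _model, sdf::ElementPtr /*_sdf*/) override
  {
    this->model = _model;

    // GetForceTorque() only returns data if feedback is enabled.
    for (const auto &joint : this->model->GetJoints())
      joint->SetProvideFeedback(true);

    this->conn = event::Events::ConnectWorldUpdateEnd(
        std::bind(&JointWrenchLogger::OnUpdate, this));
  }

private:
  void OnUpdate()
  {
    for (const auto &joint : this->model->GetJoints())
    {
      const physics::JointWrench w = joint->GetForceTorque(0);
      gzmsg << joint->GetName()
            << " force(child)=" << w.body2Force
            << " torque(child)=" << w.body2Torque << "\n";
    }
  }

  physics::ModelPtr model;
  event::ConnectionPtr conn;
};

GZ_REGISTER_MODEL_PLUGIN(JointWrenchLogger)
}
```

It only writes to the console, but it at least exposes the per-joint forces and torques that are otherwise invisible.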

The easiest way to build URDF models is to build the model in actual CAD software and then import it. Just like:

If not that, use xacro to build the URDF.

Building worlds in Gazebo isn’t that difficult. If you open Gazebo with an empty world, you can import models and place them where you want them to be, then generate a .world file.

Well, in my case, our team built up a supply-chain simulation in Gazebo. We check the product cycle time to gauge the efficiency of the robots.

Other visualizations would also help - joint alignment error etc. And maybe most importantly - easy custom visualizations. The way visualizations are done in Gazebo now is super complicated. I once tried to visualize a specific force, but I haven’t succeeded in implementing a reliable visualizer as a Gazebo plugin - the visuals were getting stuck, duplicated, removed etc. Maybe the only thing that’s missing is a good tutorial that would explain all the required steps. The way ROS handles Markers, on the other hand, is super easy to use.
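For comparison, here is roughly all it takes to draw an arrow with a ROS Marker - a minimal roscpp sketch (the topic, frame, and arrow geometry are just example values):

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Point.h>
#include <visualization_msgs/Marker.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "force_marker_demo");
  ros::NodeHandle nh;
  ros::Publisher pub =
      nh.advertise<visualization_msgs::Marker>("visualization_marker", 1);

  ros::Rate rate(10);
  while (ros::ok())
  {
    visualization_msgs::Marker m;
    m.header.frame_id = "base_link";  // frame the arrow is expressed in
    m.header.stamp = ros::Time::now();
    m.ns = "forces";
    m.id = 0;
    m.type = visualization_msgs::Marker::ARROW;
    m.action = visualization_msgs::Marker::ADD;
    m.pose.orientation.w = 1.0;       // identity orientation

    // Arrow from the origin along +x; a real debugging node would set
    // the end point from the (scaled) measured force vector.
    geometry_msgs::Point start, end;
    end.x = 0.5;
    m.points.push_back(start);
    m.points.push_back(end);

    m.scale.x = 0.02;  // shaft diameter
    m.scale.y = 0.04;  // head diameter
    m.color.r = 1.0;   // red ...
    m.color.a = 1.0;   // ... and fully opaque

    pub.publish(m);
    rate.sleep();
  }
  return 0;
}
```

RViz picks this up from the visualization_marker topic with no extra setup.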

Also, the “closedness” of the SDF format is really limiting. By “closedness” I mean the fact that I can’t easily parse custom flags or attributes.
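To be fair, there is one exception: a plugin can parse custom child elements of its own &lt;plugin&gt; tag, since it receives them as an sdf::ElementPtr. A small sketch (the &lt;gain&gt; element is made up):

```cpp
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo
{
class ParamDemo : public ModelPlugin
{
public:
  void Load(physics::ModelPtr /*_model*/, sdf::ElementPtr _sdf) override
  {
    // <gain> is a custom child element of our <plugin> tag;
    // SDF passes it through to us unparsed.
    double gain = 1.0;  // default when the element is absent
    if (_sdf->HasElement("gain"))
      gain = _sdf->Get<double>("gain");
    gzmsg << "gain = " << gain << "\n";
  }
};

GZ_REGISTER_MODEL_PLUGIN(ParamDemo)
}
```

But that only works inside &lt;plugin&gt; tags, not anywhere else in the SDF.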

Another thing that complicates development is that URDF doesn’t support closed kinematic chains, whereas SDF does.

I’m also missing an SDF->URDF converter (exactly for models built inside Gazebo but intended to be used together with ROS). Look at the Virtual SubT challenge models - the organizers require a working SDF (that’s a hard constraint) and also a URDF of the robot, but as you dig deeper into the URDFs, you see that they’re often not up to date with the SDF. I understand that requiring it the other way around would solve this (i.e. having a URDF with <gazebo> tags automatically converted to SDF), but why not allow both directions?

In Gazebo (not sure about Ignition) you basically can’t implement your own sensors. Yes, you can implement them as a model plugin, but their code then isn’t executed on the sensor thread, you can’t use the <sensor> tag for them, etc. (this goes together with the SDF closedness). So either you’re happy with the few sensors somebody has already implemented, or you’re doomed. This isn’t the way to support innovation. I created https://github.com/peci1/gazebo_custom_sensor_preloader to at least allow changing the implementation of the already existing sensors if you’re not satisfied with it.
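To illustrate the official extent of customization: a SensorPlugin can attach to and post-process an existing sensor type, but not define a new one. A rough sketch for a ray sensor (untested, assuming a Gazebo 9-era API; the class name is mine):

```cpp
#include <functional>
#include <vector>

#include <gazebo/gazebo.hh>
#include <gazebo/sensors/sensors.hh>

namespace gazebo
{
// Post-processes an existing ray sensor's output; it cannot
// introduce a brand-new <sensor> type.
class RayPostProcessor : public SensorPlugin
{
public:
  void Load(sensors::SensorPtr _sensor, sdf::ElementPtr /*_sdf*/) override
  {
    this->ray = std::dynamic_pointer_cast<sensors::RaySensor>(_sensor);
    if (!this->ray)
    {
      gzerr << "RayPostProcessor must be attached to a ray sensor\n";
      return;
    }
    this->conn = this->ray->ConnectUpdated(
        std::bind(&RayPostProcessor::OnUpdate, this));
  }

private:
  void OnUpdate()
  {
    std::vector<double> ranges;
    this->ray->Ranges(ranges);
    // ... apply a custom noise/dropout model and republish here ...
  }

  sensors::RaySensorPtr ray;
  event::ConnectionPtr conn;
};

GZ_REGISTER_SENSOR_PLUGIN(RayPostProcessor)
}
```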

The page describing the available sensors is missing one important thing - the limitations, approximations, and omissions the simulated sensors have compared to their real-world counterparts. Did you know, for example, that depth cameras cannot have noise in Gazebo? Or that a simulated lidar captures all points at a single time instant? These are all little problems, but until you know about them, you just wonder why the simulated model is behaving weirdly.

@peci1 Maybe https://github.com/robotology/gazebo-yarp-plugins/tree/master/plugins/externalwrench will be of some use if you would like to develop a plugin for applying external wrenches to different links.

Concerning simulation of human-robot collaboration scenarios, a good/usable model of a human as an articulated rigid body is not available in the URDF/SDF formats. From what I have experienced, there is a bit of a disconnect between the graphics community’s models (which can be useful for generating articulated human models) and the robotics community; see https://github.com/makehumancommunity/makehuman/issues/19. For example, http://www.makehumancommunity.org/ is a great piece of software for creating a highly detailed human model, but the model can only be exported to .dae format without the skeletal kinematic elements that are needed by http://wiki.ros.org/collada_urdf to convert it to a URDF model. The recently introduced Gazebo actors (http://gazebosim.org/tutorials?tut=actor&cat=build_robot) are limited in their use, as at the moment only animation scripts are possible with them.

Wow, that’s super cool! Do you simulate the physics or just the workflow?

We checked for physical interference with the individual product lines as the robots moved, and measured the cycle time as the robots picked items in one place and put them down in another.