I’d like to start a discussion on what you see as the biggest challenges for the industry in adopting and scaling the use of simulation. We think simulation is extremely valuable, yet not widely adopted as a standard practice in robotics development, testing, and validation. We want to gather some community wisdom to understand why that is and how we can help, either by improving Gazebo or the AWS RoboMaker service (which offers Gazebo on managed AWS infrastructure).
To get the conversation going, we’ve heard that a few things in simulation are hard: building URDF models, building Gazebo plugins, building Gazebo worlds, building realistic agent behaviors (e.g. humans, goats), and measuring and analyzing data from simulation. What’s your take? Have you used Gazebo simulation in any project? Please share your GIFs!
If you would like to help the community with quantitative data, you can fill out the short survey below:
We will share the results of the survey with the community.
Building worlds in Gazebo isn’t that difficult. If you open Gazebo with an empty world, you can import models and place them where you want them to be, then generate a .world file.
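For reference, hand-writing the world file is also quite approachable. A minimal sketch might look like this (the included model URIs assume those models are available in your Gazebo model database):

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="default">
    <include>
      <uri>model://sun</uri>
    </include>
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <!-- Place an imported model at a chosen pose (x y z roll pitch yaw) -->
    <include>
      <uri>model://table</uri>
      <pose>1 0 0 0 0 0</pose>
    </include>
  </world>
</sdf>
```

Editing poses in a text file like this is often faster than dragging models around in the GUI once a world grows beyond a few objects.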
Other visualizations would also help - joint alignment errors, etc. And maybe most importantly - easy custom visualizations. The way visualizations are done in Gazebo now is super complicated. I once tried to visualize a specific force, but I never succeeded in implementing a reliable visualizer as a Gazebo plugin - the visuals kept getting stuck, duplicated, removed, etc. Maybe the only thing that’s missing is a good tutorial explaining all the required steps. The way ROS handles Markers is, on the other hand, super easy to use.
Also, the “closedness” of the SDF format is really limiting. By “closedness” I mean the fact that I can’t easily parse custom flags or attributes.
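One common workaround is to smuggle custom parameters in as children of a `<plugin>` element, whose contents sdformat passes through without validation, and parse them yourself. A minimal sketch in Python (the plugin and element names here are made up for illustration):

```python
import xml.etree.ElementTree as ET

SDF = """
<sdf version="1.6">
  <model name="my_robot">
    <plugin name="my_plugin" filename="libmy_plugin.so">
      <!-- custom flags live here; sdformat does not validate plugin children -->
      <max_speed>1.5</max_speed>
      <use_gps>true</use_gps>
    </plugin>
  </model>
</sdf>
"""

def read_plugin_params(sdf_string, plugin_name):
    """Collect the child elements of a named <plugin> as a tag -> text dict."""
    root = ET.fromstring(sdf_string)
    for plugin in root.iter("plugin"):
        if plugin.get("name") == plugin_name:
            return {child.tag: child.text for child in plugin}
    return {}

params = read_plugin_params(SDF, "my_plugin")
print(params["max_speed"])  # "1.5"
```

It’s a hack, of course - the point is exactly that SDF shouldn’t force you to route everything through plugin elements in the first place.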
Another thing that complicates development is the fact that URDF doesn’t support closed kinematic chains, whereas SDF does.
I’m also missing an SDF->URDF converter (exactly for the models built inside Gazebo but intended to be used together with ROS). Look at the Virtual SubT challenge models - the organizers require a working SDF (that’s a hard constraint) and also a URDF of the robot, but as you dig deeper into the URDFs, you see that they’re often not up to date with the SDF. I understand that requiring it the other way around would solve this (i.e. having a URDF with <gazebo> tags automatically converted to SDF), but why not allow both directions?
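For the URDF->SDF direction, Gazebo classic can already do the conversion on the command line (`gz sdf -p model.urdf` prints the SDF equivalent, if I remember correctly). For SDF->URDF I’m not aware of an official tool; for simple open-chain models you could sketch a naive converter along these lines (illustrative only - it ignores poses, inertials, and geometry, and would need real error handling):

```python
import xml.etree.ElementTree as ET

def sdf_to_urdf(sdf_string):
    """Naive SDF -> URDF sketch: copies link names and joint topology.

    A real converter must also handle poses (SDF poses are model-relative,
    URDF poses are parent-relative), inertials, and geometry, and it will
    necessarily fail on closed kinematic chains, which URDF cannot express.
    """
    sdf = ET.fromstring(sdf_string)
    model = sdf.find("model")
    robot = ET.Element("robot", name=model.get("name"))
    for link in model.findall("link"):
        ET.SubElement(robot, "link", name=link.get("name"))
    for joint in model.findall("joint"):
        j = ET.SubElement(robot, "joint",
                          name=joint.get("name"),
                          type=joint.get("type"))
        # SDF uses <parent>name</parent>; URDF uses <parent link="name"/>
        ET.SubElement(j, "parent", link=joint.findtext("parent"))
        ET.SubElement(j, "child", link=joint.findtext("child"))
    return ET.tostring(robot, encoding="unicode")

SDF = """
<sdf version="1.6">
  <model name="arm">
    <link name="base"/>
    <link name="upper"/>
    <joint name="shoulder" type="revolute">
      <parent>base</parent>
      <child>upper</child>
    </joint>
  </model>
</sdf>
"""
print(sdf_to_urdf(SDF))
```

Even a lossy converter like this would at least keep the joint topology of the two files from drifting apart, which is exactly the problem in the SubT models.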
In Gazebo (not sure about Ignition) you basically can’t implement your own sensors. Yes, you can implement them as a model plugin, but their code then isn’t executed on the sensor thread, you can’t use the <sensor> tag for them, etc. (this goes together with the SDF closedness). So either you’re happy with the few sensors somebody has already implemented, or you’re doomed. This isn’t the way to support innovation. I created https://github.com/peci1/gazebo_custom_sensor_preloader to at least allow changing the implementation of already existing sensors if you’re not satisfied with them.
The page describing the available sensors is missing one important thing - the limitations, approximations, and omissions the simulated sensors have compared to their real-world counterparts. Did you know, for example, that depth cameras cannot have noise in Gazebo? Or that a simulated lidar captures all points in a single time instant? These are all little problems, but until you know about them, you just wonder why the simulated model is behaving weirdly.
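Until the simulated sensors gain these features, you can at least approximate some of them in post-processing. Since depth cameras can’t have noise, for instance, a subscriber can inject depth-dependent Gaussian noise itself. A sketch with NumPy (the noise coefficients are made-up placeholders, not calibrated values for any real camera):

```python
import numpy as np

def add_depth_noise(depth, sigma_base=0.002, sigma_scale=0.0025, rng=None):
    """Add depth-dependent Gaussian noise to a depth image (in meters).

    The quadratic growth of sigma with distance loosely mimics stereo/IR
    depth sensors, whose error grows with range; the coefficients are
    placeholders you would fit to your actual camera.
    """
    rng = rng or np.random.default_rng()
    sigma = sigma_base + sigma_scale * depth ** 2
    noisy = depth + rng.normal(0.0, 1.0, depth.shape) * sigma
    return np.clip(noisy, 0.0, None)  # depth cannot be negative

depth = np.full((4, 4), 2.0)  # a flat wall 2 m away
noisy = add_depth_noise(depth, rng=np.random.default_rng(0))
```

It doesn’t fix occlusion or timing artifacts, but it stops algorithms from silently overfitting to perfectly clean depth data.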
Concerning simulation for human-robot collaboration scenarios, a good/usable model of a human as an articulated rigid body is not available in the URDF/SDF formats. From what I have experienced, there is a bit of a disconnect between the graphics community’s models (which can be useful for generating articulated human models) and the robotics community; see https://github.com/makehumancommunity/makehuman/issues/19. For example, http://www.makehumancommunity.org/ is great software for creating a highly detailed human model, but the model is exported to .dae format without the skeletal kinematic elements that are needed for using http://wiki.ros.org/collada_urdf to convert it to a URDF model. The recently introduced Gazebo actors (http://gazebosim.org/tutorials?tut=actor&cat=build_robot) are limited in their use, as at the moment only animation scripts are possible with them.
We checked for physical interference with each product line when the robots were moved, and measured the cycle time when the robots picked items in one place and put them in another.
I find Gazebo very buggy and unreliable. Run a launch file 10 times and it fails; the 11th time, it runs perfectly, for no clear reason. I have wasted so many hours just getting my nodes up. I wish there were an alternative simulator to use with ROS. It’s needed. I don’t know how you can do any serious robotics research without simulation.
Yes, we need another ROS simulator. Then we’d have 15 simulators that are buggy and unreliable. LOL
I feel your pain. I am struggling with Gazebo too. Right now my robot is breakdancing…which would be awesome if that’s what I was trying to do. Here it is:
It’s been such a struggle that maybe I should just change my goal to “dancing robot” and voila - mission accomplished. I think all the simulators are a struggle in one way or another. Simulation is just plain hard, so I think we need to improve what we have. Make Gazebo easier to set up, better testing/diagnostics perhaps, more intuitive surface/friction parameter tuning, and definitely more documentation. This is possible through open source, and some of this can be done via plugins. I think improving what we have is far easier than starting from scratch.
I don’t want to leave you with nothing, so perhaps you can work on your launch lifecycle and reduce the debug iteration cycle. I got my nodes to support “Reset Model”, where the system time resets and I can recompile my node and restart the simulation without restarting the Gazebo client/server. In fact, my main node actually sends the Reset World or Reset Model Pose ROS command on startup for me. I then also set up Hyperopt to tune my parameters. Worth the effort!
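If Hyperopt isn’t an option, even plain random search over the reset-and-run loop gets you surprisingly far. A minimal self-contained sketch (the objective is a dummy quadratic standing in for a real simulation score, and the parameter names kp/kd are just examples):

```python
import random

def run_sim(params):
    """Stand-in for one simulation episode; returns a cost to minimize.

    In practice this would reset the Gazebo world (e.g. via the
    /gazebo/reset_world service), run the trial, and score the result.
    The quadratic below is only a dummy objective for the sketch.
    """
    return (params["kp"] - 2.0) ** 2 + (params["kd"] - 0.5) ** 2

def random_search(n_trials, seed=0):
    """Sample parameters uniformly and keep the best-scoring trial."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        params = {"kp": rng.uniform(0.0, 5.0), "kd": rng.uniform(0.0, 2.0)}
        cost = run_sim(params)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost

best, cost = random_search(200)
```

Hyperopt’s TPE sampler will usually converge in fewer trials than this, but the surrounding loop - reset, run, score, record - is identical, so it’s a cheap way to validate your harness first.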
I would not use the word buggy to describe ROS sims. Coming from a background of writing my own rover OS, configuration settings and satisfied library dependencies are critical to a clean robot mission. Simulations double the number of settings and dependencies required for success. I have been practicing ROS 2 since November, with successes and failures. All of the failures happened when I deviated from a critical step described in the tutorials. The sims may need more in the way of a recipe of steps for configuring the sim’s dependencies and/or settings.
I’d love to, Katherine, but it has proved a challenge. I am using Kinetic and Gazebo 7 on my main system. A while ago, I upgraded to Ubuntu 18 on a separate machine and installed Melodic, intending to then update Gazebo. There was, however, some kind of nightmare with OpenCV. And don’t mention Python 3. All of these applications are very difficult to get to work together.
I think my requirements are pretty simple: ROS, OpenCV, Qt, URDF, and a multi-robot simulator. I would have thought this is basic for vision-guided robot simulation.
Gazebo 7 is just totally infuriating. Launch files crash repeatedly, then work with no changes to anything. When it crashes, gzserver often refers to log files that are empty. How do you diagnose these “boost” and “transport” errors? I’d love some guidance. When something sometimes works and sometimes doesn’t, I know this is a tough ask.
If you don’t depend on Python 2, I would upgrade to Ubuntu 20.04 and Noetic right away to save yourself the headaches of migrating later. Packages that aren’t released for Noetic usually compile fine.
I can’t help with Gazebo though, sadly I have had mostly problems with it as well.
Nice video! Reminds me of when I tried to place a robot arm on a box.
Okay, then you’re doing something wrong. Gazebo and ROS install fine on a clean Ubuntu 18.04 system if you follow the tutorials. I’ve never had problems installation-wise (except on Windows).
And yes, Gazebo sometimes crashes, but I wouldn’t say I ever had these problems so often that they started bothering me (and I was running days-long simulations on tens of computers). It could probably be linked to some problems with your install or custom plugins (or hardware?). If you want to diagnose the problems, then download the -dbg packages to get debugging symbols and run Gazebo inside gdb.
But as Katherine said, you can take your specific problems to either ROS Answers or the Gazebo GitHub and you’ll probably get some help.
Did you consider using Webots?
Over the past few years, we have been working on improving the ROS and ROS 2 interfaces to Webots.
Webots is generally appreciated for its reliability, speed, and ability to produce reproducible results.
I would love to hear feedback from people using ROS with Webots.