Other visualizations would also help - joint alignment error etc. And maybe most importantly - easy custom visualizations. The way visualizations are currently done in Gazebo is super complicated. I once tried to visualize a specific force, but I never succeeded in implementing a reliable visualizer as a Gazebo plugin - the visuals kept getting stuck, duplicated, removed etc. Maybe the only thing that’s missing is a good tutorial explaining all the required steps. The way ROS handles Markers is, on the other hand, super easy to use.
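For comparison, publishing a force arrow as an RViz Marker takes only a few dozen lines of roscpp. A minimal sketch (the frame, topic name and force values are made up for illustration):

```cpp
#include <ros/ros.h>
#include <visualization_msgs/Marker.h>
#include <geometry_msgs/Point.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "force_marker_demo");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<visualization_msgs::Marker>("force_marker", 1);

  ros::Rate rate(10);
  while (ros::ok())
  {
    visualization_msgs::Marker m;
    m.header.frame_id = "base_link";  // assumed frame
    m.header.stamp = ros::Time::now();
    m.ns = "forces";
    m.id = 0;
    m.type = visualization_msgs::Marker::ARROW;
    m.action = visualization_msgs::Marker::ADD;
    m.pose.orientation.w = 1.0;  // identity pose; the arrow is given by points

    // Arrow from the application point along the force vector (made-up values).
    geometry_msgs::Point start, end;
    end.x = 1.0; end.z = 0.5;
    m.points.push_back(start);
    m.points.push_back(end);

    m.scale.x = 0.02;  // shaft diameter
    m.scale.y = 0.04;  // head diameter
    m.color.r = 1.0;
    m.color.a = 1.0;

    pub.publish(m);
    rate.sleep();
  }
  return 0;
}
```

RViz then handles the whole lifecycle of the visual for you - nothing gets stuck or duplicated.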
Also, the “closedness” of the SDF format is really limiting. By “closedness” I mean the fact that I can’t easily add and parse custom tags or attributes.
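The one place where custom content is tolerated is inside your own <plugin> element, where the sdformat API lets a plugin read arbitrary child elements - a sketch, with the element name made up:

```cpp
#include <gazebo/common/Plugin.hh>

namespace gazebo
{
class MyPlugin : public ModelPlugin
{
public:
  void Load(physics::ModelPtr /*model*/, sdf::ElementPtr sdf) override
  {
    // Child elements of <plugin> can carry custom configuration...
    if (sdf->HasElement("my_custom_flag"))
      this->flag = sdf->Get<bool>("my_custom_flag");
    // ...but there is no comparable way to attach custom tags or attributes
    // to <link>, <joint> or <sensor> elements elsewhere in the file.
  }

private:
  bool flag{false};
};

GZ_REGISTER_MODEL_PLUGIN(MyPlugin)
}  // namespace gazebo
```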
Another thing that complicates development is the fact that URDF doesn’t support closed kinematic chains, whereas SDF does.
I’m also missing an SDF->URDF converter (precisely for models built inside Gazebo but intended to be used together with ROS). Look at the Virtual SubT challenge models - the organizers require a working SDF (that’s a hard constraint) and then also a URDF of the robot, but as you dig deeper into the URDFs, you see that they’re often not up-to-date with the SDF. I understand that requiring it the other way around would solve this (i.e. having a URDF with <gazebo> tags automatically converted to SDF), but why not allow both directions?
In Gazebo (not sure about Ignition) you basically can’t implement your own sensors. Yes, you can implement them as a model plugin (see the sketch below), but their code then isn’t executed on the sensor thread, you can’t use the <sensor> tag for them etc. (this goes together with the SDF closedness). So either you’re happy with the few sensors somebody has already implemented, or you’re doomed. This isn’t the way to support innovation. I created https://github.com/peci1/gazebo_custom_sensor_preloader to at least allow swapping the implementation of an already existing sensor type if you’re not satisfied with it.
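For illustration, here’s roughly what the model-plugin workaround looks like (a sketch against the Gazebo classic API; the link name and measurement logic are hypothetical). Everything happens in the physics update callback, so nothing runs on the sensor thread and no <sensor> tag is involved:

```cpp
#include <functional>
#include <gazebo/common/Plugin.hh>
#include <gazebo/common/Events.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo
{
class FakeCustomSensor : public ModelPlugin
{
public:
  void Load(physics::ModelPtr model, sdf::ElementPtr /*sdf*/) override
  {
    this->link = model->GetLink("sensor_link");  // hypothetical link name
    this->conn = event::Events::ConnectWorldUpdateBegin(
        std::bind(&FakeCustomSensor::OnUpdate, this, std::placeholders::_1));
  }

private:
  void OnUpdate(const common::UpdateInfo& /*info*/)
  {
    // Runs at the physics rate, on the physics thread - not the sensor thread.
    auto pose = this->link->WorldPose();
    // ... compute a measurement from the pose and publish it yourself ...
  }

  physics::LinkPtr link;
  event::ConnectionPtr conn;
};

GZ_REGISTER_MODEL_PLUGIN(FakeCustomSensor)
}  // namespace gazebo
```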
The page describing the available sensors is missing one important thing - the limitations, approximations and omissions the simulated sensors have compared to their real-world counterparts. Did you know, for example, that depth cameras cannot have noise in Gazebo? Or that a simulated lidar captures all points in a single time instant? These are all little problems, but until you know about them, you just wonder why the simulated model behaves so weirdly.