First, excellent work from the Ignition Dev Team, and congrats on the new release of Citadel!
Reading through the feature comparison between Ignition Citadel and Gazebo-classic version 11, and the status of the migration to Ignition Citadel, I noticed the new approach for ROS integration:
> ROS integration with Ignition will be done primarily via a transport bridge instead of plugins, contained in the `ros_ign` package.

> `ros_ign_bridge` provides a network bridge which enables the exchange of messages between ROS 2 and Ignition Transport. Its support is limited to only certain message types.
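For readers unfamiliar with the bridge, my understanding of the basic workflow is roughly the following (the topic name and message pairing below are just an illustration, adapted from the `ros_ign_bridge` examples):

```shell
# Bridge a single topic between ROS 2 and Ignition Transport.
# Syntax: /topic@ROS_MSG_TYPE@IGN_MSG_TYPE ('@' makes the bridge bidirectional).
ros2 run ros_ign_bridge parameter_bridge /chatter@std_msgs/msg/String@ignition.msgs.StringMsg

# Listen on the ROS 2 side...
ros2 topic echo /chatter

# ...and publish from the Ignition side.
ign topic -t /chatter -m ignition.msgs.StringMsg -p 'data:"hello"'
```

Each bridged topic incurs its own conversion between the two type systems, which is what prompts the questions below.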
As opposed to using the ROS client API directly within system-level plugins for simulating sensors and actuators, as `gazebo_ros_pkgs`/`gazebo_plugins` does in Gazebo-classic, it looks like the integration approach has shifted: a network bridging process now exchanges bidirectional message traffic between ROS and Ignition Transport.
Correct me if I'm mistaken, but wouldn't bridging Ignition Transport with ROS 2 this way add significant overhead compared to the classic Gazebo ROS 2 plugins? I imagine the additional memory copies and re-serialization between Google Protocol Buffers over ZeroMQ on one side and the IDL and RMW layers on the other will introduce additional latency and QoS complications.
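To make the concern concrete, here is a small, ROS-free Python sketch comparing an in-process handoff by reference against the extra serialize/deserialize hop a bridge must perform. The 1 MB payload stands in for something like a point cloud, and `pickle` stands in for protobuf/CDR encoding; absolute numbers are machine-dependent, but the per-message copy cost is the point:

```python
import pickle
import time

# A stand-in for a large sensor message, e.g. ~1 MB of point cloud data.
payload = {"frame": "lidar", "points": bytes(1_000_000)}

def in_process_handoff(msg):
    # Same-process delivery: the subscriber receives a reference, no copy.
    return msg

def bridged_handoff(msg):
    # A bridge between two transports must serialize on one side and
    # deserialize on the other, costing at least one full copy each way.
    wire = pickle.dumps(msg)   # simulator side -> wire bytes (stand-in for protobuf)
    return pickle.loads(wire)  # wire bytes -> ROS-side message (stand-in for CDR)

t0 = time.perf_counter()
for _ in range(100):
    in_process_handoff(payload)
t_direct = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    bridged_handoff(payload)
t_bridged = time.perf_counter() - t0

print(f"direct:  {t_direct * 1e6:10.1f} us for 100 messages")
print(f"bridged: {t_bridged * 1e6:10.1f} us for 100 messages")
```

And this sketch still flatters the bridge, since it omits the network hop between the two processes entirely.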
As I understand it, the merits of the new approach include:
- ROS-agnostic Gazebo plugins
- Decoupled ROS and Ignition releases and runtimes
- A simpler server-client network topology with Ignition Transport
- The Ignition server and ROS bridge can easily run on separate hosts
Indeed, network bridging is a common design pattern for ROS integration in other robotic simulators.
However, in my recent findings with AutoWare.Auto, it seems these network-based bridges eventually become quite the bottleneck for simulation pipelines, given the ever-increasing scale and fidelity of environmental sensing brought to bear in robotics. AutoWare.Auto intends to upgrade its integration with LGSVL, replacing the current `ros2-lgsvl-bridge` with native plugins built on the `ros2_dotnet` client library, enabling more efficient, lower-latency messaging between ROS 2 and the Unity game engine.
Though Ignition Gazebo is perhaps a more general robotic simulation framework than those cited above, I suspect the robotic platforms, environments, and applications users will seek to emulate in Ignition Gazebo will not be much different or less demanding, necessitating similar native optimizations and efficient intra-process communication. High-resolution depth sensors, feedback control loops, multi-camera or multi-robot setups: these are common cases that demand high-bandwidth, low-latency connections; slowing down the real-time factor to avoid missed deadlines or dropped messages would be suboptimal and would impair scalability.
Thinking out loud: would it be possible to optionally compose `ros_ign_bridge` within the Ignition Gazebo server at runtime? Would sharing the same process help bypass re-serialization of message structs while maximizing the use of shared-memory transport and the QoS options of the available RMWs? In the current implementation, I'm guessing the protobuf messages are serialized before being sent through the ZeroMQ layer?
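To illustrate what in-process composition could buy: inside a single process, a bridge could hand the subscriber the very buffer the simulator produced, instead of encoding and decoding it. A toy Python sketch of the two delivery paths (all class and method names here are hypothetical, not the actual `ros_ign` API):

```python
import pickle

class InProcessBus:
    """Toy pub/sub: same-process subscribers get the publisher's object by reference."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, msg):
        for cb in self.subscribers:
            cb(msg)  # zero-copy: the callback sees the original object

class SerializingBridge:
    """Toy model of a transport bridge: every message crosses a wire format."""
    def __init__(self, bus):
        self.bus = bus

    def forward(self, msg):
        wire = pickle.dumps(msg)              # encode on the simulator side
        self.bus.publish(pickle.loads(wire))  # decode on the ROS side: a full copy

received = []
bus = InProcessBus()
bus.subscribe(received.append)

original = {"ranges": list(range(8))}

bus.publish(original)               # in-process path: same object delivered
bridge = SerializingBridge(bus)
bridge.forward(original)            # bridged path: an equal but distinct copy

print(received[0] is original)  # True  (reference passed straight through)
print(received[1] is original)  # False (copied across the serialization hop)
```

Composing the bridge into the server process would, in principle, let the first path replace the second for the Ignition-to-ROS handoff, leaving only the RMW's own (possibly shared-memory or intra-process) transport on the ROS side.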
A last thought is about porting classic Gazebo ROS plugins to Ignition that served to emulate hardware driver interfaces, such as services for controlling sensor scan patterns, camera exposure parameters, or motor control gains. If `ros_ign_bridge` does not support ROS services, RPC, or more exotic message types (e.g. radar, ultrasonic), are users encouraged to use the Ignition plugin API with `rclcpp` directly to emulate such interfaces? Or is the emulation of the interface logic to be bifurcated between a separate Ignition plugin and an accompanying ROS node?