I see, and this was the purpose of the development of uROSnode years ago. And I agree that custom protocols could not be considered “most efficient”.
Our internal debate here is motivated by the architecture of ros-control, where, as far as I understand, if we follow approach 3 the flow is:
- the kinematics controller subscribes to a geometry_msgs/Twist message
- the kinematics controller computes the joint commands
- the joint commands are handled by the corresponding hardware interface class (ActuatorInterface)
- the ActuatorInterface implementation publishes the joint commands over a ROS topic (ActuatorInterface::write())
- the firmware on the motor controller board receives the commands (via micro-ROS or whatever) and publishes the joint state on another ROS topic
- the ActuatorInterface implementation subscribes to the joint state, does some logic in the corresponding callback (integration of the position, etc.) and, whenever ros-control asks for the current state, populates the joints[N] structure (ActuatorInterface::read())
- the joint states are published by the kinematics controller on a ROS topic (/joint_state)
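To make the flow above concrete, here is a minimal sketch of what such a topic-backed actuator interface could look like. All names are hypothetical and the ROS plumbing (publisher, subscriber) is stubbed out, so this only illustrates the write() / callback / read() split, not a real ros-control plugin:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for sensor_msgs/JointState (only the fields we need).
struct JointStateMsg {
    std::vector<std::string> name;
    std::vector<double> position;
    std::vector<double> velocity;
};

// Sketch of a topic-backed actuator interface: write() would publish the
// commands, the hardware pushes states back via stateCallback(), and read()
// hands the latest state to ros-control on demand.
class TopicActuatorInterface {
public:
    explicit TopicActuatorInterface(std::vector<std::string> joints)
        : joints_(std::move(joints)),
          cmd_(joints_.size(), 0.0),
          pos_(joints_.size(), 0.0),
          vel_(joints_.size(), 0.0) {}

    // Flow step: publish the joint commands (publisher stubbed out here).
    void write() {
        JointStateMsg cmd;
        cmd.name = joints_;
        cmd.position = cmd_;
        (void)cmd;  // a real implementation would do pub_.publish(cmd)
    }

    // Flow step: callback for the state topic pushed by the firmware.
    void stateCallback(const JointStateMsg& msg) {
        for (size_t i = 0; i < msg.name.size(); ++i)
            for (size_t j = 0; j < joints_.size(); ++j)
                if (msg.name[i] == joints_[j]) {
                    pos_[j] = msg.position[i];
                    vel_[j] = msg.velocity[i];
                }
    }

    // Flow step: ros-control pulls the current state whenever it asks.
    void read(std::vector<double>& pos, std::vector<double>& vel) const {
        pos = pos_;
        vel = vel_;
    }

    std::vector<double>& command() { return cmd_; }

private:
    std::vector<std::string> joints_;
    std::vector<double> cmd_, pos_, vel_;
};
```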
Does it look like a valid approach to you?
Basically, we use ROS as the communication protocol to the hardware, which, as you said, might be no worse than other custom protocols, and we need to integrate / interpolate the joint states, as they are pushed by the hardware instead of being pulled by the software.
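Since the states are pushed asynchronously while ros-control reads them at its own rate, the interface has to bridge the two clocks. A minimal sketch of what I mean by interpolation, assuming constant velocity between samples (names hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Last sample pushed by the hardware for one joint.
struct JointSample {
    double position;
    double velocity;
    double stamp;  // seconds, hardware-side timestamp of the sample
};

// Estimate the position at read() time by extrapolating the last pushed
// sample forward, assuming the velocity stayed constant since then.
double extrapolatePosition(const JointSample& last, double now) {
    const double dt = now - last.stamp;
    return last.position + last.velocity * dt;
}
```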
I’m wondering if this approach could somehow be formalized by defining standard topics for commands and states, so that, given ROS-enabled motor controllers, there would be no need to write any hardware interface code, just to configure the topics / joint IDs.
To synchronously command the actuators, a single topic with messages like sensor_msgs/JointState would be fine, subscribed by all actuators, each of which then looks for its own ID.
To get the joint status, in our architecture we would need a topic for each actuator, with data joined and interpolated by the “standardized” hardware interface.
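The configuration for such a "standardized" hardware interface could then be purely declarative; a hypothetical example (all topic names and keys are made up for illustration):

```yaml
# Hypothetical config for a generic, topic-based hardware interface:
# no C++ code to write, only topics and joint IDs to map.
generic_topic_hw:
  command_topic: /actuator_commands      # shared, sensor_msgs/JointState-like
  joints:
    wheel_left:
      id: 1
      state_topic: /actuators/1/state    # one state topic per actuator
    wheel_right:
      id: 2
      state_topic: /actuators/2/state
```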
We are looking forward to some fully ROS-enabled hardware, i.e. hardware subscribing to control_msgs/JointControllerState to configure the PID running on the hardware directly from ROS / rviz (again without the need for any hardware_interface code), offering services to start/stop the driver, and so on.
Do you feel it would be useless, as writing hardware interfaces is not a big deal, or worth it?
I’m 100% with you, but we are speaking of different “embedded” hardware. We target Cortex-M MCUs, let’s say from STM32F3 to STM32H7, and DDS is not trivial to port to such restricted hardware.
I just read about embeddedRTPS, which I see is still in its early days, but we will give it a try, so as not to rely on a bridge / agent.
Direct participation in the ROS network would be our preferred approach, and this was the motivation to develop uROSnode in place of rosserial.
But the scope of this thread was not really about how to interface to ROS, but how to implement kinematics and interface to the actuators in the case we have ROS-enabled hardware.