
micro-ROS enabled robot and kinematics

I would like to hear some feedback from the community regarding a long-lasting debate we have internally.
We develop electronics for robotics (motor drivers, sensors, etc.), for which we also develop the firmware, which we then interface to ROS for high-level logic. We are now finally moving to ROS2 and micro-ROS.

Being hardware developers, we like to embed as much as we can, building hardware which natively supports ROS (indeed, we developed uROSnode back in 2013, which I think was the first ROS1 client running on microcontrollers).

While this seems to me by far the best approach for sensors, for easier integration (i.e., an IMU that publishes sensor_msgs/Imu messages), we are still not sure which is the best solution for interfacing with actuators.

Thinking of a mobile robot, I see different approaches:

  1. the robot hardware natively speaks ROS at robot level, subscribing to /cmd_vel and publishing /odom, with the kinematics running on the hardware itself (like the Kobuki robot)
  2. the robot hardware relies on a simpler, perhaps more efficient, protocol consuming joint velocity setpoints and producing joint status, using ros2_control for the hardware interface and ros2_controllers for the kinematics (like, I think, most robot platforms)
  3. the robot hardware natively speaks ROS at joint level, subscribing to joint setpoints and publishing joint states, still using ros2_controllers for the kinematics

Approach 1. seems the most efficient and real-time friendly (control loops run on the MCU with an RTOS), but the embedded kinematics is much simpler than the one from ros2_controllers, lacking many nice-to-have features. We also need a software node on the PC subscribing to /odom and publishing /tf.
Approach 2. seems very common, but then we don’t have the native ROS interface fostered by micro-ROS, which we love.
Approach 3. is a compromise between the two, with a native ROS interface and advanced kinematics. In this case we can have both simple embedded kinematics (let’s say for teleop only) and advanced kinematics for autonomous navigation. But in the ros2_control hardware interface we need to publish and subscribe to ROS topics, which seems uncommon and probably not the most efficient solution.
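To make concrete what “kinematics” means in approaches 1 and 3, here is a minimal differential-drive sketch: converting a /cmd_vel-style (v, ω) setpoint into wheel velocities, and integrating wheel feedback back into an /odom-style pose. The wheel radius and track width are made-up illustrative values, and this is only a sketch of the math, not any particular firmware or controller implementation.

```python
import math

WHEEL_RADIUS = 0.05  # m (illustrative value)
TRACK_WIDTH = 0.30   # m, distance between wheels (illustrative value)

def twist_to_wheel_velocities(v, omega):
    """Inverse kinematics: body twist (v, omega) -> left/right wheel rad/s."""
    v_left = (v - omega * TRACK_WIDTH / 2.0) / WHEEL_RADIUS
    v_right = (v + omega * TRACK_WIDTH / 2.0) / WHEEL_RADIUS
    return v_left, v_right

def integrate_odometry(pose, v_left, v_right, dt):
    """Forward kinematics: wheel rad/s -> updated (x, y, theta) pose
    via simple Euler integration, as an odometry node would do."""
    x, y, theta = pose
    v = WHEEL_RADIUS * (v_right + v_left) / 2.0
    omega = WHEEL_RADIUS * (v_right - v_left) / TRACK_WIDTH
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight at 0.5 m/s for 1 s in 10 ms steps
vl, vr = twist_to_wheel_velocities(0.5, 0.0)
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = integrate_odometry(pose, vl, vr, 0.01)
print(round(pose[0], 3))  # ~0.5 m travelled along x
```

In approach 1 both functions run on the MCU; in approach 3 only the joint-level loop is embedded and this math lives in a ros2_controllers controller.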

In the past we followed approach 1; now we are following approach 3, but perhaps approach 2, being the most common, is the right one.

What do you think would be the best one, as micro-ROS users / developers?


I think that there’s not really one best, it’s highly dependent on your specific use case or application.

  1. Is great in cases where there’s limited bandwidth available, simplicity and abstraction are useful, and you don’t need more fine-grained control.

  2. Has been the approach when you have very specific requirements and have to use an existing off-the-shelf system that cannot be customized. This is the most common because historically it has been the only option. The abstraction is provided around the custom protocol provided by the vendor.

  3. The main difference here is that you’re using a standard protocol instead of the custom protocol per device. As well as supporting some levels of discovery etc. The option to add additional layers of abstraction is a separate consideration.

Part of the vision for ROS 2 has been to enable people to transition from case 2 to case 3 such that embedded devices will become first class members of the ROS network instead of relying on custom drivers to connect them as a sort of proxy intermediate layer. The standardization/abstraction does incur some costs and can’t be the “most efficient” but I don’t think that most of the custom protocols can be considered “most efficient” either. If you’re seeing blockers for specific use cases please speak up and we can look at how to improve the experience.


I would strongly recommend considering this approach when building new hardware or products. Beyond the intuition @tfoote points out, we did a lot of research and benchmarking in the past (e.g. 1, 2, 3 among others) which ended up showing how native RTPS interactions offered many advantages, strongly simplified system integration and, overall, allowed us to build products that were easier to use and independent of other subsystems (note that with micro-ROS you do depend on the DDS bridging on external compute resources, which needs to be properly optimized and fine-tuned for each specific application). There are various open source implementations you can use as a starting point.

For existing hardware, micro-ROS is indeed a great choice to bridge to the ROS world. Note however this comes at the cost of having to optimize every bridge with the RTPS/DDS network.


I see, and this was the purpose of the development of uROSnode years ago. And I agree that custom protocols could not be considered “most efficient”.
Our internal debate here is motivated by the architecture of ros_control, where, as far as I understand, if we follow approach 3 the flow is:

  1. a geometry_msgs/Twist message is received by the kinematics controller
  2. the kinematics controller computes the joint commands
  3. the joint commands are handled by the corresponding hardware interface class (ActuatorInterface)
  4. the ActuatorInterface implementation publishes the joint commands over a ROS topic (ActuatorInterface::write())
  5. the firmware on the motor controller board receives the commands (via micro-ROS or whatever) and publishes the joint state on another ROS topic
  6. the ActuatorInterface implementation subscribes to the joint state, does some logic in the corresponding callback (integration of the position, etc.) and whenever ros_control asks for the current state it populates the joints[N] structure (ActuatorInterface::read())
  7. the joint states are published by the kinematics controller on a ROS topic (/joint_state)
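The push-vs-pull mismatch in steps 5 and 6 can be sketched like this: the hardware pushes joint states asynchronously, while the control loop pulls them synchronously, so the interface caches the last sample and extrapolates. Plain Python stands in for a ros2_control ActuatorInterface here, topic transport is simulated by direct callables, and all names are illustrative, not real ros2_control API.

```python
class TopicBackedActuator:
    """Caches asynchronously pushed joint states so a synchronous
    read() (as the control loop expects) always has a value to return."""

    def __init__(self):
        self.position = 0.0
        self.velocity = 0.0
        self.last_stamp = 0.0

    def write(self, command, publish):
        # Stand-in for ActuatorInterface::write(): forward the command
        # (here via a callable standing in for a ROS publisher).
        publish(command)

    def on_state_msg(self, position, velocity, stamp):
        # Subscriber callback: hardware pushes its state whenever it likes.
        self.position = position
        self.velocity = velocity
        self.last_stamp = stamp

    def read(self, now):
        # Stand-in for ActuatorInterface::read(): pulled synchronously by
        # the control loop; extrapolate position from the last pushed sample.
        dt = now - self.last_stamp
        return self.position + self.velocity * dt, self.velocity

actuator = TopicBackedActuator()
sent = []
actuator.write(1.0, sent.append)            # controller commands 1 rad/s
actuator.on_state_msg(0.0, 1.0, stamp=0.0)  # hardware pushes a state sample
pos, vel = actuator.read(now=0.1)           # control loop pulls 100 ms later
print(pos, vel)  # 0.1 1.0
```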

Does it look like a valid approach to you?
Basically, we use ROS as the communication protocol to the hardware, which, as you said, might be no worse than other custom protocols, and we need to integrate / interpolate the joint states as they are pushed by the hardware instead of being pulled by the software.

I’m wondering if this approach could somehow be formalized, defining standard topics for commands and states, so that given ROS-enabled motor controllers it would not be necessary to write any hardware interface code, just to configure the topics / joint IDs.
To synchronously command the actuators, a single topic with messages like sensor_msgs/JointState would be fine, subscribed to by all actuators, each of which then looks for its own ID.
To get the joint status, in our architecture we would need a topic for each actuator, with the data joined and interpolated by the “standardized” hardware interface.
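The single-command-topic idea could look like the following sketch, with a plain dict standing in for a sensor_msgs/JointState-like message (parallel name/velocity arrays); the joint names are illustrative:

```python
def extract_setpoint(msg, joint_name):
    """Each actuator subscribes to the shared command topic and picks
    out the entry matching its own joint ID, if present."""
    try:
        idx = msg["name"].index(joint_name)
    except ValueError:
        return None  # this message carries nothing for us
    return msg["velocity"][idx]

# One message commands all joints at once.
cmd_msg = {
    "name": ["wheel_left", "wheel_right"],
    "velocity": [1.2, -1.2],
}
print(extract_setpoint(cmd_msg, "wheel_right"))  # -1.2
print(extract_setpoint(cmd_msg, "arm_joint"))    # None
```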

We are looking forward to some fully ROS-enabled hardware, i.e. hardware subscribing to control_msgs/msg/JointControllerState to configure the PID running on the hardware directly from ROS / rviz, again without the need for any hardware_interface code, plus services to start/stop the driver, and so on.

Do you feel it would be useless, as writing hardware interfaces is not a big deal, or worth it?

I’m 100% with you, but we are speaking of different “embedded” hardware. We target Cortex-M MCUs, let’s say from STM32F3 to STM32H7, and DDS is not trivial to port to such constrained hardware.
I just read about embeddedRTPS, which I see is still in its early days, but we will give it a try, so as not to rely on a bridge / agent.
Direct participation in the ROS network would be our preferred approach, and this was the motivation to develop uROSnode in place of rosserial.

But the scope of this thread was not really about how to interface to ROS, but how to implement kinematics and interface to the actuators in the case we have ROS-enabled hardware.

Thanks for sharing your experience using micro-ROS. As you probably know, we have developed some basic demos using cmd_vel and odom topics but we don’t have an actual use case where ros2_control or ros2_controllers are interfacing directly with micro-ROS…
Have you had any feedback on that?

When your use case is more mature, maybe you would be interested in giving an informal talk at the EWG explaining your company’s use case. Your feedback regarding the usage of micro-ROS with ROS control standard interfaces would be highly appreciated in our ecosystem.


Hi, I am answering this as ros2_control maintainer and ros_control power user.

This looks very much like a valid approach. The main idea behind ros(1)_control is to have direct access to the HW from the hardware interface (ActuatorInterface). This usually means that you use CAN, RS485 or a similar interface toward the hardware and use the appropriate drivers directly there. This would probably give more deterministic behavior than using ROS publishers/subscribers, especially under heavy CPU load. ROS2 should have even better performance in this case. Nevertheless, using ROS for this communication is also possible, and basically depends on your robot’s hardware architecture.
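Either way, the pattern described above boils down to the classic read → update → write cycle that ros(2)_control runs at a fixed rate. A minimal sketch with a fake synchronous bus driver (all names and the trivial proportional controller are illustrative, not real ros2_control or CAN API):

```python
class FakeBusDriver:
    """Stands in for a CAN/RS485 driver accessed directly from the
    hardware interface, with blocking, synchronous exchanges."""
    def __init__(self):
        self.velocity = 0.0  # last measured joint velocity (rad/s)

    def exchange(self, command):
        # One synchronous bus transaction: send command, get state back.
        # (This fake hardware simply tracks the command perfectly.)
        self.velocity = command
        return self.velocity

def control_cycle(driver, target, gain=0.5):
    # read(): pull the current state from the bus
    state = driver.velocity
    # update(): trivial proportional controller (illustrative only)
    command = state + gain * (target - state)
    # write(): push the new command over the bus
    return driver.exchange(command)

driver = FakeBusDriver()
v = 0.0
for _ in range(20):          # 20 fixed-rate cycles
    v = control_cycle(driver, target=2.0)
print(round(v, 4))           # converges toward the 2.0 rad/s target
```

The point of the direct-driver variant is that each cycle is one deterministic, blocking bus exchange, with no middleware queue in between.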

This could be done and would actually be very welcome. And having a use case would be great to focus development. This is a great topic for the control WG. When thinking about / discussing the options we should keep in mind the work from ROS-I on trying to standardize messages toward industrial robots using the Simple Message protocol.

[Attention - biased comment]

I am not sure this makes much sense, because usually one would like to have some control over hardware execution. This is actually what we are currently working on in ros2_control, that is, hardware lifecycle management. Also, thinking of more complex scenarios, one has to have some hardware access management, and this is what ros_control is for.

Somewhat off-topic, but:

we’re very much in favour of standardisation, but Simple Message should not serve as inspiration here. It’s very old, and was very much limited by the context in which it was created (see also the linked IREP).

Conceptually of course there may be things which could be reused, but if it had been possible, we’d probably have used ROS messages/services/actions instead of a custom protocol.


Exactly what I was thinking. We should probably strive to standardize the message format without specifics on its transport. ROS messages/services/actions are definitely interesting, but they cannot be used everywhere.

I hope to have some feedback to share very soon.

Sure, I’ll come back on this as soon as we have a nice demo.

As we actually use CAN in our hardware architecture, another option would be to connect directly via CAN from the PC running ROS, instead of running a gateway between CAN and ROS with micro-ROS.
We are looking for the easiest way (from the developer’s point of view) to integrate hardware with ROS.

ROS-enabled hardware is an option, so we are testing micro-ROS to explore this approach. If this is the way to go, in the future we could add Ethernet to our motor drivers and remove the CAN<->ROS gateway.
Otherwise we could work on a software bridge between the CAN middleware and ROS, but this is a different topic.

We also have lifecycle management implemented in the firmware; maybe the two can be synchronized somehow. And hardware access management would indeed be “broken” if we exposed the motor driver to all ROS nodes via topics / services; good point.

Actually, micro-ROS allows the use of custom transport layers as long as you provide the correct interfaces to it.
There was some discussion a while ago in the Embedded WG about using CAN as one of the transports, but it didn’t go further. @ralph-lange could expand on that work.

Question for you @destogl: in ROS1 we had a “hardware driver” node running, which passed a reference to the node handle to the class implementing hardware_interface::RobotHW, which then published / subscribed to topics.

Now that in ROS2 hardware drivers are plugins handled via pluginlib, is it actually a good idea to instantiate a node? If not, this makes approach 3 (from my original post) unfeasible.

Sorry, forget my last post.
I just noticed here that the recommended method is still to instantiate a ros2_control node, as it was in ROS1.

I was misled by ros2_control_demos, where the controller is registered with pluginlib.

I think you got this right in the previous post. You can create your own ros2_control node, but you don’t have to, and in most use cases you don’t need to. In ros2_control, both hardware and controllers are used through pluginlib. That is exactly why you don’t have to write your own nodes again.

I am not sure you posted the right link up there. You linked the “hardware-description-in-urdf” section, which only describes how to configure which plugin to load, how to set up parameters, and how to describe your robotic setup so it is understandable.

So the demos in the ros2_control_demos repository show exactly the expected use/setup of hardware interfaces.

You can instantiate a node there. We are not doing this because we don’t have any use case that really needs access to the ROS2 world without controllers. Using controllers for this provides much better controllability and determinism in the framework.
For the 3rd approach, you can simply start another node. We actually do this a lot: each controller is a node, and they are all loaded by the Controller Manager using pluginlib.


At the very end of that page:

  1. Create a launch file to start the node with Controller Manager. You can use a default ros2_control node (recommended) or integrate the controller manager in your software stack. (Example launch file for RRBot)

So here it says it is recommended to use the ros2_control node, should it be updated?

Good, that’s what I did, in a separate thread spawned from on_configure().
Testing right now :slight_smile:

The key word there is: “use default ros2_control node”. I see now that the “a” is misplaced. (The whole paragraph probably needs a bit of reformulation.)


I would use UAVCAN.

Indeed, this is one of the options we are evaluating.
Not as plug-and-play as having natively ROS-speaking hardware, but the glue between the two protocols could be automagic / generated (e.g., something like ros1_bridge).

Actually, we can now announce that we have been working on this: micro-ROS can now be used with embeddedRTPS. Please find all the details here!
