I wish to make some commentary and discussion on this article: http://design.ros2.org/articles/clock_and_time.html
First, it appears that there has not yet been any attempt to support non-wall-clock time (i.e., simulation or playback time) in ROS 2. True? rcl/time.c appears to have some support for it, but I can’t see that this support is exposed in rclcpp::Time. Is there something I missed, or is anyone working on a newer use_sim_time? I feel a little late to the party. Sorry.
At ASI we’ve worked around this by publishing our own clock signal at high frequency, but I feel like arbitrary clock support needs to be built into the core framework. Non-realtime playback is a critical feature: simulation and playback should work out of the box.
In addition to simulation, though, there is a need to support hardware platforms that don’t have (wall) clock chips. This is not mentioned in the design document. Though increasingly rare, there still exist plenty of embedded processor boards without clocks or batteries. On the other hand, there aren’t any remaining embedded devices without runtime frequency counters. I think we’re safe to assume that all platforms have timers accurate to the millisecond, and that 99% of platforms have timers accurate to the microsecond.
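As a sanity check on that assumption, a node could inspect the compile-time tick period of the standard monotonic clock at startup. This is plain C++ (no ROS dependency) and only illustrates the idea; it reports the advertised resolution of std::chrono::steady_clock, not the observed jitter:

```cpp
#include <chrono>
#include <cstdio>

// Nanoseconds per tick of the monotonic clock, taken from its
// compile-time period (e.g. 1 ns on typical Linux/glibc builds).
constexpr double steady_tick_period_ns() {
    using clock = std::chrono::steady_clock;
    return 1e9 * clock::period::num / clock::period::den;
}

// A platform meets the "microsecond timer" assumption if each tick
// is no longer than 1000 ns.
constexpr bool has_microsecond_timer() {
    return steady_tick_period_ns() <= 1000.0;
}
```

Note this only tells you the granularity the platform claims; measuring actual scheduling jitter would require timing real callback invocations.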
I want to discuss a few options for the /clock topic:
1. A “steady clock publisher” or simulator would publish this value at high frequency. Use the most recent value if you have one (or fall back to wall-clock time otherwise) for any timestamps on published messages. Rely on NTP to synchronize networked nodes’ wall-clock sources. This is what is proposed in the design doc.
2. We keep the /clock message, but it includes more than a timestamp: it would also include a realtime multiplier, and it would be published whenever the realtime multiplier changed.
3. The clock message would have an uptime offset and a multiplier but no timestamp.
4. Assume all nodes have synchronized realtime clocks. Send out the realtime multiplier and the wall-clock offset.
5. The timestamp in the message header would include information about its source, epoch, realtime scale, and futurity.
Concerning more sophisticated clock message schemes, the design document states that “all of these techniques will require making assumptions about the future behavior of the time abstraction. And in the case that playback or simulation is instantaneously paused, it will break any of these assumptions.” I don’t believe that to be entirely true. There will be some propagation delay of a “pause” in any situation. I’ve also pondered some other IPC mechanisms for synchronizing clocks between processes, but I think I would disfavor all of them as being too platform-specific.
For #1, what publishing frequency is enough? Can we really go at 1000Hz? We typically aren’t running on RTOS platforms, and even for an RTOS that’s a hard constraint to meet. I think 200Hz is more realistic. Going much faster than that just fills the loopback buffer with clock messages that never see the light of day, wasting CPU and networking resources. Cartographer (or any other mapper) utilizes the difference in time between IMU readings and laser scans; this is a critical part of the algorithm. Do we really want that rounded to the nearest millisecond (or 5ms)? Even a single millisecond may be too long for a high-speed robot.
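To make the quantization concern concrete, here is a hypothetical sketch (not the ROS 2 API) of option #1’s timestamp logic: outgoing messages are stamped with the latest /clock sample, falling back to wall-clock time if none has arrived, so two events closer together than one /clock period get identical stamps:

```cpp
#include <cstdint>
#include <optional>

// Hypothetical option #1 node state: latch the most recent /clock
// sample and use it for all outgoing timestamps.
struct ClockCache {
    std::optional<int64_t> last_clock_ns;  // most recent /clock sample

    // Called from the /clock subscription callback.
    void on_clock(int64_t stamp_ns) { last_clock_ns = stamp_ns; }

    // Timestamp for an outgoing message: the latest /clock sample if
    // we have one, otherwise fall back to the wall clock.
    int64_t now(int64_t wall_ns) const {
        return last_clock_ns.value_or(wall_ns);
    }
};

// Worst-case timestamp quantization at a given /clock publish rate:
// timestamps are frozen between samples, so the error bound is one
// full publishing period.
constexpr double worst_case_error_ms(double publish_hz) {
    return 1000.0 / publish_hz;
}
```

At 200Hz, worst_case_error_ms(200.0) is 5.0 ms, which is exactly the rounding that worries me for IMU-vs-laser timing.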
Option #2 allows us to publish the clock less frequently. It also allows (requires) us to utilize our onboard timer. We start the timer whenever we receive a clock message. Anything we publish gets timestamped with last_timestamp + timer * realtime_multiplier.
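A sketch of that extrapolation, under the assumption (mine, not the design doc’s) that the /clock message carries both a timestamp and a realtime multiplier, and that each node has a local monotonic timer:

```cpp
#include <cstdint>

// Hypothetical option #2 clock: latch each /clock message and
// extrapolate between messages with the local monotonic timer.
struct ScaledClock {
    int64_t last_stamp_ns = 0;   // timestamp from the last /clock message
    int64_t last_local_ns = 0;   // local monotonic time when it arrived
    double multiplier = 1.0;     // realtime multiplier (0.0 == paused)

    // /clock subscription callback: latch the message, restart the timer.
    void on_clock(int64_t stamp_ns, double mult, int64_t local_ns) {
        last_stamp_ns = stamp_ns;
        multiplier = mult;
        last_local_ns = local_ns;
    }

    // Outgoing timestamp: last_timestamp + timer * realtime_multiplier.
    int64_t now(int64_t local_ns) const {
        const int64_t elapsed = local_ns - last_local_ns;
        return last_stamp_ns
             + static_cast<int64_t>(elapsed * multiplier);
    }
};
```

A nice property is that a pause is just a /clock message with multiplier 0.0: time freezes at that timestamp until the next message arrives, with no further traffic needed.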
For #3, I can’t guarantee that all nodes share the same system uptime, though that does seem common in simulation setups.
Option #4 doesn’t meet the requirement to support a platform without a clock. However, it may be handy for rosbag replay if you are using real timestamps. Those are helpful for video synchronization.
Option #5 is more sophisticated. The output timestamp on any published message would typically be derived from the input timestamps on the data that went into that calculation. Sensors would still need access to a real or simulated clock, but transformer nodes would need no external clock; they would instead throw an exception if the timestamps on their inputs were too disparate. Tools to make these calculations easier would need to be included in the library.
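One shape such a library helper could take (hypothetical, not an existing ROS 2 API): derive the output stamp from the inputs and reject inputs that are too far apart to fuse:

```cpp
#include <algorithm>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical option #5 helper: a transformer node derives its output
// timestamp from its input timestamps, and throws if the inputs are too
// disparate to combine meaningfully.
int64_t derive_stamp(const std::vector<int64_t>& input_stamps_ns,
                     int64_t max_spread_ns) {
    if (input_stamps_ns.empty()) {
        throw std::invalid_argument("no input timestamps");
    }
    const auto [min_it, max_it] = std::minmax_element(
        input_stamps_ns.begin(), input_stamps_ns.end());
    if (*max_it - *min_it > max_spread_ns) {
        throw std::runtime_error("input timestamps too disparate to fuse");
    }
    // Stamp the output with the newest input; a midpoint or weighted
    // average would be equally defensible policy choices.
    return *max_it;
}
```

The "newest input wins" policy here is just one choice; the real design question is what spread tolerance each transformer should declare.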
We have traditionally shied away from automatically adding message subscriptions behind the scenes, but is that where we want to go on the clock? And do we have some other general node coordination data that should be part of the magic-message-subscribed-to-behind-the-scenes? Something like a machine ID and a wall clock timestamp so that the message delay could be estimated?
Thoughts? Progress? Do we have a list of out-of-the-box nodes that need this work?