Rate Control Subscription? Any other work-around or ideas?

Hi ROS users and developers,

I want to bring up the question of Rate Control Subscriptions for discussion, to get feedback on how developers are dealing with this situation in practice.

Problem

When we create a subscription, we start receiving data at the publisher’s rate (there is no way around it; we must take the data). If the publisher sends data at a higher frequency than the platform or system running the subscription can handle, this becomes a problem for the application on that device. The application is likely to stop behaving as expected: lagging, becoming unresponsive, or even getting stuck…

Possible Approaches

  • Create multiple publishers on the sender node. Multiple topics with the required rates would be really complicated, and it is not good design to have multiple publishers sending the same data over different endpoints…
  • Introduce a proxy node between them, so that the proxy fans the original topic out into multiple topics at different frequencies. But this adds significant latency overhead for all receivers, and we still need to deal with multiple topic endpoints.
  • Take advantage of a Content Filtered Subscription to manage the receive frequency via a rate-control message field. This works, but it is not a sophisticated or complete solution, because we need to add the extra message field and pre-process it on the publisher side.
    Rate Control example with content filtered topic. by fujitatomoya · Pull Request #1 · fujitatomoya/demos · GitHub
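The rate-field idea in the last bullet can be sketched in plain Python, without any ROS dependency: the publisher stamps each message with a counter, and each subscription keeps only the messages matching its divisor, which is roughly the check a publisher-side content filter would evaluate. All names here (`RateTaggedPublisher`, the dict message) are illustrative, not the actual code from the linked PR.

```python
# Plain-Python sketch of the "rate control field" idea: the publisher
# pre-computes a counter on each message, and each subscription accepts
# only the messages matching its own divisor.

class RateTaggedPublisher:
    """Stamps each message with a monotonically increasing count."""

    def __init__(self):
        self._count = 0
        self._subscribers = []

    def subscribe(self, divisor, callback):
        # divisor=1 -> full rate, divisor=10 -> every 10th message, etc.
        self._subscribers.append((divisor, callback))

    def publish(self, data):
        msg = {"count": self._count, "data": data}
        self._count += 1
        for divisor, callback in self._subscribers:
            # This is the filtering step that a publisher-side content
            # filter would perform per matched subscription. (The real DDS
            # filter expression grammar has no modulo operator, which is
            # one reason the demo adds a dedicated message field.)
            if msg["count"] % divisor == 0:
                callback(msg)


full, slow = [], []
pub = RateTaggedPublisher()
pub.subscribe(1, full.append)    # wants every message
pub.subscribe(10, slow.append)   # wants 1/10th of the rate
for i in range(100):
    pub.publish(i)
print(len(full), len(slow))  # -> 100 10
```

The drawback the bullet mentions is visible here: the counter has to live inside the message definition, so every rate-controlled message type needs the extra field.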

Question

Do we want to develop a feature like Rate Control Subscription? Or are there other possible work-arounds and approaches?
This feature would guarantee that the subscriber application keeps execution time for itself, by taking messages or issuing the application callback at an arbitrary, user-specified rate on that subscription.

Note

In the DDS specification, there is a QosPolicy called TIME_BASED_FILTER that a user application can apply to a DataReader. It allows a DataReader to specify that it is interested only in (potentially) a subset of the values of the data: the DataReader does not want to receive more than one value per minimum_separation, regardless of how fast the changes occur.

This QoS policy seems to allow us to optimize resource usage (CPU and possibly network bandwidth) by only delivering the required amount of data to different DataReaders.
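As far as I know, ROS 2 does not expose TIME_BASED_FILTER through its QoS API today, but the policy’s semantics are simple to state. A minimal sketch, using integer-millisecond timestamps so the arithmetic is exact (`TimeBasedFilter` is an illustrative name, not a real DDS API):

```python
class TimeBasedFilter:
    """Sketch of DDS TIME_BASED_FILTER semantics: deliver at most one
    sample per minimum_separation, dropping anything arriving sooner.
    Timestamps are integers (milliseconds) to keep the example exact."""

    def __init__(self, minimum_separation):
        self._min_sep = minimum_separation
        self._last = None

    def accept(self, stamp):
        if self._last is None or stamp - self._last >= self._min_sep:
            self._last = stamp
            return True
        return False


f = TimeBasedFilter(minimum_separation=100)        # 100 ms -> at most 10 Hz
delivered = sum(f.accept(t) for t in range(1000))  # samples arriving every 1 ms
print(delivered)  # -> 10
```

Note that the DDS policy applies on the DataReader, so depending on the implementation the samples may still cross the wire; the resource saving is mainly in not waking the application for every sample.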

Any comments welcome! Please feel free to share your thoughts and experience!

Thanks,
tomoya

Thanks Tomoya for the great next-gen ideas, as always.
I’ve very often wished I had this feature. As a workaround I’ve been using “the proxy node” you mentioned, namely the throttle node from topic_tools, to throttle down topics for visualization/rosbagging purposes, e.g. camera streams, pointclouds, etc…

Hm, I don’t see how a generic solution could work. The problem I see is that it’s hard for a generic approach to define which samples to drop and which to send.

In general I would also lean towards the opinion that the system design in the mentioned scenario is broken, and that this should be fixed in the first place.

I am also wondering if filtering at the subscription makes any sense, as you still pay the transmission overhead. It would make more sense to filter on the sender side in every case.

I’d say that for the general case, options 1 or 2 are the only real possibilities, as some form of low-pass filtering is needed to reduce the sample frequency. If you just drop samples, you will get aliasing.

Consider a signal sampled at fs=1000 Hz. The sampled values can (theoretically) represent frequency components up to fs/2 = 500 Hz (Nyquist frequency). Let’s assume we’re sampling a force sensor signal with frequency components up to 300Hz.

Now consider a filter that drops 9 out of 10 values to reduce the sample rate to 100Hz. This 100Hz signal can only represent frequency components up to 50Hz. All frequency components between 50Hz and 300Hz will be aliased into noise frequencies in the range 0Hz to 50Hz. It is not possible to filter out this noise from the 100Hz values.

To avoid the aliasing, you need to apply a low-pass filter to the 1000Hz signal first, with a cut-off frequency below 50Hz, and only then drop 1 out of 10 samples.

On the other hand, if your sampled signal only has frequencies up to 50Hz to begin with, then there is no need for low-pass filtering, but then it doesn’t make sense to send it at 1000Hz either (moreover: the 1000Hz samples could still contain some high-frequency noise, and all of this noise between 50Hz and 500Hz would still be aliased into the 0Hz to 50Hz range).
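The aliasing argument above can be checked numerically. A small sketch, assuming NumPy is available: a 270 Hz tone sampled at 1 kHz and naively decimated by 10 shows up as a 30 Hz alias (|270 − 3·100| = 30 Hz):

```python
import numpy as np

fs = 1000                          # original sample rate [Hz]
t = np.arange(fs) / fs             # 1 second of samples
x = np.sin(2 * np.pi * 270 * t)    # a 270 Hz tone, well below fs/2 = 500 Hz

# Naive rate reduction: keep every 10th sample -> new rate 100 Hz.
decimated = x[::10]

# A 270 Hz tone cannot be represented at 100 Hz (Nyquist = 50 Hz);
# it folds down to a 30 Hz alias.
spectrum = np.abs(np.fft.rfft(decimated))
freqs = np.fft.rfftfreq(decimated.size, d=10 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # -> 30.0
```

To avoid this, low-pass the 1 kHz signal with a cut-off below 50 Hz before dropping samples; for example, `scipy.signal.decimate` applies an anti-aliasing filter before downsampling for exactly this reason.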

@JM_ROS

If we use a content filtered topic, as in my example, the filtering takes place on the publisher side.

To be precise, this behavior depends on the rmw implementation and is probably configurable.

I don’t think your example fits the problem that is intended to be solved by the proposed functionality (IIUC).

If there is a 300 Hz sensor and you subscribe at only 100 Hz, you must expect the data you get to be degraded one way or another. This has nothing to do with the transport layer; it is a purely physical consequence that any reasonable engineer should foresee.

If you look for frequency characteristics, there is just no way to get them from a subsampled topic. The only thing that makes sense, as you have stated, is some statistical filtering. But then you create a new, partially independent stream of data which, in my eyes, deserves a new name. Imagine /imu for the full rate data and /imu_filtered or /imu_slow for the slower filtered version.

As I see it, the proposed subsampling is rather useful in different use-cases, like subsampling a stream of camera images. You know you’ll miss some fast events, but that’s exactly what is to be expected when you ask for slower data, so no surprise here.

Well, that’s an interpretation… I’d say the given example (an arm64 board that is too slow to process data at 1 kHz) doesn’t really convey that the intended use case is a video stream.

Actually, I can’t really think of a good example of a 1 kHz signal that you’d use at a lower rate without filtering. Even if it’s just for visualisation (e.g. a plot), you will see the higher frequencies as noise in your low-frequency plot.

Yes, there is. With adequate filtering, the frequency characteristics of both signals up to half of the low sampling frequency will be reasonably identical, whereas if you don’t filter, you unavoidably get aliased frequencies, i.e. noise. Again: what are good use cases where one would not want to avoid inducing that noise?

My experience differs. A practical example: a machine at some customer’s site far away shows intermittent erratic behavior, which you are requested to solve. The only data you have is in the log file. Obviously, due to disk space constraints, it was conveniently implemented to log the data only once every X seconds, while the machine control loop runs at Y Hz. Nobody cared about filtering (as one obviously uses something like RCLCPP_DEBUG_THROTTLE, since that is exactly what this method is intended for), so not only does your log data lack the high-frequency info, it also has noisy low-frequency info. Good luck identifying the issue.
This is a real-life experience, not something I invented.

Anyway, the post requested “thoughts and experience”. These are my thoughts and experience, now do as you please! :wink:

I do think there are use cases for this, and it’s unfortunate that the DDS time-based filter has not been adopted in ROS 2, as it is exactly the solution for this. I’m not crazy about the content-based filter solution, as it requires every message that needs to be rate-controlled to carry an extra field.

What I have done in the past is create a wrapper function-object class that takes the callback and the rate to limit it as arguments, and pass that to the create_subscription call. This seems to work, but the filtering is done on the subscriber side, so it is not ideal; at least it doesn’t invoke my callback at the high rate.
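A minimal, framework-agnostic sketch of that wrapper idea (the class name and the injectable clock are mine, not the poster’s actual code); in real use, the instance would be passed as the callback argument to `create_subscription`:

```python
import time


class ThrottledCallback:
    """Callable wrapper that forwards to `callback` at most `rate_hz`
    times per second and silently drops everything in between. An
    instance can be passed wherever a plain message callback is
    expected. The clock is injectable to make the logic testable."""

    def __init__(self, callback, rate_hz, clock=time.monotonic):
        self._callback = callback
        self._min_period = 1.0 / rate_hz
        self._clock = clock
        self._last = None

    def __call__(self, msg):
        now = self._clock()
        if self._last is None or now - self._last >= self._min_period:
            self._last = now
            self._callback(msg)


# Usage with a fake clock so the behavior is deterministic:
received = []
fake_now = [0.0]
cb = ThrottledCallback(received.append, rate_hz=10, clock=lambda: fake_now[0])
for i in range(1000):          # messages arriving at 1 kHz
    cb(i)
    fake_now[0] += 0.001
print(len(received))  # -> 10
```

As noted above, this only saves callback execution time: every message is still transported and deserialized before being dropped.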

Have a look at rmw_zenoh or a Zenoh bridge.

With these you can configure match expressions against topic names and limit the publishing rate. We use this for off-board wireless communication. Another advantage of the Zenoh bridge is that it decouples the remote subscribers, so that slow ones no longer impact the publisher’s publishing rate, as they do with FastDDS and CycloneDDS.
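For reference, a sketch of what that bridge configuration can look like. The `pub_max_frequencies` key and its placement are taken from my reading of the zenoh-plugin-ros2dds README at the time of writing, so check the documentation for your bridge version; the topic names are made up:

```json5
{
  plugins: {
    ros2dds: {
      // Limit matched topics to a maximum publication rate over Zenoh.
      // Each entry is "<topic regex>=<max frequency in Hz>".
      pub_max_frequencies: [
        "/camera/.*=5.0",   // throttle camera topics to 5 Hz
        "/points=2.0"       // throttle the pointcloud to 2 Hz
      ],
    },
  },
}
```

Since the throttling happens in the bridge, the on-robot DDS traffic is untouched and only the off-board link sees the reduced rate.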