This week we announced our new Fast DDS 2.1 release for Foxy, and I promised some data about one of its new features: the improved intra-process and inter-process performance.
Fast DDS was already really fast, but over the last few months we have been making changes in preparation for the upcoming Zero Copy shared memory transport, planned for the end of this year. As a result, Fast DDS 2.1 is now strikingly fast, and the larger the data, the bigger the improvement.
Below you can see several graphs showing latency and throughput for RMW_FastRTPS, in both single-process and two-process configurations, comparing the Sync and Async alternatives against the prior version of Fast DDS.
Let’s start with intra-process:
In the next graph, we try to send 1000 messages of different sizes in one second, measuring the end-to-end processing capacity: how many messages per second RMW Fast RTPS is able to handle.
As you can see, for large data latency improves by around 10x, and throughput increases substantially, by around 50%.
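For readers who want a feel for the methodology, here is a minimal rclpy sketch of a comparable single-process measurement: two nodes in one process exchange 1000 fixed-size messages over roughly one second while recording per-message latency and the achieved rate. This is not the tool used to produce the graphs above; the topic name, payload size, message type (UInt8MultiArray), and the index-based matching of sent and received messages are all illustrative assumptions, and it presumes the process runs with RMW_IMPLEMENTATION=rmw_fastrtps_cpp selected.

```python
# Minimal single-process latency/rate sketch (illustrative only).
# Assumes RMW_IMPLEMENTATION=rmw_fastrtps_cpp is set in the environment.
import time

import rclpy
from rclpy.executors import SingleThreadedExecutor
from rclpy.node import Node
from std_msgs.msg import UInt8MultiArray

PAYLOAD_SIZE = 64 * 1024   # bytes per message (vary across runs)
NUM_MESSAGES = 1000        # sent over ~1 second (1 ms timer period)


class BenchPublisher(Node):
    def __init__(self):
        super().__init__('bench_publisher')
        self.pub = self.create_publisher(UInt8MultiArray, 'bench_topic', 10)
        self.sent = 0
        self.send_times = []
        self.timer = self.create_timer(0.001, self.publish_one)  # ~1000 msg/s

    def publish_one(self):
        if self.sent >= NUM_MESSAGES:
            self.timer.cancel()
            return
        msg = UInt8MultiArray()
        msg.data = bytes(PAYLOAD_SIZE)          # zero-filled payload
        self.send_times.append(time.perf_counter())
        self.pub.publish(msg)
        self.sent += 1


class BenchSubscriber(Node):
    def __init__(self, publisher_node):
        super().__init__('bench_subscriber')
        self.publisher_node = publisher_node
        self.latencies = []
        self.sub = self.create_subscription(
            UInt8MultiArray, 'bench_topic', self.on_message, 10)

    def on_message(self, msg):
        # Index-based matching: assumes in-order delivery with no drops,
        # which is good enough for a rough sketch, not for a real benchmark.
        recv = time.perf_counter()
        idx = len(self.latencies)
        self.latencies.append(recv - self.publisher_node.send_times[idx])


def main():
    rclpy.init()
    pub_node = BenchPublisher()
    sub_node = BenchSubscriber(pub_node)
    executor = SingleThreadedExecutor()
    executor.add_node(pub_node)
    executor.add_node(sub_node)

    start = time.perf_counter()
    while (len(sub_node.latencies) < NUM_MESSAGES
           and time.perf_counter() - start < 5.0):
        executor.spin_once(timeout_sec=0.1)
    elapsed = time.perf_counter() - start

    if sub_node.latencies:
        mean_latency_us = 1e6 * sum(sub_node.latencies) / len(sub_node.latencies)
        print(f'payload: {PAYLOAD_SIZE} B, '
              f'received: {len(sub_node.latencies)}/{NUM_MESSAGES}, '
              f'mean latency: {mean_latency_us:.1f} us, '
              f'rate: {len(sub_node.latencies) / elapsed:.0f} msg/s')

    pub_node.destroy_node()
    sub_node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Varying PAYLOAD_SIZE across runs gives a rough latency-versus-size picture similar in spirit to the curves above.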
And now, inter-process (two processes):
The results are similar: around 10x lower latency and a substantial throughput increase of around 50%.
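A comparable two-process sketch, again purely illustrative, is shown below: the publisher embeds a wall-clock timestamp in the first 8 bytes of each payload so a subscriber running in a separate process on the same machine can compute end-to-end latency. The file name, topic, payload size, and message type are assumptions; run it with RMW_IMPLEMENTATION=rmw_fastrtps_cpp exported in both terminals.

```python
# Minimal two-process latency sketch (illustrative only).
# Terminal 1: python3 bench_ipc.py pub
# Terminal 2: python3 bench_ipc.py sub
import struct
import sys
import time

import rclpy
from rclpy.node import Node
from std_msgs.msg import UInt8MultiArray

PAYLOAD_SIZE = 64 * 1024  # bytes per message (vary across runs)


def run_publisher():
    rclpy.init()
    node = Node('bench_publisher')
    pub = node.create_publisher(UInt8MultiArray, 'bench_topic', 10)
    padding = bytes(PAYLOAD_SIZE - 8)

    def publish_one():
        msg = UInt8MultiArray()
        # First 8 bytes carry the send time so the other process can
        # compute latency (both processes share the same machine clock).
        msg.data = struct.pack('<d', time.time()) + padding
        pub.publish(msg)

    node.create_timer(0.001, publish_one)  # ~1000 msg/s
    rclpy.spin(node)


def run_subscriber():
    rclpy.init()
    node = Node('bench_subscriber')

    def on_message(msg):
        sent, = struct.unpack('<d', bytes(msg.data[:8]))
        node.get_logger().info(f'latency: {(time.time() - sent) * 1e6:.1f} us')

    node.create_subscription(UInt8MultiArray, 'bench_topic', on_message, 10)
    rclpy.spin(node)


if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == 'pub':
        run_publisher()
    else:
        run_subscriber()
```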