DDS Performance Comparison on Windows for RMW Providers in ROS 2

Hello ROS community!

Today, we are happy to present to you a DDS middleware performance comparison on Windows platforms.

As you may know, the RMW providers competing to be the default provider for the next ROS 2 release, Humble Hawksbill, had to submit a report with the results of a complete workbench. You can find these reports here.

Comparing the two reports, it can be concluded that Fast DDS is the implementation with the best performance on Linux platforms. Results on Windows platforms are not so clear, though. Therefore, eProsima has extended the workbench on Windows to offer better insight into performance across different data sizes, including latency results and maximum throughput in interprocess communications. eProsima is interested in showcasing Fast DDS performance in comparison with the competing implementation, Cyclone DDS.
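
As a rough illustration of the kind of measurement involved, here is a hypothetical ping-pong latency sketch (not the actual workbench code): one process publishes a payload on `/ping`, a second process echoes it back on `/pong`, and half the round-trip time approximates the one-way interprocess latency.

```cpp
// Hypothetical latency "ping" node (not the actual workbench code).
// A second process should echo /ping back on /pong; half the round-trip
// time then approximates the one-way interprocess latency.
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/byte_multi_array.hpp>

using namespace std::chrono;

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("latency_ping");

  auto pub = node->create_publisher<std_msgs::msg::ByteMultiArray>("ping", 10);
  steady_clock::time_point sent;

  auto sub = node->create_subscription<std_msgs::msg::ByteMultiArray>(
    "pong", 10,
    [&](std_msgs::msg::ByteMultiArray::SharedPtr) {
      auto rtt = duration_cast<microseconds>(steady_clock::now() - sent).count();
      RCLCPP_INFO(node->get_logger(), "one-way latency ~%lld us",
                  static_cast<long long>(rtt / 2));
    });

  // 16 KiB payload: around the size where the two reports start to diverge.
  std_msgs::msg::ByteMultiArray msg;
  msg.data.resize(16 * 1024);

  auto timer = node->create_wall_timer(100ms, [&]() {
    sent = steady_clock::now();
    pub->publish(msg);
  });

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```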

Latency: [latency plots omitted]

Throughput: [throughput plots omitted]

Conclusion

The results are clear: Fast DDS is the highest-performing Tier 1 open source DDS implementation for ROS 2! It performs considerably better in both latency and throughput for data sizes larger than 16 KB, and even for smaller sizes Fast DDS achieves lower latency with comparable throughput.

eProsima will shortly publish the complete workbench description and results on its website, so stay tuned!

The throughput performance of Cyclone DDS seems to be limited by a magic 1 Gbps line (by the local UDP multicast transport?), so I assume Iceoryx IPC was not configured.

Did you check data sizes larger than 4 MB too?

Hi @rex-schilasky

Cyclone DDS uses Iceoryx for IPC, and Iceoryx has no support for Windows.

Yep, Fast DDS has a clear advantage here, as we do support shared memory and zero-copy on Windows.
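
For illustration, a minimal sketch of what enabling shared memory looks like through the native Fast DDS API (the names below are assumed from the Fast DDS 2.x API; when using the rmw_fastrtps layer the same configuration is typically supplied through an XML profile instead):

```cpp
// Hypothetical sketch of enabling shared memory through the native
// Fast DDS API (names assumed from Fast DDS 2.x; in ROS 2 this is
// usually configured via an XML profile instead).
#include <memory>
#include <fastdds/dds/domain/DomainParticipant.hpp>
#include <fastdds/dds/domain/DomainParticipantFactory.hpp>
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
#include <fastdds/rtps/transport/shared_mem/SharedMemTransportDescriptor.h>

using namespace eprosima::fastdds::dds;

int main()
{
  DomainParticipantQos qos =
      DomainParticipantFactory::get_instance()->get_default_participant_qos();

  // Replace the builtin transports with a shared-memory-only setup.
  auto shm = std::make_shared<eprosima::fastdds::rtps::SharedMemTransportDescriptor>();
  shm->segment_size(4 * 1024 * 1024);  // room for multi-megabyte samples
  qos.transport().user_transports.push_back(shm);
  qos.transport().use_builtin_transports = false;

  DomainParticipant * participant =
      DomainParticipantFactory::get_instance()->create_participant(0, qos);
  // ... create topics, writers, and readers as usual ...
  DomainParticipantFactory::get_instance()->delete_participant(participant);
  return 0;
}
```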

Hi @Jaime_Martin_Losa, yes, reading the initial post would help before asking. Where can I find the performance workbench that you used to take the measurements? Is this eProsima-internal?

Everything is very public. The full description is in the TSC RMW report.

Some quick testing with 8 MB shows that the throughput starts to fall below 1 Gbps with larger sizes on Cyclone DDS (similar to what happens with Fast DDS). I suspect it has to do with clogging of the history: the publication rate starts to interfere with the maintenance of the internal structures (i.e., deletion of older messages).
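
For anyone who wants to probe that theory, here is a minimal publisher sketch (hypothetical, not the benchmark code) where a shallow KEEP_LAST history bounds how many large samples the writer has to maintain:

```cpp
// Hypothetical 8 MB publisher sketch: a shallow KEEP_LAST history limits
// how many old samples the writer must track and delete under load.
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/byte_multi_array.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("throughput_pub");

  // KEEP_LAST(1): the writer drops older samples instead of queueing them,
  // keeping history maintenance cost roughly constant as payloads grow.
  rclcpp::QoS qos(rclcpp::KeepLast(1));
  qos.reliable();

  auto pub = node->create_publisher<std_msgs::msg::ByteMultiArray>("data", qos);

  std_msgs::msg::ByteMultiArray msg;
  msg.data.resize(8 * 1024 * 1024);  // 8 MB payload, as in the test above

  // Publish as fast as the 10 ms timer allows.
  auto timer = node->create_wall_timer(
    std::chrono::milliseconds(10), [&]() { pub->publish(msg); });

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```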