Is there any DDS implementation using shared memory inside system?

A nice discussion indeed. @eboasson: For sure you can significantly improve performance for large payloads by using a shared-memory concept. eCAL, for example, does it more or less as you proposed … copying the data into a shared-memory file and passing the name of that shm file to the connected subscribers, which read the content after an update event.
I don’t know how OpenSplice can manage these (at least two) copy actions that fast :sunglasses:, but IMHO the performance bottleneck is that write/read memory transfer, and you need some intelligence to manage the reads/writes properly.
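The pattern described here (copy into a named shared-memory segment, hand only the name to subscribers, copy out on the update event) can be sketched in a few lines of Python. This is just an illustration of the two-copy idea, not eCAL's actual implementation; the 4 MB payload size is an arbitrary choice matching the test below, and the notification channel is left out:

```python
# Illustrative sketch of the two-copy, name-passing pattern (not eCAL's code).
from multiprocessing import shared_memory

payload = b"x" * (4 * 1024 * 1024)  # one 4 MB sample

# "Publisher" side: first copy, application buffer -> shared memory.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload
segment_name = shm.name  # only this name is passed to the subscribers

# "Subscriber" side: attach by name after the update event, then the
# second copy, shared memory -> application buffer.
sub = shared_memory.SharedMemory(name=segment_name)
received = bytes(sub.buf[:len(payload)])
sub.close()

shm.close()
shm.unlink()
```

Even in the best case this scheme pays for two full memory transfers per sample, which is exactly the write/read bottleneck mentioned above.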

well, don’t forget that modern machines are pretty fast w.r.t. (shared-)memory access…
To give you an idea, here’s a screencapture when doing a (bundled) throughput-test with the 4MB sample-size:

… so a decent 26 Gbps (whilst twice crossing the boundary between application space and shared memory)


… or even better, when looking at what can be achieved in a 1-to-n scenario, here’s the same throughput test but now with 3 parallel readers (all in the same ‘federation’ as the publisher):

So an aggregate throughput of 44 Gbps :slight_smile: (on a 7-year-old machine)
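For intuition, the aggregate figure is just the per-reader rate summed over the three readers. A back-of-the-envelope check (the per-reader rate of 440 samples/s is an assumed number chosen to roughly reproduce the screenshot, not the exact test output):

```python
# Rough sanity check of the 1-to-3 aggregate throughput figure.
sample_size_bytes = 4 * 1024 * 1024   # 4 MB samples, as in the test above
samples_per_sec = 440                 # assumed per-reader rate
readers = 3

per_reader_gbps = samples_per_sec * sample_size_bytes * 8 / 1e9
aggregate_gbps = readers * per_reader_gbps
print(round(per_reader_gbps, 1), round(aggregate_gbps, 1))  # ≈ 14.8 44.3
```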

… although it keeps the related writer/reader threads in the 3 applications pretty busy (so it’s CPU-bound) … using ‘top’ in Irix mode with show-threads enabled:

And to be utterly complete, here’s the memory-usage during this test with 4MB samples (look for publisher/subscriber lines):


@hansvanthag thank you for all that effort. This information is really valuable. I’m currently on vacation in Poland, but I will take a deeper look at the performance analysis later, just to figure out where eCAL is wasting time on larger payload transmissions. Only for sleeping better …

ROSCon JP 2019 was held on 25 September in Tokyo, and I gave a lightning talk about the eProsima and Sony R&D collaboration on the Fast-RTPS-based shared-memory feature. The whole outcome will be open source, and then everyone can take advantage of this feature for free!!!

ROSCon_LightningTalk_Sony_eProsima.v3.pdf (198.0 KB)

We hope this will be part of our contribution to the ROS community and all of its users from an open-source perspective. We also really appreciate the cooperation of eProsima @Jaime_Martin_Losa



Sounds great! Is there any kind of roadmap for releasing this approach as open source?

Yes, but it’s still being considered, so once it gets decided we will make an announcement. At least, this WILL be open source for sure, so everyone can take advantage of this feature for FREE!!!

Hi @tomoyafujita,

Thanks for announcing this. eProsima will release a shared-memory transport as part of a collaboration with Sony, covering inter- and intra-process communication, and as open source, as always.

As has been commented here, the biggest differences will be in scenarios with big data and multiple subscribers.

We plan to have a first beta release by the end of this year, and we are starting now. We will provide an update as a lightning talk (if possible) at this next ROSCon.


Short update: we released eCAL v5.5 today, and we could improve the performance of the shm layer again. Method and results are described here.
Besides the performance improvements, we now also provide the full record/measurement toolchain for high-performance measurements of distributed systems into the standardized HDF5 format. Enjoy :slight_smile:


After a few months of work we proudly released eCAL 5.7.1 and the promised ROS2 middleware layer today. The new RMW gently brings eCAL 5’s performance and the sophisticated record-and-replay tooling to the great ROS2 framework. So just give it a try and eCALize your setup ;-).



Thanks for the effort, I am interested. Just to confirm: eCAL is a non-DDS implementation but ready to use as an RMW, is my understanding correct?

Yes. It’s a non-DDS implementation.


@tomoyafujita and eCAL is already boosted by iceoryx

@rex-schilasky if rmw_ecal is used on Linux, can iceoryx be activated as shared memory transport?


@michael-poehnl yes, you can simply set up (build) eCAL to use iceoryx and benefit from its incredible zero-copy shared-memory performance.

iceoryx is used for local IPC only, for sure; for inter-host connections eCAL automatically switches to its own UDP multicast protocol.

Independently of this setup, all tools (recording, replay, monitoring, language bindings …) and of course the new rmw_ecal work as expected on Windows and Linux. This is a nice setup for sure, combining the power of different open-source solutions :slight_smile:


CC: @michael-poehnl

Great work :+1:t2:

I did a quick code scan and a couple of questions came up. Could you kindly share your thoughts?

  • I see that eCAL accesses RouDi when it comes to inter-process communication, but rmw_ecal does not support can_loan_messages. In that case, how does it access the physical pages mapped in shared memory provided by RouDi directly to and from the RMW? If I understand correctly, there will be a copy-in and copy-out between the RMW and eCAL.
  • How does eCAL manage multiple publications on a specific topic for inter-process communication? I think that RouDi cannot handle multiple publications on a topic, because of the msgq namespace.

I could be wrong on some points, but I am really interested in this. Would you enlighten me a little bit? :smile: Thanks in advance!

There is still the multi-publisher-on-a-topic limitation in iceoryx, but the extension is on its way

True zero-copy is not yet possible with the eCAL/iceoryx combination. But I guess @rex-schilasky will be happy to hear that we have started working on making this possible



Your quick scan is not that bad :-). So yes, if you are using eCAL on top of iceoryx, you will have the current limitations of iceoryx, like writing with multiple publishers on the same topic as mentioned here, but they are working on it …

eCAL handles pub/sub n:m connections fine, but it’s not zero-copy like iceoryx. The publishers write their payload into memory files and detach from the file, and the signaled subscribers copy the content out of it. Every publisher holds its own memory file, so that they do not overwrite each other if they are connected to the same topic … a simple approach.
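The "one memory file per publisher" idea can be sketched like this in Python: two publishers on the same topic each own a separate segment, so they never overwrite each other, and a subscriber attaches to each segment by name. The segment handling and the registry dict are illustrative assumptions, not eCAL's actual scheme:

```python
# Hedged sketch: per-publisher shared-memory segments on one topic.
from multiprocessing import shared_memory

segments = {}   # publisher id -> its own shared-memory segment
registry = {}   # publisher id -> segment name (stand-in for a control channel)
for pub_id in ("pub_a", "pub_b"):
    seg = shared_memory.SharedMemory(create=True, size=64)
    segments[pub_id] = seg
    registry[pub_id] = seg.name

# Each publisher writes only into its own segment, so no overwrites occur.
segments["pub_a"].buf[:5] = b"hello"
segments["pub_b"].buf[:5] = b"world"

# A subscriber on the topic attaches to each publisher's segment by name
# and copies the content out independently after its update event.
msgs = {}
for pub_id, name in registry.items():
    view = shared_memory.SharedMemory(name=name)
    msgs[pub_id] = bytes(view.buf[:5])
    view.close()

for seg in segments.values():
    seg.close()
    seg.unlink()
```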


It seems to me that we started working at the same time ;-). Nice to hear that you will add an additional chunk for payload attributes. Then eCAL will finally be able to benefit from the real zero-copy mechanism of iceoryx :+1:



Understood, I appreciate your explanation :slightly_smiling_face:

Cyclone DDS with built-in iceoryx is being released as a POC for community feedback. It is a joint effort of the ADLINK Advanced Robotics Platform Group & Bosch. So: iceoryx zero-copy with Cyclone DDS QoS & wire support