Is there any DDS implementation using shared memory inside system?

Yes, but we are still considering the options; once it gets decided we will make an announcement. At the very least, this WILL be open source, so everyone can take advantage of this feature for FREE!!!

Hi @tomoyafujita,

Thanks for announcing this. eProsima will release a shared memory transport as part of a collaboration with Sony, covering both inter- and intra-process communication, and as open source, as always.

As has been commented here, the biggest differences will show up in scenarios with big data and multiple subscribers.

We plan to have a first beta release by the end of this year, and we are starting now. We will provide an update as a lightning talk (if possible) at the next ROSCon.


Short update: we released eCAL v5.5 today, and we were able to improve the performance of the SHM layer again. The method and results are described here.
Besides the performance improvements, we now also provide the full record/measurement toolchain for high-performance measurement of distributed systems into the standardized HDF5 format. Enjoy :slight_smile:


After a few months of work we proudly released eCAL 5.7.1 and the promised ROS2 middleware layer today. The new RMW brings eCAL 5's performance and its sophisticated record and replay tooling to the great ROS2 framework. So just give it a try and eCALize your setup ;-).



Thanks for the effort, I am interested. Just to confirm: eCAL is a non-DDS implementation but ready to use as an RMW. Is my understanding correct?

Yes. It’s a non-DDS implementation.


@tomoyafujita and eCAL is already boosted by iceoryx

@rex-schilasky If rmw_ecal is used on Linux, can iceoryx be activated as a shared memory transport?


@michael-poehnl Yes, you can simply set up (build) eCAL to use iceoryx and benefit from its incredible zero-copy shared memory performance.

iceoryx is used for local IPC only; for inter-host connections eCAL automatically switches to its own UDP multicast protocol.

Independently of this setup, all tools (recording, replay, monitoring, language bindings …) and of course the new rmw_ecal work as expected on Windows and Linux. This is a nice setup, combining the power of different open source solutions :slight_smile:
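The automatic switch described above (iceoryx shared memory for local IPC, eCAL's own UDP multicast between hosts) can be modeled with a tiny sketch. This is purely illustrative of the behavior, not eCAL's actual code or API; the function and names are hypothetical:

```python
# Illustrative model of eCAL's automatic transport selection:
# same host -> zero-copy shared memory (iceoryx),
# different hosts -> eCAL's own UDP multicast protocol.
# Not eCAL's actual API; names here are invented for the sketch.

def choose_transport(publisher_host: str, subscriber_host: str) -> str:
    """Pick a transport based on whether both endpoints share a host."""
    if publisher_host == subscriber_host:
        return "shm"           # local IPC: shared memory via iceoryx
    return "udp_multicast"     # inter-host: UDP multicast

print(choose_transport("hostA", "hostA"))  # shm
print(choose_transport("hostA", "hostB"))  # udp_multicast
```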


CC: @michael-poehnl

Great work :+1:t2:

I did a quick code scan and a couple of questions came up. Could you kindly share your thoughts?

  • I see that eCAL accesses RouDi when it comes to inter-process communication, but rmw_ecal does not support can_loan_messages. In that case, how does it access the physical pages mapped into shared memory by RouDi directly to and from the RMW? If I understand correctly, there will be a copy-in and a copy-out between the RMW and eCAL.
  • How does eCAL manage multiple publications on a specific topic for inter-process communication? I think that RouDi cannot handle multiple publications on a topic, because of the msgq namespace.

I could be wrong on some points, but I am really interested in this. Would you enlighten me a little bit? :smile: Thanks in advance!

There is still the multi-publisher-on-a-topic limitation in iceoryx, but the extension is on its way.

True zero-copy is not yet possible with the eCAL/iceoryx combination. But I guess @rex-schilasky will be happy to hear that we have started working on making this possible.



Your quick scan is not that bad :-). So yes, if you are using eCAL on top of iceoryx you will have the current limitations of iceoryx, like writing with multiple publishers on the same topic as mentioned here, but they are working on it …

eCAL handles n:m pub/sub connections fine, but it is not zero-copy the way iceoryx is. The publishers write their payload into memory files, detach from the file, and the signaled subscribers copy the content out of it. Every publisher holds its own memory file, so that publishers connected to the same topic do not overwrite each other … a simple approach.
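The per-publisher memory-file pattern above can be illustrated with a minimal sketch, here using Python's `multiprocessing.shared_memory` as a stand-in for eCAL's memory files (the function and segment names are hypothetical, and this is an illustration of the pattern, not eCAL's implementation). Note the two copies per delivery: the publisher copies its payload into its own segment, and each subscriber copies it out again.

```python
from multiprocessing import shared_memory

# Sketch of the copy-in / copy-out pattern described above.
# Each publisher owns its own shared memory segment ("memory file"),
# so multiple publishers on the same topic never overwrite each other.

def publish(segment_name: str, payload: bytes) -> shared_memory.SharedMemory:
    """Copy-in: the publisher writes its payload into its own segment."""
    shm = shared_memory.SharedMemory(name=segment_name, create=True,
                                     size=len(payload))
    shm.buf[:len(payload)] = payload     # first copy: process -> shared memory
    return shm

def subscribe(segment_name: str, size: int) -> bytes:
    """Copy-out: a signaled subscriber copies the content out again."""
    shm = shared_memory.SharedMemory(name=segment_name)
    data = bytes(shm.buf[:size])         # second copy: shared memory -> process
    shm.close()
    return data

# Two publishers on the same "topic", each with its own segment.
pub_a = publish("topic_img_pub_a", b"frame-from-publisher-a")
pub_b = publish("topic_img_pub_b", b"frame-from-publisher-b")
print(subscribe("topic_img_pub_a", 22))  # b'frame-from-publisher-a'
pub_a.close(); pub_a.unlink()
pub_b.close(); pub_b.unlink()
```

A zero-copy design such as iceoryx avoids both copies by letting the subscriber read the publisher's chunk in place, which is why the loaning discussed above matters.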


It seems to me that we started working at the same time ;-). Nice to hear that you will add an additional chunk for payload attributes. Then eCAL will finally be able to benefit from the real zero-copy mechanism of iceoryx :+1:



Understood, and I appreciate your explanation :slightly_smiling_face:

cyclonedds with built-in iceoryx is being released as a POC for community feedback. It is a joint effort of the ADLINK Advanced Robotics Platform Group & Bosch. So: iceoryx zero-copy with Cyclone DDS QoS & wire support.


@joespeed What is the meaning of “with built-in iceoryx”?

If I understand the matching issues #64 and #65 correctly, then there is a kind of gateway for each direction, forwarding/copying data from one system into the other, right?
So if you communicate between two cyclonedds participants on the same host, how can you make use of the iceoryx shared memory zero-copy mechanism?


@rex-schilasky That’s another combination approach that @joespeed is referring to, and this one is more like the way you did it in eCAL; i.e., Cyclone DDS uses iceoryx when publisher and subscriber are on the same host.


Hi all,

Just an update from our side. The current release of Fast DDS/Foxy comes with a Shared Memory transport included, which shows much better performance than the UDP loopback.

Moreover, we are in the final phase of an improved zero-copy shared memory transport. We will be releasing it later this year, and I expect it to be available as a patch for Foxy.


The cyclonedds “iceoryx” branch is cyclonedds with built-in iceoryx, contributed by the ADLINK Advanced Robotics Platform Group (a.k.a. the “ROS team”) & Bosch. We would love community feedback, GitHub issues, suggestions, and PRs. #becauseracecar #iac2021


Hi Jaime,
Thanks for this very good news!!!
I would like to test this feature, but I have encountered some problems.
I can see that someone else has had problems testing this feature:

I compiled Foxy and Fast DDS (version 2.0.1). When I publish and subscribe to some data on the same computer, I can see it passing over the network in Wireshark… I published very big data, like images, to be sure that I was seeing the actual data in Wireshark (and not the ROS “metadata” that will always pass over the network, if I understood correctly).

The Fast RTPS documentation says that shared memory is disabled by default. I tried to set a custom profile XML file via the environment variable FASTRTPS_DEFAULT_PROFILES_FILE to enable SHM, but it doesn’t seem to work (I still see the messages in Wireshark).

Can you point me to a documentation page or a GitHub repo with a demonstration of the shared memory feature in Foxy with eProsima DDS?

Hi Adrien,

In case you haven’t done it yet, please add is_default_profile="true" to the <participant> profile; otherwise ROS2 will not use that profile when creating the participant. This is shown in the example here.
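For reference, a profile along these lines should enable the SHM transport and be picked up by ROS2. This is a sketch based on the Fast DDS XML profile schema, so please check it against the current documentation; the profile and transport names here are arbitrary:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
    <transport_descriptors>
        <transport_descriptor>
            <transport_id>shm_transport</transport_id>
            <type>SHM</type>
        </transport_descriptor>
    </transport_descriptors>
    <!-- is_default_profile="true" is required so ROS2 picks this profile up -->
    <participant profile_name="shm_participant" is_default_profile="true">
        <rtps>
            <userTransports>
                <transport_id>shm_transport</transport_id>
            </userTransports>
            <useBuiltinTransports>false</useBuiltinTransports>
        </rtps>
    </participant>
</profiles>
```

Point FASTRTPS_DEFAULT_PROFILES_FILE at this file before launching your nodes. Note that with useBuiltinTransports disabled and only SHM listed, participants on other hosts will not be reachable, so for mixed setups you would also add a UDPv4 transport descriptor.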

We are in the process of adding ROS2-related information to the Fast DDS documentation, and we will make sure that this is mentioned.
