
Is there any DDS implementation using shared memory inside system?

dds
#1

Hi all,

I'm just curious about this. I can see some vendors working on DDS implementations, but is there any DDS implementation that supports shared memory, or one in progress?

thanks in advance,
Tomoya


#2

A question like this is better for answers.ros.org, but I’ll give you a quick answer here.

  • The RTI Connext DDS implementation definitely has the ability to use shared memory for DDS clients running on the same computing node, and as I recall will do so automatically. However, it will still marshal data because it needs to do so for many of the features of Connext DDS, such as logging and introspection.
  • eProsima’s FastRTPS apparently does not use shared memory yet but it is on their roadmap.
  • OpenSplice DDS does use shared memory internally. I don’t know if this implementation marshals data for shared memory.
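For the Connext case, the builtin shared-memory transport is normally selected through a QoS XML profile rather than code. A minimal sketch follows; the library and profile names here are made up for illustration, and the exact schema should be checked against RTI's documentation:

```xml
<!-- USER_QOS_PROFILES.xml: restrict the builtin transports to shared memory -->
<dds>
  <qos_library name="ExampleLibrary">
    <qos_profile name="ShmemOnly" is_default_qos="true">
      <participant_qos>
        <transport_builtin>
          <!-- SHMEM limits communication to participants on the same host -->
          <mask>SHMEM</mask>
        </transport_builtin>
      </participant_qos>
    </qos_profile>
  </qos_library>
</dds>
```

With a mask of `SHMEM` only, participants on different hosts will not discover each other, so a combined mask such as `SHMEM|UDPv4` is the more common choice in practice.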

#3

@gbiggs

Thanks, I will look into them.


#4

Confirmed that RTI Connext DDS uses shared memory; it actually maps the shm segment into the process address space. But so far our internal performance tests tell us the latency is not so good. The Connext DDS implementation is provided as a binary, so we are not sure what's going on. Is there a specific place where we should have this conversation, or should we just ask RTI for help?


#5

What sort of latency are you seeing? Could the data marshalling and unmarshalling account for it?


#6

@gbiggs

Sorry for the delay in getting back here.

Publisher:Subscriber=1:1, skylake, Ubuntu16.04

msg size [KB]    Latency [msec]
4                0.1972224281
64               1.5988755584
256              6.1639215946
2048             59.9750656127
8192             201.675012207

I was expecting much lower latency, since it uses shared memory.

(*) Latency = (end - start), where start is taken right before publishing the message and end is when the subscriber callback fires. So this measures communication latency only.
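The measurement above can be sketched in plain Python. This is not the actual benchmark code; `FakeBus` is a hypothetical stand-in for the DDS publisher/subscriber pair, used only to show where the two timestamps are taken:

```python
import time


class FakeBus:
    """Hypothetical stand-in for a DDS publisher/subscriber pair.

    A real middleware would cross a transport (UDP, shared memory, ...);
    here publish() invokes the subscriber callback directly.
    """

    def __init__(self):
        self.callback = None

    def subscribe(self, cb):
        self.callback = cb

    def publish(self, msg):
        self.callback(msg)


def measure_latency(bus, payload):
    """Return (end - start): start just before publish, end in the callback."""
    result = {}

    def on_msg(msg):
        result["end"] = time.perf_counter()  # end: subscriber callback fired

    bus.subscribe(on_msg)
    start = time.perf_counter()  # start: right before publishing the message
    bus.publish(payload)
    return result["end"] - start


bus = FakeBus()
latency = measure_latency(bus, b"x" * 4096)  # 4 KB message
print(f"latency: {latency * 1e3:.4f} ms")
```

With a real DDS implementation, the callback runs in a middleware thread, so the `end` timestamp also includes any deserialization and thread hand-off the vendor performs, which is one place marshalling overhead could show up.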

We are considering using https://github.com/ApexAI/performance_test instead of our own tool; we will check how it works.


#7

@tomoyafujita I’d recommend checking out the run_experiment.py script if you want to run a comprehensive batch of experiments with performance_test.


#8

@lyle

Thanks for the tip; I'll check it out.
