ROS2 TSN talk during ROS2 real-time WG regular call on Tue, 1st of Feb, 16.00 CET

The ROS2 real-time working group has invited Andrei Terechko (NXP) to give a talk about the ROS2-DDS-TSN project they published recently. NXP and partners have demonstrated a ROS2 control application running on several computing nodes connected over TSN. Time Sensitive Networking provides determinism and low-latency capabilities for computing nodes connected via TSN-aware network interfaces, making it possible to build distributed real-time robotic control systems over a network. @andrei will give a brief TSN overview, talk in more detail about his work, and discuss possible next steps.

Meeting details:
Tue, 1st of Feb, 4 pm CET


That’s an interesting topic. Will there be a recording of the talk?

Yes, it will be recorded.

Please find below links to the slides and recording.

Slides: Invited talk on the ROS and TSN integration for the ROS 2 real-time working group.pdf - Google Drive
Recording: 2022-02-01_ROS2_RT_workingGroup.mp4 - Google Drive

Thanks @andrei and NXP for the interesting talk


Thanks to @razr and everyone who participated for the invitation and insightful questions!

I wanted to briefly follow up on the discussion we had after my talk:

  1. During the talk there was interest in our performance analysis tool for the DDS-TSN integration project. I’m happy to share that we have just open-sourced the profiling framework we built on top of pyshark, which itself relies on the PCAP API. Here is the README section about the tools.
  2. To simplify reproducing the setup of our DDS-TSN project on GitHub, one can use the cheaper MR-T1ETH8 board with the SJA1110 automotive switch, which offers a number of 100BASE-T1 automotive ports. In addition, you can use the RDDRONE-T1ADAPT T1-to-TX media converter if you would like to experiment with a PC first.
  3. @vmayoral asked a great question about whether we observed latency overhead and jitter originating from the networking software stack on the processor side (the end nodes). Indeed, software task scheduling and “overhead” played a major role in our experiments, because the Linux networking stack’s latency isn’t deterministic. Proper task priorities and CPU affinity can help here. In addition, there are a few trade-off options to reduce latency and jitter: SO_BUSY_POLL, DPDK, AF_XDP. Note that these are trade-offs because they sacrifice something (e.g. CPU utilization) to improve latency.
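To illustrate the task priority and CPU affinity point from item 3, here is a minimal Python sketch using Linux's scheduling APIs. The core number (0) and priority (80) are illustrative assumptions, not values from our experiments, and switching to a real-time policy typically requires elevated privileges:

```python
import os

# Pin the current process to a single core so a latency-critical
# networking task is not migrated between CPUs by the scheduler.
# CPU 0 is a hypothetical choice; a real deployment would pick a
# core isolated from general-purpose load (e.g. via isolcpus).
os.sched_setaffinity(0, {0})
print("affinity:", os.sched_getaffinity(0))

# Switching to the real-time SCHED_FIFO policy usually requires
# CAP_SYS_NICE (or root); fall back gracefully when unprivileged.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
    print("policy: SCHED_FIFO, priority 80")
except PermissionError:
    print("policy unchanged: need CAP_SYS_NICE for SCHED_FIFO")
```

Combined with core isolation on the kernel command line, this keeps the end-node side of the DDS-TSN path from being preempted by best-effort work.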