Hi @jakubchytil,

My team has 4 years of experience with ROS 1 (Kinetic, Melodic and Noetic) and ROS 2 (Ardent, Bouncy, Dashing, Humble/Rolling), mainly in the automotive and augmented reality space.

We have experience designing remote control systems. I would strongly suggest looking at Zenoh as the backend (e.g. with the Zenoh DDS plugin), because its network configuration is far more flexible than stock DDS. It is also possible to use the Zenoh protocol directly as a ROS 2 backend, but that support is still experimental. You may check out my presentation on a related topic (Mobile Gateways) at the Zenoh Summit earlier this summer: Mobile Gateways for ROS2 Systems with Zenoh - YouTube
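
For a sense of what this looks like in practice, here is a minimal sketch of subscribing to a ROS 2 topic from a remote operator station through zenoh-bridge-dds. The bridge address and the `chatter` topic are illustrative assumptions, the exact zenoh-python calls have shifted between releases, and message deserialization is omitted:

```python
# Minimal sketch: reaching a ROS 2 topic over zenoh-bridge-dds.
# Assumes the bridge runs on the robot at tcp/192.168.1.10:7447
# (hypothetical address); zenoh-python API names vary by release.
import zenoh

def on_sample(sample):
    # sample.payload carries the raw CDR-serialized ROS 2 message;
    # deserialization (e.g. with the rosbags package) is omitted here.
    print(f"got sample on {sample.key_expr}")

conf = zenoh.Config()
# Dial a known endpoint instead of relying on DDS multicast discovery --
# this is what makes the setup work across NAT and cellular links.
conf.insert_json5("connect/endpoints", '["tcp/192.168.1.10:7447"]')

session = zenoh.open(conf)
# The bridge exposes ROS 2 topics under the DDS "rt/" prefix.
sub = session.declare_subscriber("rt/chatter", on_sample)

input("Press Enter to exit\n")
session.close()
```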

Regarding low-latency vision data processing: in our AR projects the performance target was 120 fps. We have experience designing low-latency data processing pipelines (including shared-memory transport for ROS 2). We also have experience parallelizing JPEG decoding from MJPEG stereo cameras and offloading parts of the CV algorithms to the GPU, even on mobile GPUs such as the Adreno 650 found in the Oculus Quest 2 and the Qualcomm RB5 robotics platform.
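
To make the decode parallelization concrete, here is a minimal Python sketch (our production pipelines are native code; the function names and two-worker setup are assumptions for illustration). OpenCV releases the GIL inside `cv2.imdecode`, so a small thread pool really does decode the left and right frames in parallel:

```python
# Minimal sketch: decoding a stereo MJPEG frame pair in parallel.
from concurrent.futures import ThreadPoolExecutor

import cv2
import numpy as np

decoder_pool = ThreadPoolExecutor(max_workers=2)  # one worker per camera

def decode_jpeg(buf: bytes) -> np.ndarray:
    """Decode one JPEG-compressed camera frame into a BGR image."""
    return cv2.imdecode(np.frombuffer(buf, dtype=np.uint8), cv2.IMREAD_COLOR)

def decode_stereo_pair(left_jpeg: bytes, right_jpeg: bytes):
    """Submit both eyes of a stereo frame and wait for the decoded images."""
    left = decoder_pool.submit(decode_jpeg, left_jpeg)
    right = decoder_pool.submit(decode_jpeg, right_jpeg)
    return left.result(), right.result()
```

At 120 fps the per-frame budget is roughly 8.3 ms, so overlapping the two decodes before handing the frames to the GPU stages is what keeps the pipeline inside that budget.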

Feel free to reach out if we can help you in any way.

Kind regards,
Gergely Kis
CTO of Migeran