I am looking for an experienced and very hands-on ROS consultant who could help our team of enthusiasts find solutions in ROS. Hands-on experience is needed because we would like to create a tangible plan and timeline (with a breakdown of tasks and man-hours): for example, what plugins could be used, what architecture should be chosen, or what other specialists we need to onboard onto our team.
Examples of what we are thinking about:
1. Remote teleoperation system
2. Remote navigation of a wheeled robot with a robotic arm
3. Integrating advanced computer vision into a robot that has to process data very quickly for object manipulation or HRI
Thanks a lot in advance, I really appreciate your help!
Acceleration Robotics (contact info) can help with this. We can leverage our experience in hardware acceleration to help create custom perception compute pipelines with GPUs and FPGAs that deliver fast responses[1] while remaining deterministic.
We’d be happy to collaborate on this. We already offer a WebRTC-based low-latency video streaming capability that you can easily embed in other solutions, and we are in the process of extending it to also allow remote control, initially focused on 2D navigation of mobile bases. We offer consulting services as well.
My team has 4 years of experience with ROS 1 (Kinetic, Melodic and Noetic) and ROS 2 (Ardent, Bouncy, Dashing, Humble/Rolling), mainly in the automotive / augmented reality space.
We have experience with designing a remote control system. I would strongly suggest looking at Zenoh as the backend (e.g. with the Zenoh DDS plugin), because its network configuration is far more flexible than stock DDS. It is also possible to use the Zenoh protocol directly as a ROS 2 backend, but that is still experimental. You may check out my presentation on a related topic (Mobile Gateways) at the Zenoh Summit earlier this summer: Mobile Gateways for ROS2 Systems with Zenoh - YouTube
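To make the suggestion concrete, here is a minimal rclpy teleoperation sketch. The point is that the node itself is transport-agnostic: with zenoh-bridge-dds running on both the operator and robot side (or with a Zenoh-based RMW), the same code works unchanged over links where stock DDS discovery struggles. The topic name `cmd_vel`, the publish rate, and the velocity values are illustrative assumptions, not a specific product.

```python
# Minimal teleop publisher sketch; the transport (stock DDS vs. a Zenoh
# bridge) is configured outside this code, which is the main attraction.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class RemoteTeleop(Node):
    def __init__(self):
        super().__init__('remote_teleop')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        # Stream commands at 20 Hz; a real system would read an input device.
        self.timer = self.create_timer(0.05, self.tick)

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2   # placeholder: 0.2 m/s forward
        msg.angular.z = 0.0  # placeholder: no rotation
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = RemoteTeleop()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```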
Regarding low-latency vision processing: in our AR projects the performance target was 120 fps. We have experience designing low-latency data processing pipelines (including shared-memory transport for ROS 2). We have also parallelized JPEG decoding from MJPEG stereo cameras and offloaded parts of the CV algorithms to the GPU, even on mobile GPUs such as the Adreno 650 found in the Oculus Quest 2 and the Qualcomm RB5 robotics platform.
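As an illustration of the parallel-decoding point, here is a small sketch that decodes the two eyes of an MJPEG stereo pair concurrently. A plain thread pool is enough because OpenCV's `imdecode` releases the GIL, so the two decodes genuinely overlap; the frame source here is a synthetic image so the snippet stays self-contained. This is only a sketch of the technique under those assumptions, not our production pipeline.

```python
# Sketch: parallel JPEG decoding for a stereo MJPEG stream.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import cv2

def decode(jpeg_bytes: bytes) -> np.ndarray:
    # cv2.imdecode releases the GIL, so worker threads run in parallel.
    buf = np.frombuffer(jpeg_bytes, dtype=np.uint8)
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def decode_stereo(left_jpeg: bytes, right_jpeg: bytes,
                  pool: ThreadPoolExecutor):
    # Decode both eyes concurrently; at 120 fps every millisecond counts.
    left_f = pool.submit(decode, left_jpeg)
    right_f = pool.submit(decode, right_jpeg)
    return left_f.result(), right_f.result()

if __name__ == '__main__':
    pool = ThreadPoolExecutor(max_workers=2)
    # Synthetic frame: encode a black image so the example runs standalone.
    ok, jpeg = cv2.imencode('.jpg', np.zeros((480, 640, 3), dtype=np.uint8))
    left, right = decode_stereo(jpeg.tobytes(), jpeg.tobytes(), pool)
    print(left.shape, right.shape)
```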
Feel free to reach out to me if we can help you in any way.
Hi Jakub,
If you are still on the lookout for someone, I would be happy to help, as I am an experienced ROS consultant.
You can reach me on andrewjohnson.56782@gmail.com
Cheers, and have a great day ahead,
Andrew