Yes, the Isaac ROS 3.0 release with support for JetPack 6 came out at the end of May.
This includes support for multi-camera setups, which AI-based perception functions depend on for a surround 3D understanding of the robot and its environment. Surround 3D perception is needed for higher levels of autonomy. Multi-camera setups are common practice in the self-driving industry, and this is available natively in ROS 2 using hardware acceleration and the appropriate hardware design.
Eight 2 MP imagers from 4x Hawk stereo cameras, for a total of 527 megapixels/second.
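As a rough sanity check on that figure, here is back-of-envelope arithmetic for an 8-imager rig; the ~2.2 MP and 30 fps values are assumptions for illustration, not the exact Hawk specs:

```python
# Back-of-envelope bandwidth for an 8-imager multi-camera rig.
# Assumed values: ~2.2 MP per imager at 30 fps (illustrative only).
cameras = 8
pixels_per_frame = 2.2e6
fps = 30

pixels_per_s = cameras * pixels_per_frame * fps  # total pixel rate
bytes_per_s = pixels_per_s * 3                   # RGB8: 3 bytes/pixel

print(f"{pixels_per_s / 1e6:.0f} MP/s, {bytes_per_s / 1e9:.2f} GB/s")
```

At these assumed values that works out to roughly 528 MP/s, in the same ballpark as the quoted 527 MP/s, and about 1.6 GB/s of raw RGB8 data, which is why hardware-accelerated transport matters here.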
We measure multi-camera time synchronization of <100 µs between cameras in ROS 2, at the image topic level.
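For anyone wanting to check this on their own rig, inter-camera skew can be estimated from the `header.stamp` fields of paired image messages. Here is a minimal, self-contained sketch; the topic data is hypothetical toy data, and in a live system you would collect the stamps from subscribers (e.g. via `message_filters`):

```python
# Sketch: estimate inter-camera timestamp skew from ROS 2 header stamps.
# Stamps are (sec, nanosec) pairs as in builtin_interfaces/msg/Time.
def stamp_to_ns(sec, nanosec):
    return sec * 1_000_000_000 + nanosec

def max_skew_us(stamps_a, stamps_b):
    """Worst-case skew in microseconds over paired frames."""
    skews = [abs(stamp_to_ns(*a) - stamp_to_ns(*b)) / 1000
             for a, b in zip(stamps_a, stamps_b)]
    return max(skews)

# Hypothetical data: two cameras triggered together at ~30 fps,
# with the second camera lagging by a few tens of microseconds.
cam_left = [(10, 0), (10, 33_333_333), (10, 66_666_666)]
cam_right = [(10, 42_000), (10, 33_390_000), (10, 66_720_000)]
print(max_skew_us(cam_left, cam_right))  # worst frame pair, in µs
```

With this toy data the worst pair is under 100 µs; a value that stays below that bound over a long capture is what the claim above describes.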
Several 3D camera solutions, including Hawk, RealSense, Orbbec, and LIPS, support this, but it needs to be designed into the robot/platform.
Thanks
Why not give this package a try?
It publishes depth, RGB, and camera_info from the Kinect v2, tested on ROS 2 Humble.
I did tons of research; I don't know why I didn't find this GitHub repo before I started working on something of my own. Great work!
I did some RMW testing over WiFi with Image and PointCloud2 messages, and so far Cyclone+Iceoryx+Zenoh has been the best-performing combination.
The performance even seems better than ROS 1, which is very promising.
Configuring all these RMW options is far from straightforward, so I'll put my results here:
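For anyone trying to reproduce a Cyclone DDS + iceoryx shared-memory setup, the rough shape is below. The XML schema for enabling shared memory has changed between Cyclone DDS versions, so treat this as a sketch and verify against the docs for your installed version; the config file path is arbitrary:

```shell
# Select Cyclone DDS as the ROS 2 middleware
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

# Point Cyclone at a config that enables iceoryx shared memory
# (schema varies across Cyclone DDS versions -- check your version's docs)
export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml
cat > $HOME/cyclonedds.xml <<'EOF'
<CycloneDDS>
  <Domain>
    <SharedMemory>
      <Enable>true</Enable>
    </SharedMemory>
  </Domain>
</CycloneDDS>
EOF

# iceoryx needs its daemon (RouDi) running before the ROS nodes start
iox-roudi &
```

Note that shared memory only helps for nodes on the same machine; over WiFi the network transport still carries the data, which is where Zenoh comes in.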
Thank you for taking the time to evaluate all these options.
Some ideas for improvements:
Detail your testing method: what devices you used, what resolution/framerate was configured, and what WiFi standard in which setting (home, lab, crowded environment).
And your results would preferably be numeric instead of smileys.
A checkmark is a good icon for indicating whether something is supported or working, but it's not really a quality metric.
Thanks for the improvement ideas; you brought up some valid points to consider. I specified the devices I tested with. Since this should serve more as a signpost, I'll keep the emojis because they're easier to read.
One thing to consider is that using, for example, a 120 MB/s SD card on an RPi 5 vs. a 7000 MB/s M.2 SSD on a Jetson would affect the whole setup, since the storage would probably become the bottleneck.
There are so many variables like these that I don't think listing numerical values makes sense; comparing all the possibilities would explode exponentially.
It's probably best to use the fastest hardware parts possible so that data passing isn't bottlenecked somewhere.
Another interesting thing your reply inspired was testing different WiFi modules. You also brought up the environment; I think you have to consider the network topology too.
I found that the basic PCIe WiFi cards that ship with products have less throughput in hotspot/AP mode than when they connect to a router.
As with using the fastest SSDs, it would be nice to have WiFi 6E or WiFi 7 PCIe cards in newer products; maybe @ggrigor can help.
I asked on the NVIDIA forum, but the gist is that drivers for these faster cards are not there yet. Also, a lot of them have lower AP speed (Intel) or don't support AP mode at all, which is essential if you want to take your Chinese robo-dog for a walk outside and check the data in RViz.
It would make a world of difference: 130 Mbit/s vs. ~2000 Mbit/s.
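To put that gap in context, here is rough arithmetic for one uncompressed RGB8 1080p image topic; the 30 fps figure is an assumption, and real ROS 2 per-message overhead is ignored:

```python
# Throughput needed for one uncompressed RGB8 1080p@30 image stream.
width, height, bytes_per_px, fps = 1920, 1080, 3, 30
mbit_per_s = width * height * bytes_per_px * 8 * fps / 1e6

print(f"{mbit_per_s:.0f} Mbit/s per stream")
```

That comes out to roughly 1500 Mbit/s: a single raw stream already exceeds a 130 Mbit/s link, while a ~2000 Mbit/s link carries it with headroom, which is why the AP-mode throughput matters so much for viewing live data in RViz.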
I also put the shared memory and Zenoh configs on GitHub.