What 3D Cameras Are You Using With ROS2?

What 3D cameras are you using? With ROS 1 almost any camera worked without quirks; now I’m trying to get a D455 up and running on an Orin with Humble, and I have a combinatorial explosion of things to debug. Is it the RMW? Is it QoS (I had to set it up in the launch file)?
Right now I’m getting some point clouds, but at 5 Hz :melting_face:

I have more cameras from other vendors (some borrowed, some bought) and I want to do a YouTube review of their ROS 2 functionality, but first I’d like to ask others:

  • What cameras are you using?
  • What RMW is working for you?
  • What PC are you using? (RPi, Jetson, Generic)
  • What ROS2 version?
  • Are you connected over WiFi/Ethernet for visualization? What tips do you have?

Thanks for any info shared!

4 Likes

The first thing to check is whether your point cloud really arrives at 5 Hz. Are you using ros2 topic hz to measure that?

No, the rate depends on camera settings; right now I’m using a lower resolution, 640x480 at 15 Hz. The low rate is probably because some queues are filling up/blocking, or because the realsense-ros wrapper isn’t matching RGB/depth data when creating point clouds. If you look at the realsense-ros repo, there are a ton of people complaining about no/slow point cloud data, especially on Jetsons (although I don’t think that’s it). That’s why I asked what RMW people are using; I had to manually set up a QoS profile in the camera launch file to get anything at all. So maybe the low rate is due to a misconfigured RMW? I had mixed results with CycloneDDS and Fast RTPS when playing with the Create 3, which is why I’m not blaming the Jetson.
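
To separate camera problems from transport problems, it can help to measure the rate a subscriber actually sees with the same best-effort QoS the camera is expected to use. Below is a minimal rclpy sketch (not from this thread); the topic name /camera/depth/color/points is an assumption, so adjust it to whatever your launch file actually publishes.

```python
# Minimal rate-check sketch using the best-effort "sensor data" QoS profile.
# The topic name below is an assumption; adjust it to your camera's output.
import time

import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import PointCloud2


class CloudRateMonitor(Node):
    def __init__(self):
        super().__init__('cloud_rate_monitor')
        self.count = 0
        self.start = time.monotonic()
        self.sub = self.create_subscription(
            PointCloud2, '/camera/depth/color/points',
            self.callback, qos_profile_sensor_data)

    def callback(self, msg):
        # Count incoming clouds and report the average rate every ~5 seconds.
        self.count += 1
        elapsed = time.monotonic() - self.start
        if elapsed >= 5.0:
            self.get_logger().info(f'{self.count / elapsed:.1f} Hz over the last {elapsed:.1f} s')
            self.count = 0
            self.start = time.monotonic()


def main():
    rclpy.init()
    rclpy.spin(CloudRateMonitor())


if __name__ == '__main__':
    main()
```

If this shows the full rate on the Jetson itself but a much lower rate on a remote machine, the camera and wrapper are probably fine and the RMW/network configuration is the place to look.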

I’m using a Kinect v2 on an RPi 4 and ROS 2 Foxy. The point cloud works fine (rate as expected) on the Pi, but when I try to do some processing on a PC (tried connecting over both WiFi and Ethernet), the messages arrive very slowly.

Almost only the first couple of messages arrive. I still haven’t figured out what the issue is. I tried lowering the QoS and configuring the DDS connection on both devices, with no improvement :(

2 Likes

Hi Martin,

I’m currently evaluating a setup with a RealSense D435 connected to an RPi 4, sending point clouds over WiFi to a standard PC using ROS 2. Here is some feedback:

  • Make sure to configure your DDS RMW for wireless networks (use a Discovery Server for Fast DDS, disable multicast for CycloneDDS and Fast DDS, configure a peers list…). Some documentation is available here and there; a minimal CycloneDDS example is sketched at the end of this post.

  • I get the same throughput when sending point clouds over WiFi (~5 Hz). ros2 topic hz gives 30 Hz on the RPi (where they are produced) but ~5-6 Hz on the PC (where they are received over WiFi). I saw no improvement when using the point_cloud_transport package (great tool btw).

  • Double check your QoS profile and use a best effort compatible one (e.g. SENSOR_DATA).

  • When sending/receiving point clouds, iftop shows that I’m basically saturating the WiFi throughput (I get ~100 MB/s+), so my interpretation is that point cloud messages are huge and the bandwidth (not the latency) is the limiting factor; a rough estimate is sketched just below.
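
A rough back-of-the-envelope calculation supports that interpretation. It assumes a 640x480 organized XYZRGB cloud with a 20-byte point_step, which is an assumption; check the point_step field of your actual PointCloud2 messages.

```python
# Rough size estimate for an uncompressed 640x480 XYZRGB PointCloud2.
points_per_cloud = 640 * 480        # 307,200 points
point_step = 20                     # bytes per point (assumed; read it from your messages)
bytes_per_cloud = points_per_cloud * point_step
print(f"{bytes_per_cloud / 1e6:.1f} MB per cloud")        # ~6.1 MB
print(f"{bytes_per_cloud * 30 / 1e6:.0f} MB/s at 30 Hz")  # ~184 MB/s, more than the WiFi link sustains
```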

I’m considering sending depth images and reconstructing the pointclouds on the PC instead of the RPi.
I also ordered an OAK-D RGB-D camera to evaluate; I haven’t received it yet (taking feedback on these as well).

Hope that helps, also taking feedback more generally about RGB-D cameras over WiFi.
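
For the first bullet, a minimal CycloneDDS configuration for WiFi might look like the sketch below. This is only an illustration: the IP addresses are placeholders, and the element names should be double-checked against the CycloneDDS version you have installed.

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!-- Sketch: disable multicast and list peers explicitly for a WiFi link.
     Point CYCLONEDDS_URI at this file on both machines, e.g.
     export CYCLONEDDS_URI=file:///path/to/cyclonedds.xml -->
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <ParticipantIndex>auto</ParticipantIndex>
      <Peers>
        <Peer Address="192.168.1.10"/> <!-- robot (placeholder) -->
        <Peer Address="192.168.1.20"/> <!-- PC (placeholder) -->
      </Peers>
    </Discovery>
  </Domain>
</CycloneDDS>
```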

4 Likes

Try using RealSense Viewer to check that the camera works as expected natively.

Also, the cable matters. Make sure you’re using a USB cable of the right length and quality to get full bandwidth. We once experienced camera issues in our lab, and buying a shorter, higher-quality cable fixed them.

I use the OAK-D cameras these days since Intel won’t ship to certain countries.

1 Like

Sending point clouds over the network will be slow. Try sending compressed depth images instead and reconstruct the cloud on the receiving machine if necessary (rviz2 can do this); a launch-file sketch is below.
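
One way to do that reconstruction outside rviz2 is a depth_image_proc composable node on the receiving PC. The sketch below assumes typical realsense-ros topic names and that the depth and color images already arrive on the PC (e.g. decompressed via image_transport); remap the topics to match your setup.

```python
# Sketch: rebuild an XYZRGB cloud from depth + color on the receiving machine
# using depth_image_proc. Topic names are assumptions; remap them to whatever
# your camera driver actually publishes.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='pointcloud_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='depth_image_proc',
                plugin='depth_image_proc::PointCloudXyzrgbNode',
                name='point_cloud_xyzrgb',
                remappings=[
                    ('rgb/image_rect_color', '/camera/color/image_raw'),
                    ('rgb/camera_info', '/camera/color/camera_info'),
                    ('depth_registered/image_rect', '/camera/aligned_depth_to_color/image_raw'),
                ],
            ),
        ],
    )
    return LaunchDescription([container])
```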

1 Like

You have the right computer to benefit directly from Isaac ROS.

NVIDIA uses Leopard Imaging’s Hawk or its sibling, the Stereolabs ZED X.

The Nova Orin Developer Kit was released this week and provides time-synchronized surround perception with depth.

[Animation: nvblox 3D scene reconstruction for obstacle detection, feeding into navigation]

We use these cameras for depth & point clouds to take advantage of color for AI-based perception, where the depth and color images come from the same imager. This improves on the classic monochrome CV-based approach, which has parallax errors between the depth and color imagers. AI-based depth is also not brittle to systematic errors from specular highlights and reflections the way CV depth approaches are. This is passive stereo, as projection solutions have limited range and issues in sunlight.

When using a USB or Ethernet camera, costly CPU memcpy of the depth and/or image data occurs, which limits throughput and increases latency from photon to perception output. This is unlikely to be an issue if you are working with a single depth camera on USB or Ethernet; however, CPU processing doesn’t scale with sensor data, and you need accelerated computing to scale to multiple cameras.

Hawk and ZED X use GMSL, which delivers power, control signals, and data over a single high-speed SERDES cable; this is an industrial-grade solution with cable lengths of up to 15 meters. All cameras in the Nova Orin design, which uses Jetson AGX Orin, can be time-synchronized to within <100 µs of frame capture. Perception pipelines with this solution can run end-to-end in accelerated computing for optimal throughput and reduced photon-to-perception-output latency.

Isaac ROS 3.0 will ship with the ESS DNN for AI-based depth.

[Animation: mobile robot use case using ESS 3.x]

To visualize point clouds over WiFi / Ethernet we reduce the size of the point cloud to something manageable, as it’s not practical to compress depth or disparity images. We do this by choosing a prime number N, often N=7 or N=11, and transmitting every Nth point from the cloud over the network connection for visualization. The prime stride prevents a systematic reduction of the data, producing more informative visualizations.
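
As an illustration of that decimation idea (an editor's sketch, not NVIDIA's implementation), a small rclpy node can republish every Nth point of a cloud for visualization. The topic names and N are assumptions, and a pure-Python loop like this is only fast enough for visualization-sized clouds.

```python
# Sketch: forward every Nth point of an incoming cloud to a smaller
# visualization topic. Topic names are assumptions; remap as needed.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2


class CloudDecimator(Node):
    def __init__(self):
        super().__init__('cloud_decimator')
        self.n = 7  # prime stride to avoid systematic patterns in the kept points
        self.pub = self.create_publisher(PointCloud2, 'points_decimated', qos_profile_sensor_data)
        self.sub = self.create_subscription(
            PointCloud2, 'points', self.callback, qos_profile_sensor_data)

    def callback(self, msg):
        # Read the points, keep every Nth one, and republish a much smaller,
        # unorganized cloud with the same fields for visualization.
        points = point_cloud2.read_points_list(msg, skip_nans=True)
        kept = points[::self.n]
        self.pub.publish(point_cloud2.create_cloud(msg.header, msg.fields, kept))


def main():
    rclpy.init()
    rclpy.spin(CloudDecimator())


if __name__ == '__main__':
    main()
```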

Thanks.

12 Likes

W.r.t. visualizing point cloud and image data over WiFi, my team and I started using the ROS 2 DDS Zenoh bridge a few months ago, and it is now our de facto solution for communicating ROS 2 data from our robots to our PCs for visualization and introspection over WiFi. It basically worked out of the box and has been a game changer in terms of reliability and latency when visualizing ROS 2 point clouds and images over WiFi (as opposed to using ROS 2’s native DDS over the network).

This is not a silver bullet though.

W.r.t. the camera, RealSense is still a solid option if you want a stereo camera, as librealsense and realsense_ros are quite mature, but Orbbec has A LOT of very appealing-looking hardware options IMO and offers stereo, structured light, and time-of-flight cameras to fit many applications.

4 Likes

Using Fast DDS in Humble, we found that enforcing asynchronous publication mode (in theory the default) did improve communications for several sensors, including a RealSense D455.

https://fast-dds.docs.eprosima.com/en/v2.14.0/fastdds/ros2/ros2_configure.html#changing-publication-mode
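
Following the linked eProsima page, the idea is to provide a default profile XML that forces asynchronous publication and to point FASTRTPS_DEFAULT_PROFILES_FILE at it before launching the nodes. The sketch below is an approximation of that example; check the exact element names against your Fast DDS version.

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!-- Sketch: default data_writer profile forcing asynchronous publication.
     Export FASTRTPS_DEFAULT_PROFILES_FILE=/path/to/this/file before launching. -->
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <data_writer profile_name="default publisher profile" is_default_profile="true">
    <qos>
      <publishMode>
        <kind>ASYNCHRONOUS</kind>
      </publishMode>
    </qos>
  </data_writer>
</profiles>
```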

1 Like

Experimenting with Xbox 360 Kinect’s 3D camera.

This DNN AI depth could be very useful with Raspberry Pi cameras, but I checked and you need stereo cameras like RealSense or ZED, right? (No monocular.) Still, I’d like to try it, but just today I upgraded another Jetson to JetPack 6.0, and all the examples are for JP 5.x. Will you update the repos to work with JP 6.0? (I guess you’ll have to eventually :sweat_smile:)

I don’t think there’s any depth camera comparison available, and I’d like to make one! :smiley:
So far I’m thinking about putting them on top of a mobile robot and trying different mapping algorithms like RTAB-Map; @ggrigor ’s Isaac ROS also has visual SLAM, but again, why only JP 5?

It could also be interesting to run and test these cameras on a lower-spec device like the RPi 5, but I guess I’ll have to wait for Ubuntu 24.04 and ROS Jazzy to come out.

1 Like

Isaac ROS 3.0 will release at the end of April on Jetpack 6.0 with Ubuntu 22.04.

Isaac will remain on ROS 2 Humble because Jazzy does not support Ubuntu 22.04.

In our experience, mono AI depth looks visually impressive with great relative accuracy; however, its absolute depth accuracy is insufficient for planning functions, which is why we use stereo for AI depth.

Thanks

1 Like

What cameras are you using?

Sick Visionary-T Mini
As far as we know, this is the sensor of a Kinect One (the second one, not the Kinect 360) paired with an Ethernet interface.

What RMW is working for you?

Cyclone

What PC are you using?

Onlogic Helix 500 / Industry rated Core i7-1070TE

What ROS2 version?

Custom patched ROS2 Iron

1 Like

Hi, I’m interested in how to use the Kinect v2 with ROS 2, specifically its configuration and the use of point clouds. Could you help me with the source code or any advice, please?

Hi @bernardo9921
we used these packages:

and then wrote a bringup launch file.

I’m trying to run the Kinect v2 on ROS 2 Humble using the following packages. However, it suffers from a number of issues, and while it can capture images, it does not capture point clouds.

Correction: I was able to resolve the issue, so this package works perfectly. This is probably the most viable way to use the Kinect v2 with ROS 2. The cause was that the RViz QoS settings were not set to best effort.

In the following weeks I will test these cameras:

  • RealSense D455
  • Stereolabs ZED
  • Orbbec Astra
  • Luxonis OAK-D Pro
  • Luxonis OAK-D
  • Monocular Raspberry Pi cameras

I plan to test them with:

  • Jetson Orin Nano & Humble
  • Raspberry Pi 5 & 24.04 - Jazzy

I will also test @ggrigor ’s Isaac ROS libraries, since they were recently released for JP 6.0.

I also want to use these cameras with some SLAM libraries and on a Create 3 mobile base, to assess how well they can generate maps.

If you have any suggestion for me to try, let me know!

3 Likes

@martinerk0 will you share your conclusions? I’m curious about the Orbbec Astra.