What 3D Cameras Are You Using With ROS2?

You have the compute to directly benefit from Isaac ROS.

NVIDIA uses Leopard Imaging’s Hawk or its sibling, the StereoLabs ZedX.

The Nova Orin Developer Kit was released this week and provides time-synchronized surround perception with depth.

(nvblox 3D scene reconstruction for obstacle detection feeding into navigation)

We use these cameras for depth and point clouds so we can take advantage of color for AI-based perception, where the depth and color image come from the same imager. This improves on the classic monochrome CV-based approach, which suffers parallax errors between separate depth and color imagers. AI-based depth is also not brittle to the systematic errors from specular highlights and reflections that CV depth approaches are. This is passive stereo, as projection solutions have limited range and issues in sunlight.

When using a USB or Ethernet camera, costly CPU memcpys of the depth and/or image data occur, which limits throughput and increases latency from photon to perception output. This is unlikely to be an issue if you are working with a single depth camera on USB or Ethernet, but CPU processing doesn’t scale with sensor data, and you need accelerated computing to scale to multiple cameras.

Hawk and ZedX use GMSL, which delivers power, control signals, and data over a single high-speed SERDES cable; this is an industrial-grade solution, with cable lengths of up to 15 meters. All cameras in the Nova Orin design, which uses Jetson AGX Orin, can be time-synchronized to within 100 µs of frame capture. Perception pipelines with this solution can run end-to-end in accelerated computing for optimal throughput and reduced photon-to-perception-output latency.

Isaac ROS 3.0 will ship with the ESS DNN for AI-based depth.

(mobile robot use case using ESS 3.x)

To visualize point clouds over WiFi or Ethernet, we reduce the point cloud to a manageable size, as it’s not practical to compress depth or disparity images. We do this by choosing a prime number N, often N=7 or N=11, and transmitting every Nth point from the cloud over the network connection for visualization. A prime stride avoids dropping points in a systematic pattern aligned with the sensor’s structure, producing more informative visualizations.
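A minimal sketch of this decimation, assuming the point cloud arrives as an N×3 NumPy array (the function name and array layout here are illustrative, not from Isaac ROS):

```python
import numpy as np

def decimate_cloud(points: np.ndarray, stride: int = 7) -> np.ndarray:
    """Keep every `stride`-th point for network visualization.

    A prime stride (e.g. 7 or 11) avoids resonating with the
    sensor's row/column structure when dropping points.
    """
    return points[::stride]

# Example: a 100k-point cloud shrinks to roughly 1/7th of its size.
cloud = np.random.rand(100_000, 3).astype(np.float32)
small = decimate_cloud(cloud, stride=7)
```

In a ROS2 node this would sit between the full-rate point-cloud topic and a lower-rate topic published for RViz over the network link.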

Thanks.
