New official DepthAI ROS driver

Hello there!
Last year I developed an unofficial driver for OAK cameras from Luxonis. It got some attention, so we decided to join forces and work on an upgraded version :slight_smile:

Official blog post, copying contents here to save a click:

At Luxonis, we are committed to creating robotic vision solutions that help improve the engineering efficiency of the world. With our stereo depth OAK cameras, robust DepthAI API, and growing cloud-based platform, RobotHub, our goal is to provide a start-to-finish ecosystem that uncomplicates innovation.

And, with that in mind, we’re pleased to announce the release of our newest DepthAI ROS driver for OAK cameras, which is part of our ongoing effort to make the development of ROS-based software even easier.

With the DepthAI ROS driver, nearly everything is parameterized using ROS2 parameters/dynamic reconfigure, thereby providing you with even greater flexibility when it comes to customizing your OAK to your exact use case. Currently, you can find over a hundred different values to modify!
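
Because every setting goes through the standard ROS2 parameter interface, you can change it from any other node at runtime. Below is a minimal rclpy sketch that switches the RGB sensor to manual exposure; the node name `/oak` and the parameter names (`rgb.r_set_man_exposure`, `rgb.r_exposure`, `rgb.r_iso`) are assumptions based on the driver defaults, so check `ros2 param list /oak` for the exact names in your driver version.

```python
#!/usr/bin/env python3
"""Runtime-parameter sketch: switch the RGB sensor to manual exposure.

The node name `/oak` and the parameter names below are assumptions based
on the driver defaults; verify them with `ros2 param list /oak`.
"""
import rclpy
from rclpy.node import Node
from rcl_interfaces.srv import SetParameters
from rcl_interfaces.msg import Parameter, ParameterValue, ParameterType


def bool_param(name, value):
    # Build an rcl_interfaces Parameter holding a boolean.
    return Parameter(name=name, value=ParameterValue(
        type=ParameterType.PARAMETER_BOOL, bool_value=value))


def int_param(name, value):
    # Build an rcl_interfaces Parameter holding an integer.
    return Parameter(name=name, value=ParameterValue(
        type=ParameterType.PARAMETER_INTEGER, integer_value=value))


def main():
    rclpy.init()
    node = Node('oak_param_client')
    client = node.create_client(SetParameters, '/oak/set_parameters')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('Parameter service not available; is the driver running?')
    else:
        request = SetParameters.Request()
        request.parameters = [
            bool_param('rgb.r_set_man_exposure', True),  # assumed name: enable manual exposure
            int_param('rgb.r_exposure', 2000),           # assumed name: exposure time in microseconds
            int_param('rgb.r_iso', 800),                 # assumed name: sensor ISO
        ]
        future = client.call_async(request)
        rclpy.spin_until_future_complete(node, future)
        for result in future.result().results:
            node.get_logger().info(
                f'successful={result.successful} reason="{result.reason}"')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```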

There are tons of ways for this driver to make your life easier, some of which include:

  • Several different “modes” in which you can run the camera, depending on your use case. For example, you can use the camera to publish spatial NN detections, publish an RGBD pointcloud, or just stream data straight from the sensors for host processing/calibration/modular camera setups (see the launch sketch after this list).

  • Set parameters, such as exposure and focus, for individual cameras at runtime (the parameter sketch above shows the mechanism).

  • Set IR LED power for better depth accuracy and night vision.

  • Experiment with onboard depth filter parameters.

  • Enable encoding to save bandwidth by publishing compressed images.

  • An easy way to integrate a multi-camera setup, with an example provided (see the launch sketch after this list).

  • Docker support for easy integration: build an image yourself or use one from our DockerHub repository.
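
To give an idea of how the camera modes and a multi-camera setup fit together, here is a launch-file sketch that includes the driver's `camera.launch.py` twice with separate names and parameter files. The launch-file name and its arguments reflect the current driver layout, but the YAML paths and camera names below are placeholders to replace with your own:

```python
"""Two-camera launch sketch for depthai_ros_driver.

The included launch file (camera.launch.py) and its name/params_file
arguments reflect the current driver; the YAML paths and camera names
are placeholders.
"""
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    driver_launch = os.path.join(
        get_package_share_directory('depthai_ros_driver'),
        'launch', 'camera.launch.py')

    # Each camera instance gets its own node name, so topics and parameters
    # stay separated; the per-camera YAML file selects its "mode"
    # (RGBD, spatial detection, raw streams, ...).
    front_cam = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(driver_launch),
        launch_arguments={
            'name': 'oak_front',
            'params_file': '/path/to/front_cam.yaml',  # placeholder
        }.items())

    rear_cam = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(driver_launch),
        launch_arguments={
            'name': 'oak_rear',
            'params_file': '/path/to/rear_cam.yaml',  # placeholder
        }.items())

    return LaunchDescription([front_cam, rear_cam])
```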

Having everything as a ROS parameter also gives you the ability to reconfigure the camera on the fly by using the stop and start services. You can use low-quality streams and switch to higher quality when you need to, or switch between different neural networks depending on what data your robot needs.
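
As a minimal sketch of that workflow, assuming the services are exposed as `/oak/stop_camera` and `/oak/start_camera` with the standard `std_srvs/srv/Trigger` type (check `ros2 service list` for the exact names), you could stop the pipeline, push new parameters, and start it again like this:

```python
"""Stop/start sketch: restart the driver to apply a new configuration.

The service names /oak/stop_camera and /oak/start_camera are assumptions;
verify them with `ros2 service list` for your driver version.
"""
import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger


def call_trigger(node, service_name):
    # Call a Trigger service and return its success flag.
    client = node.create_client(Trigger, service_name)
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error(f'{service_name} not available')
        return False
    future = client.call_async(Trigger.Request())
    rclpy.spin_until_future_complete(node, future)
    response = future.result()
    node.get_logger().info(
        f'{service_name}: success={response.success} message="{response.message}"')
    return response.success


def main():
    rclpy.init()
    node = Node('oak_restart_client')
    # Stop the pipeline, change parameters as needed, then start it again.
    if call_trigger(node, '/oak/stop_camera'):
        call_trigger(node, '/oak/start_camera')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```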

Here is an example of adjusting LED power for better depth quality:

Here is another example demonstrating manual control of RGB camera parameters in runtime:

Here we see an example of RGBD depth alignment:

Here is a multi-camera setup with an OAK-D Pro, an OAK-D W, and an OAK-D Lite, with one camera running RGBD and MobileNet spatial detection, one running YOLO 2D detection, and one running semantic segmentation:

And here we see an example of Real-Time Appearance-Based Mapping (RTAB-Map) of an interior room:

The DepthAI ROS driver is being developed on ROS2 Humble and ROS1 Noetic (with versions for other distros coming soon). It allows you to take full advantage of the ROS2 composition/ROS1 nodelet mechanisms and currently supports detection (2D and spatial) and semantic segmentation networks, with lots more on the way as we continue refining and enhancing it.
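
If you want the driver to share a process with other components, a composition launch file along these lines should work. The plugin name `depthai_ros_driver::Camera` is an assumption; list the available component classes with `ros2 component types` to confirm it for your installation:

```python
"""Composition sketch: load the driver into a component container.

The plugin name depthai_ros_driver::Camera is assumed; confirm it with
`ros2 component types` after sourcing your workspace.
"""
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='oak_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='depthai_ros_driver',
                plugin='depthai_ros_driver::Camera',  # assumed component class name
                name='oak',
            ),
            # Other components (e.g. image_proc or an RTAB-Map node) can be
            # added to the same container to avoid serialization between nodes.
        ],
        output='screen',
    )
    return LaunchDescription([container])
```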

To discover more about the DepthAI ROS driver, including walkthroughs for how to install and get started, visit our repository. You can also track progress on new features and the roadmap here.

And as always, if you need help or have any questions, you can reach us at support@luxonis.com or on our Discord.

9 Likes

This is fantastic news, awesome work!
Perhaps it’s a good time to revisit my OAK-D Lite and do some more tutorials…

2 Likes

Thanks! I’ll be happy to help with them, as I consider your tutorials one of the best entry points to ROS :slight_smile:

2 Likes

Yes, it would be great if you do that.

1 Like

Thanks a lot for this, I was just about to start working with depth cameras in ROS2.