Luxonis - Community input for next generation cameras

Hi ROS community! Stuart from Luxonis here again. For anyone not familiar with us, we specialize in making robotic vision simple and accessible. Our primary business focus is our range of OAK stereo/depth cameras, which incorporate AI and CV on-device, for example our OAK-D Pro.

We’re in the process of developing our new series of robotic perception cameras and, for those who are familiar with us and have experience using OAK, it would be great to hear about the kinds of features and functionality you’d like us to include out of the box and running on-device for this next generation.

We recently got back from ROSCon and got some helpful input there, but wanted to reach out to the broader ROS community as well. Thanks in advance for any input you may have!


I would say: include a ROS 2 VSLAM package.

On-board sensor data filtering would be awesome; I find that so much compute time is spent post-processing sensor data to make it usable by navigation/perception algorithms. If that were handled before the data reached us, it would be fabulous (and I believe very doable with the embedded processor).

I’m not sure if you have a hardware time synced IMU on board, but that would be pretty much required for much of modern VIO research and would be very helpful in general.


OAK Series 3 (DepthAI Hardware Documentation) is all we need. The question is: when will it be available?

If you could create a replacement for the RealSense T265, it would be awesome!


I second this. It would be awesome to be able to run something like GitHub - peci1/robot_body_filter (filters the robot’s body out of point clouds and laser scans) onboard the camera. The most computationally demanding part of this filter is ray-tracing the camera rays against the mesh- or primitive-based 3D model of the robot (or maybe, in the case of camera-computed depth clouds, it would be sufficient to run an OpenGL-based renderer of the robot body onto the camera images so that it can be masked out).
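To illustrate the idea: here is a hypothetical, simplified sketch of such an on-device body filter. Instead of full ray-tracing against the robot mesh, each depth point is tested against axis-aligned boxes approximating the robot's links, expressed in the camera frame; points inside any box are dropped. The function names and the box-based approximation are assumptions for illustration, not robot_body_filter's actual method.

```python
# Hypothetical sketch of an on-device "body filter" pass: each 3D depth point
# is tested against axis-aligned boxes approximating robot links (boxes given
# in the camera frame, metres). Points inside any box are dropped.

def point_in_box(p, box):
    """box = ((xmin, ymin, zmin), (xmax, ymax, zmax)) in the camera frame."""
    lo, hi = box
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

def filter_body_points(points, body_boxes):
    """Return only the points that fall outside every body box."""
    return [p for p in points if not any(point_in_box(p, b) for b in body_boxes)]

# Example: a box around an arm link in front of the camera; the far point
# survives, the point inside the box is removed.
arm_box = ((-0.1, -0.1, 0.0), (0.1, 0.1, 0.3))
cloud = [(0.0, 0.0, 1.0), (0.05, 0.0, 0.2)]
filtered = filter_body_points(cloud, [arm_box])  # keeps only (0.0, 0.0, 1.0)
```

A real implementation would transform the boxes from the robot's TF tree into the camera frame each frame; the point test itself stays this cheap.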


Thanks for the feedback.
Onboard sensor-data filtering and VIO are things I have been thinking about as starting points too.

The current version's IMU is not hardware time-synced with the cameras. We were hoping we could do soft-sync instead. I have seen recent work that triggers camera capture on IMU messages; we need to see if we can incorporate that into the next platform. Any further insights on this would be appreciated!
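For what soft-sync typically looks like on the host side, here is a minimal sketch (my assumption of the approach, not Luxonis code): each camera frame is paired with the IMU sample whose timestamp is nearest, assuming both streams carry monotonically increasing timestamps on a shared clock.

```python
import bisect

# Hedged sketch of host-side "soft sync": with no hardware trigger, each
# camera frame is paired with the IMU sample closest in time. Timestamps
# are assumed monotonically increasing, in seconds, on a shared clock.

def nearest_imu_index(imu_stamps, frame_stamp):
    """Index of the IMU sample closest in time to frame_stamp."""
    i = bisect.bisect_left(imu_stamps, frame_stamp)
    if i == 0:
        return 0
    if i == len(imu_stamps):
        return len(imu_stamps) - 1
    # pick whichever neighbour is closer in time
    before, after = imu_stamps[i - 1], imu_stamps[i]
    return i if (after - frame_stamp) < (frame_stamp - before) else i - 1

# Example: a 100 Hz IMU stream and a frame stamped at t = 0.019 s
imu_stamps = [0.00, 0.01, 0.02, 0.03]
idx = nearest_imu_index(imu_stamps, 0.019)  # -> 2 (the 0.02 s sample)
```

Hardware triggering (or timestamping both streams in the same silicon) removes the residual error this leaves, which is why it matters so much for VIO.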


Interesting. I will look into that.

We plan on starting with VIO, hopefully followed by SLAM.
Thanks for the feedback. We will keep you all updated!


Better autofocus (at least on the OAK-D platform). It often takes a couple of seconds to hunt for focus and sometimes ends up missing it entirely. Manual focus is a good temporary workaround though :)

Thanks for all the great comments so far, everyone! Keep them coming!

I’d like to “third” this (onboard filtering). We’re using a pair of OAK-Ds on our UGV, and would love to get a filtered point cloud straight off of the device, because…ok, it’s because we’re just lazy. :)


Also, if we’re discussing wish lists: a more robust onboard implementation of AprilTag detection would be fabulous. I tried the existing pipeline but didn’t get great results; admittedly, I didn’t spend much time experimenting. For our purposes, we would “just” need the tag info and the 2D<->3D point correspondences, and would do the pose estimation ourselves.
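The "just the correspondences" part is worth spelling out. For a tag of known side length, the four corner coordinates in the tag's own frame are fixed, so pairing them with the four detected pixel corners is all a PnP solver (e.g. OpenCV's solvePnP) needs. A minimal sketch, with the corner ordering (counter-clockwise from bottom-left, z = 0, tag centre at the origin) being my assumption:

```python
# Hypothetical helper producing the 2D<->3D correspondences a pose solver
# consumes. Corner order (CCW from bottom-left, z = 0, tag centred at the
# origin) is an illustrative assumption; real detectors document their own.

def tag_object_points(size):
    """3D corners of a square tag of side `size` (metres), in the tag frame."""
    h = size / 2.0
    return [(-h, -h, 0.0), (h, -h, 0.0), (h, h, 0.0), (-h, h, 0.0)]

def correspondences(size, pixel_corners):
    """Pair the canonical 3D corners with the four detected 2D pixel corners."""
    assert len(pixel_corners) == 4
    return list(zip(tag_object_points(size), pixel_corners))

# Example: a 10 cm tag detected at four pixel locations
pairs = correspondences(0.1, [(10, 10), (20, 10), (20, 20), (10, 20)])
# pairs[0] -> ((-0.05, -0.05, 0.0), (10, 10))
```

If the device shipped exactly this (tag ID plus four refined corner pairs), the host-side pose estimate becomes a single solver call.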

Just to be clear, I mean noise filtering, though body filtering would be useful as well. Noise filtering, however, is (in my experience) significantly more expensive, so offloading it gives more bang for the buck. And I mean this seriously: if the depth is pre-processed on these sensors so that what I get out is nice and clean, reminiscent of the Astra or PrimeSense cameras, that would make these cameras massively more useful to me as a roboticist than the RealSense cameras. It wouldn’t even be a question for me anymore which to use.
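To make the request concrete, here is a toy sketch of one flavour of the filtering being asked for: a despeckle pass over a depth row that treats 0 as "invalid" and rejects isolated values that disagree with the local median. The window size and the 15% outlier threshold are illustrative assumptions, not any vendor's actual parameters.

```python
# Toy sketch of on-device depth noise filtering: drop speckle values that
# disagree with their neighbourhood's (lower) median by more than a relative
# threshold, and leave invalid (0) readings invalid. Depths in millimetres.

def despeckle_row(depth_row, window=3, rel_thresh=0.15):
    out = []
    half = window // 2
    for i, d in enumerate(depth_row):
        # valid neighbours within the window, including the pixel itself
        neigh = [v for v in depth_row[max(0, i - half):i + half + 1] if v > 0]
        if d <= 0 or not neigh:
            out.append(0)  # invalid stays invalid
            continue
        med = sorted(neigh)[(len(neigh) - 1) // 2]  # lower median
        # reject speckle: value far from the local median becomes invalid
        out.append(d if abs(d - med) <= rel_thresh * med else 0)
    return out

# Example: a 5000 mm speckle inside an ~1000 mm surface is zeroed out
row = [1000, 1005, 5000, 1010, 0]
clean = despeckle_row(row)  # -> [1000, 1005, 0, 1010, 0]
```

A real on-device version would run 2D over the disparity map before point-cloud generation, which is exactly why offloading it saves so much host compute.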


Are there any examples of integrating OAK with an IMU into ROS 2 for self-driving?


I think that one is an IMX378 sensor limitation on the OAK-D.

We have 6-DoF IMU data being published in our ROS node; not sure whether it qualifies for AV use though. We will be modifying it to 9-DoF soon.

We bought the OAK-D Lite to replace Intel’s D415 for cost and compute-performance reasons.
However, there are many shortcomings.

First of all, we would like the 3D point-cloud post-processing (all filters) to be done on the device. That would make it a perfect solution.

On the software side, we were expecting extensive support for ROS, especially eye-in-hand and hand-eye calibration. Surprisingly, I have not seen much here.
I don’t know whether anyone is using the cameras for pick-and-place solutions, where position accuracy is very critical and where we need this joint calibration.

We use the OAK-D Lite in an open-air environment, but it runs too hot to be usable there. Not sure whether there is a solution.
I was also hoping we could start and stop the pipeline on demand, but this seems not to be the case.

Keep the examples and documentation up to date.

Regards, Mano

Where can I access the 6-DoF sample?

By order of priority:

  • Wide angle and active illumination (as in the OAK-D Pro Wide)
  • Better short-range performance; being blind at close range is complicated for mobile robots.
  • On-board noise filtering (ideally several levels, changeable at run time)
  • On-board voxelization
  • Visual odometry (IMU-fused)
  • Safety rating for obstacle detection (for use in a functional safety chain). Probably super hard to do, but that would be a killer feature.
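On the voxelization wish: the core operation is simple enough to run early in the device pipeline. A minimal sketch (pure Python for illustration; function names and the centroid-per-voxel policy are my assumptions):

```python
# Sketch of on-board voxelization: bucket points into cubes of side `voxel`
# metres and emit one centroid per occupied cube. An on-device version
# would run on the depth stream before it ever reaches the host.
from collections import defaultdict
from math import floor

def voxel_downsample(points, voxel):
    buckets = defaultdict(list)
    for p in points:
        key = tuple(floor(c / voxel) for c in p)
        buckets[key].append(p)
    # one centroid per occupied voxel
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in buckets.values()
    ]

# Example: two near-duplicate points collapse into one centroid, while a
# distant point survives unchanged.
cloud = [(0.01, 0.01, 0.01), (0.02, 0.02, 0.02), (1.5, 0.0, 0.0)]
sparse = voxel_downsample(cloud, 0.5)  # two points remain
```

Bandwidth is the other payoff: a voxelized cloud coming off the camera can be orders of magnitude smaller than the raw one.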