Hi ROS community! Stuart from Luxonis here again. For anyone not familiar with us, we specialize in making robotic vision simple and accessible. Our primary business focus is our range of OAK stereo/depth cameras, which incorporate AI and CV on-device, for example our OAK-D Pro.
We’re in the process of developing our new series of robotic perception cameras. For those who are familiar with us and have experience using OAK, we’d love to hear about the kinds of features and functionality you’d like us to include out of the box, running on-device, in this next generation.
We recently got back from ROSCon and got some helpful input there, but wanted to reach out to the broader ROS community as well. Thanks in advance for any input you may have!
On-board sensor data filtering would be awesome; I find that so much compute time is spent post-processing sensor data just to make it usable by navigation / perception algorithms. If that were handled before the data reached us, that would be fabulous (and I believe it’s very doable with the embedded processor).
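For reference, the current DepthAI API already exposes some depth post-processing that runs on-device, configured when building the pipeline; roughly this is what I’d hope to see extended out of the box. An untested sketch, to the best of my knowledge of the current depthai Python API:

```python
import depthai as dai

# Build a pipeline whose depth output is already filtered on-device,
# so the host receives a cleaner depth map / point cloud.
pipeline = dai.Pipeline()

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")

mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
stereo.depth.link(xout.input)

# On-device depth post-processing (runs on the camera, not the host)
cfg = stereo.initialConfig.get()
cfg.postProcessing.speckleFilter.enable = True
cfg.postProcessing.speckleFilter.speckleRange = 50
cfg.postProcessing.temporalFilter.enable = True
cfg.postProcessing.spatialFilter.enable = True
cfg.postProcessing.spatialFilter.holeFillingRadius = 2
cfg.postProcessing.thresholdFilter.minRange = 400    # mm
cfg.postProcessing.thresholdFilter.maxRange = 15000  # mm
stereo.initialConfig.set(cfg)
```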
I’m not sure if you have a hardware time-synced IMU on board, but that is pretty much a requirement for much of modern VIO research and would be very helpful in general.
I second this. It would be awesome to be able to run something like peci1/robot_body_filter (which filters the robot’s body out of point clouds and laser scans) onboard the camera. The most computationally demanding part of this filter is ray-tracing the camera rays against the mesh- or primitive-based 3D model of the robot (or, in the case of camera-computed depth clouds, it might be sufficient to run an OpenGL-based renderer of the robot body onto the camera images so that it can be masked out).
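To make the masking idea concrete, the on-device step would be something like this, assuming the robot body has already been rendered into the camera frame as a binary mask (names are purely illustrative):

```python
import numpy as np

def mask_robot_body(depth_image: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
    """Zero out depth pixels that fall on the (rendered) robot body.

    depth_image: HxW depth frame in millimetres (0 = invalid).
    body_mask:   HxW boolean array, True where the rendered robot model
                 covers the pixel (e.g. from an OpenGL render of the URDF
                 meshes from the camera's viewpoint).
    """
    filtered = depth_image.copy()
    filtered[body_mask] = 0  # mark robot-body pixels as invalid
    return filtered
```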
Thanks for the feedback.
Onboard sensor-data filtering and VIO are something I have been thinking about for starters, too.
The current version’s IMU is not time-synced with the cameras. We were hoping we could do soft-sync. I have seen recent work that triggers the cameras on IMU messages. We need to see if we can incorporate that in the next platform. Any further insights on this would be appreciated!
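On the host side, soft-sync mostly comes down to nearest-timestamp matching; a minimal ROS 2 sketch of how we’d pair image and IMU messages (topic names are just placeholders, not the actual driver topics):

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, Imu
from message_filters import Subscriber, ApproximateTimeSynchronizer

class SoftSyncNode(Node):
    def __init__(self):
        super().__init__('imu_camera_softsync')
        image_sub = Subscriber(self, Image, '/oak/rgb/image_raw')
        imu_sub = Subscriber(self, Imu, '/oak/imu/data')
        # Pair up messages whose stamps differ by less than ~10 ms
        sync = ApproximateTimeSynchronizer([image_sub, imu_sub],
                                           queue_size=30, slop=0.01)
        sync.registerCallback(self.synced_cb)

    def synced_cb(self, image_msg, imu_msg):
        dt = (image_msg.header.stamp.sec + image_msg.header.stamp.nanosec * 1e-9) \
           - (imu_msg.header.stamp.sec + imu_msg.header.stamp.nanosec * 1e-9)
        self.get_logger().info(f'image/IMU stamp offset: {dt * 1000:.2f} ms')

def main():
    rclpy.init()
    rclpy.spin(SoftSyncNode())

if __name__ == '__main__':
    main()
```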
Better autofocus (at least on the OAK-D platform). It often takes a couple of seconds to hunt for focus and sometimes winds up missing it. Manual focus is a good temporary workaround, though.
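For anyone else hitting this, the manual-focus workaround is a couple of lines when building the pipeline; an untested sketch assuming the current depthai API and that a lens position around 130 suits your working distance:

```python
import depthai as dai

pipeline = dai.Pipeline()
cam_rgb = pipeline.create(dai.node.ColorCamera)

# Disable the continuous autofocus hunt and lock the lens to a fixed position (0..255)
cam_rgb.initialControl.setAutoFocusMode(dai.CameraControl.AutoFocusMode.OFF)
cam_rgb.initialControl.setManualFocus(130)
```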
I’d like to “third” this (onboard filtering). We’re using a pair of OAK-Ds on our UGV, and would love to get a filtered point cloud straight off of the device, because…ok, it’s because we’re just lazy.
Also, if we’re discussing wish lists: a more-robust onboard implementation of AprilTag detection would be fabulous. I tried the existing pipeline, but didn’t get great results. Admittedly, I didn’t spend much time experimenting. For our purposes, we would “just” need the tag info, and the 2D<->3D point correspondences, and would do the pose estimation part ourselves.
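The pose-estimation side we’d do ourselves is straightforward once the device hands back the tag corners; a rough host-side sketch with OpenCV (the corner ordering is an assumption and would need to match whatever the detector reports):

```python
import cv2
import numpy as np

def tag_pose_from_corners(corners_2d, tag_size, camera_matrix, dist_coeffs):
    """Estimate the tag pose from the 2D<->3D corner correspondences.

    corners_2d:    4x2 array of detected tag corners in pixels.
    tag_size:      edge length of the tag in metres.
    camera_matrix: 3x3 intrinsics; dist_coeffs: distortion coefficients.
    """
    half = tag_size / 2.0
    # Tag corners in the tag frame (z = 0 plane), in the order IPPE_SQUARE expects
    corners_3d = np.array([[-half,  half, 0],
                           [ half,  half, 0],
                           [ half, -half, 0],
                           [-half, -half, 0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(corners_3d,
                                  np.asarray(corners_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return ok, rvec, tvec
```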
Just to be clear, I mean noise filtering, but body filtering would be useful as well. Noise filtering, however, is (in my experience) significantly more expensive, and more bang for your buck if it can be offloaded. And I mean this seriously: if the depth is pre-processed on these sensors so that what I get out is nice and clean, reminiscent of the Astra or PrimeSense cameras, that would make these cameras massively more useful to me as a roboticist than the RealSense cameras. It wouldn’t even be a question for me anymore which to use.
We bought the OAK-D Lite to replace Intel’s D415 for cost and compute-performance reasons.
However, there are several shortcomings.
First of all, we would like the 3D point cloud post-processing (all filters) to be done on the device. That would make it a perfect solution.
On the software side, we were expecting extensive ROS support, especially for eye-in-hand and hand-eye calibration. Surprisingly, I have not seen much here.
I don’t know whether anyone else is using the cameras for pick-and-place solutions, where position accuracy is critical and where this joint calibration is needed.
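In the meantime, OpenCV’s hand-eye solver can cover the eye-in-hand case once you have collected pairs of robot and target poses; a minimal sketch of how we would use it (the pose lists are whatever you record during the calibration routine):

```python
import cv2

def eye_in_hand_calibration(R_gripper2base, t_gripper2base,
                            R_target2cam, t_target2cam):
    """Solve for the camera-to-gripper transform from N recorded poses.

    R_gripper2base / t_gripper2base: end-effector pose in the robot base frame
                                     (from forward kinematics), one entry per pose.
    R_target2cam / t_target2cam:     calibration-target pose seen by the OAK camera
                                     (e.g. from a charuco or AprilTag board detection).
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return R_cam2gripper, t_cam2gripper
```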
We use the OAK-D Lite in an open-air environment, but the temperature is too high for it to be used. Not sure there is a solution.
I was hoping we could start and stop the pipeline on demand, but this seems not to be the case.