It is passive, right? Do you think an IR projector could enable it to work in the dark?
I have had good success with the Asus Xtion using the openni2 drivers in ROS1. It is similar in cost (~$300 US) to the OAK-D camera. The Xtion is the unit used by TRI as the head camera on the HSR, and by Stanford as the head camera on Jackrabbot2. The Xtion is limited to indoor use and close (0-3 m) range. I would still like to see a side-by-side comparison of not only the cameras mentioned, but also other depth and stereo cameras. I understand that besides the sensor itself, the computing overhead of using optical sensors for both SLAM and object recognition/tracking is also a major consideration.
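For anyone wanting to reproduce that setup, a typical ROS1 bring-up for the Xtion looks roughly like the sketch below (assuming the standard openni2_launch package is available for your distro; exact topic names can vary with the launch configuration):

```shell
# Install the OpenNI2 driver stack for ROS1 (Noetic shown here)
sudo apt install ros-noetic-openni2-launch

# Start the camera with depth-to-RGB registration enabled
roslaunch openni2_launch openni2.launch depth_registration:=true

# Registered depth then shows up under /camera/depth_registered/...
rostopic hz /camera/depth_registered/image_raw
```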
Has there been an official statement from Intel regarding ending the RealSense line? All I have found are the posts that have been referenced but these are not official Intel comments, thus it would appear to be only a rumor and speculation at this stage.
I’ve liked these as hobby cameras, but it looks like they sell a model with somewhat better packaging for inclusion in a product. I have mixed feelings about the idea of having AI running on the camera itself:
- One side: Yay! Can be done on the camera and I don’t need to change my compute to support a GPU and deal with that additional complexity in my system.
- Another side: Can it handle my specific <insert model or need> for my specific product, with my team of people knowledgeable in the usual methods of deploying edge models? Can it run fast enough to be an actual replacement for a GPU?
Unquestionably though, if it does non-AI things like powerful filtering and/or clustering of depth information on the camera, and returns the point cloud (filtered or unfiltered) along with quality clusters, that by itself would be of serious value. Filtering point clouds when you’re working with many sensors on a robot can be nontrivial on limited compute resources. +1 if they include basic tracking of depth blocks.
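As a concrete illustration of the kind of filtering meant here, below is a minimal NumPy sketch of voxel-grid downsampling (one centroid kept per occupied voxel), the classic cheap way to thin a dense depth-camera cloud before SLAM or clustering. The function name and voxel size are illustrative, not any vendor's API:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Thin an (N, 3) point cloud by quantizing into voxel-sized cells
    and keeping one centroid per occupied cell."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inv = inv.reshape(-1)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, points)   # accumulate points per cell
    return sums / counts[:, None]  # centroid of each occupied voxel
```

Doing this on-camera would mean the host only ever sees the thinned cloud, which is exactly the win on an embedded ARM board.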
Compared to many other cameras, it has onboard compute, so it does not drag significant compute power from the host machine. This was a selling point of RealSense, at least for me, as we could use it on embedded ARM machines.
So as long as it can make an OK depth prediction for obstacle avoidance, it is a very good alternative to RealSense.
To add to this list, we also have a list with more depth/stereo/etc. solutions here.
And thanks everyone for the feedback on OAK-D! Since knowing what is coming down the pike in terms of OAK-D might be helpful here (spoiler: we’re not cancelled) I figured it would be worthwhile to share what we have in the works.
Agreed - our ROS support is in development. We think we will have a solid ROS solution in September. Right now basic support is there, but things like manual framerate control, exposure settings, etc. are missing. @saching13 is working on adding these now, and then we will get official apt-get installable packages out.
For higher FOV, we don’t support this off-the-shelf, but we have had a slew of folks integrate high-FOV into their own custom products/solutions (using our OAK-SOM in their product). See the ArduCam pin-compatible version here for example. And we now support mesh calibration/rectification/etc. so these wide-FOV can be used for depth/etc.
And this actually can even be installed on the OAK-D PCBA, if you’re brave enough to take it out of the enclosure:
We’re more than open to requests as well. So if, say, this wide FOV + laser emitter/etc. is of interest, we may be able to make it happen.
And for custom solutions, we already do have quite a few end-user-products where such a thing has been done (particularly in small robots). The open-source designs make this a lot easier/faster.
Yes - all our current solutions (Hardware Github here) use passive stereo depth (Census Transform based).
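For anyone unfamiliar with the technique, here is a minimal NumPy sketch of census-transform stereo matching: each pixel is encoded as a bit-vector of comparisons against its window (24 bits for the 5 x 5 window used here), and matching cost is the Hamming distance between left/right codes. This is a toy winner-takes-all version for illustration only; real implementations add sub-pixel refinement and consistency checks, and run on dedicated hardware:

```python
import numpy as np

def census(img, win=5):
    """Census transform: encode each pixel as a bit-vector of
    'neighbour < centre' comparisons over a win x win window."""
    h, w = img.shape
    r = win // 2
    pad = np.pad(img, r, mode="edge")
    code = np.zeros((h, w), dtype=np.uint32)
    for dy in range(win):
        for dx in range(win):
            if dy == r and dx == r:
                continue  # skip the centre pixel itself
            code = (code << 1) | (pad[dy:dy + h, dx:dx + w] < img)
    return code

def hamming(a, b):
    """Matching cost = popcount of XOR-ed census codes."""
    x = (a ^ b).astype(np.uint32)
    bits = np.unpackbits(x.view(np.uint8).reshape(*x.shape, 4), axis=-1)
    return bits.sum(axis=-1)

def disparity(left, right, max_d=8):
    """Winner-takes-all stereo: per pixel, pick the horizontal shift
    with the lowest census/Hamming cost."""
    cl, cr = census(left), census(right)
    h, w = left.shape
    cost = np.full((max_d, h, w), 255, dtype=np.uint16)  # 255 > max cost of 24
    for d in range(max_d):
        cost[d, :, d:] = hamming(cl[:, d:], cr[:, :w - d])
    return cost.argmin(axis=0)
```

Because the transform only compares neighbours against the window centre, it is robust to exposure differences between the two cameras, which is part of why it is popular in hardware stereo pipelines.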
We are working on the following though:
- OAK-D with IR laser dot projection (active stereo depth) and IR LED blanket illumination. We’ll likely call this OAK-D-PRO.
- Time of Flight (w/ option to add stereo depth as well - and potentially also active).
Here’s OAK-D-PRO with the laser dot projector and IR blanket illuminator:
We’ll release this as soon as we get through laser-safety certification.
And here’s the initial (not-yet-dialed-in) ToF on OAK-D:
The timeline on ToF is a bit less known, but it’s working!
Our mounting/enclosure/etc. on the current OAK-D is not what it should be. (It was my fault; wrong calls on my part). We’ve since listened to feedback from our customers (and thanks for being one!) and corrected this in future designs.
So all of the next-gen units we’re working on will have VESA-spaced (75 mm) M4 mounting on the back for easier, more secure mounting and inclusion of the finished camera in a robot/etc. The current plan is below:
It’s often a bit hard to tell what CAD renderings really look like in terms of size, so below gives an idea:
So this has the tripod mount on the bottom, and then 2x M4 mounting holes spaced at 7.5cm on the back. (And then of course we have our OAK-SOM series for inclusion of this into custom products/etc.)
-Brandon from Luxonis
I am Vincenzo from Terabee (www.terabee.com). I am Product and R&D Project Manager at Terabee for industrial automation products.
At Terabee we are time-of-flight experts and have recently released a compact and rugged industrial 3D ToF camera, the Terabee 3Dcam VGA. It streams depth / point cloud / IR / quality factor at VGA resolution (640 x 480), with a 90° × 70° FOV and a full aluminium case (<500 g weight, IP65 and IP67 rated). As a near-IR ToF camera, it works very well in the dark.
We also offer Linux/Windows SDK, GUI, ROS package (Noetic, Melodic) and “open” on-board Linux computing power.
You can contact me directly if you want to know more: email@example.com
Of course, I am also eager to hear your first impressions of the product itself.
Azure Kinect is the closest you’re going to get to the RS lineup in terms of compatibility (OpenNI, etc.) and support.
In reference to the RealSense T265 series, you can use the ModalAI VOXL/VOXL CAM series, which is a Snapdragon-based version with the same feature set and then some. Problem is I don’t see the buy link (edit: found it)…
Wondering what’s going to happen with OpenVINO/Movidius, since it was nicely coupled with the RS SDK. What an ecosystem was built around these cameras; sad.
Also wondering if Open3D development is going to be impacted (it was starting to mature).
Awesome, we should talk !
Thanks! Feel free to shoot me an email at brandon at luxonis dot com.
Indeed, I’d love to hear what Intel has in mind for Open3D: will it die, or will it broaden to support an Intel CPU use case for any stereoscopic or ToF camera?
We just built our industrial reconstruction around Open3D. Intel Labs is technically a different group, but you could see how they would be impacted.
RealSense stereo cameras are mostly sticking around
RealSense CTO posted a comment in:
The author of this article doubts that the stereo cameras will be supported as before, even though they will still be available for some unknown duration. I would guess you probably do not want to design one into a new product at this point:
Intel Will Keep Selling RealSense Stereo Cameras
Ah, you beat me to it. It’s close enough to being an official statement from Intel.
One thing though: doesn’t autonomous driving use cameras and computer vision? I kinda thought that was Musk and Tesla’s thing. It would appear Intel is missing the boat on something again.
Okay, so they are keeping:
- D410, D415, D430, and D450 modules
- D415, D435, and D435i integrated product lines
- the open-source SDK

The Facial Authentication (F450) and Tracking (T265) product lines are being wound down this month. The D455 will be EOL’ed (but not the module); a new version of the D455 may come later.
Oh phew … the L515 is actually kinda neat, so that’s a bit of a pity, but at least keeping the stereo cameras is a good thing, since literally everyone uses them. Couldn’t care less about the tracking line personally.
The closest that came to RealSense IMO was the Structure Core, many others including ZED and Mynt wanted to eat into my GPU and that wasn’t cool.
It’s kind of sad, though, that they are EOLing the D455. It seems like they picked the wrong product out of the hat to EOL.
The D435 has different fields of view for RGB and depth, which IMO is the biggest design blunder in their entire line of products. You literally have to throw away half your depth data to create RGBD images. The D455 fixed that and has a similar FOV for RGB and depth. But they’re EOLing the fixed product and keeping the flawed one … did the EOL person get their numbers mixed up?
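The "half your depth data" figure checks out with back-of-the-envelope math. Using nominal (approximate) datasheet FOVs, cropping the D435's wider depth frustum down to its RGB frustum keeps roughly tan(rgb/2)/tan(depth/2) of each image axis:

```python
import math

def usable_fraction(depth_fov, rgb_fov):
    """Fraction of the depth image that survives cropping to the
    (narrower) RGB frustum: per-axis ratios of half-angle tangents."""
    fh = math.tan(math.radians(rgb_fov[0] / 2)) / math.tan(math.radians(depth_fov[0] / 2))
    fv = math.tan(math.radians(rgb_fov[1] / 2)) / math.tan(math.radians(depth_fov[1] / 2))
    return fh * fv

# D435: depth ~87 x 58 deg, RGB ~69 x 42 deg (nominal, approximate values)
print(usable_fraction((87, 58), (69, 42)))  # ~0.50 -- about half survives
```

On the D455 the RGB FOV is at least as wide as the depth FOV, so essentially the whole depth image can be paired with colour.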