The 635 is only $165 and looks interesting to me for two reasons: the minimum range of 0.1 m and the dual beam. The resolution, 160 x 60, is terrible compared to the D435, but I’d be curious how that looks in practice. If the camera can still detect objects much smaller than its resolution would suggest, it could still be a good fit for close-up obstacle avoidance.
The TOFcam635 is $170 and the higher-resolution TOFcam660 is $770. There is no fee for the GUI, ROS, and Python support, and the SDK is available if needed. The $4700 unit you are looking at is the DevKit for those wanting to build their own camera. Click this link to look at the TOFcam635 and TOFcam660:
https://www.digikey.com/en/products/filter/camera-modules/1003?s=N4IgTCBcDaIKYGcAOAnA9gkBdAvkA
I’m the Managing Director in North America; email me at uge@espros.com. The products are best in class in high-ambient-light (lux) environments compared to other 3D TOF imagers.
All,
I think the message isn’t as bad as originally posted; see this follow-up explanation here: Anders Grunnet-Jepsen on LinkedIn: I can share this more detailed communication from Intel regarding…
Intel will continue to support most RealSense camera models, particularly for the AMR market.
AIRY3D has been developing a passive, single-sensor depth solution with lightweight power and compute requirements. With a single CMOS sensor you get both 2D and depth streams in real time.
I work at AIRY3D, and we have advanced prototypes with some samples available for evaluation/purchase (limited supply). It could definitely be a replacement for RealSense in certain applications (e.g., under 1 meter range, and ideal for outdoors). Our sensors are well suited for machine vision and obstacle avoidance.
In the video above, we put a 2 MP Bayer sensor on a push cart to mimic the view from a floor-cleaning robot. The distance to furniture, walls, and thin objects such as cords under desks is clearly evident.
If you are interested, feel free to contact me directly at pier-luc.tardif@airy3d.com
If you want to test one of the Stereolabs ZED cameras, you can take advantage of Stereolabs’ return policy: order a camera, test it for a while, and return it if it does not satisfy your requirements.
I’m sure you will be impressed by how easy it is to use the ZED SDK and by all the features available out of the box.
Take a look at the ZED 2i, which was designed with robotics as its target: ZED 2i - Industrial AI Stereo Camera | Stereolabs
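To give an idea of what “out of the box” looks like, here is a rough sketch of grabbing a depth frame with the Python API (pyzed); the parameter values are only illustrative, not a recommended configuration:

```python
import pyzed.sl as sl

# Open the camera with depth enabled (values here are just illustrative)
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.ULTRA
init_params.coordinate_units = sl.UNIT.METER

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

depth = sl.Mat()
runtime = sl.RuntimeParameters()

for _ in range(100):
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        # Depth map in meters, registered to the left image
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
        err, dist = depth.get_value(depth.get_width() // 2, depth.get_height() // 2)
        print("Depth at image center (m):", dist)

zed.close()
```

Point clouds, object detection, and positional tracking are enabled in the same way, by adding the corresponding retrieve/enable calls to this loop.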
The above discussion covers the stereo depth / RGB-D aspect quite well, but not the SLAM and 6-DoF pose tracking capabilities of the RealSense T265 tracking camera, one of the product lines that was definitely killed. Here’s some information about that.
Only some of the products mentioned in this thread have an IMU, and even fewer of those provide their own VIO/VISLAM implementation.
The ZED 2 has a working VISLAM but, as also pointed out by others, it requires rather heavy CPU & GPU capabilities from the host machine. I have not tried the ZED 2i yet, but the product website lists, e.g., “Dual-core 2.3GHz or faster processor” as an SDK requirement, so it is unclear whether it has really improved the situation.
OAK-D and MYNT EYE do not have built-in VISLAM. The OAK-D SDK is of high quality and under active development, while the MYNT EYE SDK seems to be effectively abandoned (since 2019).
In principle, you can add software VISLAM capability to anything with cameras and an IMU using open-source alternatives such as ORB-SLAM3 or LARVIO, but the computational requirements may be surprisingly high and the licensing terms difficult for commercial projects. Also, general production readiness and suitability for embedded systems vary. If 2D SLAM works for your use case instead of full 6-DoF tracking, there are more options and this may not be an issue.
And then some advertisement: Our company, Spectacular AI, provides VISLAM & VIO tracking capabilities for devices that have the necessary sensors and reasonable computational resources (e.g., Raspberry Pi 4), but no built-in VISLAM. Our accuracy is generally better than what the T265 provided, especially in challenging use cases such as fast-moving vehicles. Here is a demo with an OAK-D + RTK-GPS: GPS-aided visual-inertial odometry on the OAK-D - YouTube. The solution is available as an SDK (commercial license). Contact us at https://www.spectacularai.com if you are interested.
The Terabee 3Dcam VGA has an IMU option, available upon request; it is not currently part of the product’s baseline feature set. Drop me an email if you are interested (vincenzo.forte@terabee.com).
Best,
Vincenzo
Those requirements are mainly for AR/VR applications. The ZED 2i, like all the other cameras, runs correctly on any NVIDIA Jetson board, including the Jetson Nano 2GB, which is the least powerful of them all.
Naturally, you cannot compute the depth map and the point cloud and detect and track objects simultaneously at the maximum resolution/framerate on a Jetson Nano, but you can certainly use it for robotics applications and correctly run the positional tracking module for VI-SLAM.
Furthermore, the ZED 2i is IP68 and provides a magnetometer and a barometer that can be used to improve the results of the attitude estimation process.
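To show what using the positional tracking module involves on the application side, here is a rough sketch with the Python API (pyzed); the values are only illustrative, not a tuned configuration:

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

# Enable the positional tracking (VI-SLAM) module with default settings
tracking_params = sl.PositionalTrackingParameters()
zed.enable_positional_tracking(tracking_params)

pose = sl.Pose()
runtime = sl.RuntimeParameters()

for _ in range(500):
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        # 6-DoF camera pose expressed in the world frame
        state = zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
        if state == sl.POSITIONAL_TRACKING_STATE.OK:
            t = pose.get_translation(sl.Translation()).get()
            print("x={:.2f} y={:.2f} z={:.2f}".format(t[0], t[1], t[2]))

zed.disable_positional_tracking()
zed.close()
```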
We (Nerian Vision) continue to manufacture our ROS-compatible line of real-time stereo vision sensors. There is also a low-cost sensor in planning that will be released mid/late next year and should be a great replacement for RealSense. More details will follow.
We are working on an indoor tabletop manipulation robot. We had a lot of problems with ToF technology on shiny, black, and semi-glossy materials: ToF distorts the point clouds of many such items. We tested the Azure Kinect, Helios, and Intel L515. It’s really hard to grasp any ToF-scanned objects… Intel’s D455 cameras work much better in such situations. I am dreaming about structured light, but the prices…
For completeness of this thread, here is the official notice from Intel reiterating the continuation of the D-series stereo cameras:
So I was torn on whether to post this, but I figured there would be a good number of folks who’d want to know this exists. For those who are working in outdoor situations and/or don’t need active stereo depth, this past week we launched pre-orders for our OAK-D-Lite. One of the first people to find out about it prior to launch dubbed it “The Swiss Army Knife of Computer Vision”, and we stole that and ran with it.
This is not active depth - which is required in some situations (and will come on a future model) - but it does give you passive census-transform-based disparity depth, the capability to run onboard neural-assisted disparity depth (which may prove to surpass active depth), and a bunch of useful things crammed into something about the size of a Swiss Army Knife (there’s a quick code sketch after the spec list below).
- Depth at 200+FPS at 640x480 (18cm min depth, 18 meter hard-max)
- Onboard 4k Video Encoding.
- Real-time 6DoF Object Detection (3D location and 3D Pose).
- Tough as nails.
- <$100 pre-order on Kickstarter
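And for a feel of the host-side API, here’s a rough sketch of grabbing the depth stream with the depthai Python package (just an illustration, not polished example code; the socket and resolution choices are only placeholders):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Two OV7251 global-shutter mono cameras feeding the on-device stereo block
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)

mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)
mono_right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)

xout.setStreamName("depth")
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    for _ in range(300):
        frame = queue.get().getFrame()  # uint16 depth map in millimeters
        print("center depth (mm):", frame[frame.shape[0] // 2, frame.shape[1] // 2])
```

Seriously, it’s the size of a Swiss Army Knife.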
Cheers,
Brandon | OpenCV/Luxonis
Thanks for posting it.
Any chance you know of an “Adding Stereo Depth Obstacle Detection To A ROS2 Mobile Platform” tutorial so my robot can be ready for this new sensor when it arrives?
Hi @RobotDreams - definitely. @saching13 may actually already have this tutorial. But we’ll make it for OAK-D-Lite either way before the hardware is delivered.
Hey @RobotDreams ,
As of now, we have actually built a TurtleBot navigation demo using Nav2 and the OAK-D-Lite, but the code currently converts the depth image to a laser scan. We will release this soon.
We will also work on a new example that can use depth directly.
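In the meantime, if you want to experiment yourself, the depth-to-laser-scan bridge can look roughly like the launch sketch below (assuming the depthimage_to_laserscan package is installed; the OAK topic and frame names are placeholders you would remap to whatever your camera driver actually publishes):

```python
# depth_to_scan.launch.py - hypothetical file name
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='depthimage_to_laserscan',
            executable='depthimage_to_laserscan_node',
            name='depth_to_scan',
            remappings=[
                # Placeholder topic names - remap to what your OAK-D driver publishes
                ('depth', '/oak/stereo/image_raw'),
                ('depth_camera_info', '/oak/stereo/camera_info'),
            ],
            parameters=[{
                'output_frame': 'oak_camera_frame',  # placeholder frame id
                'scan_height': 10,                   # rows of the depth image to use
                'range_min': 0.2,                    # meters
                'range_max': 10.0,                   # meters
            }],
        ),
    ])
```

The resulting scan topic can then feed Nav2’s costmap layers just like a LiDAR scan.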
This looks very interesting!
What’s the field of view of the 640x480 global depth cameras?
Thanks, @Adrian_Onsen
OV7251 Stereo Global Shutter Pair Specs:
DFOV: 85.6 deg.
HFOV: 72.9 deg.
VFOV: 57.7 deg.
And full specs below:
Thanks again,
Brandon