Yes, I think so. Would love to see it used in that application.
WRT the D435i: OAK-D’s premise is real-time spatial AI, whereas the RealSense depth cameras can’t do any neural inference onboard.
So here’s the lineage:
D4 chipset (Gen0): Intel D400 series (D410, D415, D435/i, D455): Depth only
Myriad 2 (Gen1): Intel T265: Depth + Tracking (i.e. great for SLAM)
Myriad X (Gen2): OpenCV AI Kit: Depth + AI + a whole slew of accelerated computer vision functions like H.265 encoding, stereo neural inference, etc.
So to get the equivalent functionality of OAK-D you’d have to buy 3 things:
- 12MP camera
- Depth camera (e.g. Intel D455)
- AI processor (e.g. NCS2)
And unlike previous depth or SLAM cameras, OAK-D isn’t for making maps of rooms or objects or for doing SLAM… it’s a new use case:
Giving real-time information on the physical location and size of objects (e.g. people, strawberries, fish, seeds when seeding a farm, etc.).
It supports two ways of doing this:
Monocular neural inference (e.g. off the color camera) fused with stereo depth. An example of that is below:
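To make the fusion concrete, here’s a minimal sketch of the math behind that mode: a 2D detection bbox from the color camera, combined with a depth map and pinhole intrinsics, gives a 3D location. This is not the DepthAI API (the camera does this onboard); the function name and the intrinsics are made up for illustration.

```python
# Sketch: fuse a 2D detection with a stereo depth map to get a 3D point.
# Standard pinhole back-projection; fx, fy, cx, cy are camera intrinsics
# in pixels, depth values are in metres. All numbers here are illustrative.

def bbox_to_xyz(bbox, depth, fx, fy, cx, cy):
    """bbox = (xmin, ymin, xmax, ymax) in pixels; depth is a 2D grid of
    metres. Returns (X, Y, Z) in metres for the bbox centre."""
    u = (bbox[0] + bbox[2]) / 2.0
    v = (bbox[1] + bbox[3]) / 2.0
    # Median depth over the ROI is more robust than a single pixel,
    # since stereo depth maps have holes (zeros) and outliers.
    roi = [depth[y][x]
           for y in range(int(bbox[1]), int(bbox[3]))
           for x in range(int(bbox[0]), int(bbox[2]))
           if depth[y][x] > 0]
    roi.sort()
    Z = roi[len(roi) // 2]
    X = (u - cx) * Z / fx   # back-project through the pinhole model
    Y = (v - cy) * Z / fy
    return X, Y, Z
```

This is why 2D-trained models are enough: the network only has to find the object in the image, and the depth map supplies the third dimension.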
Or stereo neural inference, like below:
So the monocular AI + stereo depth is useful for objects, while the stereo AI can be used for objects or for features (e.g. facial landmarks, pose information, etc.).
Both modes support standard 2D models (since 3D training data is extremely limited compared to 2D data) and use the stereo cameras to produce 3D results all onboard.
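The stereo-inference mode boils down to triangulation: run the same 2D network on both rectified cameras, match a landmark between them, and the disparity gives you depth. A rough sketch, with an assumed focal length and an illustrative ~7.5 cm baseline (not taken from any spec here):

```python
# Sketch: triangulate a landmark seen in both rectified stereo images.
# uL/uR are the landmark's x-coordinates in the left/right images (px),
# f is focal length (px), B is the stereo baseline (m). Illustrative only.

def triangulate(uL, vL, uR, f, B, cx, cy):
    d = uL - uR        # disparity in pixels (rectified images share rows)
    Z = f * B / d      # depth in metres: closer objects -> larger disparity
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return X, Y, Z
```

Same idea as before: the training data is 2D, and the geometry of the two cameras turns matched 2D results into 3D, all onboard.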
Sorry that was a bit long but I hope it helps!