
OpenCV AI Kit (OAK)

Here are two M12-lens ArduCam camera modules for anyone who needs to import them into Altium for evaluation.
ArduCam 3D model for the drop-in-replacement IMX219 camera module


ArduCam 3D model for the OV2311 camera module

As far as I know, die-package sensors are designed for mobile devices, which have strict low-profile requirements. Because the light enters the lens at a steep angle in order to cover the whole sensor surface, the sensor's CRA (Chief Ray Angle, set by the micro-lenses on top of the sensor) must match the CRA of the lens on the module. That's why the OV9282 is designed with a larger CRA, although its electrical performance is the same as the OV9281's.
Also, the fisheye module actually uses the OV9282; sorry for the confusion.

2 Likes

Thanks for both! We’ll be trying these out in Altium today.

Fits almost perfectly. It's just happenstance that the existing mounting hole lines up with the lens-housing mounting hole, but nice! We could make room for this on the design pretty easily, I think. We would need to move a few passive components and punch a hole (it doesn't need to be plated or have an annular ring).



This is just the physical M12 housing; the connector/MIPI pinout would still need to be worked out. Likely shorter cables, and then either changing the connector on the OAK-D or, if possible, changing the connector on the ArduCam module.

Thoughts?

1 Like

@ArduCAM @Luxonis-Brandon We’re calling y’all the postmen, because y’all deliver! Expedited shipping.


Now to figure out an open source calibration routine. I think the camera calibration packages could probably use some ROS 2 love.
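To make "calibration routine" concrete, here is a minimal pure-Python sketch (all names and numbers are hypothetical) of the quantity such a routine minimizes: the RMS reprojection error of known 3D points under a pinhole camera model. A real calibration package like OpenCV's additionally estimates distortion coefficients and board poses, which are omitted here for brevity.

```python
# Sketch of the objective a camera-calibration routine minimizes:
# RMS reprojection error under a pinhole model (no lens distortion).
import math

def project(point_3d, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    X, Y, Z = point_3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def rms_reprojection_error(points_3d, observed_px, fx, fy, cx, cy):
    """RMS distance between projected and observed pixel locations."""
    total = 0.0
    for p3d, obs in zip(points_3d, observed_px):
        u, v = project(p3d, fx, fy, cx, cy)
        total += (u - obs[0]) ** 2 + (v - obs[1]) ** 2
    return math.sqrt(total / len(points_3d))

# With the true intrinsics the error is zero; a calibration routine
# searches for the fx, fy, cx, cy (and distortion terms) that minimize it.
points = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.0, 0.0, 1.5)]
truth = [project(p, 800.0, 800.0, 640.0, 360.0) for p in points]
print(rms_reprojection_error(points, truth, 800.0, 800.0, 640.0, 360.0))  # 0.0
print(rms_reprojection_error(points, truth, 810.0, 800.0, 640.0, 360.0) > 0)  # True
```

A routine for a stereo device like OAK-D would minimize this jointly over both cameras plus the extrinsic transform between them.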

2 Likes

Heh. Thanks, and agreed WRT ArduCam. Very much looking forward to working with ArduCam on this. The NDVI applications alone, afforded by lenses with filters, will be super cool.

1 Like

Hi guys, I’ve checked in a working first draft of ROS2 support for the OpenCV AI Kit. It can be found here along with some setup instructions:

It’s essentially a ROS2 wrapper for the Python interface defined here: https://github.com/luxonis/depthai. It broadcasts a topic for each stream specified with the cliArgs parameter, and it also accepts any input parameter that depthai-demo.py accepts. The included demoListen component can be used as an example of how to receive those topics in your own ROS2 nodes.

For a more detailed list of arguments that can be passed, see the depthai-demo.py help, or add ‘help’ as a cliArg to the depthai_wrapper talker component.
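For anyone wanting to consume the wrapper's topics from their own node, a minimal rclpy subscriber might look like the sketch below. The topic name ('left') and the sensor_msgs/Image message type are assumptions; check the wrapper's setup instructions and the demoListen component for the actual names and types it publishes.

```python
# Hedged sketch of a minimal ROS2 subscriber for one of the wrapper's
# image streams. Topic name and message type are assumptions, not the
# wrapper's confirmed interface.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class OakListener(Node):
    def __init__(self):
        super().__init__('oak_listener')
        # Queue depth 10; swap 'left' for whichever stream you enabled.
        self.subscription = self.create_subscription(
            Image, 'left', self.on_image, 10)

    def on_image(self, msg):
        self.get_logger().info(
            f'got {msg.width}x{msg.height} frame, encoding={msg.encoding}')

def main():
    rclpy.init()
    node = OakListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```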

I’ll be offline for about a week but am interested in any comments and opinions you guys might have. I’m pretty new to both ROS2 and Python so I’d very much welcome suggestions and criticism.

Thanks for reading!

2 Likes

A new search about OAK brought me to this thread; the search was “OAK FISHEYE”. The reason for the search is that I’m starting a small robotics company focused on affordable R&D robots. My first product uses a D435, a T265, and an AI accelerator; its main features are AI and SLAM capability.
My second, more capable product uses from one to four ArduCam IMX477 cameras, a D455, and a T265.

They are customized to each customer’s requirements, so a while ago I looked into building a platform based on OAK, since I knew it was going to be successful and some customer would probably ask for it. Unfortunately, I found that with that FOV it could not be integrated into any of my mobile robots as the main sensor.
Two months later I searched again, in case some mod or alternative version had been released in the meantime, and what a surprise!

I’m very happy with what I’m reading here: happy that ArduCam will have a hand in it, happy that the decision is spot on for robotics requirements, and very happy with how the OAK team is listening. Just great.

If you allow me, I would like to add a few desirable features: a dynamic calibration feature (please check this feature of RealSense); an RGB camera aligned pixel-to-pixel with the depth frame; and, very important, global shutter, just like the D455. They are quite challenging features, but I think they deserve at least a study of their feasibility. An input via I2C would also be great (for example, to correct or receive external odometry), as would an external synchronization input, for example for multi-camera setups.
I’m looking forward to the new modification being released, so I can build a prototype and offer it to my future customers.
These features could turn OAK into a software-defined camera: for example, going from a spatial object-detection camera to a SLAM camera, a tracking camera, a CV camera, and, who knows, with the right hardware a semantic-SLAM camera (for which dynamic calibration would be great, almost necessary).

Best wishes,

Andrés Camacho

1 Like

Hi @FPSychotic,

Thanks for reaching out and for the kind words.

To summarize the requests:

  1. Dynamic calibration: intrinsics are calibrated at the factory, but extrinsics can be recalibrated without a calibration target. We will work to support this, though the timing is TBD.
  2. A color camera that matches the grayscale camera resolution exactly. We are actually already working on this with ArduCam: the OV9782 is color, global shutter, and the exact same resolution etc. as the grayscale OV9282. So we plan to make this an option, where all 3 cameras are the exact same resolution, view angle, etc., and all global shutter. We will likely have this in both an integrated camera-module variant and M12-mount variants (so you can do your own view-angle/fisheye/etc.).
  3. Synchronization between multiple OAK-Ds. We actually have I2C, UART, and SPI brought out on our System on Module (which OAK-D is built around) for this (and other) purposes: https://shop.luxonis.com/collections/all/products/bw1099 That said, we haven’t investigated this yet. The Gen2 Pipeline Builder we are making is quite relevant though, as it will give user code inside the Myriad X access to the SPI/UART/I2C interfaces: https://github.com/luxonis/depthai/issues/136 Note that this Gen2 Pipeline Builder functionality is planned for December, and perhaps it could be used for this timing/sync. We’d need to build and test the Gen2 Pipeline Builder first, and then see if it is precise enough for this.

Thoughts?

Thanks,
Brandon

1 Like

You understood my requests correctly; thanks for taking them into account. It’s great to know you are already working on them.
If you allow me a little more: you could install the modules without an IR filter and incorporate the filter into the M12 lens instead. That would allow using the external inputs to control an IR structured-light projector and an illuminator for night vision, both serially controlled. That sounds cool to me; I’m sure the DARPA guys would love it XD.
Another thing that would be great in ROS is solid info about TF and, if possible, URDF files.

My thoughts are that you have a great product and very talented partners in every area, and you have already identified the key points to hit. About the camera, there is not much to add at the moment, as it fits current research trends and requirements, and I don’t think the Myriad X will allow much more.

After being able to perform VSLAM, spatial object detection, and probably tracking (even if done by users or third parties), the next natural milestone should be semantic SLAM, which is a big gap in ROS/ROS 2. Obviously OAK’s Myriad X won’t be enough to perform it in real time; maybe you could enjoy developing a PCIe AI accelerator? Another good move could be teaming up with software developers such as RTAB-Map, Kimera, or Open3D to improve the new SLAM capability and help build a semantic-SLAM software suite, even if it has to run in post-processing.

You know, asking is free, XD? That is what I would ask for to build robots, or if I could develop a camera.

Sorry for such a long post in such basic English.

Best wishes,

Andrés Camacho

1 Like

Thanks @FPSychotic,

Yes, I really like the idea of IR versions of the cameras. In fact we have tested with IR-capable and IR-only modules; here are some examples. Those modules had a 15,000-unit MOQ, so the ArduCam solution will be much better. And we are working with ams to make an active-illumination version, starting with the BELICE-SD (see here).

In terms of TF and URDF: yes, great idea. We have STEP and Blender files available here, and I’m guessing it should be straightforward to make URDF and TF from those. We will look into it, but we are currently slammed with some additional support (e.g. see the laundry list here), so it might be a bit until we get to it.
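For anyone wanting to bootstrap this before official files exist, a URDF for a camera is small enough to generate with the standard library. The sketch below is hedged: the link/joint names, box dimensions, and mounting origin are placeholders, and real values would come from the published STEP models.

```python
# Hedged sketch: generating a minimal URDF fragment for a camera body
# using only the standard library. All names and dimensions are
# placeholders, not actual OAK-D measurements.
import xml.etree.ElementTree as ET

def make_camera_urdf(name='oak_d', size=(0.110, 0.055, 0.030)):
    """Return a URDF string: a base link plus a fixed camera joint."""
    robot = ET.Element('robot', {'name': name})
    ET.SubElement(robot, 'link', {'name': 'base_link'})
    cam = ET.SubElement(robot, 'link', {'name': f'{name}_frame'})
    visual = ET.SubElement(cam, 'visual')
    geom = ET.SubElement(visual, 'geometry')
    # Simple box visual standing in for the real mesh.
    ET.SubElement(geom, 'box', {'size': ' '.join(str(s) for s in size)})
    joint = ET.SubElement(robot, 'joint',
                          {'name': f'{name}_joint', 'type': 'fixed'})
    ET.SubElement(joint, 'parent', {'link': 'base_link'})
    ET.SubElement(joint, 'child', {'link': f'{name}_frame'})
    # Placeholder mounting pose: 10 cm forward, 20 cm up.
    ET.SubElement(joint, 'origin', {'xyz': '0.1 0 0.2', 'rpy': '0 0 0'})
    return ET.tostring(robot, encoding='unicode')

urdf = make_camera_urdf()
print(urdf)
```

With a URDF like this loaded by robot_state_publisher, the fixed joint is what populates the TF tree with the camera frame.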

And thanks for the kind words. :slight_smile:

WRT VSLAM: yes, with the wide-angle optics afforded by ArduCam, I think this will work well. We already have object tracking; see here.

In terms of performing more functions than the Myriad X can handle, OAK-D (and OAK-1, etc.) all run cleanly on the Jetson/Xavier series (and presumably on the Edge TPU, although we haven’t yet tested that).

And that sounds great on RTAB-Map, Kimera, and Open3D. Would love intros (even on this thread, if they’re here) to get this going. Once our feature-tracking support (here) is out and the ArduCam wide-angle OV9282 and OV9782 cameras are available, I think this could be extremely useful for the use cases you describe.

And you could pair it with Jetson/etc. if additional accelerated AI/CV is needed on the host.

You know, asking is free, XD? That is what I would ask for to build robots, or if I could develop a camera.

Invaluable feedback - thank you!

Sorry for such a long post in such basic English.

French and German are the only two I can take a stab at, and I think your English is better than both of mine. :slight_smile:

Thanks again,
Brandon

Thanks for your detailed answer; it’s just amazing to learn all that will be included in the ArduCam version, and the software side as well.

I’m very happy to know it will work with Jetsons and probably with the Coral TPU; my first prototype, which is already done, incorporates a USB TPU, and my next prototype will use a Jetson NX instead.

I will start working on an OAK version of them and follow the development more closely, beginning with the links you kindly gave me.
I think that if I have a mobile robot to perform SLAM and to develop that software area, ready at or shortly after the release of the new SLAM-capable version, it could help me win some customers while the company is starting out.
Maybe something very small, like this

Or a little bigger, like this


1 Like

Thanks @FPSychotic. I love seeing robotics like this. We are actually working with the University of Maui on their autonomous racing (evGrandPrix and the Indy Autonomous Challenge) with OAK-D (multiple of them).


Looking forward to this being on your platform!

Thanks again,
Brandon

Ooh, that is a cool and very nice platform; very high level indeed. I don’t know much about this kind of competition, but I know there is a lot of talent in them and, from what I’m seeing, good funding too. XD. Really nice, I love it. Thanks for the chat and pictures. Very interesting.

Thanks!

So these will very likely be using the Xavier NX as the base system for OAK-D. And actually, likely many Xavier NXs.

And the whole project will be released open source, so folks will be able to rebuild the whole thing if interested, or repurpose parts for a 1/10-scale application. :slight_smile:

1 Like

That sounds especially interesting to me; it is precisely what I’m doing right now: small, medium, and large affordable car-like robots for R&D companies and organizations such as universities.
The big prototype in the picture runs on a Xavier NX, with VSLAM, 360° AI, and a 360° lidar.

I’m not an especially big fan of the Nvidia ecosystem and their camera-partnership program, so it will be great to have a reliable software suite that is compatible with and suitable for my robots, and to not look back after avoiding ZED cameras on the Jetson platform. You’ve brought exactly the product that many of us need.


I don’t want to monopolize the thread with my opinions; I hope we will have another great chat like this.

Best wishes and take care; you know, it’s still 2020… XD

1 Like

Thanks! And those look great!

1 Like