Hi everyone! I’m Stuart from Luxonis, where we specialize in robotic vision systems and use ROS as an essential tool. It’s my first community post, and hopefully the first of many!
I mostly just wanted to introduce myself here and offer us up as a resource for any and all questions, to help make robotic vision as easy as possible, especially when navigating ROS in that context.
Anything and everything you may want to know related to depth, AI, ML, CV, and lots more…let me know! We’ve got tons of applications and examples to share and not enough places to share them. We’d also love to hear about any topics you’d be interested in us doing a deeper dive on.
Hi Stuart! In case you missed it, a couple of weeks ago I posted a tutorial on depth cameras in ROS, with a good chunk dedicated to interfacing with an OAK-D Lite.
Despite a couple of hiccups, I’m keen to keep making tutorials, demonstrating its capabilities, and using it in more projects!
Stuart, we would be interested in your services with our Espros TOF cameras. Our 3D TOF works at 100k lux, and we have some outdoor projects needing software support (www.espros.com). Please shoot me an email at uge@espros.com. Thanks, Uly Grisette
We were users of the Intel RealSense D415, and we are now trying to use the OAK-D Lite for the same application.
We were using the ROS MoveIt extrinsic calibration package for calibrating the camera joint (eye-in-hand) on our robotic arm.
This was well supported by the Intel RealSense ROS package: https://github.com/ros-planning/moveit_calibration
However, moving to the OAK-D Lite, this seems not so straightforward. Do you have any example implementation of the OAK-D Lite supported in MoveIt extrinsic calibration?
@joshnewans This is so great, thanks for sharing! I would love to share this on our social media channels to help get you some more visibility! Do you have a Twitter and/or LinkedIn handle so we can tag/credit you?
Hi @Manohar_Sambandam - I have a couple of possible suggestions for you. This may also be a good topic to move over to our Discord server so our support team can engage with you directly.
Here’s the answer I got back from our ROS expert:
We haven’t worked on this directly, but from this issue it looks like you may need to subscribe to the image topics.
If you’re looking to do this with the left/right cameras of the OAK-D Lite, you can use this launch file with depth_alignment and rectify disabled. If you’re using RGB, you can run it in the default configuration.
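For reference, a minimal sketch of what a wrapper launch file for that might look like, assuming a stereo launch file in the depthai_examples package and the argument names mentioned above — the exact file and argument names are assumptions here and may differ between depthai-ros releases, so check the package’s launch/ directory first:

```xml
<!-- Hypothetical ROS 1 wrapper launch file; the included file name and
     the depth_alignment / rectify argument names follow the reply above
     and may differ in your depthai-ros release. -->
<launch>
  <include file="$(find depthai_examples)/launch/stereo_node.launch">
    <!-- Disable depth alignment and rectification for raw left/right streams -->
    <arg name="depth_alignment" value="false"/>
    <arg name="rectify"         value="false"/>
  </include>
</launch>
```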
(I’m still getting the hang of what I have to do to get the YouTube embed to work properly in those posts, as some do and some don’t. I think it has to do with whether I used the short URL or not. I might try editing the LinkedIn post now to see if I can fix it. Edit: No luck, still no preview.)
@Manohar_Sambandam - Ah sorry for the confusion. I’d need you to go to Discord and engage there directly. My previous response was our best initial recommendation, but if you need follow-up support you’ll be able to get it there much faster.
Good dialog, as I am doing similar experimenting to emulate/replace the existing RealSense packages described for a ROS 2 Galactic-based Linorobot2, to enable using my OAK-D Lite. Since I have had good experience with the Luxonis Discord forum, I made a couple of question posts on its ROS channel based on my experiments with the depthai-examples launch files, some of which showed unexpected behavior. And to clarify: isn’t the launch file you referenced in the Aug 11 post for ROS 1, given that the ROS 2 launch files have a .py suffix?