
Power-over-Ethernet Spatial AI Sensor - Auto-Discovery

Hi there!

We're huge ROS fans here, but with very little hands-on experience so far, and we're wanting to get more into implementing support for ROS. The most we've done is this quick (now old/stale) stab at support here for the USB variant of DepthAI.

So we’re making a power-over-ethernet version of DepthAI (what that is, here) and we’re wanting to implement the Ethernet stack so that it’s maximally compatible with ROS if possible (since we’re starting from scratch, we might as well!).

So along those lines, we wanted to see if there is a standard auto-discovery UDP mechanism (or similar) in ROS that we should be implementing on DepthAI directly.

We were thinking of doing a UDP-broadcast discovery, for example, which would return device configuration information (firmware version, number of cameras, capabilities, etc.), but we weren't sure whether such an auto-discovery protocol already exists in ROS. This would be implemented directly into the firmware in DepthAI (so on the Myriad X).
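A minimal sketch of what such a UDP-broadcast discovery could look like. Note this is purely illustrative: the port number, magic string, and config fields are made-up assumptions, not any existing DepthAI protocol, and the demo runs over loopback (with the responder in a thread standing in for the device firmware) so it is self-contained:

```python
import json
import socket
import threading

DISCOVERY_PORT = 14495              # hypothetical port, for illustration only
DISCOVERY_MAGIC = b"DEPTHAI_DISCOVER"

def device_responder(ready):
    """Stand-in for the firmware: answer discovery probes with a JSON config blob."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", DISCOVERY_PORT))
    ready.set()
    data, addr = sock.recvfrom(1024)
    if data == DISCOVERY_MAGIC:
        config = {"firmware": "0.1.0", "cameras": 3,
                  "capabilities": ["depth", "neural-inference"]}
        sock.sendto(json.dumps(config).encode(), addr)
    sock.close()

def discover(timeout=2.0):
    """Host side: send a probe and return the first (address, config) reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    # On a real network this would target the subnet broadcast address,
    # e.g. ("255.255.255.255", DISCOVERY_PORT); loopback keeps the demo local.
    sock.sendto(DISCOVERY_MAGIC, ("127.0.0.1", DISCOVERY_PORT))
    data, addr = sock.recvfrom(4096)
    sock.close()
    return addr[0], json.loads(data.decode())

ready = threading.Event()
threading.Thread(target=device_responder, args=(ready,), daemon=True).start()
ready.wait()
ip, config = discover()
```

On a real network the host would loop collecting replies from every device on the subnet rather than taking the first one.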

From some quick googling we found zeroconf (here). Is this what we should be implementing against?

Or should we go ahead and make our own discovery protocol/implementation?

Thanks in advance,
Brandon & The Luxonis Team

And for a quick video of what DepthAI does, here's depth data overlaid on bounding boxes. DepthAI does the whole pipeline here, at 30 FPS. The depth data is optional, and just the metadata can be requested (i.e., object class and 3D position of the object). In this example the host used couldn't keep up with the full 1280x720 depth data, so it's shown at ~8 FPS here as a result:
[Video: Spatial AI]

And here's a similar version, but for finding the 3D location of facial landmarks in real time (running the face detector → facial landmark detector in series, and those together in parallel off of the global-shutter, synchronized stereo pair):

[Video: Spatial AI]


I just want to say I had a long talk with Luxonis last week after getting introduced by a friend. They are working on some super cool tech and a lot of it is open hardware. I am super excited to see the results.


Wow, the more I look at this, the cooler this gets. I’m going to have to get my hands on a couple of these.


I just introduced you two over e-mail. I think you’ll have a lot to talk about.


Thanks @smac for the kind words, and thanks equally to @Katherine_Scott for the e-intro!

So in terms of getting a unit to play with we do keep these in stock here: https://shop.luxonis.com/

The USB3C with Onboard Cameras (here) is usually my go-to recommendation, as it works w/ any host, has all 3 cameras onboard, and comes factory-calibrated for ease of getting up and running.

Thanks again,
Brandon and the Luxonis team!

What sort of computing power would be available for this?

One of the things which I’d like to do with ROS 2 is get rid of as many bridges (we call them drivers most of the time) as possible and “just” run a ROS 2 node (or something which pretends to be a ROS 2 node) directly on-device.

This could be a native node, DDS-XRCE, or something like Micro-ROS (although the latter typically includes a bridge again… though at least not a custom one).

That would solve the auto-discovery (in DDS/ROS 2 environments), and it would allow seamless, plug-and-play integration (i.e., no drivers to install, and none for you to maintain).

Super interesting, thanks!

So DepthAI runs a small RTOS with standard C++ library support. For example, AprilTags compiled with no errors on it (although we're porting that to run on the SHAVEs).

So is there a ROS 2 node C++ library (or example) we could try out, to see if it compiles and could work for auto-discovery etc. with ROS 2?

Thanks again for the help!
-Brandon

If the RTOS is Linux, then it should be enough to just write a ROS 2 node. All discovery-related stuff will be handled by the DDS.

Getting ROS installed on the device might be a bit more complicated. You can get inspired e.g. in https://github.com/ROBOTIS-GIT/ros2arduino .


This has been a sort-of goal of ROS since the very beginning, and when the ROS 2 effort started OSRF put some effort into it. I think it was @codebot who was mainly involved? Anyway, I agree. It would be awesome to plug in my newly unboxed ethernet camera or lidar or whatever and have it already running the ROS 2 nodes I need.

For me, and probably also for the original author, it would be very helpful to sketch out that idea a little bit further. Am I right in understanding that multiple DDS implementations can be interconnected? So as a hardware manufacturer I could decide to use Fast DDS (Fast-RTPS) while some of my customers might prefer Eclipse's DDS on their side? DDS-XRCE seems to require a bridge (the DDS agent); would that still support auto-discovery? And is there information on the hardware requirements for being a full DDS client? (The hardware I'm thinking about has a dual-core ARM or so.)

> If the RTOS is Linux, then it should be enough to just write a ROS 2 node. All discovery-related stuff will be handled by the DDS.

It's not Linux; it's a small RTOS called RTEMS. Googling for ROS support on it, I saw some questions and rumblings/suggestions, but nothing seems to exist so far.

> Getting ROS installed on the device might be a bit more complicated. You can get inspired e.g. in https://github.com/ROBOTIS-GIT/ros2arduino .

Thanks, this may be super helpful in terms of figuring out which bits we should run directly on DepthAI so that when plugged into a PoE (or Ethernet port + 5V power) it just shows up automatically in ROS 2.

We’ll investigate.

> This has been a sort-of goal of ROS since the very beginning, and when the ROS 2 effort started OSRF put some effort into it. I think it was @codebot who was mainly involved? Anyway, I agree. It would be awesome to plug in my newly unboxed ethernet camera or lidar or whatever and have it already running the ROS 2 nodes I need.

Yes! This is exactly what we’d like to make DepthAI do w/ ROS 2. Plug it into the same subnet, and BOOM ROS 2 node is there and ready to be used.

The ros2arduino link above seems like it might be enough to get us there, but we need to investigate more, including whether we can use it as a reference (or not) given the license (I know little about licensing other than the MIT License, which is what all of DepthAI uses… does Apache 2 allow use in closed-source firmware?). Anyway, it looks like C/C++, which is good, and depending on how much it hooks into other Arduino code (ideally not much), it may serve as a quick reference.

> For me, and probably also for the original author, it would be very helpful to sketch out that idea a little bit further. Am I right in understanding that multiple DDS implementations can be interconnected? So as a hardware manufacturer I could decide to use Fast DDS (Fast-RTPS) while some of my customers might prefer Eclipse's DDS on their side? DDS-XRCE seems to require a bridge (the DDS agent); would that still support auto-discovery? And is there information on the hardware requirements for being a full DDS client?

Yes, this would be super helpful. I tried googling around and digging into the protocols a bit this weekend, but got stuck, as I'm not yet familiar with the implementations inside ROS and the permutations thereof. So if someone has answers to these questions and/or pointers to articles that do, it would be super helpful for us as we implement this.

FWIW, we are OpenVINO-compatible (see Intel's official ROS-OpenVINO here), but unlike (I think) all other OpenVINO-compatible devices, DepthAI does not need a host to run (it can run neural models standalone). So the modality of using the ROS adapter is perhaps not ideal, considering DepthAI can be a standalone PoE device; as above, if it could show up as a ROS 2 node when plugged in, that would be a much easier/better UX.

Thoughts?

I think it could also be easy to support ROS 1. You'd just let the user set the ROS_MASTER_URI to be used, and possibly also ROS_HOSTNAME (or ROS_IP), and that'd be it. I'm not sure how easy it would be to compile ros_comm (the communications library) on the machine, but it could be doable.
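As a sketch of that user-configuration step: the helper function below is hypothetical (the device would expose these settings however it likes, e.g. over its config channel), but ROS_MASTER_URI and ROS_IP themselves are the standard ROS 1 environment variables mentioned above:

```python
import os

def make_ros1_env(master_uri, device_ip):
    """Build the environment a ROS 1 node needs to register with a remote master.

    master_uri: where the user's roscore is reachable.
    device_ip:  the address peers should use to call back into this device.
    """
    env = dict(os.environ)
    env["ROS_MASTER_URI"] = master_uri
    env["ROS_IP"] = device_ip
    return env

# Example values are made up; the user would supply their own.
env = make_ros1_env("http://192.168.1.10:11311", "192.168.1.42")
```

The device-side firmware would then start its ros_comm-based node with this environment so it registers with the user's master.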


I meant to say thanks @peci1 for the details here.

We do have standard C++ available… so I’m thinking ros_comm should be doable. But we’d have to just try!

Thanks again,
Brandon