
Edge AI Working Group

The Edge AI Working Group is launched. We welcome your contributions to this effort.

Objective: make Edge AI easier and ubiquitous in ROS 2

Overview: Edge AI, specifically ML, has important applications in ROS 2, for example navigation, perception and picking, inspection, and motion planning. We’ll work to integrate and support technology for ML, approaching this by assessing the current state and then working to make ML easier for ROS 2 users.

The WG seeks to do the following for ROS 2:

  1. perform a survey of ML use in ROS and ROS 2
  2. identify gaps and build a roadmap to close them
  3. make it easier to use machine learning in ROS 2
  4. enable machine learning on embedded processors found in typical mobile robots
  5. enable HW acceleration of ML when present e.g. CPU SIMD, GPUs, FPGAs

Edge AI WG:

  • Joe Speed @joespeed ADLINK Technology (WG leader, contributor)
  • Steve Macenski @smac Samsung (contributor)
  • Geoff Biggs @gbiggs (will assign) TierIV (contributor)
  • Aaron Blasdel @Aaron_Blasdel Amazon (member)
  • Harold Yang @hyang5, Lewis Liu @LewisLiuPub Intel (contributors)
  • Katherine Scott @Katherine_Scott Open Robotics (contributor)
  • Christoph Hellmann @c_h_s Fraunhofer IPA (contributor)
  • Sumandeep Banerjee @sumandeepb Rapyuta Robotics, Tokyo (contributor)
  • Zhen Ju @crystaldust Huawei (contributor)
  • Amit Goel amitgoel NVIDIA (member)
  • Bob A boba (contributor or member?)
  • Alex Tyshka atyshka (contributor or member?)
  • Adrian Bedford Adrian_Bedford (contributor or member?)
  • Jeremy Adams adamsj Object Computing (contributor or member?)

Edge AI WG agenda and meeting minutes

Please list packages that make use of ML here: ROS 2 packages using ML


Hi Joe,
I’ll check internally, but I believe Intel should be able to attend / contribute also.

I’ll let you know. I assume this would kick off in January 2020?


Something I’m really interested in, and hopefully something Intel can help with, is an apples-to-apples comparison of a few classes of machine learning models: OpenVINO-optimized, raw CPU, and TensorRT-optimized on a comparable NVIDIA GPU (which GPU is a question in itself, but I have a Jetson Nano and a TX2, and I’m sure I could get access to a Xavier if we wanted to run the gamut).

In particular, starting from a common baseline TensorFlow model: instructions for how to run it, instructions for how to optimize it with OpenVINO, and instructions for how to optimize it with TensorRT (which I can provide). That way you have a full document for “hey, I trained this model, and now I want to deploy it on [all the major options]”, as well as the instructions we can use to reproducibly get test results for different classes of models.

Now what classes of models am I interested in?

  • Some robotics-capable CNN detection model. Preference for SSD MobileNet V1/V2 or Inception. This class of model is typical and will give a good baseline for DL solutions that are generally capable of running on the edge on NVIDIA hardware, but I’d love to know if I can run it on Intel CPUs as well.
  • Some lower-dimensional detection or classification model. Think: IMU, laser scan, or a narrow/shallow CNN. This class of model is interesting for testing simpler problem sets, seeing how much the optimizations really help, and again with an eye toward “can I run this tractably on my i5 CPU” without needing a separate GPU.
  • [other] I’m sure there’s another good extreme to test here with other types of machine learning interests.
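To make those comparisons reproducible, the timing harness itself can stay backend-agnostic. Below is a minimal sketch, assuming each backend (raw TensorFlow, OpenVINO IR, TensorRT engine) is wrapped in a plain `infer(x)` callable; `dummy_cpu_infer` and all names here are hypothetical stand-ins, not any real toolkit API:

```python
import statistics
import time

def benchmark(infer, make_input, warmup=10, runs=100):
    """Time a single-sample inference callable; report latency stats in ms."""
    for _ in range(warmup):              # let caches and clocks settle
        infer(make_input())
    latencies = []
    for _ in range(runs):
        x = make_input()
        start = time.perf_counter()
        infer(x)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * runs) - 1],
    }

# Hypothetical backend: in practice this would wrap the same baseline
# TensorFlow model exported to each target runtime.
def dummy_cpu_infer(x):
    return sum(v * v for v in x)         # stand-in for a real forward pass

stats = benchmark(dummy_cpu_infer, lambda: [0.5] * 1024)
print(f"mean {stats['mean_ms']:.3f} ms, p95 {stats['p95_ms']:.3f} ms")
```

Running the same harness with each backend’s `infer` would give directly comparable numbers per model class.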

This is a really valuable investigation but perhaps not directly ROS related


Could someone explain to me what the difference is between “Edge AI” and perception / computer vision? How would this be different than a perception working group? I would be more than glad to help out but I would prefer we keep the buzz words to a minimum.


I think it’s just buzzwords plus the specific application of AI concepts on embedded vs. cloud hardware (CPU, GPU, FPGA, AI accelerator…). I think the perception working group would operate at a higher level on perception tasks, which may also include some AI concepts but also more general perception frameworks/SLAM/etc.


Great! Yes, the intent is to kick off in January 2020

@joespeed One suggestion for a topic of discussion would be better support for GPU builds of software. I recall frustration when trying to use OpenCV compiled with CUDA for ROS, as many packages depend on it and they all would have to be built from source. Similarly, PCL offers limited CUDA functionality, but it runs into the same source-compilation issues as OpenCV. If we want Edge AI to be a big part of ROS 2, we need better support for GPU acceleration, and CUDA is where a large part of the community is right now. GPU compute opens up so many possibilities for ROS, and it’s currently underutilized. I would like to see alternate builds of binaries with CUDA support. I understand such an endorsement of proprietary software might be a controversial step, but it would be immensely useful.
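For context, the source build being described looks roughly like this. The CMake flags are the standard ones from OpenCV’s build documentation, but the repository checkout, architecture value, and install prefix are assumptions to adjust for your own board:

```shell
# Assumed versions and paths -- adjust for your board.
# CUDA_ARCH_BIN=5.3 targets a Jetson Nano (Maxwell); use your GPU's value.
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DWITH_CUDA=ON \
  -DOPENCV_DNN_CUDA=ON \
  -DWITH_CUDNN=ON \
  -DCUDA_ARCH_BIN=5.3 \
  -DCMAKE_INSTALL_PREFIX=/usr/local
make -j"$(nproc)"
sudo make install
```

Every downstream ROS package that links OpenCV then has to be rebuilt against this install, which is exactly the pain point that binary CUDA builds would remove.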


@atyshka sounds like something you’re passionate about! You should join the working group and help make that a reality!


@atyshka CUDA libraries are proprietary but endemic. Doing something about it maybe fits in #5. @smac is right, you can be part of the solution😁

Hello, I would be interested in contributing to this effort. My background is on the FPGA embedded systems side, and I am working on a basic Ultra96 project to this end.


Bob Anderson


This project supports the following features:

  • It supports the ROS and ROS 2 frameworks.
  • It supports Intel OpenVINO and encapsulates the utilities of OpenCV.
  • It supports Intel CPU / GPU / FPGA (although FPGA is not fully tested).
  • It supports SSD MobileNet, YOLOv2, Mask R-CNN, and other Intel-trained models as well as public pre-trained models.

I hope our work about ROS/ROS2 OpenVINO Toolkit projects can contribute a bit to this topic.


@LewisLiuPub loved seeing that work in the ROSCon Day 2 talk and the MoveIt Macau workshop; @mkhansen is working on having Intel contribute to the Edge AI WG.

Hi Katherine,
Imagine many autonomous cars all being driven by NNs. Each of these cars could gather vast data sets from all their cameras, lidars and other sensors and transmit this data back to the manufacturer. This would be a prohibitively huge amount of data, but would be useful for training and improving the next generation of NNs for the cars.

If, instead, only small improvements, i.e. changes in the weights of the NN, were transmitted back, these could be averaged over all the cars, and the improved weights could then be sent back out to the cars.

All the cars will be learning from each other’s experiences. This is just one example.
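The averaging step described above can be sketched in a few lines. This is an illustrative toy (plain Python lists standing in for real weight tensors, three cars standing in for a fleet), not any real vehicle protocol:

```python
def federated_average(weight_updates):
    """Average the per-parameter updates reported by each vehicle."""
    n = len(weight_updates)
    num_params = len(weight_updates[0])
    return [sum(u[i] for u in weight_updates) / n for i in range(num_params)]

global_weights = [0.10, -0.20, 0.05]   # current fleet-wide model weights
updates = [                            # small weight deltas from three cars
    [0.01, 0.00, -0.02],
    [0.03, -0.01, 0.00],
    [0.02, 0.01, -0.01],
]

avg = federated_average(updates)
new_weights = [w + d for w, d in zip(global_weights, avg)]
print(new_weights)  # improved weights pushed back to every car
```

Only the small `updates` lists ever leave the cars; the raw camera and lidar data that produced them stays on board.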


Hi Joe,

We here at Fraunhofer IPA are also interested in the topic of how to deploy AI modules to robotic systems running ROS. Recently, we started a research project that touches on this area.
I think this can be one major differentiator of ROS in the future compared to other industrial robot systems.

We should certainly also focus on FPGA and neuromorphic chips as this seems to be an upcoming topic.

Best regards


Great proposal! Are there any designs for the architecture yet? If possible, I would like to contribute: we have a smart device that accelerates deep neural networks, and I hope the architecture will support multiple devices.


Sounds lovely, happy to include you

Hi Folks, Please pick the times that can work for you next week, thanks!

Hi @joespeed, great initiative. This is in line with something I have been ideating on for the past few months. I have also jotted down some ideas and approaches to attack this type of task. I am a relative newcomer to ROS but have an extensive background in a variety of computer vision / machine learning / deep learning algorithm and product development, including academic research and working on my own and others’ startups. I definitely want to contribute and be part of this.

I have worked on Intel Movidius, Jetson, and Tegra GPU platforms, sensors such as RealSense, and embedded boards such as Raspberry Pi, Tinker Board, etc. Portability and reusability of implementations, and compute hardware / sensor support across widely varying platforms, have been common challenges I have faced and continue to face. Anything I can do to help the greater community and speed up the overall pace of work in this domain is very much my intent.

I am currently working on Robotics Perception Problems at Rapyuta Robotics, Tokyo. We are a cloud robotics platform based on ROS. Kindly have a look at my profile


Edge AI WG kick off will be 2020-02-27T14:00:00Z. Excited to form a plan together and make ROS better!

Here is the work-in-progress agenda; please add your interests, agenda items, and what you’d like to do. If an item is long, please give a brief description and link(s) to more details.

Please join the Edge AI WG (thanks Tully!) to receive the meeting invite. I’ll assume Zoom is OK, à la the Real-Time WG, right? It will include a dial-in option, but quality is best with Internet audio and a headset. Here are two headsets with great mics:


Don’t forget to add the meeting to the ROS events calendar so we don’t miss it!
