@joespeed One suggestion for a topic of discussion would be better support for GPU builds of software. I recall the frustration of trying to use OpenCV compiled with CUDA for ROS: many packages depend on it, and they would all have to be built from source. Similarly, PCL offers limited CUDA functionality, but it runs into the same source-compilation issues as OpenCV. If we want Edge AI to be a big part of ROS2, we need better support for GPU acceleration, and CUDA is where a large part of the community is right now. GPU computing opens up so many possibilities for ROS and is currently underutilized. I would like to see alternate builds of binaries with CUDA support. I understand such an endorsement of proprietary software might be a controversial step, but it would be immensely useful.
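For context, a from-source CUDA build of OpenCV typically looks something like the sketch below (the flag values here are illustrative assumptions, e.g. compute capability 7.2 for a Jetson-class device, and the install prefix is made up). This is the step every dependent ROS package would then have to be rebuilt against, which is exactly the pain point:

```shell
# Hypothetical OpenCV source-build configuration with CUDA enabled.
# Run from an out-of-source build directory inside the OpenCV checkout.
cmake -D WITH_CUDA=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D CUDA_ARCH_BIN=7.2 \
      -D CMAKE_INSTALL_PREFIX=/opt/opencv-cuda \
      ..
make -j"$(nproc)"
sudo make install
# Every ROS package linking against OpenCV must now be rebuilt
# against /opt/opencv-cuda instead of the distro binaries.
```

Prebuilt CUDA-enabled binaries would make this entire step, and the cascade of source rebuilds it triggers, unnecessary.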
@atyshka sounds like something you’re passionate about! You should join the working group and help make that a reality!
Hello, I would be interested in contributing to this effort. My background is on the FPGA embedded systems side, and I am working on a basic Ultra96 project to this end.
This project supports the following features:
- It supports both the ROS and ROS 2 frameworks.
- It supports Intel OpenVINO and encapsulates OpenCV utilities.
- It supports Intel CPU / GPU / FPGA (although FPGA is not fully tested).
- It supports SSD MobileNet, YOLOv2, Faster R-CNN, and other Intel-trained models as well as public pre-trained models.
I hope our work on the ROS/ROS2 OpenVINO Toolkit projects can contribute a bit to this topic.
Imagine many autonomous cars all being driven by NNs. Each of these cars could gather vast data sets from all their cameras, lidars and other sensors and transmit this data back to the manufacturer. This would be a prohibitively huge amount of data, but would be useful for training and improving the next generation of NNs for the cars.
If instead only the small improvements, i.e. the changes in the weights of the NN, were transmitted back, these could be averaged over all the cars, and the improved weights could then be sent back to the cars.
All the cars would be learning from each other’s experiences. This is just one example.
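The averaging step described above can be sketched in a few lines. This is a toy illustration of federated averaging; the function name, model size, and numbers are all hypothetical:

```python
def federated_average(global_weights, car_updates):
    """Average per-car weight deltas and apply them to the global model."""
    n = len(car_updates)
    # Element-wise mean of the deltas reported by each car.
    averaged_deltas = [sum(deltas) / n for deltas in zip(*car_updates)]
    # Apply the averaged update to the shared weights.
    return [w + d for w, d in zip(global_weights, averaged_deltas)]

# Three cars each report a small local improvement for a two-weight model.
global_weights = [0.5, -0.2]
car_updates = [
    [0.03, -0.01],   # car 1's local weight changes
    [0.01,  0.02],   # car 2
    [-0.01, 0.02],   # car 3
]
new_weights = federated_average(global_weights, car_updates)
print(new_weights)  # the improved weights sent back to all the cars
```

Only the small update vectors travel over the network, not the raw sensor data, which is what makes the scheme bandwidth-feasible.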
We here at Fraunhofer IPA are also interested in the topic of how to deploy AI modules to robotics systems running ROS. Recently, we started a research project that touches on this area.
I think this can be one major differentiator of ROS in the future compared to other industrial robot systems.
We should certainly also focus on FPGA and neuromorphic chips as this seems to be an upcoming topic.
Great proposal! Are there any designs for the architecture yet? If possible, I would like to contribute; we have a smart device that accelerates deep neural networks, and I hope the architecture will support multiple devices.
Sounds lovely, happy to include you
Hi Folks, Please pick the times that can work for you next week, thanks!
Hi @joespeed, great initiative. This is along the lines of something I have been ideating on for the past few months, and I have jotted down some ideas and approaches for attacking this type of task. I am relatively a newcomer to ROS, but I have an extensive background in a variety of computer vision / machine learning / deep learning algorithm and product development, including academic research and working on my own and others’ startups. I definitely want to contribute and be part of this.
I have worked on Intel Movidius, Jetson, and Tegra GPU platforms, sensors such as RealSense, and embedded boards such as the Raspberry Pi, Tinker Board, etc. Portability and reusability of implementations, and compute hardware / sensor support across widely varying hardware platforms, has been a common challenge that I have faced and continue to face. Anything I can do to help the greater community and speed up the overall pace of work in this domain is very much my intent.
I am currently working on Robotics Perception Problems at Rapyuta Robotics, Tokyo. We are a cloud robotics platform based on ROS. Kindly have a look at my profile https://www.linkedin.com/in/sumandeep-banerjee-1436a17/
Edge AI WG kick off will be 2020-02-27T14:00:00Z. Excited to form a plan together and make ROS better!
Here is the work-in-progress agenda; please add your interests, agenda items, and what you’d like to do. If an item is long, please give a brief description and link(s) to more details.
Please join the Edge AI WG (thanks Tully!) to receive the meeting invite. I’ll assume Zoom is OK, à la the Real-Time WG, right? It will include a dial-in option, but quality is best with Internet audio + a headset. Here are two headsets with great mics:
Don’t forget to add the meeting to the ROS events calendar so we don’t miss it!
more info about WG here https://discourse.ros.org/t/proposed-edge-ai-wg/12011
For best audio quality join Zoom Meeting
Meeting ID: 989 101 4875
One tap mobile
+16699006833,9891014875# US (San Jose)
+14086380968,9891014875# US (San Jose)
Dial by your location
+1 669 900 6833 US (San Jose)
+1 408 638 0968 US (San Jose)
+1 646 876 9923 US (New York)
+44 203 051 2874 United Kingdom
+44 203 481 5237 United Kingdom
+44 203 481 5240 United Kingdom
+44 131 460 1196 United Kingdom
+49 69 7104 9922 Germany
+49 30 5679 5800 Germany
+49 695 050 2596 Germany
+82 2 6105 4111 Korea, Republic of
+82 2 6022 2322 Korea, Republic of
+81 524 564 439 Japan
+81 3 4578 1488 Japan
Find your local number: https://zoom.us/u/aARly0t9u
The next Edge AI WG call is 2020-03-12T14:00:00Z; details are in the ROS events calendar.
Zoom conference, use Internet audio and a headset for best results https://zoom.us/j/9891014875
Agenda is here
please contribute to list of packages using ML https://discourse.ros.org/t/packages-using-ml
(yes, we discussed making the list in GitHub instead, but this was easiest to get started)
As for ROS2 cv_bridge, I have discussed it with its maintainer (Ethan Gao) and tested the guideline; it works with the latest ROS2 system and likely doesn’t need updating for general use.
Furthermore, I updated its code to support OpenCV 4.x and submitted a PR, which is under code review now.