Please let me know your thoughts on the proposal below.
Preface
The mobility industry is being heavily impacted by the COVID-19 virus, as this McKinsey study details. As a result, we now see many professionals in the automotive industry:
working from home
studying from home
being laid off
working reduced hours
Past experience has shown that such situations lead individuals to invest in their education, primarily in new and upcoming technologies.
Proposal
Create an online class for two upcoming technologies, ROS 2 and Autoware, as used to program self-driving cars.
In this class the participants will learn about:
ROS 2 - a framework and a set of tools to program self-driving cars
Autoware - a stack of drivers, algorithms and tools to program self-driving cars on top of ROS 2
Complementary tools for code analysis & optimization, physics-based simulation, system for data recording, storage and analysis, HD maps, …
Best practices for automotive coding, testing, validation and verification
This will be an intermediate-level class targeted at individuals who develop pre-production autonomous driving systems. Participants should have knowledge of C++ (including testing), robotics frameworks, and system integration. An ideal participant would have completed the https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013 course.
Syllabus (note: proposed lecturers are in bold):
Development Environment (Apex.AI / StreetScooter - )
ROS 2 101 - (OSRF - )
ROS 2 Tooling - (OSRF - )
Platform (HW, RTOS, DDS): (ADLINK - )
Architectures of Autonomous Driving Stacks (Virtual Vehicle Research Center - )
@Sanjay_Krishnan find out about translation of the courseware (videos, documentation) into other languages. Based on the Autoware user base, these should start with Japanese and Chinese.
Sounds excellent. I would be happy to present! We might want to consider a primer on ADE / Docker too. Many users might not be familiar enough with Docker to do the things in ADE that we commonly hear requests for, like developing Autoware modifications in a private fork, saving changes made to the ADE image, etc.
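To illustrate the kind of content such a primer could cover, here is a minimal sketch of saving changes made inside a running ADE container as a reusable Docker image. The `ade start` / `ade enter` subcommands come from the ADE CLI; the image name `my-ade-snapshot` and the container ID are placeholders, so treat this as an illustration rather than the official workflow.

```shell
# Start and enter the ADE environment (ADE CLI subcommands).
ade start
ade enter

# ... inside the container: install packages, tweak tools, etc. ...

# From the host: find the running ADE container and snapshot it as an image.
docker ps                                   # note the ADE container's ID
docker commit <container-id> my-ade-snapshot:latest   # placeholder names

# Verify the snapshot exists; it can then be referenced from an .aderc file
# so future "ade start" runs pick up the customized image.
docker images | grep my-ade-snapshot
```

The same primer could also cover keeping such customizations in a Dockerfile in a private fork instead, which is easier to review and reproduce than ad-hoc `docker commit` snapshots.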
I think something missing is a thesis statement of what you want people to understand/apply at the outset. With a thesis statement, I think a number of your open questions would have clearer answers. It would also give participants correct expectations going in, for example: whether it's a survey overview of autonomous driving or Autoware specifics only.
One topic that would be great too, and isn't in the proposed list, is architecture. Giving several examples of how these technologies intertwine would be helpful, especially when looking at common situations like highway driving vs. free space vs. in-town, or when analyzing the system architectures used by different projects and companies (Autoware, Waymo, etc.).
Hi @Dejan_Pangercic, sounds very appropriate and timely!
I think we should also be ready to answer the question "I want to test on hardware" and point to a recommended platform. We are building this as we write, but having a pointer should help. Let me know if you want me to suggest a couple of slides on that.
I am very excited about this class.
I have a few suggestions for your open questions.
1 - Does the class make sense?
Yes, this would definitely help everyone.
2 - Which educational platform to select?
I think ROS Development Studio would be great; there everyone can practice directly while learning. That said, I'm not familiar with other platforms.
6 - Where to advertise the class?
The Construct organizes ROS live classes every week and hundreds of students attend live. They have a good network, so it would be a great place to advertise this class.
I like the initiative too and it makes sense. There is a lot of logistics to handle, though; I hope we have the resources.
For platform, ROS Development Studio might be too slow to run a full AV stack with LGSVL, in their free tier. It would be great to have Constructsim or Amazon AWS sponsor more free credits for this class.
I think RoboMaker would be the most likely to provide a robust platform, simply because AWS owns a significant portion of the world's compute.
However, even LGSVL requires a significant amount of computing power. A better option could be to handle the perception part of the class on pre-recorded rosbags, and use a simpler simulator to do planning and control.
What sort of logistics help is needed?
As for advertising, I’m not sure what your thoughts are, but Hexagon/AutonomouStuff could help promote on social media.
> For platform, ROS Development Studio might be too slow to run a full AV stack with LGSVL, in their free tier. It would be great to have Constructsim or Amazon AWS sponsor more free credits for this class.
I have written to Constructsim, Amazon AWS, and Udacity. Let's wait and see what comes back.
> However, even LGSVL requires a significant amount of computing power. A better option could be to handle the perception part of the class on pre-recorded rosbags, and use a simpler simulator to do planning and control.
That is indeed a very good point. RoboMaker on AWS already has Gazebo integrated out of the box. I will add your point to the open questions.
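For illustration, the rosbag-based perception workflow could be as simple as the following ROS 2 CLI sketch. The topic names and the bag name `perception_session` are placeholders for whatever the course vehicle actually publishes.

```shell
# Record the sensor topics needed for the perception exercises
# (topic names are placeholders).
ros2 bag record /lidar/points /camera/image_raw -o perception_session

# Inspect what a recording contains before distributing it to students.
ros2 bag info perception_session

# Students replay the recording instead of running a heavy simulator,
# then develop and test their perception nodes against the live topics.
ros2 bag play perception_session
```

Distributing a few curated bags would also make the exercises reproducible: every student works against exactly the same sensor data, independent of simulator performance on their machine.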
We are starting pretty much from zero, so the entire logistics still needs to be figured out (participant sign-up, where to run/show the code, live vs. pre-recorded, …). However, I want to first see:
Happy to contribute to the DDS courses and even training material. In the past I wrote a little tutorial, which is on GitHub and licensed under CC. Let me know how we can contribute.
Great initiative and I feel like there will definitely be an audience out there if this course becomes reality.
I’m not in the autonomous car industry but in the mobile robotics industry, and I feel there would be some benefit for the mobile robotics community in learning how (and if) they can make use of Autoware and all the associated tools and HW/SW components, also relating to what @smac said about architecture.
I feel there might be an opportunity for some trickle-down software that could benefit the mobile robotics community, but it’s hard to get an understanding of the individual components of a complex stack like Autoware and whether they can be used outside the autonomous car industry.
E.g.
Is it possible to use HD maps and NDT localization outside of road networks, or is there some tight coupling here?
What is the process of mapping a new area, and can you localize in this map without having modified the map with lanes, traffic signs etc.?
Is the global planner only for road networks?
Is the local planning / motion control only for car-like vehicles that follow roads or can it be used with other kinematics?
Is the object perception pipeline for different sensor modalities stand alone?
@Kasper_Jeppesen I think a bunch of your questions can actually be answered by looking over the Autoware repo; they have pretty decent readmes and file structure. To my understanding, things aren’t particularly pluggable at this moment in time, but it’s been a topic of conversation between @gbiggs and me (amongst others). There’s certainly some opportunity to harmonize these stacks somewhat, or to share common interfaces / create adaptors.
The goal of this course, as written above, appears to be focused on Autoware for its intended use cases, so I wouldn’t want to distract from that by talking about the mobile robot use case. That seems like a conversation for another thread.
Thanks for bringing up this great initiative. LG will support on the simulation side.
I have some experience taking courses like Udacity’s in the areas of self-driving cars and robotics. I think the ease of use of the development environment is key for the students and for the success of the course. There are many things to figure out if we consider a commercial cloud-based platform such as RoboMaker or theConstructSim: not just the cost of using their resources, but also technical issues such as loading time and latency, sharing the content, etc. For relatively simple robotics applications their platforms might be OK, but autonomous driving needs high-fidelity environments (3D maps, HD maps, vehicles, traffic, etc.) for perception, planning, and more, as well as performance sufficient to guarantee correct execution of the AD system.
So I think running the simulator on the user’s local machine would be much more feasible, with the content (maps, vehicles, test scenarios, etc.) shared or provided via the cloud.
We are working on this right now and are willing to provide the simulation environment, including sample maps, vehicles, test scenarios, etc.
For clarification, the course described above will likely be about the Autoware.Auto project (ROS2-based) rather than the Autoware.ai project (ROS1-based).
Autoware.Auto is a from-scratch rewrite of the Autoware stack using ROS 2 (currently Dashing), designed with as many industry best practices as we can possibly gather. Both are reference implementations of an autonomous vehicle software stack, but Autoware.Auto is where most of the development effort of The Autoware Foundation and its membership is currently focused.
Documentation for Autoware.Auto can be found on gitlab.io.