Autoware Online Class - RFQ

Please let me know your thoughts on the proposal below.

Preface

The mobility industry is being heavily impacted by COVID-19, as this McKinsey study details. As a result we now see many professionals in the automotive industry:

  1. working from home
  2. studying from home
  3. being laid-off
  4. working reduced hours

Experience from past downturns has shown that such situations lead individuals to invest in their education, primarily in new and upcoming technologies.

Proposal

Create an online class on two upcoming technologies, ROS 2 and Autoware, as used for programming self-driving cars.

In this class the participants will learn about

  1. ROS 2 - a framework and a set of tools to program self-driving cars
  2. Autoware - a stack of drivers, algorithms and tools to program self-driving cars on top of ROS 2
  3. Complementary tools for code analysis and optimization, physics-based simulation, data recording, storage and analysis, HD maps, …
  4. Best practices for automotive coding, testing, validation and verification

This will be a medium-level class targeted at individuals who develop pre-production autonomous driving systems. Participants should have knowledge of C++ (including testing), robotics frameworks, and system integration. An ideal participant would have completed the https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013 course.
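As a rough, non-authoritative illustration of that baseline, the sketch below shows a minimal ROS 2 (rclcpp) node in C++. The node name, topic name, and message content are placeholders made up for this example, not part of the class material:

```cpp
// Minimal ROS 2 (rclcpp) publisher node, roughly Dashing-era API.
// Node name, topic name and message content are placeholders for illustration.
#include <chrono>
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

class HelloAutowareNode : public rclcpp::Node
{
public:
  HelloAutowareNode()
  : Node("hello_autoware")
  {
    publisher_ = create_publisher<std_msgs::msg::String>("status", 10);
    timer_ = create_wall_timer(
      1s,
      [this]() {
        std_msgs::msg::String msg;
        msg.data = "hello from the online class";
        publisher_->publish(msg);
      });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<HelloAutowareNode>());
  rclcpp::shutdown();
  return 0;
}
```

A participant who is comfortable writing, building, and testing a node like this should be well placed to follow the class.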

Syllabus (proposed lecturers in parentheses):

  1. Development Environment (Apex.AI / StreetScooter - :white_check_mark:)
  2. ROS 2 101 - (OSRF - :white_check_mark:)
  3. ROS 2 Tooling - (OSRF - :white_check_mark:)
  4. Platform (HW, RTOS, DDS): (ADLINK - :white_check_mark:)
  5. Architectures of Autonomous Driving Stacks (Virtual Vehicle Research Center - :white_check_mark:)
  6. Autoware 101: (AWF - Josh - :white_check_mark:)
  7. Object Perception LiDAR: (Apex.AI - :white_check_mark:)
  8. Object Perception Camera: (TBD)
  9. Object Perception Radar: (TBD)
  10. Sensor Fusion: (Tier IV)
  11. Localization (TF, NDT matching): (AWF - Josh - :white_check_mark:)
  12. Simulation: (LGSVL - :white_check_mark:)
  13. HD maps: (Parkopedia - :white_check_mark:)
  14. Global Planning: (Embotech - :white_check_mark:)
  15. Local planning and control: (Embotech - :white_check_mark:)
  16. Data storage and analytics: (Ternaris - :white_check_mark:)
  • :white_check_mark: - confirmed

Platform:

  1. Personal computers with ade-cli
  2. https://www.theconstructsim.com/
  3. https://aws.amazon.com/robomaker/
    1. Related: AWS RoboMaker online demo
  4. Udemy
  5. Coursera
  6. Udacity
  7. YouTube (pre-recorded) - :white_check_mark:
    1. Every lecture will have a .md file in https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/ that the students will be able to follow
    2. Every lecturer will follow this .md file and record a video, which will be uploaded to the YouTube channel
    3. Lecturers will use ade-cli

Audience:

  1. Anyone interested in building mobility applications

Frequency

  1. Once per week

Open questions / Feedback

  1. Does the class make sense - :white_check_mark:
  2. Which educational platform to select - Youtube - :white_check_mark:
  3. Who can organize the class - TheConstructSim will do the logistics
  4. Who is willing to help with the logistics - TheConstructSim will do the logistics - :white_check_mark:
  5. Do we have too few/too many topics in the Syllabus - :white_check_mark:
  6. Where to advertise the class
    1. discourse.ros.org
    2. LinkedIn
    3. …
  7. Can we get the basic features for AVP2020 in? - :white_check_mark:
  8. …

Action Items

  1. Get confirmation from the remaining lecturers => @Dejan_Pangercic
  2. Select the educational platform - see above
  3. Create an exact syllabus => TheConstructSim and lecturers
  4. Create a release of Autoware.Auto for the class => @gbiggs
  5. Find out how to advertise the class - see above
    1. Find more advertisement channels => Nicole, Sanjay
  6. Select the dates - May 11 to start
    1. 1st class - Dejan + Tobias
    2. 2nd & 3rd class: @Katherine_Scott
  7. Create a template .md file and instructions for it => @Dejan_Pangercic
  8. Create instructions for how to make a video for each class => TheConstructSim + Nicole
  9. Create a simple website for the class => Nicole
  10. @Dejan_Pangercic to check if we need a new .aderc volume for the class
  11. @Katherine_Scott to look if there is a good markdown to presentation exporter
  12. @gbiggs Clean-up https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/
  13. @Sanjay_Krishnan find out about translation of the courseware (videos, documentation) into other languages. Based on the Autoware user base, these should start with Japanese and Chinese.
26 Likes

Sounds excellent. I would be happy to present! We might want to consider a primer on ADE / Docker too. Many users might not be familiar enough with Docker to do the things in ADE that we commonly hear requests for, like developing Autoware modifications in a private fork, saving changes made to the ADE image, etc.

2 Likes

I think something missing is a thesis statement, at the outset, of what you want people to understand/apply. With a thesis statement, I think a number of your open questions would have clearer answers. It would also give participants the correct expectations going in, for example whether it's a survey overview of autonomous driving or Autoware specifics only.

One topic that would also be great but isn't in the proposed list is architecture. Giving several examples of how these technologies intertwine can be helpful, especially when looking at multiple common situations like highway vs. free space vs. in-town driving, or when analyzing the system architectures used by different companies (Autoware, Waymo, etc.).

1 Like

Hi @Dejan_Pangercic, sounds very appropriate and well timed!
I think we should also be ready to answer the question "I want to test on hardware" and to point to / recommend a platform. We are building this as we write, but having a pointer should help. Let me know if you want me to suggest a couple of slides on that.

1 Like

I am very excited about this class.
I have a few suggestions to your open questions

1 - Does the class make sense
Yes, this would definitely help everyone.

2 - Which educational platform to select
I think ROS Development Studio would be great; there everyone can practice directly while learning. I'm not familiar with the other platforms, though.

6- Where to advertise the class
The Construct organizes live ROS classes every week and hundreds of students attend live. They have a good network, so it would be the best place to advertise this class.

5 Likes

I like the initiative too and it makes sense. There is a lot of logistics to handle, though; I hope we have the resources.

As a platform, ROS Development Studio might be too slow to run a full AV stack with LGSVL in its free tier. It would be great to have Constructsim or Amazon AWS sponsor more free credits for this class.

2 Likes

I think that RoboMaker would be most likely to be able to provide the most robust platform, simply because they own a significant portion of the world’s computers. :stuck_out_tongue:

However, even LGSVL requires a significant amount of computing power. A better option could be to handle the perception part of the class on pre-recorded rosbags, and use a simpler simulator to do planning and control.
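For example, the perception part could run against a replayed bag with something as simple as the sketch below: play the recording with `ros2 bag play` and process the LiDAR data in a small C++ node. The topic name `points_raw` is only an assumption for illustration; the actual recordings may use different topics.

```cpp
// Sketch only: consume LiDAR clouds replayed from a pre-recorded bag
// (e.g. `ros2 bag play <bag>`), so no simulator is needed for perception work.
// The topic name "points_raw" is an assumption, not a fixed Autoware.Auto topic.
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/point_cloud2.hpp"

class CloudConsumer : public rclcpp::Node
{
public:
  CloudConsumer()
  : Node("cloud_consumer")
  {
    subscription_ = create_subscription<sensor_msgs::msg::PointCloud2>(
      "points_raw", rclcpp::SensorDataQoS(),
      [this](sensor_msgs::msg::PointCloud2::SharedPtr msg) {
        // Real exercises would run ground filtering, clustering, etc. here.
        RCLCPP_INFO(
          get_logger(), "Received a cloud of %u x %u points",
          msg->width, msg->height);
      });
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr subscription_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<CloudConsumer>());
  rclcpp::shutdown();
  return 0;
}
```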

5 Likes

What sort of logistics help is needed?
As for advertising, I’m not sure what your thoughts are, but Hexagon/AutonomouStuff could help promote on social media.

1 Like

… that aren’t busy running Zoom :rofl:

1 Like

Thx @JWhitleyWork, I added an item called “Development Environment”.

Thx @smac, great suggestions. I added one paragraph at the end of the Proposal section above. I also added a session on AD architectures.

1 Like

@sstrahm

Let me know if you want me to suggest a couple of slides on that.

Please do so. Ideally it would be presented in the Platform (HW, RTOS, DDS) session.

@jitrc

As a platform, ROS Development Studio might be too slow to run a full AV stack with LGSVL in its free tier. It would be great to have Constructsim or Amazon AWS sponsor more free credits for this class.

I have written to Constructsim, Amazon AWS and Udacity. Let’s wait and see what comes back.

@gbiggs

However, even LGSVL requires a significant amount of computing power. A better option could be to handle the perception part of the class on pre-recorded rosbags, and use a simpler simulator to do planning and control.

That is indeed a very good point. RoboMaker and Amazon AWS both already have Gazebo integrated out of the box. I will add your point to the Open questions.

@Craig.Johnson

What sort of logistics help is needed?

We are starting pretty much from zero, so the entire logistics still has to be figured out (participant sign-up, where to run/show the code, live vs. pre-recorded, …). However, I first want to see:

  1. if we get enough lecturers
  2. which kind of educational platform we choose

Then we will build the logistics around it.

Thanks for the advertising offer.

@Craig.Johnson - can AS do the above lecture?

@Dejan_Pangercic This looks very interesting and I’d be happy to do a lecture on HD Maps.

2 Likes

Hello @Dejan_Pangercic,

Happy to contribute to DDS courses and even training material. In the past I wrote a little tutorial, which is on GitHub and licensed under CC. Let me know how we can contribute.

Take Care,

2 Likes

Hi @Dejan_Pangercic

Great initiative, and I feel there will definitely be an audience out there if this course becomes reality.
I’m not in the autonomous car industry but in the mobile robotics industry, and I feel there would be some benefit for the mobile robotics community in learning how (and if) it can make use of Autoware and all the associated tools and HW/SW components, also relating to what @smac said about architecture.
I feel there might be an opportunity for some trickle-down software that could benefit the mobile robotics community, but it’s hard to get an understanding of the individual components of a complex stack like Autoware and whether they can be used outside of the autonomous car industry.

E.g.

  • Is it possible to use HD maps and NDT localization outside of road networks, or is there some tight coupling here?

    • What is the process of mapping a new area, and can you localize in this map without having modified the map with lanes, traffic signs etc.?
  • Is the global planner only for road networks?

  • Is the local planning / motion control only for car-like vehicles that follow roads or can it be used with other kinematics?

  • Is the object perception pipeline for different sensor modalities standalone?

2 Likes

@Kasper_Jeppesen I think a bunch of your questions can actually be answered by looking over the Autoware repo; they have pretty decent READMEs / file structure. To my understanding, things aren’t particularly plugin-able at this moment in time, but it’s been a topic of conversation between @gbiggs and me (amongst others). There’s certainly some opportunity to homologate these stacks somewhat or share common interfaces / create adaptors.

The goal of this course and work, as written above, appears to be focused on Autoware for its intended use cases, so I wouldn’t want to distract from that by talking about the mobile robot use case. That seems like a conversation for another thread.

1 Like

Hi @Dejan_Pangercic,

Thanks for bringing up this great initiative. LG will support on the simulation side.

I have some experience taking courses like Udacity’s in the areas of self-driving cars and robotics. I think the ease of use of the development environment is key for the students and for the success of the course. There are many things to figure out if we consider a commercial cloud-based platform such as RoboMaker, theConstructSim, etc.: not just the cost of using their resources, but also technical issues such as loading time and latency, sharing the content, etc. For relatively simple robotics applications their platforms might be OK, but autonomous driving needs high-fidelity environments (3D map, HD map, vehicle, traffic, etc.) for perception, planning, and more, as well as enough performance to guarantee the correct execution of AD systems.

So I think running the simulator on the local (user’s) machine would be much more feasible, while the content (maps, vehicles, test scenarios, etc.) is shared or provided by the cloud.

We are working on this right now, and are willing to provide the simulation environment, including sample maps, vehicles, test scenarios, etc.

@bshin-lge (Brian Shin), @hadiTab (Hadi) , @zelenkovsky (Dmitry) can help.

1 Like

For clarification, the course described above will likely be about the Autoware.Auto project (ROS 2-based) rather than the Autoware.ai project (ROS 1-based).

Autoware.Auto is a from-scratch rewrite of the Autoware stack using ROS 2 (currently Dashing), designed with as many industry best practices as we can possibly gather. Both are reference implementations of an autonomous vehicle software stack, but Autoware.Auto is where most of the development effort of The Autoware Foundation and its membership is currently focused.

Documentation for Autoware.Auto can be found on gitlab.io.

1 Like

@Dejan_Pangercic, and All

Appreciate you bringing this up, I am really interested.

2 Likes