
Autoware Online Class - RFQ

Hello @Dejan_Pangercic,

Happy to contribute to DDS courses and even training material. In the past I wrote a little tutorial, which is on GitHub and licensed under CC. Let me know how we can contribute.

Take Care,


Hi @Dejan_Pangercic,

Great initiative, and I feel there will definitely be an audience out there if this course becomes reality.
I’m not in the autonomous car industry but in mobile robotics, and I think the mobile robotics community would benefit from some knowledge of how (or whether) they can make use of Autoware and all the associated tools and HW/SW components, also relating to what @smac said about the architecture.
There might be an opportunity for some trickle-down software that could benefit the mobile robotics community, but it’s hard to get an understanding of the individual components of a complex stack like Autoware and whether they can be used outside of the autonomous car industry.

E.g.

  • Is it possible to use HD maps and NDT localization outside of road networks, or is there some tight coupling here?

    • What is the process of mapping a new area, and can you localize in this map without having annotated the map with lanes, traffic signs, etc.?
  • Is the global planner only for road networks?

  • Is the local planning / motion control only for car-like vehicles that follow roads, or can it be used with other kinematics?

  • Is the object perception pipeline for different sensor modalities standalone?


@Kasper_Jeppesen I think a bunch of your questions can actually be answered by looking over the Autoware repo; they have pretty decent readmes / file structure. To my understanding, things aren’t particularly pluggable at this moment in time, but it’s been a topic of conversation between @gbiggs and me (amongst others). There’s certainly some opportunity to homologate these stacks somewhat or share common interfaces / create adaptors.

The goal of this course and work, as written above, is focused on Autoware’s intended use cases, so I wouldn’t want to distract them by talking about the mobile robot use case. That seems like a conversation for another thread.


Hi @Dejan_Pangercic,

Thanks for bringing up this great initiative. LG will support on the simulation side.

I have some experience taking courses like Udacity’s in the areas of self-driving cars and robotics. I think the ease of use of the development environment is key for the students and for the success of the course. There are many things to figure out if we consider a commercial cloud-based platform such as RoboMaker or theConstructSim: not just the cost of using their resources, but also technical issues such as loading time, latency, content sharing, etc. For relatively simple robotics applications their platforms might be OK, but autonomous driving needs high-fidelity environments (3D map, HD map, vehicle, traffic, etc.) for perception, planning, and more, as well as performance sufficient to guarantee the correct execution of the AD system.

So, I think running a simulator on the user’s local machine would be much more feasible, while the content (maps, vehicles, test scenarios, etc.) is shared or provided via the cloud.

We are working on this right now and are willing to provide the simulation environment, including sample maps, vehicles, and test scenarios.

@bshin-lge (Brian Shin), @hadiTab (Hadi), @zelenkovsky (Dmitry) can help.


For clarification, the course described above will likely be about the Autoware.Auto project (ROS2-based) rather than the Autoware.ai project (ROS1-based).

Autoware.Auto is a from-scratch rewrite of the Autoware stack using ROS2 (currently Dashing), designed with as many industry best practices as we can possibly gather. Both are reference implementations of an autonomous vehicle software stack, but Autoware.Auto is where most of the development effort of The Autoware Foundation and its membership is currently focused.

Documentation for Autoware.Auto can be found on gitlab.io.


@Dejan_Pangercic and all,

I appreciate you bringing this up; I am really interested.


I can answer some of these questions right now, and I think these show that a course that teaches how to use Autoware and how each major component works would have benefits beyond the AD community.

Note that the answers below are with regards to what is theoretically possible but which may not currently be technically possible without doing some message porting, ripping code out of its repository, or similar. We aim to remove these barriers to reuse outside of Autoware.

Yes to HD maps if you are happy to treat your area as a road network (see below about the planner).

Yes to NDT. If you have a point cloud map you can use NDT to localise in it, although there may be some assumptions being made that I’m not aware of, such as the rotation of the lidar around X and Y.

Localisation using NDT is independent of the HD map, which is what defines road infrastructure. We don’t currently have any localisation capability based on the HD map but we are aware of the concept and may implement something in the future.
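For anyone who wants a feel for what NDT does: the map’s point cloud is divided into grid cells, each summarised by a Gaussian, and a candidate pose is scored by how well the transformed scan points fit those Gaussians. Here is a toy 2D sketch in plain Python; the function names and brute-force structure are mine, and the real implementation works in 3D and optimises the pose iteratively rather than just scoring it:

```python
import math
from collections import defaultdict

def build_ndt_grid(points, cell=1.0):
    """Bucket 2D map points into grid cells and fit a Gaussian
    (mean + 2x2 covariance, lightly regularised) to each cell."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append((x, y))
    grid = {}
    for key, pts in cells.items():
        if len(pts) < 3:          # too few points to fit a Gaussian
            continue
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        sxx = sum((p[0] - mx) ** 2 for p in pts) / n + 1e-3
        syy = sum((p[1] - my) ** 2 for p in pts) / n + 1e-3
        sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        grid[key] = (mx, my, sxx, sxy, syy)
    return grid

def ndt_score(grid, scan, pose, cell=1.0):
    """Score a candidate pose (tx, ty, theta): transform every scan
    point and sum the Gaussian likelihood of the cell it lands in."""
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    score = 0.0
    for x, y in scan:
        px, py = c * x - s * y + tx, s * x + c * y + ty
        g = grid.get((math.floor(px / cell), math.floor(py / cell)))
        if g is None:
            continue
        mx, my, sxx, sxy, syy = g
        dx, dy = px - mx, py - my
        det = sxx * syy - sxy * sxy
        # Mahalanobis distance via the closed-form 2x2 inverse
        m = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
        score += math.exp(-0.5 * m)
    return score
```

The true pose scores well; a pose offset by half a metre scores close to zero, which is exactly the signal a real NDT matcher climbs.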

Yes, it is dependent on having an HD map of a road network to plan within. However, you can treat many areas that you want to navigate within as a “road” network. Delivery robots in a hospital or office building, for example, may prefer fixed lanes rather than free-space planning, at least for part of the delivery route, such as corridors. Robots that drive on a footpath would also be a good fit for using a “road” network.
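To make the “treat corridors as lanes” idea concrete: once an area is modelled as a lane network, global planning reduces to a shortest-path search over a lane graph. In Autoware the graph comes from the HD map; the plain-dict graph format below is made up purely for illustration:

```python
import heapq

def shortest_lane_route(graph, start, goal):
    """Dijkstra over a lane graph. Nodes are lane IDs; graph[lane] is
    a list of (successor_lane, cost) pairs. Returns the list of lane
    IDs to follow, or None if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, lane, route = heapq.heappop(queue)
        if lane == goal:
            return route
        if lane in seen:
            continue
        seen.add(lane)
        for nxt, c in graph.get(lane, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, route + [nxt]))
    return None
```

Whether a lane is a road, a corridor, or a footpath makes no difference to the search itself; only the map that produces the graph changes.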

Also, Autoware.Auto will include a limited free-space planner for some functionality, such as parking the car.

The planners are for autonomous vehicles, so they are by necessity designed for Ackermann-type vehicles.
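As an illustration of where the Ackermann assumption shows up, a pure-pursuit-style controller (Autoware.Auto ships one; see its design docs) turns a lookahead point into a steering angle via the bicycle model. A minimal sketch, with variable names of my own choosing rather than any actual API:

```python
import math

def pure_pursuit_steering(lookahead_point, wheelbase):
    """Bicycle-model pure pursuit: steer along the circular arc that
    passes through a lookahead point (x forward, y left, in metres)
    given in the vehicle's rear-axle frame."""
    x, y = lookahead_point
    ld_sq = x * x + y * y            # squared lookahead distance
    curvature = 2.0 * y / ld_sq      # curvature of the arc through the point
    return math.atan(wheelbase * curvature)  # Ackermann steering angle
```

The wheelbase term is exactly the car-like assumption: a differential-drive or omnidirectional robot would convert the same arc into wheel velocities instead of a steering angle.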

Yes, you can use it right now if you have a 3D LiDAR sensor and want to get bounding boxes of Euclidean clusters. We intend to maintain this reusability going forward.
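The idea behind that pipeline stage is simple enough to sketch without PCL or ROS: group points that lie within a distance tolerance of each other, then take the axis-aligned extent of each group. A brute-force O(n²) toy version in plain Python (the real node uses a spatial index and operates on full lidar clouds):

```python
from math import dist  # Python 3.8+

def euclidean_clusters(points, tolerance=0.5, min_size=2):
    """Flood-fill clustering: a point joins a cluster if it lies
    within `tolerance` of any existing member."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited if dist(points[i], points[j]) <= tolerance]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_size:   # drop isolated noise points
            clusters.append([points[i] for i in cluster])
    return clusters

def bounding_box(cluster):
    """Axis-aligned bounding box of a 3D cluster: (min corner, max corner)."""
    mins = tuple(min(p[d] for p in cluster) for d in range(3))
    maxs = tuple(max(p[d] for p in cluster) for d in range(3))
    return mins, maxs
```

Nothing here is specific to cars, which is why this part of the stack reuses so cleanly.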


I think pre-recorded would be better for the lecture portions. We could run live sessions for exercises with a prerequisite of participation being that they have watched the lecture.

Although this is all starting to sound very Udacity so maybe we should be talking to them about providing a new course…


I could create a lecture on development processes and methods in robotics. This includes:

  • SCRUM
  • Agile Manifesto
  • Lean Development
  • Extreme Programming
  • Unix Philosophy
  • Data-Driven Development
  • V-Model
  • Continuous Integration / DevOps
  • Triage
  • Gitflow / Github Flow
  • The Mythical Man-Month

My last years as a product owner in robotics showed me that the right mindset, development processes, and team structures are key to building a complex robotic system.


@Dejan_Pangercic, I’m still asking around. We have more experience in camera and lidar perception.


@Tobias thanks. This would be excellent. Could you maybe do this as part of the first lecture, Development Environment? (I have for now put down Apex.AI for it but would be very happy if you did it.)

Then you could present your points also in terms of what we already have in Autoware.Auto, e.g.:

  1. Milestone-driven development https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/boards/1517206?milestone_title=AVP%20MS2%3A%20Follow%20waypoints%20with%20the%20ndt_localizer&
  2. CI: https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/blob/master/.gitlab-ci.yml
  3. Development process: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/cpp-development-process.html
  4. Structural coverage: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/coverage/index.html
  5. Design documents, e.g. https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/blob/master/src/motion/control/pure_pursuit/design/pure_pursuit-design.md from which the technical requirements could be derived
  6. High-level safety documents from which e.g. safety goals could be derived: https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/issues/206
  7. Branching model: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/develop-in-a-fork.html

@Craig.Johnson Please do let me know about it.

Hi all, short update:

  1. I got confirmation from 10 of the 16 lecturers, and it is very likely that the remaining 6 will confirm as well, which means we will proceed with the course
  2. I will be meeting this coming week with https://www.theconstructsim.com/ and https://aws.amazon.com/robomaker/ to select the hosting platform. The 3rd alternative is Vimeo (pre-recorded class).
  3. I created the “Next Steps” section in the above description
  4. Could all of the confirmed lecturers start working on the syllabus?
  5. As a teaser, here is a video of a LiDAR-based localization algorithm from Autoware https://drive.google.com/file/d/1lDQIVl02ycv3VZAnwtiTqsWcVh0DD44N/view

Very good points from @Katherine_Scott:

  1. That we host and/or mirror all of this content on the ROS 2 wiki. From the user’s perspective, having a centralized resource is the most helpful thing we can do.
  2. We avoid any sort of LMS/Courseware unless we’re willing to put in the effort to develop the associated content like quizzes, tests, homework, projects. The real value of coursework is that it can become a turn-key solution for educators.
  3. The data we do have indicates that videos are best for covering ROS basics. We should aim to build tutorials/how-tos/explainers first and then use that content to record a video. The benefit of this approach is that it makes the underlying content updateable, searchable, and discoverable.

I’ll start working on that tomorrow and will share some first drafts by Wednesday. Looking forward to working with you all on this topic.

@Dejan_Pangercic Is there anything you need me to do?

Meeting Kat, Marya, Dejan

  1. Educational Platform
    1. Host the content on https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/docs
      1. @Katherine_Scott to look if there is a good markdown to presentation exporter
      2. Write the documentation in markdown
      3. Rendering is on https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/
      4. Format of the content for each class: How to use euclidean clusters.pdf (216.1 KB)
    2. Record the video
    3. Upload the videos to https://www.youtube.com/channel/UCyo9zNZTbdJKFog2q8f-cEw
    4. @gbiggs Clean-up https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/
  2. Logistics
    1. We asked Ricardo from TheConstructSim to support with the logistics (work with the lecturers to deliver the material, review the material (markdown and video), help with the recordings, upload the videos, collect the feedback)
    2. Each lecture starts with a recap from prior class.
  3. Development environment
    1. We will use https://gitlab.com/ApexAI/ade-cli/-/tags with the .aderc file
    2. @Dejan_Pangercic to check if we need a new .aderc volume for the class
  4. Class start date: May 11 2020
  5. ROS 2 101 Syllabus Proposal:
    1. General introduction of ROS 2 (why we decided to write it, architecture, who is using it, …)
    2. Why is ROS 2 better suitable for AD than ROS 1:
      1. Industrial-grade middleware DDS (QoS, efficient support for large data, data- and node-level failure handling)
      2. Different communication patterns (callback+executor, waitset)
      3. rclcpp_lifecycle (deterministic startup, enables node level failure handling, enables split of the memory management between non-steady and steady phase)
      4. Written in modern C++14
      5. Security
      6. High architecture and code quality level (because of CI, linters, test frameworks, design documents)
      7. Supported on VxWorks and QNX RTOSes
    3. Hands-on:
      1. How to create a ROS 2 package
      2. How to create a pub/sub example, demonstrate some of features from above (e.g. security, rclcpp_lifecycle, QoS, …)
  6. ROS 2 Tooling Syllabus Proposal:
    1. ros2cli
    2. rosbag2
    3. rqt_plot
    4. rqt_graph
    5. rviz2
    6. ros2 launch (and launch_testing)
    7. cross-compilation for Arm
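Regarding the rclcpp_lifecycle point in the syllabus: managed nodes get their deterministic startup from a fixed state machine (Unconfigured → Inactive → Active, plus a terminal Finalized state). The skeleton of that state machine can be sketched without any ROS dependencies; transition callbacks, error handling, and intermediate transition states are omitted here:

```python
# Primary states and transitions of a ROS 2 managed (lifecycle) node,
# reduced to a plain lookup table.
TRANSITIONS = {
    ("unconfigured", "configure"): "inactive",
    ("inactive", "activate"): "active",
    ("active", "deactivate"): "inactive",
    ("inactive", "cleanup"): "unconfigured",
    ("unconfigured", "shutdown"): "finalized",
    ("inactive", "shutdown"): "finalized",
    ("active", "shutdown"): "finalized",
}

class LifecycleNode:
    """Toy managed node: only legal transitions are accepted, which is
    what makes startup and failure handling deterministic."""

    def __init__(self):
        self.state = "unconfigured"

    def trigger(self, transition):
        nxt = TRANSITIONS.get((self.state, transition))
        if nxt is None:
            raise ValueError(f"'{transition}' is not allowed from '{self.state}'")
        self.state = nxt
        return nxt
```

Because heavy allocation can be confined to the configure step, the active phase can run with a steady memory footprint, which is the “split of the memory management” point above.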

@gbiggs I would indeed need help with 2 things here:

  1. Can Tier IV do these lectures:
    1. Object Perception Camera
    2. Sensor Fusion
  2. Could you make a release of Autoware.Auto (e.g. 0.2) that covers up to Lecture 7 (Object Perception LiDAR)?
  3. Could you sort out the documentation in https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/?
    1. We agreed with Kat from OSRF that we will have one .md file for every lecture, which will then be used when making the video and when students want to replicate the class material

Thx

I will talk to Josh about this. We were planning our next release to be the complete AVP work. That doesn’t mean we can’t do a point release, though.

I can clean up the documentation, but keep in mind that there is a difference in documentation for teaching and documentation for reference. I will look at making a section for teaching materials to keep things better organised.


@Dejan_Pangercic Action Item for me to find out about translation of courseware (videos, documentation) into other languages. Based on Autoware users, these should start with Japanese and Chinese.
