Thanks @JWhitleyWork, I added an item called “Development Environment”.
Thanks @smac, great suggestions. I added a paragraph at the end of the Proposal section above, and I also added a session on AD architectures.
Let me know if you want me to suggest a couple of slides on that.
Please do so. Ideally it would be presented in the Platform (HW, RTOS, DDS) session.
For the platform: ROS Development Studio might be too slow to run a full AV stack with LGSVL on its free tier. It would be great to have Constructsim or Amazon AWS sponsor more free credits for this class.
I have written to Constructsim, Amazon AWS, and Udacity. Let's wait and see what comes back.
However, even LGSVL requires a significant amount of computing power. A better option could be to handle the perception part of the class with pre-recorded rosbags and use a simpler simulator for planning and control.
That is indeed a very good point. RoboMaker and Amazon AWS both already have Gazebo integrated out of the box. I will add your point to the Open Questions.
What sort of logistics help is needed?
We are starting pretty much from zero, so the entire logistics has to be figured out (participant sign-up, where to run/show the code, live vs. pre-recorded, …). However, I first want to see:
- if we get enough lecturers
- which kind of educational platform we will choose
Then we will build the logistics around it.
Thanks for the advertising offer.
@Craig.Johnson - can AS do the above lecture?
@Dejan_Pangercic This looks very interesting and I’d be happy to do a lecture on HD Maps.
Happy to contribute to DDS courses and even training material. In the past I wrote a little tutorial, which is on GitHub and licensed under CC. Let me know how we can contribute.
Great initiative and I feel like there will definitely be an audience out there if this course becomes reality.
I’m not in the autonomous car industry but in the mobile robotics industry, and I feel there would be some benefit for the mobile robotics community in learning how (or whether) they can make use of Autoware and all the associated tools and HW/SW components, also relating to what @smac said about the architecture.
I feel there might be an opportunity for some trickle-down software that could benefit the mobile robotics community, but it’s hard to get an understanding of the individual components of a complex stack like Autoware and whether they can be used outside of the autonomous car industry.
Is it possible to use HD maps and NDT localization outside of road networks, or is there some tight coupling here?
- What is the process for mapping a new area, and can you localize in this map without having annotated it with lanes, traffic signs, etc.?
Is the global planner only for road networks?
Is the local planning / motion control only for car-like vehicles that follow roads, or can it be used with other kinematics?
Is the object perception pipeline for different sensor modalities stand-alone?
@Kasper_Jeppesen I think a bunch of your questions can actually be answered by looking over the Autoware repo; it has pretty decent READMEs and file structure. To my understanding, things aren’t particularly pluggable at this moment in time, but it’s been a topic of conversation between @gbiggs and me (amongst others). There’s certainly some opportunity to homologate these stacks somewhat or share common interfaces / create adaptors.
As written above, the goal of this course is focused on Autoware for its intended use cases, so I wouldn’t want to distract the authors by talking about the mobile robot use case. That seems like a conversation for another thread.
Thanks for bringing up this great initiative. LG will support on the simulation side.
I have some experience taking courses like Udacity's in the areas of self-driving cars and robotics, and I think the ease of use of the development environment is key for the students and for the success of the course. There are many things to figure out if we consider a commercial cloud-based platform such as RoboMaker or theConstructSim: not just the cost of using their resources, but also technical issues such as loading time, latency, sharing content, etc. For relatively simple robotics applications their platforms might be OK, but autonomous driving needs high-fidelity environments (3D map, HD map, vehicle, traffic, etc.) for perception, planning, and more, as well as enough performance to guarantee correct execution of the AD system.
So I think running the simulator on the user's local machine would be much more feasible, while the content (maps, vehicles, test scenarios, etc.) is shared or provided via the cloud.
We are working on this right now and are willing to provide the simulation environment, including sample maps, vehicles, and test scenarios.
Autoware.Auto is a from-scratch re-write of the Autoware stack using ROS2 (Dashing, currently) which is designed with as many industry best-practices as we can possibly gather. Both are reference implementations of an autonomous vehicle software stack but Autoware.Auto is where most of the development effort of The Autoware Foundation and its membership is currently focused.
Documentation for Autoware.Auto can be found on gitlab.io.
@Dejan_Pangercic and all,
I appreciate you bringing this up; I am really interested.
I can answer some of these questions right now, and I think these show that a course that teaches how to use Autoware and how each major component works would have benefits beyond the AD community.
Note that the answers below are with regards to what is theoretically possible but which may not currently be technically possible without doing some message porting, ripping code out of its repository, or similar. We aim to remove these barriers to reuse outside of Autoware.
Yes to HD maps if you are happy to treat your area as a road network (see below about the planner).
Yes to NDT. If you have a point cloud map you can use NDT to localise in it, although there may be assumptions being made that I’m not aware of, such as the rotation of the lidar around X and Y.
Localisation using NDT is independent of the HD map, which is what defines road infrastructure. We don’t currently have any localisation capability based on the HD map but we are aware of the concept and may implement something in the future.
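To make the NDT idea concrete for course material, here is a toy 2D sketch, not Autoware's implementation (which works in 3D with full poses and an optimiser): the map cloud is voxelised, each voxel is summarised by a Gaussian, and a candidate translation is scored by how well the transformed scan fits those Gaussians. All names and the cell size are made up for illustration.

```python
import math
from collections import defaultdict

CELL = 2.0  # voxel size in metres (arbitrary choice for this toy example)

def build_ndt_map(points):
    """Fit a Gaussian (mean, covariance) to the map points in each voxel."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // CELL), int(y // CELL))].append((x, y))
    ndt = {}
    for key, pts in cells.items():
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        # Covariance with a small regulariser so it stays invertible.
        sxx = sum((p[0] - mx) ** 2 for p in pts) / n + 0.05
        syy = sum((p[1] - my) ** 2 for p in pts) / n + 0.05
        sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        ndt[key] = (mx, my, sxx, sxy, syy)
    return ndt

def score(ndt, scan, dx, dy):
    """Likelihood-style score of a scan under a candidate (dx, dy) offset."""
    total = 0.0
    for x, y in scan:
        tx, ty = x + dx, y + dy
        cell = ndt.get((int(tx // CELL), int(ty // CELL)))
        if cell is None:
            continue  # point falls outside the mapped area
        mx, my, sxx, sxy, syy = cell
        det = sxx * syy - sxy * sxy
        ex, ey = tx - mx, ty - my
        # Squared Mahalanobis distance via the inverse 2x2 covariance.
        m = (syy * ex * ex - 2 * sxy * ex * ey + sxx * ey * ey) / det
        total += math.exp(-0.5 * m)
    return total
```

A full NDT implementation would search over rotation as well and ascend the score with Newton's method, but the scoring step above is the core of the technique.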
Yes, it depends on having an HD map of a road network to plan within. However, you can treat many areas you want to navigate as a “road” network. Delivery robots in a hospital or office building, for example, may prefer fixed lanes rather than free-space planning, at least for parts of the route such as corridors. Robots that drive on a footpath would also be a good fit for a “road” network.
Also, Autoware.Auto will include a limited free-space planner for some functionality, such as parking the car.
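To make the lane-following vs. free-space distinction concrete, here is a toy grid-based free-space planner, plain breadth-first search on an occupancy grid. This is only an illustration of the concept; a real parking planner must also respect the car's kinematics and footprint.

```python
from collections import deque

def plan_free_space(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = occupied).

    Returns a list of (row, col) cells from start to goal, or None if
    the goal is unreachable. Toy example: a real free-space planner for
    a car would plan over poses and kinematically feasible motions.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []  # walk back through the parent links
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

The contrast with the lane-based planner is that nothing here assumes a road: any free cell is a valid place to be.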
The planners are for autonomous vehicles, so they are by necessity designed for Ackermann-type vehicles.
Yes, you can use it right now if you have a 3D LiDAR sensor and want to get bounding boxes of Euclidean clusters. We intend to maintain this reusability going forward.
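As a toy illustration of what that pipeline does (this is not Autoware's code, and a real implementation, e.g. PCL's, uses a kd-tree for the radius search rather than brute force), Euclidean clustering groups points whose gaps are below a tolerance and emits an axis-aligned bounding box per cluster:

```python
def euclidean_clusters(points, tolerance):
    """Group points (tuples of any dimension) by Euclidean proximity.

    Brute-force single-linkage clustering via flood fill: two points end
    up in the same cluster if they are connected by a chain of neighbours
    each closer than `tolerance`.
    """
    tol2 = tolerance ** 2
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if sum((a - b) ** 2
                           for a, b in zip(points[i], points[j])) <= tol2]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(cluster)
    return clusters

def bounding_box(points, cluster):
    """Axis-aligned bounding box (min corner, max corner) of a cluster."""
    dims = range(len(points[0]))
    lo = tuple(min(points[i][d] for i in cluster) for d in dims)
    hi = tuple(max(points[i][d] for i in cluster) for d in dims)
    return lo, hi
```

The same idea applies unchanged to indoor lidar scans, which is part of why this component is reusable outside the AD context.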
I think pre-recorded would be better for the lecture portions. We could run live sessions for exercises with a prerequisite of participation being that they have watched the lecture.
Although this is all starting to sound very Udacity-like, so maybe we should be talking to them about providing a new course…
I could create a lecture on development processes and methods in robotics. This includes:
- Agile Manifesto
- Lean Development
- Extreme Programming
- Unix Philosophy
- Data-Driven Development
- Continuous Integration / DevOps
- Gitflow / Github Flow
- The Mythical Man-Month
My last years as a product owner in robotics showed me that the right mindset, development processes, and team structures are key to building a complex robotic system.
@Dejan_Pangercic, I’m still asking around. We have more experience in camera and lidar perception.
@Tobias thanks, this would be excellent. Could you maybe do this as part of the first lecture, “Development Environment”? (I have put down Apex.AI for it for now, but would be very happy if you did it.)
Then you could present your points also in terms of what we already have in Autoware.Auto, e.g.:
- Milestone-driven development https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/boards/1517206?milestone_title=AVP%20MS2%3A%20Follow%20waypoints%20with%20the%20ndt_localizer&
- CI: https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/blob/master/.gitlab-ci.yml
- Development process: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/cpp-development-process.html
- Structural coverage: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/coverage/index.html
- Design documents, e.g. https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/blob/master/src/motion/control/pure_pursuit/design/pure_pursuit-design.md from which the technical requirements could be derived
- High-level safety documents from which e.g. safety goals could be derived: https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/issues/206
- Branching model: https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/develop-in-a-fork.html
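Since the pure_pursuit design document is listed above, a minimal sketch of the pure pursuit idea might be handy for lecture prep. This is not the Autoware.Auto implementation, just the textbook geometry: given a lookahead point in the vehicle frame, the commanded curvature of the arc through that point is 2·y / d², where d is the distance to the point.

```python
import math

def pure_pursuit_curvature(lookahead_x, lookahead_y):
    """Curvature of the arc from the rear axle through the lookahead point.

    Vehicle-frame coordinates: x forward, y to the left. Positive
    curvature means turning left. curvature = 2 * y / d^2.
    """
    d2 = lookahead_x ** 2 + lookahead_y ** 2
    if d2 == 0.0:
        raise ValueError("lookahead point coincides with the vehicle")
    return 2.0 * lookahead_y / d2

def curvature_to_steering(curvature, wheelbase):
    """Ackermann steering angle under the bicycle model: atan(L * kappa)."""
    return math.atan(wheelbase * curvature)
```

The conversion at the end is where the Ackermann assumption mentioned above enters: a differential-drive robot would turn the curvature into wheel speeds instead.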
@Craig.Johnson Please do let me know about it.
Hi all, short update:
- I got confirmations from 10 of the 16 lecturers, and it is very likely that the remaining 6 will confirm as well, which means that we will proceed with the course
- I will meet this coming week with https://www.theconstructsim.com/ and https://aws.amazon.com/robomaker/ to select the hosting platform. The third alternative is Vimeo (a pre-recorded class).
- I created the “Next Steps” section in the above description
- Could all of the confirmed lecturers start working on the syllabus?
- As a teaser, here is a video of a LiDAR-based localization algorithm from Autoware https://drive.google.com/file/d/1lDQIVl02ycv3VZAnwtiTqsWcVh0DD44N/view
Very good points from @Katherine_Scott:
- That we host and/or mirror all of this content on the ROS 2 wiki. From the user’s perspective, having a centralized resource is the most helpful thing we can do.
- That we avoid any sort of LMS/courseware unless we’re willing to put in the effort to develop the associated content like quizzes, tests, homework, and projects. The real value of coursework is that it can become a turn-key solution for educators.
- That the data we do have indicates videos are best for covering ROS basics. We should shoot to build tutorials/how-tos/explainers first and then use that content to record a video. The benefit of this approach is that it makes the underlying content updateable, searchable, and discoverable.
I'll start working on that tomorrow and will share some first drafts by Wednesday. Looking forward to working with you all on this topic.