@Kasper_Jeppesen I think a bunch of your questions can actually be answered by looking over the Autoware repo; they have pretty decent READMEs and file structure. To my understanding, things aren't particularly pluggable at this moment in time, but it's been a topic of conversation between @gbiggs and me (amongst others). There's certainly some opportunity to harmonise these stacks somewhat, or to share common interfaces / create adaptors.
As written above, the goal of this course and work appears to be focused on Autoware for its intended use cases, so I wouldn't want to distract them by talking about the mobile robot use case. That seems like a conversation for another thread.
Thanks for bringing up this great initiative. LG will support on the simulation side.
I have some experience taking courses like Udacity's in the areas of self-driving cars and robotics. In my view, the ease of use of the development environment is key for students and for the success of the course. There are also many things to figure out if we consider commercial cloud-based platforms such as RoboMaker, TheConstructSim, etc.: not just the cost of using their resources, but also technical issues such as loading time and latency, sharing the contents, and so on. For relatively simple robotics applications their platforms might be OK, but autonomous driving needs high-fidelity environments (3D map, HD map, vehicle, traffic, etc.) for perception, planning, and other modules, as well as performance sufficient to guarantee the correct execution of AD systems.
So, I think running the simulator on the user's local machine would be much more feasible, while the contents (maps, vehicles, test scenarios, etc.) are shared or provided via the cloud.
We are working on this right now, and we are willing to provide the simulation environment, including sample maps, vehicles, test scenarios, etc.
For clarification, the course described above will likely be about the Autoware.Auto project (ROS2-based) rather than the Autoware.ai project (ROS1-based).
Autoware.Auto is a from-scratch re-write of the Autoware stack using ROS2 (Dashing, currently) which is designed with as many industry best-practices as we can possibly gather. Both are reference implementations of an autonomous vehicle software stack but Autoware.Auto is where most of the development effort of The Autoware Foundation and its membership is currently focused.
I can answer some of these questions right now, and I think these show that a course that teaches how to use Autoware and how each major component works would have benefits beyond the AD community.
Note that the answers below are with regards to what is theoretically possible but which may not currently be technically possible without doing some message porting, ripping code out of its repository, or similar. We aim to remove these barriers to reuse outside of Autoware.
Yes to HD maps if you are happy to treat your area as a road network (see below about the planner).
Yes to NDT. If you have a point cloud map you can use NDT to localise in it, although there may be some assumptions being made that I’m not aware of such as rotation of the lidar around X and Y.
Localisation using NDT is independent of the HD map, which is what defines road infrastructure. We don’t currently have any localisation capability based on the HD map but we are aware of the concept and may implement something in the future.
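For readers new to NDT, the idea behind it can be sketched in a few lines: the point cloud map is divided into cells, each summarised by a mean and covariance, and a candidate pose is scored by how well the transformed scan points fit those per-cell distributions. The following is a toy 2D, translation-only illustration with diagonal covariances and invented numbers; the real implementation works in 3D with full covariances and gradient-based optimisation of the pose.

```python
import math
from collections import defaultdict

CELL = 1.0  # cell (voxel) size in metres; hypothetical value


def build_ndt_grid(map_points):
    """Group map points into cells and compute a per-cell mean/variance."""
    cells = defaultdict(list)
    for x, y in map_points:
        cells[(int(x // CELL), int(y // CELL))].append((x, y))
    grid = {}
    for key, pts in cells.items():
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        # Diagonal covariance only, regularised to avoid division by zero.
        vx = sum((p[0] - mx) ** 2 for p in pts) / n + 0.01
        vy = sum((p[1] - my) ** 2 for p in pts) / n + 0.01
        grid[key] = (mx, my, vx, vy)
    return grid


def ndt_score(grid, scan_points, tx, ty):
    """Score a candidate translation (tx, ty) of the scan against the grid."""
    score = 0.0
    for x, y in scan_points:
        px, py = x + tx, y + ty
        cell = grid.get((int(px // CELL), int(py // CELL)))
        if cell:
            mx, my, vx, vy = cell
            score += math.exp(-0.5 * ((px - mx) ** 2 / vx + (py - my) ** 2 / vy))
    return score


# A tiny synthetic "map": points along two walls.
map_pts = [(i * 0.2, 0.0) for i in range(50)] + [(0.0, i * 0.2) for i in range(50)]
grid = build_ndt_grid(map_pts)
# The "scan" is the same geometry shifted by (-2, 0); the true correction is (2, 0).
scan = [(x - 2.0, y) for x, y in map_pts]
print(ndt_score(grid, scan, 2.0, 0.0) > ndt_score(grid, scan, 0.0, 0.0))  # True
```

In practice the optimiser searches over the full pose (including rotation) for the transform that maximises this score, rather than comparing two fixed candidates.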
Yes, it is dependent on having an HD map of a road network to plan within. However you can treat many areas that you want to navigate within as a “road” network. Delivery robots in a hospital or office building, for example, may prefer fixed lanes rather than free space planning, at least for part of the delivery route such as corridors. Robots that drive on a footpath would also be a good fit for using a “road” network.
Also, Autoware.Auto will include a limited free-space planner for some functionality, such as parking the car.
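To make the "treat your area as a road network" idea concrete, here is a minimal sketch of routing over a fixed-lane graph. The `LANES` graph, the waypoint names, and the lengths are invented for illustration (a hospital delivery robot), and this is plain Dijkstra, not the Autoware.Auto planner's actual API.

```python
import heapq

# Hypothetical "road" network for an indoor delivery robot: nodes are
# waypoints (corridor junctions), edges are fixed lanes with lengths in metres.
LANES = {
    "pharmacy":   [("corridor_a", 12.0)],
    "corridor_a": [("corridor_b", 30.0), ("elevator", 8.0)],
    "elevator":   [("corridor_b", 15.0)],
    "corridor_b": [("ward_3", 10.0)],
    "ward_3":     [],
}


def plan_route(lanes, start, goal):
    """Dijkstra shortest path over the lane graph; returns a waypoint list."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length in lanes.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + length, nxt, path + [nxt]))
    return None  # no lane sequence connects start to goal


print(plan_route(LANES, "pharmacy", "ward_3"))
# ['pharmacy', 'corridor_a', 'elevator', 'corridor_b', 'ward_3']
```

The point is that once corridors are modelled as lanes, the same route-planning machinery used for roads applies unchanged.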
The planners are for autonomous vehicles, so they are by necessity designed for Ackermann-type vehicles.
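For context, Ackermann-type steering is commonly approximated by the kinematic bicycle model, which is what makes these planners a poor fit for differential-drive robots. A minimal sketch with made-up vehicle parameters (not taken from any Autoware.Auto configuration):

```python
import math


def bicycle_step(x, y, heading, speed, steer, wheelbase, dt):
    """One Euler-integration step of the kinematic bicycle model, the usual
    simplification of an Ackermann-steered vehicle."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer) * dt
    return x, y, heading


# Drive a hypothetical 2.7 m wheelbase car at 5 m/s with constant steering.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = bicycle_step(*state, speed=5.0, steer=0.1, wheelbase=2.7, dt=0.1)
print(state)  # the vehicle traces an arc; heading grows steadily
```

Note the turning rate depends on `speed / wheelbase * tan(steer)`: the vehicle cannot rotate in place, which is exactly the constraint the planners are designed around.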
Yes, you can use it right now if you have a 3D LiDAR sensor and want to get bounding boxes of Euclidean clusters. We intend to maintain this reusability going forward.
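As a rough illustration of what Euclidean clustering does, here is a toy 2D version: points are grouped whenever they are within a distance tolerance of a point already in the cluster, and each cluster gets an axis-aligned bounding box. The real point-cloud implementation works in 3D and uses a spatial index rather than the O(n²) neighbour search below.

```python
from collections import deque


def euclidean_clusters(points, tolerance):
    """Group 2D points into clusters of mutually reachable neighbours."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2 +
                       (points[i][1] - points[j][1]) ** 2 <= tolerance ** 2]
            for j in near:
                unvisited.remove(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(cluster)
    return clusters


def bounding_box(points, cluster):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a cluster."""
    xs = [points[i][0] for i in cluster]
    ys = [points[i][1] for i in cluster]
    return (min(xs), min(ys), max(xs), max(ys))


# Two well-separated blobs of points.
pts = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 5.1)]
clusters = euclidean_clusters(pts, tolerance=0.5)
print(len(clusters))  # 2
print([bounding_box(pts, c) for c in clusters])
```

This is the sense in which the component is reusable: it needs only a point cloud in, and gives clusters with bounding boxes out, independent of the rest of the AD stack.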
That we host and/or mirror all of this content on the ROS 2 wiki. From the user's perspective, having a centralized resource is the most helpful thing we can do.
We avoid any sort of LMS/courseware unless we're willing to put in the effort to develop the associated content: quizzes, tests, homework, and projects. The real value of coursework is that it can become a turn-key solution for educators.
The data we do have indicates that videos are best for covering ROS basics. We should aim to build tutorials/how-tos/explainers first and then use that content to record a video. The benefit of this approach is that it makes the underlying content updateable, searchable, and discoverable.
We asked Ricardo from TheConstructSim to support with the logistics: work with the lecturers to deliver the material, review the material (markdown and video), help with the recordings, upload the videos, and collect the feedback.
Each lecture starts with a recap of the prior class.
I will talk to Josh about this. We were planning our next release to be the complete AVP work. That doesn’t mean we can’t do a point release, though.
I can clean up the documentation, but keep in mind that there is a difference in documentation for teaching and documentation for reference. I will look at making a section for teaching materials to keep things better organised.
After our discussions the other night I got tasked with coming up with a way to potentially add the course content to the ROS 2 wiki. What we were looking for in particular was a way to generate course slides in reStructuredText so the core content could be mirrored on the ROS 2 wiki. I did a bit of research, and a project called Landslide, along with the Prince PDF generator, appears to be a workable solution. Prince is free for non-commercial use, so I think we should be OK (we would only use it to generate the actual slides).
I should have some time on Monday to see if I can’t generate some templates and an outline of courses on the ROS 2 wiki along with the requisite license information we discussed.
The following markdown generates the slides linked below. I will see if I can turn off syntax highlighting and create a lightweight theme.
Generate HTML5 slideshows from markdown, ReST, or textile.
Landslide is primarily written in Python, but its themes use:
# Code Sample
Landslide supports code snippets:

```python
def log(self, message, level='notice'):
    if self.logger and not callable(self.logger):
        raise ValueError(u"Invalid logger set, must be a callable")
    if self.verbose and self.logger:
        self.logger(message, level)
```
HackMD looks like a great tool for working together online on markdown files with live preview.
This could be helpful for collaboratively writing markdown files in pairs.
It also supports the slide syntax you are mentioning.
I will try it out this weekend for my slides and give you some feedback.
It actually works great. Here are some example slides for my part, with a flow chart created with Graphviz copied from the template. Everybody with the link can edit the document. You can also create comments in preview mode. https://hackmd.io/@Ly0n/SJ4FByjFI/edit