Autonomy Software working group meeting 20191203

The Autonomy Software working group will be holding a meeting on Tuesday, December 3, 2019 at 10:00 PM TZ. Call-in information can be found at the end of this post.

Anyone who has committed to contributing to the AVP demonstration is expected to attend this meeting.

I am collecting agenda items for this meeting, so if you have any please post them to this topic.

The minutes from the previous meeting are available on the wiki.

Agenda

  1. Follow up on tasks from previous meeting
  2. Reminder about planning workshop on December 10 in Palo Alto, California
  3. Description of detailed architecture sketch from @Dejan_Pangercic

@gbiggs Is the call-in information the same as in the previous post?

@gbiggs As requested of @Dejan_Pangercic, please see below for a slightly more detailed architecture sketch:

Draw.IO

SVG

The data flow is fairly straightforward. In its current form it is mostly a DAG, with the exception of some loops around querying the map; however, the map manager can be split into a map manager and a fake behavior planner to make the graph even more DAG-like.

As this is data/event-driven, the data rates would be 5-20 Hz, depending on the sensor settings. Whether or not that is sufficient for the HARA remains to be seen.

Below is a general writeup to accompany the diagrams:

Autoware.Auto AVP Architecture

This document generally describes some of the components proposed for the Autoware.Auto AVP (AAA) architecture.

Sensing

The input to the stack shall be a pair of VLP16-HiRes sensors.

The raw UDP input of these sensors will be translated into PointCloud2 messages via the
velodyne_driver.

The two point clouds will be fused into a single point cloud by the point_cloud_fusion component.

All of these packages have already been implemented by Apex.AI and are present in Autoware.Auto.
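For illustration only, the fusion step can be thought of as the hypothetical rclcpp node sketched below: it caches the latest cloud from each sensor and concatenates them. This is not the actual point_cloud_fusion package (whose interface may differ); the topic names are made up, and it assumes both clouds are unorganized, already in a common frame, and share the same point layout.

```cpp
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

// Hypothetical fusion node, NOT the Autoware.Auto point_cloud_fusion package.
// Assumes unorganized clouds (height == 1) with identical fields/point_step,
// already expressed in a common frame.
class SimpleFusionNode : public rclcpp::Node
{
public:
  SimpleFusionNode() : Node("simple_point_cloud_fusion")
  {
    using sensor_msgs::msg::PointCloud2;
    pub_ = create_publisher<PointCloud2>("points_fused", 10);
    sub_front_ = create_subscription<PointCloud2>(
      "points_front", 10,
      [this](const PointCloud2::SharedPtr msg) { front_ = msg; fuse(); });
    sub_rear_ = create_subscription<PointCloud2>(
      "points_rear", 10,
      [this](const PointCloud2::SharedPtr msg) { rear_ = msg; fuse(); });
  }

private:
  void fuse()
  {
    if (!front_ || !rear_) {
      return;  // wait until at least one cloud from each sensor has arrived
    }
    sensor_msgs::msg::PointCloud2 fused = *front_;
    // Append the rear cloud's raw data; valid only if the point layouts match.
    fused.data.insert(fused.data.end(), rear_->data.begin(), rear_->data.end());
    fused.width += rear_->width;
    fused.row_step = fused.width * fused.point_step;
    fused.is_dense = front_->is_dense && rear_->is_dense;
    pub_->publish(fused);
  }

  sensor_msgs::msg::PointCloud2::SharedPtr front_, rear_;
  rclcpp::Publisher<sensor_msgs::msg::PointCloud2>::SharedPtr pub_;
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr sub_front_, sub_rear_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<SimpleFusionNode>());
  rclcpp::shutdown();
  return 0;
}
```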

Object Detection

The input to this stack is a fused point cloud. This stack does ground filtering, clustering, and
computes bounding boxes.

With the exception of the bounding box computation (forthcoming), all packages have been implemented by
Apex.AI and are present in Autoware.Auto.
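Since the bounding box computation is the forthcoming piece, here is a minimal sketch of what that step amounts to in its simplest, axis-aligned form. It is illustration only: the actual implementation may well fit oriented (L-shape) boxes and will operate on PointCloud2 clusters rather than this toy point type.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Toy point type for illustration; real code would use the cluster's cloud type.
struct Point { float x, y, z; };

struct AxisAlignedBox
{
  Point min{std::numeric_limits<float>::max(),
            std::numeric_limits<float>::max(),
            std::numeric_limits<float>::max()};
  Point max{std::numeric_limits<float>::lowest(),
            std::numeric_limits<float>::lowest(),
            std::numeric_limits<float>::lowest()};
};

// Compute an axis-aligned bounding box for one cluster of (already ground-
// filtered and clustered) points.
AxisAlignedBox compute_box(const std::vector<Point> & cluster)
{
  AxisAlignedBox box;
  for (const auto & p : cluster) {
    box.min.x = std::min(box.min.x, p.x);
    box.min.y = std::min(box.min.y, p.y);
    box.min.z = std::min(box.min.z, p.z);
    box.max.x = std::max(box.max.x, p.x);
    box.max.y = std::max(box.max.y, p.y);
    box.max.z = std::max(box.max.z, p.z);
  }
  return box;
}
```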

Localization

NDT matching will be used for localization. This component receives a ground truth map and an
observed point cloud, and computes the transform between the two.

This component is currently in progress by Apex.AI.
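As an illustration of what this component computes, below is a sketch using PCL's NormalDistributionsTransform. The Apex.AI localizer is a separate implementation, and the parameter values here are placeholders only.

```cpp
#include <Eigen/Dense>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

// Illustrative only: register an observed scan against the ground truth map
// with PCL's NDT and return the map <- scan transform (the pose in the map).
Eigen::Matrix4f match_scan_to_map(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & scan,
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & map,
  const Eigen::Matrix4f & initial_guess)
{
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0);              // NDT voxel grid size [m] (placeholder)
  ndt.setStepSize(0.1);                // max line search step (placeholder)
  ndt.setTransformationEpsilon(0.01);  // convergence threshold (placeholder)
  ndt.setMaximumIterations(35);
  ndt.setInputSource(scan);            // observed point cloud
  ndt.setInputTarget(map);             // ground truth map

  pcl::PointCloud<pcl::PointXYZ> aligned;
  ndt.align(aligned, initial_guess);   // seed with the previous pose estimate
  return ndt.getFinalTransformation();
}
```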

State Estimation

A state estimator uses a history of transforms to estimate the vehicle velocity and provide a
smoothed positional estimate.

This component can reasonably be implemented using the existing Kalman filtering package. The
constant velocity or constant acceleration motion models may be sufficient for the AVP, or a more
descriptive motion model can be developed.

No one has committed to implement this component.
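For a sense of scale, a constant velocity Kalman filter over a planar position measurement (e.g. the NDT pose) is only a few lines. The sketch below uses Eigen with placeholder noise values and is not the interface of the existing Kalman filtering package.

```cpp
#include <Eigen/Dense>

// Minimal constant velocity Kalman filter, for illustration only.
// State: [x, y, vx, vy]; measurement: [x, y] from the localizer.
class ConstantVelocityKf
{
public:
  void predict(double dt)
  {
    Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
    F(0, 2) = dt;  // x  += vx * dt
    F(1, 3) = dt;  // y  += vy * dt
    x_ = F * x_;
    P_ = F * P_ * F.transpose() + Q_;
  }

  void update(const Eigen::Vector2d & z)
  {
    Eigen::Matrix<double, 2, 4> H = Eigen::Matrix<double, 2, 4>::Zero();
    H(0, 0) = 1.0;
    H(1, 1) = 1.0;
    const Eigen::Vector2d y = z - H * x_;                  // innovation
    const Eigen::Matrix2d S = H * P_ * H.transpose() + R_;
    const Eigen::Matrix<double, 4, 2> K = P_ * H.transpose() * S.inverse();
    x_ = x_ + K * y;
    P_ = (Eigen::Matrix4d::Identity() - K * H) * P_;
  }

  const Eigen::Vector4d & state() const { return x_; }

private:
  Eigen::Vector4d x_ = Eigen::Vector4d::Zero();             // [x, y, vx, vy]
  Eigen::Matrix4d P_ = Eigen::Matrix4d::Identity();         // state covariance
  Eigen::Matrix4d Q_ = Eigen::Matrix4d::Identity() * 0.01;  // process noise (placeholder)
  Eigen::Matrix2d R_ = Eigen::Matrix2d::Identity() * 0.05;  // measurement noise (placeholder)
};
```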

Map

The map publisher publishes the point cloud map for ground truth localization.

In addition, given the current (map-relative) pose and the target point, the map publisher may
provide a short horizon of lane boundaries to the motion planner. Depending on how the HD map
management is implemented, this can (and probably should) be a separate node.

Parkopedia has tentatively committed to implement this. Apex.AI may implement some point cloud
map-related parts.
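Whichever way the node split ends up, the point cloud map side is naturally a publish-once, latched ("transient local") topic so that late-joining subscribers such as the localizer still receive it. A hypothetical sketch follows; the node name, topic, and stubbed map loading are illustrative and not the Parkopedia or Apex.AI code.

```cpp
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

// Hypothetical map publisher: publishes the point cloud map once with
// transient_local ("latched") QoS. Map loading itself is stubbed out.
class MapPublisherNode : public rclcpp::Node
{
public:
  MapPublisherNode() : Node("map_publisher")
  {
    const auto qos = rclcpp::QoS(rclcpp::KeepLast(1)).transient_local();
    pub_ = create_publisher<sensor_msgs::msg::PointCloud2>("map_cloud", qos);

    sensor_msgs::msg::PointCloud2 map = load_map();
    map.header.frame_id = "map";
    pub_->publish(map);
  }

private:
  static sensor_msgs::msg::PointCloud2 load_map()
  {
    // Placeholder: a real implementation would load the map from a file here.
    return sensor_msgs::msg::PointCloud2();
  }

  rclcpp::Publisher<sensor_msgs::msg::PointCloud2>::SharedPtr pub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MapPublisherNode>());
  rclcpp::shutdown();
  return 0;
}
```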

Planning

Given the current vehicle kinematic state, lane boundaries, objects, and target state, the motion
planner computes a dynamically feasible (?) trajectory which makes progress towards the target
state, stays within the lane boundaries, and avoids obstacles.

Transform messages may be needed to ensure all inputs to the motion planner are in a consistent
coordinate frame. This is not strictly necessary if all inputs are guaranteed to be in the same
coordinate frame.
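As a concrete (but hypothetical) example of that step, an input pose can be brought into the planner's working frame with tf2 before planning; the frame names below are assumptions.

```cpp
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <tf2/time.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

// Transform a pose into the "map" frame (assumed planner working frame) using
// the latest available transform. Illustration only.
geometry_msgs::msg::PoseStamped to_map_frame(
  const tf2_ros::Buffer & tf_buffer,
  const geometry_msgs::msg::PoseStamped & pose_in)
{
  if (pose_in.header.frame_id == "map") {
    return pose_in;  // already in the planner's working frame
  }
  // Look up the latest map <- source transform and apply it.
  const auto tf = tf_buffer.lookupTransform(
    "map", pose_in.header.frame_id, tf2::TimePointZero);
  geometry_msgs::msg::PoseStamped pose_out;
  tf2::doTransform(pose_in, pose_out, tf);
  return pose_out;
}
```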

An initial unconstrained planner and base class has been implemented by Chris Ho/Apex.AI.

Embotech has committed to provide a full obstacle and lane-aware implementation.

Control

Given the current vehicle state and reference trajectory, the controller produces control commands
that ensure the vehicle will track the reference trajectory well.

Transform messages may be needed to ensure all inputs to the motion controller are in a consistent
coordinate frame. This is not strictly necessary if all inputs are guaranteed to be in the same
coordinate frame.

Chris Ho/Apex.AI has provided an MPC controller and a base class.

Apex.AI has provided a pure pursuit controller built off the aforementioned base class.
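For reference, the core of pure pursuit is small. The sketch below is a generic version of the steering computation only (not the Apex.AI implementation, which handles trajectories, lookahead selection, and vehicle interfaces): given a lookahead point in the vehicle's base frame at distance d, the commanded curvature is 2*y / d^2 and the steering angle follows from the bicycle model.

```cpp
#include <cmath>

// Generic pure pursuit steering computation, for illustration only.
// The lookahead point is assumed to be in the vehicle base frame
// (x forward, y left); wheelbase in metres; result in radians.
double pure_pursuit_steering(double lookahead_x, double lookahead_y, double wheelbase)
{
  const double d2 = lookahead_x * lookahead_x + lookahead_y * lookahead_y;
  if (d2 < 1e-6) {
    return 0.0;  // lookahead point is effectively on top of the vehicle
  }
  const double curvature = 2.0 * lookahead_y / d2;  // kappa = 2*y / d^2
  return std::atan(wheelbase * curvature);          // bicycle model steering angle
}
```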

Vehicle Interface

The vehicle interface transmits general control commands to the vehicle.

AutonomousStuff has committed to implement this/port existing code.

GUI

In the absence of higher level planning, a GUI is needed to specify the target state of the vehicle.

The GUI can also display information such as the location of obstacles, and raw sensor data.

Parkopedia has tentatively agreed to provide their implementation.

Simulation

Simulation is needed to more easily test and verify an autonomous driving software stack.

We are currently committed to using the LGSVL open source simulator.

LG has committed to support this effort.

Thank you all for coming. Meeting minutes are available on the wiki.

All: please note that Apex.AI is making its Autoware.Auto and AVP2020 contributions together with Tier IV:
1. we are using Tier IV’s algorithms as a base for porting to Autoware.Auto
2. Tier IV gives input and feedback on the implementation
3. Tier IV provides architectural feedback
4. Apex.AI and Tier IV also meet frequently offline

hi all, these are the meeting coordinates for today:

Topic: ASWG, Jan 20/21
Time: Jan 20, 2020 11:00 PM Pacific Time (US and Canada)

Join Zoom Meeting
https://zoom.us/j/206643931?pwd=Y0I4dW1VUHpFQkJJVG4vTjRnWml4Zz09

Meeting ID: 206 643 931
Password: 910239

One tap mobile
+16699006833,206643931# US (San Jose)
+19294362866,206643931# US (New York)

Dial by your location
+1 669 900 6833 US (San Jose)
+1 929 436 2866 US (New York)
Meeting ID: 206 643 931
Find your local number: https://zoom.us/u/acGuzJFesI