Autonomy Software working group meeting 20191203

@gbiggs Please see below for a slightly more detailed architecture sketch, as requested; cc @Dejan_Pangercic:

Diagrams: Draw.IO source and SVG export.

The data flow is pretty straightforward. In its current form it is mostly a DAG, with the exception of some loops with respect to querying the map; however, the map manager can be broken into a map manager and a fake behavior planner to make this even more DAG-like.

As this is data/event-driven, the data rates would be 5-20 Hz, depending on the settings of the sensors. Whether or not that is sufficient for HARA remains to be seen.

Below is a general writeup to accompany the diagrams:

Autoware.Auto AVP Architecture

This document generally describes some of the components proposed for the Autoware.Auto AVP (AAA) architecture.

Sensing

The input to the stack shall be a pair of VLP16-HiRes sensors.

The raw UDP input of these sensors will be translated into PointCloud2 messages via the
velodyne_driver.

The two point clouds will be fused into a single point cloud by the point_cloud_fusion component.

All of these packages have already been implemented by Apex.AI and are present in Autoware.Auto.
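As a rough illustration of the fusion step, here is a minimal ROS 2 sketch that keeps the latest cloud from each sensor and publishes their concatenation. This is not the actual point_cloud_fusion implementation; the topic names, the PCL-based conversion, and the common-frame assumption are all placeholders.

```cpp
// Minimal fusion sketch (NOT the real point_cloud_fusion package).
// Assumes both input clouds are already in a common frame; the real
// component would apply the sensor extrinsics first.
#include <functional>
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

class FusionSketch : public rclcpp::Node
{
public:
  FusionSketch() : Node("point_cloud_fusion_sketch")
  {
    using std::placeholders::_1;
    sub_front_ = create_subscription<sensor_msgs::msg::PointCloud2>(
      "points_front", rclcpp::SensorDataQoS(),
      std::bind(&FusionSketch::on_front, this, _1));
    sub_rear_ = create_subscription<sensor_msgs::msg::PointCloud2>(
      "points_rear", rclcpp::SensorDataQoS(),
      std::bind(&FusionSketch::on_rear, this, _1));
    pub_ = create_publisher<sensor_msgs::msg::PointCloud2>("points_fused", 10);
  }

private:
  void on_front(sensor_msgs::msg::PointCloud2::SharedPtr msg) {front_ = msg; fuse();}
  void on_rear(sensor_msgs::msg::PointCloud2::SharedPtr msg) {rear_ = msg; fuse();}

  // Publish the concatenation whenever we hold one cloud from each sensor.
  void fuse()
  {
    if (!front_ || !rear_) {return;}
    pcl::PointCloud<pcl::PointXYZ> a, b;
    pcl::fromROSMsg(*front_, a);
    pcl::fromROSMsg(*rear_, b);
    a += b;  // simple concatenation of the two clouds
    sensor_msgs::msg::PointCloud2 out;
    pcl::toROSMsg(a, out);
    out.header = front_->header;
    pub_->publish(out);
  }

  sensor_msgs::msg::PointCloud2::SharedPtr front_, rear_;
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr sub_front_, sub_rear_;
  rclcpp::Publisher<sensor_msgs::msg::PointCloud2>::SharedPtr pub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<FusionSketch>());
  rclcpp::shutdown();
  return 0;
}
```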

Object Detection

The input to this stack is a fused point cloud. This stack does ground filtering, clustering, and
computes bounding boxes.

With the exception of bounding box computation (forthcoming), all packages have been implemented by Apex.AI and are present in Autoware.Auto.
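To make the pipeline concrete, here is a condensed sketch of the three steps in plain PCL. The Autoware.Auto packages use their own implementations, the thresholds below are arbitrary placeholders, and the forthcoming component computes oriented rather than axis-aligned boxes.

```cpp
// Detection pipeline sketch: RANSAC ground removal, Euclidean
// clustering, and one axis-aligned box per cluster.
#include <utility>
#include <vector>

#include <pcl/ModelCoefficients.h>
#include <pcl/common/common.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using Box = std::pair<pcl::PointXYZ, pcl::PointXYZ>;  // min, max corners

std::vector<Box> detect(const Cloud::Ptr & in)
{
  // 1. Ground filtering: fit the dominant plane and drop its inliers.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.2);  // metres; placeholder value
  pcl::PointIndices::Ptr ground(new pcl::PointIndices);
  pcl::ModelCoefficients coeffs;
  seg.setInputCloud(in);
  seg.segment(*ground, coeffs);

  Cloud::Ptr nonground(new Cloud);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(in);
  extract.setIndices(ground);
  extract.setNegative(true);  // keep everything that is NOT ground
  extract.filter(*nonground);

  // 2. Euclidean clustering of the non-ground points.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(nonground);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.5);  // metres; placeholder value
  ec.setMinClusterSize(10);
  ec.setSearchMethod(tree);
  ec.setInputCloud(nonground);
  ec.extract(clusters);

  // 3. Axis-aligned bounding box per cluster.
  std::vector<Box> boxes;
  for (const auto & c : clusters) {
    Cloud cluster(*nonground, c.indices);
    pcl::PointXYZ min_pt, max_pt;
    pcl::getMinMax3D(cluster, min_pt, max_pt);
    boxes.emplace_back(min_pt, max_pt);
  }
  return boxes;
}
```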

Localization

NDT matching will be used for localization. This component receives a ground truth map and an
observed point cloud, and computes the transform between the two.

This component is currently in progress by Apex.AI.
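To show the interface, here is roughly what that step looks like using PCL's off-the-shelf NDT. The Apex.AI component differs in implementation, and the parameter values below are placeholders.

```cpp
// Localization sketch: register the observed scan against the map
// using PCL's NDT and return the resulting transform.
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

Eigen::Matrix4f localize(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & map,
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & scan,
  const Eigen::Matrix4f & initial_guess)  // e.g. the previous pose
{
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0f);           // NDT voxel size, metres (placeholder)
  ndt.setMaximumIterations(30);
  ndt.setTransformationEpsilon(0.01);
  ndt.setInputTarget(map);           // ground-truth point cloud map
  ndt.setInputSource(scan);          // observed (fused) point cloud

  pcl::PointCloud<pcl::PointXYZ> aligned;  // required by the API, unused here
  ndt.align(aligned, initial_guess);
  return ndt.getFinalTransformation();     // pose of the scan in the map frame
}
```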

State Estimation

A state estimator uses a history of transforms to estimate the vehicle velocity, and provide a
smoothed positional estimate.

This component can reasonably be implemented using the existing Kalman filtering package. The constant-velocity or constant-acceleration motion models may be sufficient for the AVP use case, or a more descriptive motion model can be developed.

No one has committed to implement this component.
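For reference, the constant-velocity motion model amounts to a very small filter. Below is a minimal Eigen sketch of it (the noise values are arbitrary placeholders); in practice the existing package would be used rather than code like this.

```cpp
// Constant-velocity Kalman filter sketch. State: [x, y, vx, vy].
// The position measurement would come from NDT localization.
#include <Eigen/Dense>

struct CvKalman
{
  Eigen::Vector4d x = Eigen::Vector4d::Zero();             // state estimate
  Eigen::Matrix4d P = Eigen::Matrix4d::Identity();         // state covariance
  Eigen::Matrix4d Q = 0.1 * Eigen::Matrix4d::Identity();   // process noise (placeholder)
  Eigen::Matrix2d R = 0.05 * Eigen::Matrix2d::Identity();  // measurement noise (placeholder)

  // Propagate the state under the constant-velocity model.
  void predict(double dt)
  {
    Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
    F(0, 2) = dt;  // x += vx * dt
    F(1, 3) = dt;  // y += vy * dt
    x = F * x;
    P = F * P * F.transpose() + Q;
  }

  // Correct with a map-relative position fix; velocity is estimated
  // implicitly from the history of position updates.
  void update(const Eigen::Vector2d & z)
  {
    Eigen::Matrix<double, 2, 4> H = Eigen::Matrix<double, 2, 4>::Zero();
    H(0, 0) = 1.0;
    H(1, 1) = 1.0;
    const Eigen::Matrix2d S = H * P * H.transpose() + R;
    const Eigen::Matrix<double, 4, 2> K = P * H.transpose() * S.inverse();
    x += K * (z - H * x);
    P = (Eigen::Matrix4d::Identity() - K * H) * P;
  }
};
```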

Map

The map publisher publishes the point cloud map for ground truth localization.

In addition, given the current (map-relative) pose and the target point, the map publisher may provide a short horizon of lane boundaries to the motion planner. Depending on how the HD map management is implemented, this can (and arguably should) be a separate node.

Parkopedia has tentatively committed to implement this. Apex.AI may implement some point cloud
map-related parts.
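One implementation detail worth noting: since the map is static, it can be published once with transient-local ("latched") QoS so that late-joining subscribers such as the localizer still receive it. A minimal sketch, with placeholder file path and topic name:

```cpp
// Map publisher sketch: load a PCD map and publish it once, latched.
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <pcl/io/pcd_io.h>
#include <pcl_conversions/pcl_conversions.h>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("map_publisher_sketch");

  pcl::PointCloud<pcl::PointXYZ> map;
  pcl::io::loadPCDFile("map.pcd", map);  // placeholder path

  sensor_msgs::msg::PointCloud2 msg;
  pcl::toROSMsg(map, msg);
  msg.header.frame_id = "map";

  // Transient-local QoS keeps the last message for late subscribers.
  auto pub = node->create_publisher<sensor_msgs::msg::PointCloud2>(
    "map_cloud", rclcpp::QoS(1).transient_local());
  pub->publish(msg);

  rclcpp::spin(node);  // stay alive so the latched map remains available
  rclcpp::shutdown();
  return 0;
}
```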

Planning

Given the current vehicle kinematic state, lane boundaries, objects, and target state, the motion
planner computes a dynamically feasible (?) trajectory which makes progress towards the target
state, stays within the lane boundaries, and avoids obstacles.

Transform messages may be needed to ensure all inputs to the motion planner are in a consistent
coordinate frame. This is not strictly necessary if all inputs are guaranteed to be in the same
coordinate frame.
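A sketch of that transform step with tf2, using placeholder frame names:

```cpp
// Bring a pose into the planner's working frame before planning.
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <tf2_ros/buffer.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

geometry_msgs::msg::PoseStamped to_planner_frame(
  const tf2_ros::Buffer & buffer,
  const geometry_msgs::msg::PoseStamped & in)
{
  // Throws tf2::TransformException if the transform is unavailable.
  const auto tf = buffer.lookupTransform(
    "map", in.header.frame_id, tf2::TimePointZero);
  geometry_msgs::msg::PoseStamped out;
  tf2::doTransform(in, out, tf);
  return out;
}
```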

An initial unconstrained planner and a base class have been implemented by Chris Ho/Apex.AI.

Embotech has committed to provide a full obstacle and lane-aware implementation.

Control

Given the current vehicle state and reference trajectory, the controller produces control commands
that ensure the vehicle will track the reference trajectory well.

As with the motion planner, transform messages may be needed to ensure all inputs to the controller are in a consistent coordinate frame.

Chris Ho/Apex.AI has provided an MPC controller and a base class.

Apex.AI has provided a pure pursuit controller built off the aforementioned base class.
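For intuition, the core of pure pursuit is only a few lines; the sketch below shows just the steering computation for a bicycle model, while the real implementation adds lookahead scheduling, trajectory interpolation, and velocity handling.

```cpp
// Pure pursuit steering sketch. The goal point is the trajectory point
// one lookahead distance ahead, expressed in the vehicle (base_link)
// frame with the origin at the rear axle.
#include <cmath>

double pure_pursuit_steering(
  double goal_x, double goal_y,  // lookahead point, metres
  double wheelbase)              // metres
{
  const double lookahead_sq = goal_x * goal_x + goal_y * goal_y;
  // Curvature of the arc through the rear axle and the goal point:
  // kappa = 2 * y / L_d^2
  const double curvature = 2.0 * goal_y / lookahead_sq;
  // Steering angle that produces this curvature in a bicycle model.
  return std::atan(wheelbase * curvature);
}
```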

Vehicle Interface

The vehicle interface transmits general control commands to the vehicle.

AutonomousStuff has committed to implement this/port existing code.

GUI

In the absence of higher level planning, a GUI is needed to specify the target state of the vehicle.

The GUI can also display information such as the location of obstacles and raw sensor data.

Parkopedia has tentatively agreed to provide their implementation.

Simulation

Simulation is needed to more easily test and verify an autonomous driving software stack.

We are currently committed to using the LGSVL open source simulator.

LG has committed to support this effort.