New perception architecture: message types

I think this work is being blocked by the lack of a shared understanding of what we want to achieve. For example:

  • What objects do we want to recognise, and where do we want to recognise them?
  • What sorts of data do we want to use, and at what data rates?
  • Should data be synchronised, or can information be added to a detection after the fact?
  • Do we or do we not use consecutive detections to strengthen confidence in an object’s presence?
  • How interchangeable or optional do we want different algorithms and detection types to be?

And so on. There are a huge number of open questions that need to be written down and answered before we can even begin to think about the messages themselves.
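To make that concrete, here is a minimal sketch in plain C++ (all names are hypothetical, and this is not a proposal for any actual Autoware message) showing how each of those questions forces a design decision the moment you try to write a detection message down:

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical sketch only: every field below embeds an answer to one of
// the open questions above, which is exactly why the questions come first.
struct DetectedObject {
  // "What objects do we want to recognise?" -- a class label implies an
  // agreed taxonomy (car / pedestrian / unknown / ...).
  std::string classification;
  double classification_confidence;  // 0.0 .. 1.0

  // "Where do we want to recognise them?" -- a pose in some frame; choosing
  // the frame (sensor, base_link, map) is itself a requirements decision.
  double x, y, z;

  // "Should data be synchronised?" -- a single stamp assumes the inputs
  // were fused at one time; per-source stamps would look very different.
  std::uint64_t stamp_ns;

  // "Can information be added after the fact?" -- optional fields allow
  // late enrichment (e.g. colour from a camera) but complicate consumers.
  std::optional<std::vector<std::uint8_t>> rgb;

  // "Do we use consecutive detections to strengthen presence?" -- a track
  // ID only makes sense if tracking is part of the message contract.
  std::optional<std::uint32_t> track_id;
};
```

Every field above could reasonably be argued the other way; the point is that the message cannot be fixed until the requirements are.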

In other words, we need to define our requirements before we try to meet them; otherwise we are solving an unknown or undefined problem.

We also need to keep in mind that we are designing Autoware for all Autoware users, not just for Tier IV’s favourite sensor set or AutonomousStuff’s specific demonstration. I’m not saying that is what is happening, but it is easy to forget.

Additionally, I think it would be useful to draw up a list of:

  • The different types of sensors we expect to be used. Not just ones we use now, but also ones a potential Autoware user might use.
  • The different types of data we might process. Obviously this closely relates to the sensors used, but don’t forget that post-processed data can also be the input to an algorithm, e.g. merged dense point clouds versus individual sparse point clouds, or point clouds with or without RGB data added from a camera.
  • The types of algorithms we might use: object locating, object identifying, object tracking, object predicting, etc.
  • Possible orderings of algorithms (a toy illustration follows this list).
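To illustrate that last point: whether, say, tracking runs before or after identification changes what every intermediate message must carry. A toy sketch (plain C++, hypothetical names, no relation to any real Autoware interface):

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A stage is just a named transform over a (stand-in) scene description.
// In a real design each arrow would be a typed message, and the ordering
// constrains what those message types must contain.
using Scene = std::vector<std::string>;
using Stage = std::function<Scene(const Scene&)>;

Scene run_pipeline(const std::vector<std::pair<std::string, Stage>>& stages,
                   Scene scene) {
  for (const auto& [name, stage] : stages) {
    scene = stage(scene);
    std::cout << "after " << name << ": " << scene.size() << " objects\n";
  }
  return scene;
}

int main() {
  // Toy stages that only tag objects, to keep the sketch self-contained.
  Stage locate   = [](const Scene&) { return Scene{"obj1", "obj2"}; };
  Stage identify = [](const Scene& s) {
    Scene out; for (const auto& o : s) out.push_back(o + "/car"); return out;
  };
  Stage track    = [](const Scene& s) {
    Scene out; for (const auto& o : s) out.push_back(o + "/track42"); return out;
  };

  // Ordering A: locate -> identify -> track.
  run_pipeline({{"locate", locate}, {"identify", identify}, {"track", track}},
               {});
  // Ordering B: locate -> track -> identify; the tracker now receives
  // unclassified objects, so its input message cannot require a class.
  run_pipeline({{"locate", locate}, {"track", track}, {"identify", identify}},
               {});
}
```

Enumerating the plausible orderings up front would tell us which fields must be mandatory in each message and which can be optional.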