First off: Thanks for this proposal! A standardized set of vision messages has been sorely missing for years.
I strongly believe we need a separate Pose for each object hypothesis. When meshes are used to represent the object classes, the Pose specifies the pose of the mesh’s reference frame in the
Detection3D.header.frame_id frame. For example, the reference frame of the following mesh is at the bottom of the mug and in the center of the mug’s round part, not at the center of the mesh’s bounding box:
Without a Pose for each object class, we cannot express “this object could be either a laptop in its usual orientation, or a book lying flat (i.e., rotated by 90° if your mesh is of a book standing upright)”.
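To illustrate the point, the “book lying flat” hypothesis is just a second Pose whose orientation quaternion encodes a 90° rotation relative to the upright mesh’s reference frame. A minimal sketch in plain Python (no ROS dependency; the function name is mine, not part of any proposed message):

```python
import math

def quaternion_about_x(angle_rad):
    """Quaternion (x, y, z, w) for a rotation about the X axis,
    using the same component order as geometry_msgs/Quaternion."""
    half = angle_rad / 2.0
    return (math.sin(half), 0.0, 0.0, math.cos(half))

# Hypothesis 1: laptop in its usual orientation -> identity rotation.
laptop_orientation = (0.0, 0.0, 0.0, 1.0)

# Hypothesis 2: book lying flat -> the upright book mesh rotated by 90 degrees.
book_orientation = quaternion_about_x(math.pi / 2.0)
```

Without a per-hypothesis Pose, both hypotheses would be forced to share one orientation.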
My proposal would be to either include an array of Poses in a 3D-specific
CategoryDistribution message, or (since we now have three arrays that must stay the same size) to replace those arrays with a single array of
ObjectHypothesis messages (or whatever we want to call it), each holding one id, one score, and one Pose.
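As a rough sketch of what that message could look like (field names are placeholders of mine, not part of the proposal):

```
# ObjectHypothesis.msg (hypothetical sketch)
string id                # class label
float64 score            # confidence for this hypothesis
geometry_msgs/Pose pose  # pose of this class's mesh reference frame,
                         # expressed in Detection3D.header.frame_id
```

Detection3D would then carry `ObjectHypothesis[] results` instead of three parallel arrays.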
I was also sorry to see that BoundingBox3D was removed. (This was meant to represent a bounding box of the points in
source_cloud, right?) I’ve always included this in my own message definitions, and I’ve found it extremely useful.
On the other hand, this information can be re-computed from the
source_cloud, so I can live with that (although it’s a bit wasteful). Also, other people might prefer to use a
shape_msgs/Mesh bounding_mesh, as in object_recognition_msgs/RecognizedObject, or something completely different, and it would overcomplicate the message if we included every possible kind of extra information.
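For what it’s worth, recomputing the axis-aligned box on the consumer side is cheap. A minimal numpy sketch, assuming the source_cloud has already been converted to an N×3 array (the function name is mine):

```python
import numpy as np

def aabb_from_points(points):
    """Axis-aligned bounding box (center, size) of an N x 3 point array --
    roughly what a BoundingBox3D field would have stored."""
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    center = (lo + hi) / 2.0
    size = hi - lo
    return center, size
```

So dropping the field mainly costs a little redundant computation per subscriber, not any lost information.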