The Raspberry Pi has a hardware feature that extracts motion vectors from the camera images; the vectors are produced by the hardware H.264 encoder. The feature is summarized here: https://picamera.readthedocs.io/en/release-1.13/recipes2.html#recording-motion-vector-data
There is also https://github.com/UbiquityRobotics/raspicam_node, which publishes both the image and the motion vectors as a MotionVectors.msg, defined here: https://raw.githubusercontent.com/UbiquityRobotics/raspicam_node/kinetic/msg/MotionVectors.msg
I have tested it on a robot with a front-facing camera, and it does work. It is not as good as other setups I have seen (e.g. on NVIDIA Jetsons), in that it produces more spurious vectors, but I believe it is usable to a degree.
Could we get any usable results from this feature, such as visual odometry, even a limited form of it? For example, if the robot_localization package is already fusing IMU and wheel odometry, could we also fuse a velocity estimate derived from the motion vectors? Something like the sketch below is what I have in mind.
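To make the question concrete, here is a rough sketch of the fusion idea, not a working solution: take the median motion vector over the frame (to suppress the spurious ones) and publish it as a geometry_msgs/TwistWithCovarianceStamped, which robot_localization can accept as a twistN input. The x/y fields follow the linked MotionVectors.msg, but the topic names, frame, scale factor, frame rate, and covariances are all placeholders I made up:

```python
#!/usr/bin/env python
# Sketch only: turn raspicam_node motion vectors into a weak twist estimate
# for robot_localization. Topic names, frame_id, MB_TO_M_PER_S, FRAME_RATE
# and covariances are assumptions for illustration.
import numpy as np
import rospy
from geometry_msgs.msg import TwistWithCovarianceStamped
from raspicam_node.msg import MotionVectors

MB_TO_M_PER_S = 0.002   # assumed: metres of motion per macroblock unit per frame
FRAME_RATE = 30.0       # assumed camera frame rate

def mv_callback(msg):
    x = np.array(msg.x, dtype=np.float32)
    y = np.array(msg.y, dtype=np.float32)
    # Median over all macroblocks to reject outlier (mis-)vectors.
    vx_mb = float(np.median(x))
    vy_mb = float(np.median(y))

    out = TwistWithCovarianceStamped()
    out.header.stamp = msg.header.stamp
    out.header.frame_id = "base_link"          # assumed frame
    out.twist.twist.linear.x = vx_mb * MB_TO_M_PER_S * FRAME_RATE
    out.twist.twist.linear.y = vy_mb * MB_TO_M_PER_S * FRAME_RATE
    # Large hand-tuned covariance so the filter treats this as a weak hint.
    out.twist.covariance[0] = 0.5   # var(vx)
    out.twist.covariance[7] = 0.5   # var(vy)
    pub.publish(out)

rospy.init_node("mv_twist")
pub = rospy.Publisher("camera_twist", TwistWithCovarianceStamped, queue_size=10)
rospy.Subscriber("raspicam_node/motion_vectors", MotionVectors, mv_callback)
rospy.spin()
```

In the robot_localization config this would then just be one more twistN source with only the linear x/y velocity elements enabled, next to the IMU and wheel odometry inputs.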
Or could it be used to detect situations like the robot being stuck while the wheels are slipping?
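For the stuck/slipping case, the simplest thing I can imagine is comparing the speed reported by the wheel odometry against how much motion the encoder actually sees in the image, and flagging slip when the wheels claim to be moving but the scene is nearly static. Again just a sketch; the topic names and thresholds are invented:

```python
#!/usr/bin/env python
# Sketch only: flag wheel slip when wheel odometry reports motion but the
# camera's motion vectors show an almost static scene. Topic names and
# thresholds are assumptions for illustration.
import numpy as np
import rospy
from nav_msgs.msg import Odometry
from std_msgs.msg import Bool
from raspicam_node.msg import MotionVectors

WHEEL_SPEED_MIN = 0.05   # m/s above which the wheels claim we are moving
SCENE_MOTION_MIN = 0.5   # macroblock units of median optical motion considered "moving"

wheel_speed = 0.0

def odom_cb(msg):
    global wheel_speed
    wheel_speed = abs(msg.twist.twist.linear.x)

def mv_cb(msg):
    x = np.array(msg.x, dtype=np.float32)
    y = np.array(msg.y, dtype=np.float32)
    scene_motion = float(np.median(np.hypot(x, y)))
    slipping = wheel_speed > WHEEL_SPEED_MIN and scene_motion < SCENE_MOTION_MIN
    pub.publish(Bool(data=slipping))

rospy.init_node("slip_detector")
pub = rospy.Publisher("wheel_slip", Bool, queue_size=1)
rospy.Subscriber("odom", Odometry, odom_cb)
rospy.Subscriber("raspicam_node/motion_vectors", MotionVectors, mv_cb)
rospy.spin()
```

Would either of these be reasonable uses of the motion vector data, or is the quality from the Pi encoder too poor for this in practice?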