Visual Odometry / Helper from the Raspberry Pi's motion vector output

Hello,

The Raspberry Pi has a hardware feature that extracts motion vectors from camera images; it comes from the hardware H.264 encoder, as a by-product of the encoding process. The feature is summarized here: https://picamera.readthedocs.io/en/release-1.13/recipes2.html#recording-motion-vector-data
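For reference, the linked recipe boils down to something like the following sketch. The PiMotionAnalysis callback receives one structured array per frame, with one entry per 16x16 macroblock (plus one extra column): int8 x/y components and a uint16 sum-of-absolute-differences (sad) score.

```python
import numpy as np
import picamera
import picamera.array

class MotionPrinter(picamera.array.PiMotionAnalysis):
    # analyse() is called once per frame with the motion vector grid.
    def analyse(self, a):
        mag = np.sqrt(a['x'].astype(np.float32) ** 2 +
                      a['y'].astype(np.float32) ** 2)
        print('mean vector magnitude: %.2f' % mag.mean())

with picamera.PiCamera(resolution=(640, 480), framerate=30) as camera:
    with MotionPrinter(camera) as output:
        # Encode to /dev/null; we only want the motion side-channel.
        camera.start_recording('/dev/null', format='h264',
                               motion_output=output)
        camera.wait_recording(10)
        camera.stop_recording()
```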

There is also the UbiquityRobotics raspicam_node (https://github.com/UbiquityRobotics/raspicam_node), which publishes both the image and the motion vectors as a MotionVectors.msg, defined here: https://raw.githubusercontent.com/UbiquityRobotics/raspicam_node/kinetic/msg/MotionVectors.msg
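A minimal rospy subscriber for that message might look like the sketch below. I am going by the field names in the linked .msg definition (per-macroblock x/y arrays, a sad array, and the mbx/mby grid dimensions); the topic name is a guess, so check `rostopic list` on your setup.

```python
import numpy as np
import rospy
from raspicam_node.msg import MotionVectors

def callback(msg):
    # Per-macroblock vector components and SAD score, per the
    # linked MotionVectors.msg definition.
    x = np.asarray(msg.x, dtype=np.float32)
    y = np.asarray(msg.y, dtype=np.float32)
    mag = np.sqrt(x ** 2 + y ** 2)
    rospy.loginfo('grid %dx%d, mean magnitude %.2f',
                  msg.mbx, msg.mby, mag.mean())

rospy.init_node('motion_vector_listener')
# Topic name is an assumption; check `rostopic list` on your robot.
rospy.Subscriber('/raspicam_node/motion_vectors', MotionVectors, callback)
rospy.spin()
```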

I have tested it on a robot with a front-looking camera, and it does work. It is not as good as other setups I have seen (e.g. on NVIDIA Jetsons) in that it produces more spurious vectors, but I believe it is usable to a degree.

Could we get any usable results from this feature, such as visual odometry, even if limited? For example, when using the robot_localization package with IMU and wheel odometry as inputs, could we also fuse data derived from the motion vectors?
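What I have in mind is something like the sketch below: collapse each frame's vector field into a single crude twist estimate and publish it as a geometry_msgs/TwistWithCovarianceStamped, which robot_localization accepts as a twistN input (with only the linear x/y fields enabled and a large covariance). The topic names, thresholds, and especially the scale factor are assumptions; the scale would need calibration, since macroblock displacements only map to metres through the camera geometry.

```python
import numpy as np
import rospy
from geometry_msgs.msg import TwistWithCovarianceStamped
from raspicam_node.msg import MotionVectors

# Hypothetical calibration constants: how one macroblock of image
# displacement per frame maps to metres per second. These depend
# entirely on camera height, tilt, lens, and framerate.
SCALE_MPS_PER_BLOCK = 0.01
FRAME_RATE = 30.0

def callback(msg):
    x = np.asarray(msg.x, dtype=np.float32)
    y = np.asarray(msg.y, dtype=np.float32)
    sad = np.asarray(msg.sad, dtype=np.float32)
    # Keep only well-matched blocks (low SAD); the cut-off is a guess.
    good = sad < np.percentile(sad, 50)
    if not good.any():
        return
    out = TwistWithCovarianceStamped()
    out.header.stamp = rospy.Time.now()
    out.header.frame_id = 'base_link'  # assumes camera ~ base frame
    # Mean image-plane flow -> forward/lateral velocity (very crude).
    out.twist.twist.linear.x = float(y[good].mean()) * SCALE_MPS_PER_BLOCK * FRAME_RATE
    out.twist.twist.linear.y = float(x[good].mean()) * SCALE_MPS_PER_BLOCK * FRAME_RATE
    # Large covariance so the filter treats this as a weak hint only.
    out.twist.covariance = np.diag(
        [0.5, 0.5, 1e6, 1e6, 1e6, 1e6]).flatten().tolist()
    pub.publish(out)

rospy.init_node('mv_to_twist')
pub = rospy.Publisher('/mv_twist', TwistWithCovarianceStamped, queue_size=10)
rospy.Subscriber('/raspicam_node/motion_vectors', MotionVectors, callback)
rospy.spin()
```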

Or could it be used for detecting situations like when the robot is stuck and its wheels are slipping? A sketch of that idea follows.
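The crude version I can imagine: compare the speed reported by the wheel odometry against the gross visual motion, and flag a probable slip when the wheels claim motion but the image barely moves. Topic names and thresholds below are guesses and would need tuning.

```python
import numpy as np
import rospy
from nav_msgs.msg import Odometry
from raspicam_node.msg import MotionVectors

wheel_speed = 0.0

def odom_cb(msg):
    global wheel_speed
    wheel_speed = abs(msg.twist.twist.linear.x)

def mv_cb(msg):
    x = np.asarray(msg.x, dtype=np.float32)
    y = np.asarray(msg.y, dtype=np.float32)
    visual = float(np.sqrt(x ** 2 + y ** 2).mean())
    # Wheels say we are moving, but the scene is not: probable slip.
    if wheel_speed > 0.05 and visual < 0.5:
        rospy.logwarn('possible wheel slip: odom %.2f m/s, visual %.2f',
                      wheel_speed, visual)

rospy.init_node('slip_detector')
rospy.Subscriber('/odom', Odometry, odom_cb)
rospy.Subscriber('/raspicam_node/motion_vectors', MotionVectors, mv_cb)
rospy.spin()
```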

Best regards,
C.A.


What exactly do you mean by “motion vector” here? There are a lot of ways to describe motion, but I think what’s going on here is that you are getting vectors of objects in screen pixel space (i.e. how much something moves on the screen). A lot of MPEG-family encoders calculate some really basic heuristics that allow them to swap out compression techniques based on the underlying activity in a video, and I think this is what they mean by “motion vectors.” From what I read in the docs, this is probably a really rudimentary calculation.

I don’t think this is going to be particularly useful for odometry, as you’re going to get a lot of false positives and a really noisy signal. I think you might be able to use this data for some really primitive signals (like whether the robot is moving or rotating), but I don’t think it would be suitable for visual SLAM even with an IMU. You might also be able to pull out some really basic brightness and color information from the MPEG encoder, which you could use for something like line following or ball tracking, but that’s a bit of a conjecture.
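For example, a crude moving/turning classifier along those lines might just look at the mean flow direction per frame. This is only a sketch; the thresholds and the geometric assumption (forward-looking camera, so yaw shows up as roughly uniform horizontal flow and forward translation mostly as vertical flow) are mine.

```python
import numpy as np

def classify_motion(x, y, move_thresh=0.5):
    """Crude per-frame classifier from the motion vector grid.

    x, y: float arrays of per-macroblock vector components.
    Thresholds are guesses and would need tuning on a real robot.
    """
    mean_x, mean_y = float(np.mean(x)), float(np.mean(y))
    if abs(mean_x) < move_thresh and abs(mean_y) < move_thresh:
        return 'stationary'
    return 'turning' if abs(mean_x) > abs(mean_y) else 'translating'
```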

Cool find though! That’s a really useful set of tools.