REP Draft about Motion Capture Systems

Dear ROS Community,

As we introduced in Motion Capture Systems in ROS 2 and discussed in the meeting, I have started a REP to try to standardize the MOCAP drivers in ROS 2.

As stated in REP 1 -- REP Purpose and Guidelines (ROS.org), the next step is posting a REP draft to discourse.ros.org. The initial version of the draft is at rep/rep-XXXX.rst at mocap_proposal · fmrico/rep · GitHub, and I invite everyone interested to provide feedback in this thread.

I hope it helps.
Francisco


Many systems will also feed you an estimate of marker/body velocity and acceleration. Do you have any thoughts on if/how that should be integrated into the REP, alongside pose? vrpn_client_ros just advertised separate topics for pose, velocity, and acceleration.
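To make that concrete, here is a minimal sketch of what a consumer of such separate streams could look like in ROS 2, assuming geometry_msgs PoseStamped/TwistStamped/AccelStamped messages and hypothetical per-body topic names (none of which are mandated by the draft REP):

```python
# Hypothetical subscriber to separate pose/velocity/acceleration topics,
# following the vrpn_client_ros style of one topic per quantity.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped, TwistStamped, AccelStamped


class MocapListener(Node):
    def __init__(self):
        super().__init__('mocap_listener')
        # Topic names are illustrative assumptions, not part of the proposal.
        self.create_subscription(PoseStamped, 'rigid_body_1/pose', self.on_pose, 10)
        self.create_subscription(TwistStamped, 'rigid_body_1/twist', self.on_twist, 10)
        self.create_subscription(AccelStamped, 'rigid_body_1/accel', self.on_accel, 10)

    def on_pose(self, msg):
        self.get_logger().info('pose received')

    def on_twist(self, msg):
        self.get_logger().info('twist received')

    def on_accel(self, msg):
        self.get_logger().info('accel received')


def main():
    rclpy.init()
    rclpy.spin(MocapListener())
    rclpy.shutdown()
```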

Interesting comment. I have checked Vicon, OptiTrack, and Qualisys, and, as far as I can tell, none of these vendors’ SDKs produces information about the velocity and acceleration of the markers/rigid bodies.

If the question is whether ROS 2 drivers should implement this functionality (using a Kalman filter, for example), my opinion is NO. They should only bridge the information from the mocap system into the computation graph. Calculating velocity and acceleration is probably better left to an application that takes the data from the ROS 2 mocap drivers and computes it, and that application should stay out of the REP. Thanks to the content of this REP, such an application would work with any mocap system, which is why this REP is so useful. In any case, it is only my opinion. Please, change my mind :yum:
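To illustrate the separation I have in mind, here is a minimal sketch (not part of the proposal) of a downstream node that consumes PoseStamped from any compliant driver and derives a TwistStamped by naive finite differencing. The topic names and the differencing scheme are assumptions; a real estimator would likely filter the data.

```python
# Sketch: velocity estimation kept outside the mocap driver.
# Works with any driver that publishes PoseStamped on a known topic.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped, TwistStamped


class VelocityEstimator(Node):
    def __init__(self):
        super().__init__('velocity_estimator')
        self.prev = None
        self.pub = self.create_publisher(TwistStamped, 'rigid_body_1/twist', 10)
        self.create_subscription(PoseStamped, 'rigid_body_1/pose', self.on_pose, 10)

    def on_pose(self, msg):
        if self.prev is not None:
            # Time difference from the message stamps (sec + nanosec fields).
            dt = msg.header.stamp.sec - self.prev.header.stamp.sec
            dt += (msg.header.stamp.nanosec - self.prev.header.stamp.nanosec) * 1e-9
            if dt > 0.0:
                twist = TwistStamped()
                twist.header = msg.header
                twist.twist.linear.x = (msg.pose.position.x - self.prev.pose.position.x) / dt
                twist.twist.linear.y = (msg.pose.position.y - self.prev.pose.position.y) / dt
                twist.twist.linear.z = (msg.pose.position.z - self.prev.pose.position.z) / dt
                self.pub.publish(twist)
        self.prev = msg


def main():
    rclpy.init()
    rclpy.spin(VelocityEstimator())
    rclpy.shutdown()
```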

Another thing I have seen is that maybe we should include skeleton messages.

Should there be any special treatment of gaps in the data? In my experience with Vicon, if there are any dropouts (typically when fewer than 2 or 3 cameras have sight of a marker), the marker data for those frames are filled with NaNs. Fixing this via gap filling is then a manual post-capture process.

I’m not sure if this behaviour is common among vendors. If not (e.g. one vendor SDK returns NaN, another returns None), some standardisation may be needed.
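As a discussion aid, one possible standardisation would be for drivers to detect the vendor’s dropout marker (NaN, None, or similar) and handle it uniformly, e.g. by skipping the sample. The sketch below is only an illustration; the is_valid_sample() helper and the drop-the-frame policy are assumptions, not something in the draft.

```python
# Sketch: uniform handling of dropout frames before publishing.
import math


def is_valid_sample(x, y, z):
    """Return False if the vendor SDK marked this frame as a dropout (NaN)."""
    return not any(v is None or math.isnan(v) for v in (x, y, z))


# Example: only publish when the sample is valid.
samples = [(0.1, 0.2, 0.3), (float('nan'), float('nan'), float('nan')), (None, None, None)]
for x, y, z in samples:
    if is_valid_sample(x, y, z):
        print(f'publish pose ({x}, {y}, {z})')
    else:
        print('dropout frame: skip publishing (or flag it via diagnostics)')
```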


From my emails with VICON back in 2016, they implemented this inside their VRPN server, so perhaps they don’t expose it as part of their SDK, which I avoided anyway due to license issues.

Anyway, this is something VRPN implements, and VRPN is possibly the closest thing to a ‘cross-platform motion capture API’ that exists, so it might make sense to include it in the REP.

I will try to address this in the final version. Please stay tuned in case I forget it.

> Anyway, this is something VRPN implements, and VRPN is possibly the closest thing to a ‘cross-platform motion capture API’ that exists, so it might make sense to include it in the REP.

I disagree because:

  1. That would force mocap_optitrack, motion_capture_tracking, and MOCAP4ROS2 to implement this functionality.
  2. It is not a good argument. It is like saying that, because one camera driver detects objects in an image, every camera driver must do that processing. Let’s think about what is reasonable from the point of view of architecture and standardization.

Any opinions on this specific point?

It might make sense to define required and optional parts of the spec, so that implementations that can expose the information do so in a consistent way. I certainly would have found a lot of value in having acceleration and velocity directly exposed when working with motion capture, but VICON hadn’t implemented it yet.
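One possible shape for that is sketched below, under assumed parameter and topic names: the pose topic is the required part of the spec and is always advertised, while twist/accel are optional and only advertised when the driver can actually provide them.

```python
# Sketch: required vs. optional parts of a hypothetical mocap driver spec.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped, TwistStamped, AccelStamped


class MocapDriver(Node):
    def __init__(self):
        super().__init__('mocap_driver')
        # Required part of the spec: pose is always published.
        self.pose_pub = self.create_publisher(PoseStamped, 'rigid_body_1/pose', 10)

        # Optional parts: only advertised if the backend exposes them.
        # Parameter names here are illustrative assumptions.
        self.declare_parameter('publish_twist', False)
        self.declare_parameter('publish_accel', False)
        self.twist_pub = None
        self.accel_pub = None
        if self.get_parameter('publish_twist').value:
            self.twist_pub = self.create_publisher(TwistStamped, 'rigid_body_1/twist', 10)
        if self.get_parameter('publish_accel').value:
            self.accel_pub = self.create_publisher(AccelStamped, 'rigid_body_1/accel', 10)


def main():
    rclpy.init()
    rclpy.spin(MocapDriver())
    rclpy.shutdown()
```

Consumers could then rely on the required pose stream from any implementation, and check at runtime whether the optional streams are present.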