
MOCAP4ROS2: Motion Capture Systems in ROS2

Dear ROS community,

MOCAP4ROS2 is a Focused Technical Project (FTP) funded by EU ROSIN and coordinated with the Eurobench project. The goal is to standardize the integration of different motion capture systems in ROS2.

I want to open a discussion with members of the ROS community and with companies interested in motion capture systems. We would like feedback on the proposed design and an open discussion about messages, formats, and processes. We are currently running experiments to determine the appropriate QoS settings for the communication between nodes.

We will support the Vicon system and Technaid IMUs, which will be used in Eurobench. Our design makes it possible to replace any MOCAP system (for example, to use Optitrack instead of Vicon) or to incorporate new ones. The next figure shows the current design:

  • Driver Layer: Nodes in this layer depend on each MOCAP System vendor. All vision-based nodes would publish messages in the same format; the same goes for IMU-based systems, or any other type of MOCAP.
  • Composer Layer: Nodes in this layer are independent of any particular MOCAP System. The output of these nodes is TFs, and possibly some metadata.
  • Application Layer: This layer contains applications that use the information from MOCAP Systems. In the case of Eurobench, there will be a skeleton composer and other performance measurement components.

We want this project to be useful to the ROS community beyond the funding projects, which is why we would like to open this discussion to everyone interested in using or contributing to it.

Best regards


To continue with the discussion, these are the messages that we are using:

For vision-based MOCAPs:


std_msgs/Header header
uint32 frame_number
mocap4ros_msgs/Marker[] markers

Marker.msg (currently it has more fields, but they are Vicon-dependent, so we are removing them)

geometry_msgs/Point translation
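For illustration, a Composer Layer node could reduce such a marker list to a single rigid-body translation. The sketch below is a minimal, hypothetical example in plain Python (no ROS dependencies); the centroid-averaging strategy and the dict field names are assumptions for illustration, not part of the proposed design:

```python
# Hypothetical composer-layer helper: estimate a rigid body's translation
# as the centroid of its visible markers. The (x, y, z) fields mirror the
# geometry_msgs/Point translation in the proposed Marker.msg, but this is
# a plain-Python sketch, not the actual node.

def body_translation(markers):
    """Average the (x, y, z) translations of a list of visible markers."""
    if not markers:
        raise ValueError("no visible markers for this body")
    n = len(markers)
    sx = sum(m["x"] for m in markers)
    sy = sum(m["y"] for m in markers)
    sz = sum(m["z"] for m in markers)
    return (sx / n, sy / n, sz / n)

# Example: three markers attached to one rigid body
markers = [
    {"x": 0.0, "y": 0.0, "z": 1.0},
    {"x": 0.2, "y": 0.0, "z": 1.0},
    {"x": 0.1, "y": 0.3, "z": 1.0},
]
print(body_translation(markers))  # centroid of the three markers
```

In the real node this result would be stamped with the header's frame and published as a TF, but that part depends on the final message design.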

For IMU-based MOCAPs, we will use sensor_msgs/msg/Imu, but in the Composer Layer we have to provide a config file (.yaml) with the distances between sensors.
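As a purely hypothetical illustration of what such a config file could look like, the inter-sensor distances might be expressed along these lines; the key names and values here are assumptions, not the actual format:

```yaml
# Hypothetical IMU composer config: distances (in meters) between
# consecutive IMU sensors on a limb. Key names are illustrative only.
imu_chain:
  sensors: [pelvis_imu, thigh_imu, shank_imu, foot_imu]
  distances:                     # from each sensor to the next, in meters
    pelvis_imu-thigh_imu: 0.25
    thigh_imu-shank_imu: 0.40
    shank_imu-foot_imu: 0.42
```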

Hi @fmrico, looks like a great initiative!

As a point of reference, some time ago I put together vrpn_client_ros, which had a similar goal of bridging MOCAP into ROS in a vendor-neutral way.

Not being particularly interested in writing vendor-specific clients, I used VRPN as the interface. That project is not particularly active anymore, but it may still be useful. There are certainly a lot of hoops to jump through to link against the VICON SDK blobs in an otherwise open-source project. Kudos to the VICON engineers for adding velocity and acceleration output to their VRPN server implementation on request.

My other main takeaway: consider your interfaces carefully. Some consumers may need /tf*, while others may want to subscribe to information about just one object and would prefer you didn't pollute the /tf tree of all connected nodes (since /tf can be quite chatty).


Hello @fmrico, this is pretty cool work. Regarding Marker.msg, it seems that some of the fields you removed as being specific to Vicon are not that uncommon in other mocap systems, such as the Qualisys system we are using. Maybe it would be better to keep those fields (e.g. subject_name and occluded) and just leave them blank for systems that do not support that specific feature? We would probably need to use an integer as the datatype for occluded, since it should be nullable…

Also for our purposes it would be useful to include the marker id in the message.
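To make the suggestion concrete, a vendor-neutral Marker.msg could look roughly like this; this is only a sketch, and the exact field names and the -1 sentinel for occluded are assumptions, not a settled proposal:

```
uint32 id                        # marker id
string subject_name              # empty if the system has no subject concept
int8 occluded                    # -1 = unknown/unsupported, 0 = visible, 1 = occluded
geometry_msgs/Point translation
```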

The Qualisys SDK also supports integrating force plates, which might be useful for some users.


Thanks for the feedback