Experience with PTP (Precision Time Protocol) for mobile robots

This thread is informative for time synchronization, and I’m adding some ideas here from our experience.

There are many different methods for time synchronization across clock domains, and we let the application drive the requirements for the accuracy, drift, and jitter of time measurements.

NTP is very useful when we need a wall-clock measurement for an application or for recorded data.

For synchronization between computers for communication, we rely on PTP, as the network topology is fixed. Software PTP allows broader compatibility across hardware devices, but comes with more drift and jitter in clock synchronization compared to PTP in hardware. For most high-level applications (e.g. monitoring) this can be sufficient.
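As a concrete illustration, with linuxptp the choice between software and hardware timestamping is a single flag (the interface name `eth0` is a placeholder; hardware mode requires a NIC with a PTP hardware clock):

```shell
# Software timestamping: runs on most NICs, at the cost of more
# drift and jitter in the synchronized clock
ptp4l -i eth0 -S -m

# Hardware timestamping (the default): requires PHC support in the NIC
ptp4l -i eth0 -H -m

# Discipline the system clock from the NIC's PTP hardware clock,
# waiting until ptp4l has synchronized first
phc2sys -s eth0 -w -m
```

These commands need root and real PTP-capable hardware, so treat them as an invocation sketch rather than a copy-paste recipe.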

When tighter accuracy is needed, we implement PTP in a combination of hardware and firmware; this carries the added cost of verifying interoperability between the hardware components. It takes effort to have the computer, switches, and sensors all support PTP in hardware.

For sensors, time synchronization methods are more fragmented. Acquisition time is when the sensor reads measurements (i.e. photons or waves). We strive for the highest accuracy on the acquisition time of the sensor data when we need it: we measure acquisition time where possible, and otherwise derive it from the arrival time at the computer. When we are unable to measure acquisition time, we strive for the lowest jitter on the arrival time of the sensor data at the computer, so that the derived acquisition time is more reliable.
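A minimal sketch of deriving acquisition time from arrival time, under the assumption of a fixed-rate sensor and transport latency that is a constant plus non-negative jitter (the function name and model are mine, not from any particular stack):

```python
def estimate_acquisition_times(arrival_times, period):
    """Dejitter arrival times for a fixed-rate sensor.

    Model: acquisition[i] = t0 + i * period, and
           arrival[i] = acquisition[i] + latency + jitter[i], jitter >= 0.
    The sample with the least jitter anchors the sequence; the constant
    transport latency remains folded into t0 and must be calibrated out
    separately if absolute acquisition time is needed.
    """
    residuals = [t - i * period for i, t in enumerate(arrival_times)]
    t0 = min(residuals)  # least-jitter sample anchors the clock
    return [t0 + i * period for i in range(len(arrival_times))]
```

This is why low arrival-time jitter matters: the lower the jitter, the closer the minimum residual is to the true constant latency.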

In practice for sensors we timestamp key events using timestamping hardware in our computer with 2us accuracy. We use this hardware timestamp as the singular common clock source to align the multiple concurrent clock domains where acquisition and arrival time measurements occur. Timestamps are captured by the hardware for the Linux kernel clock, PTP, PPS, and several camera interface events (i.e. frame sync, frame start, frame stop), so we can convert time between the various fragmented clocks. This is the method we invest the most effort in, as it provides the highest accuracy for recorded sensor streams, which are inputs to sensor fusion functions, offline development of perception systems, and offline re-simulation of open-loop perception systems on real data for testing. Accuracy of acquisition time for these operations correlates directly with the three-dimensional accuracy of perception.

LIDAR is used with PTP over Ethernet, with PPS measuring acquisition time. In indoor applications, the computer generates the PPS, as GPS does not work; in both cases we hardware-timestamp the PPS to accurately derive acquisition time. RADAR is typically PTP on Ethernet, and hardware-timestamped on a CAN interface, both providing arrival time. Camera and IMU timestamps are very precise, taken in hardware, as we connect them directly to the computer; frame synchronization is used to trigger all the cameras to capture at the same time for an aligned observation of data.
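For the LIDAR case, the conversion from a PPS-relative point offset to the host clock is a single addition once the PPS edge has been hardware-timestamped. A hypothetical sketch (the function and field names are illustrative; many GPS-synced lidars report per-point offsets from the last PPS pulse):

```python
def lidar_acquisition_time(pps_edge_host_time, point_offset_us):
    """Map a lidar point's PPS-relative offset into the host clock.

    pps_edge_host_time: hardware timestamp of the last PPS rising edge,
                        in host-clock seconds.
    point_offset_us:    the lidar-reported microseconds since that edge.
    """
    return pps_edge_host_time + point_offset_us * 1e-6
```

The accuracy of the result is bounded by the hardware timestamp of the PPS edge, which is why we timestamp PPS in hardware rather than in software.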

Hardware timestamping mechanisms allow the platform to maintain accuracy under heavy load.

The accuracy of time synchronization depends on the application.
