Hey all,
I’d like to bring everyone’s attention to a new REP that is ready to be reviewed and receive input from the wider ROS community: [REP-2014] Benchmarking performance in ROS 2. This proposal is inspired by and builds on the work of many others in this community. It proposes a reference benchmarking approach for ROS 2 systems that is already being used in real scenarios, including perception and mapping [1], hardware acceleration [2][3], and self-driving mobility [4].
I’m sharing the motivation section below, but I encourage everyone to read the full draft in the PR:
Benchmarking is the act of running a computer program to assess its relative performance. In the context of ROS 2, performance information can help roboticists design more efficient robotic systems and select the right hardware for their robotic application. It can also help them understand the trade-offs between different algorithms that implement the same capability and choose the best approach for their use case. Performance data can also be used to compare different versions of ROS 2 and to identify regressions. Finally, performance information can help prioritize future development efforts.
The myriad combinations of robot hardware and robotics software make assessing robotic-system performance in an architecture-neutral, representative, and reproducible manner challenging. This REP attempts to provide guidelines that help roboticists benchmark their systems in a consistent and reproducible manner by following a quantitative approach. It also provides a set of tools and examples to guide roboticists in collecting and reporting performance data.
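As a rough illustration of what such a quantitative approach looks like in practice (this sketch is not part of the REP's tooling, and the helper name `summarize_latencies` is hypothetical), a benchmark run typically reduces raw per-message latency samples to a small set of summary statistics that can be reported and compared across runs:

```python
import random
import statistics

def summarize_latencies(samples_ms):
    """Reduce a list of per-message latencies (milliseconds) to the
    kind of summary statistics a benchmark report usually includes."""
    ordered = sorted(samples_ms)
    # Index of the 95th-percentile sample (simple nearest-rank estimate).
    p95_index = max(0, int(0.95 * len(ordered)) - 1)
    return {
        "samples": len(ordered),
        "mean_ms": statistics.mean(ordered),
        "median_ms": statistics.median(ordered),
        "p95_ms": ordered[p95_index],
        "max_ms": ordered[-1],
    }

# Simulated end-to-end latencies for one run; in a real benchmark these
# would come from instrumentation, e.g. tracing a ROS 2 node graph.
random.seed(0)
latencies = [random.gauss(5.0, 1.0) for _ in range(1000)]
print(summarize_latencies(latencies))
```

Reporting percentiles and maxima alongside the mean matters because robotic workloads often care about worst-case behavior, which averages alone hide.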
Value for stakeholders:
- Package maintainers can use these guidelines to integrate performance benchmarking data in their packages.
- Consumers can use the REP's guidelines to benchmark ROS Nodes and Graphs in an architecture-neutral, representative, and reproducible manner, and can use the corresponding performance data offered in ROS packages to set expectations about each package's capabilities.
- Hardware vendors and robot manufacturers can use these guidelines to show evidence of the performance of their solutions with ROS in an architecture-neutral, representative, and reproducible manner.
Reviews, comments and thoughts are very welcome.
[1] Lajoie, Pierre-Yves, Christophe Bédard, and Giovanni Beltrame. “Analyze, Debug, Optimize: Real-Time Tracing for Perception and Mapping Systems in ROS 2.” arXiv preprint arXiv:2204.11778 (2022).
[2] Mayoral-Vilches, V., S. M. Neuman, B. Plancher, and V. J. Reddi. “RobotCore: An Open Architecture for Hardware Acceleration in ROS 2” (2022). https://arxiv.org/pdf/2205.03929.pdf
[3] Mayoral-Vilches, V. “Kria Robotics Stack” (2021). https://www.xilinx.com/content/dam/xilinx/support/documentation/white_papers/wp540-kria-robotics-stack.pdf
[4] Li, Zihang, Atsushi Hasegawa, and Takuya Azumi. “Autoware_Perf: A tracing and performance analysis framework for ROS 2 applications.” Journal of Systems Architecture 123 (2022): 102341.