RobotPerf benchmarks, the benchmarking suite to evaluate robotics computing performance using ROS 2

Dear all,

As introduced in the last HAWG meeting (#12), a group of robotics leaders from industry, academia and research labs participating in the WG is pushing forward RobotPerf, an open reference benchmarking suite that evaluates robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems. More details about RobotPerf will be discussed later today at the Hardware Acceleration WG meeting #13.

The project’s mission is to build open, fair and useful robotics benchmarks that are technology-agnostic, vendor-neutral and provide unbiased evaluations of robotics computing performance for hardware, software, and services. The benchmarks are designed to be representative of the performance of a robotic system and to be reproducible across different robotic systems. To that end, RobotPerf builds on top of ROS 2.

Why RobotPerf?

The myriad combinations of robot hardware and robotics software make assessing robotic-system performance challenging, especially in an architecture-neutral, representative, and reproducible manner. RobotPerf addresses this issue by delivering a reference performance benchmarking suite that evaluates robotics computing performance across CPUs, GPUs, FPGAs and other compute accelerators.

Mission

Represented by a consortium of robotics leaders from industry, academia and research labs, RobotPerf is organized as an open project whose mission is to build open, fair and useful robotics benchmarks that are technology-agnostic, vendor-neutral and provide unbiased evaluations of robotics computing performance for hardware, software, and services.

Vision

Benchmarking helps assess performance. Performance information can help roboticists design more efficient robotic systems and select the right hardware for each robotic application. It can also help understand the trade-offs between different algorithms that implement the same capability.

Standards

RobotPerf benchmarks align with robotics standards so that you don’t spend time reinventing the wheel and re-developing what already works. Benchmarks are conducted using ROS 2 as the common baseline. RobotPerf also aligns with standardization initiatives within the ROS ecosystem related to computing performance and benchmarking, such as REP 2008 (ROS 2 Hardware Acceleration Architecture and Conventions) and REP 2014 (Benchmarking performance in ROS 2).

Benchmarks

RobotPerf benchmarks aim to cover the complete robotics pipeline. Benchmarks are initially organized into the following categories, though feedback is welcome; new benchmarks will be added and new categories may appear over time. If you wish to contribute a new benchmark, please read the contributing guidelines.

a. Perception: perception benchmarks
b. Localization: localization benchmarks
c. Control: control benchmarks
d. Navigation: navigation benchmarks
e. Manipulation: manipulation benchmarks

To simplify usability, the benchmarks are hosted in a GitHub repository that is a ROS meta-package: robotperf/benchmarks. Each benchmark (a ROS 2 package) should live in the corresponding subfolder of that meta-package. So that benchmark information can easily be consumed by other tools, each benchmark should be defined in a machine-readable format using the YAML data serialization language: a file named benchmark.yaml, placed in the root of the ROS 2 package, describes the benchmark and any of its results. See a benchmark.yaml example.
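To illustrate the idea, here is a minimal sketch of how another tool might consume such a benchmark.yaml file with PyYAML. The field names used below (id, name, category, metric, results) are illustrative assumptions only; the authoritative schema is the one given in the Benchmark Specification and the linked example.

```python
# Minimal sketch: parsing a hypothetical benchmark.yaml with PyYAML.
# The field names below are assumptions for illustration, not the official schema.
import yaml  # pip install pyyaml

EXAMPLE_BENCHMARK_YAML = """
id: a1_perception_example        # hypothetical benchmark id
name: Example perception benchmark
category: perception
metric: latency_ms
results:
  - hardware: example_cpu       # hypothetical result entry
    value: 12.3
"""

benchmark = yaml.safe_load(EXAMPLE_BENCHMARK_YAML)
print(f'{benchmark["name"]} ({benchmark["category"]}), metric: {benchmark["metric"]}')
for result in benchmark.get("results", []):
    print(f'  {result["hardware"]}: {result["value"]}')
```

Keeping each description in a single YAML file next to its ROS 2 package is what lets dashboards or CI scripts aggregate results across benchmarks without any benchmark-specific code.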

Refer to the Benchmark Specification for more details on how to build new benchmarks.
