RobotPerf benchmarks "beta" release

Hello ROS community!

Together with Intel, AMD, Harvard, Klagenfurt University, Georgia Institute of Technology, Boston University, Johannes Kepler University Linz, Ford, Barnard College, Columbia University and Carnegie Mellon University we at Acceleration Robotics are thrilled to introduce the beta release of RobotPerf Benchmarks, an advanced benchmarking suite crafted specifically to evaluate robotics computing performance using ROS 2 as its baseline. In this beta release, we not only showcase new benchmarks and results but also introduce novel visualization capabilities. The complete release is available at Release RobotPerf benchmarks beta · robotperf/benchmarks · GitHub.

The complete announcement is available at Announcing the RobotPerf™ Benchmarks Beta Release: An industry standard for benchmarking robotic brains. You can download and read our paper here.

Grey-Box and Black-Box: Two Benchmarking Approaches for Robot Brains

We’re not just about results; we’re also about flexibility. RobotPerf™ offers two distinct benchmarking approaches:

  • GREY-BOX - Tailored for real-world applications, this detailed approach relies on low-overhead instrumentation that incurs a minimal average latency of only 3.3 µs.
  • BLACK-BOX - Ideal for quick prototyping, it offers a simpler but slightly less detailed analysis.

Both approaches ensure that every robotic architect finds a benchmarking solution aligned with their specific needs.
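
To give a flavor of the black-box idea, here is a minimal rclpy sketch (illustrative only, not the suite's actual instrumentation): it stamps a stimulus message, feeds it to the graph under test, and measures end-to-end latency when the corresponding output arrives. It assumes the graph under test propagates the input header stamp to its output, and the topic names are placeholders.

```python
# Minimal black-box sketch (not the RobotPerf implementation): measure
# end-to-end latency of a graph under test from the outside, by stamping the
# input message and comparing against the arrival time of the output.
# "/benchmark/input" and "/benchmark/output" are placeholder topic names.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from std_msgs.msg import Header


class BlackBoxLatencyProbe(Node):
    def __init__(self):
        super().__init__('black_box_latency_probe')
        self.pub = self.create_publisher(Header, '/benchmark/input', 10)
        self.sub = self.create_subscription(
            Header, '/benchmark/output', self.on_output, 10)
        self.timer = self.create_timer(0.1, self.send_input)  # 10 Hz stimulus
        self.samples = []

    def send_input(self):
        msg = Header()
        msg.stamp = self.get_clock().now().to_msg()  # stamp at publish time
        self.pub.publish(msg)

    def on_output(self, msg):
        # Assumes the graph under test copies the input header stamp through.
        now = self.get_clock().now()
        sent = Time.from_msg(msg.stamp)
        latency_ms = (now - sent).nanoseconds / 1e6
        self.samples.append(latency_ms)
        self.get_logger().info(f'end-to-end latency: {latency_ms:.3f} ms')


def main():
    rclpy.init()
    node = BlackBoxLatencyProbe()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The grey-box approach instead instruments the graph internally, which is what keeps its measurement overhead in the microsecond range quoted above.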

Contribute

RobotPerf is an open source and vendor-agnostic project. You can contribute with new benchmarks, new categories, or by reviewing the RobotPerf Specification at GitHub - robotperf/benchmarks: Benchmarking suite to evaluate 🤖 robotics computing performance. Vendor-neutral. ⚪Grey-box and ⚫Black-box approaches. Contribute and join many other robotics architects from academia and leading industry groups.


Hi, could you please explain the purpose of such a benchmark? If we take, for example, one of your perception algorithms like stereo matching: its performance depends on the algorithm used, not on ROS, which only provides inputs and takes outputs. If you want to test, say, an NVIDIA GPU against an Intel CPU, you would have to use at least vendor-optimized implementations. Of which particular stereo matching algorithm (there are many)? And these kinds of algorithms are usually components of some larger processing pipeline, where you can fuse some operations (or not), so does testing them as isolated ops make sense? And I still don’t understand what ROS has to do with it.

Sure, we tried to explain this at https://robotperf.net/, which may have some additional answers. In short:

The goal of RobotPerf is to provide roboticists and system architects with a resource that assists in performance evaluation when improving or selecting the right computing hardware for their use case. The expected value for stakeholders (in the ROS 2 ecosystem, but also generally):

  • Package maintainers can use these guidelines to integrate performance benchmarking tools (e.g. instrumentation) and data (e.g. results, plots and datasets) in their packages.
  • Roboticists (consumers) can use the RobotPerf Benchmarks and guidelines in the spec to benchmark ROS Nodes and Graphs in an architecture-neutral, representative, and reproducible manner, as well as the corresponding performance data offered in ROS packages to set expectations on the capabilities of each.
  • Hardware vendors and robot manufacturers can use these guidelines to show evidence of the performance of their systems solutions with ROS in an architecture-neutral, representative, and reproducible manner.

This is correct, and what we’re trying to do. The whole RobotPerf specification is aimed at empowering precisely this (accelerators, among other things), and we introduced preliminary results with hardware accelerators in our paper. For example, the following plot shows the maximum latency decrease achieved through hardware acceleration in simple perception tasks:

We’re working to produce more results like these. As we advance, we’ll work towards including more accelerators with specialized kernels. This is a major endeavour, though: dealing with vendor-specific acceleration frameworks is complicated, and it hardly translates across vendors unless you abstract it away properly. We’re following REP-2008 to do so.

That’s specified on a per-benchmark basis. For example, a3 tackles a perception computational graph that computes a disparity map from a pair of stereo images. The graph should be visible for each benchmark, and all of our development is graph-centric. For this particular case:
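
A minimal, illustrative sketch of such a graph (not the benchmark's authoritative launch file; topic names and remappings are placeholders) composed from standard ROS 2 components might look like this, with image_proc rectification feeding stereo_image_proc's DisparityNode:

```python
# Illustrative sketch only: a stereo-disparity graph of the kind a3 targets,
# composed as ROS 2 components in a single container. The real benchmark's
# node set and remappings may differ; see the benchmark package for the
# authoritative graph definition.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='disparity_graph_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            # Rectify the left and right camera images.
            ComposableNode(
                package='image_proc',
                plugin='image_proc::RectifyNode',
                name='rectify_left',
                remappings=[('image', '/left/image_raw'),
                            ('camera_info', '/left/camera_info'),
                            ('image_rect', '/left/image_rect')],
            ),
            ComposableNode(
                package='image_proc',
                plugin='image_proc::RectifyNode',
                name='rectify_right',
                remappings=[('image', '/right/image_raw'),
                            ('camera_info', '/right/camera_info'),
                            ('image_rect', '/right/image_rect')],
            ),
            # Compute the disparity map from the rectified stereo pair.
            ComposableNode(
                package='stereo_image_proc',
                plugin='stereo_image_proc::DisparityNode',
                name='disparity_node',
                remappings=[('left/image_rect', '/left/image_rect'),
                            ('left/camera_info', '/left/camera_info'),
                            ('right/image_rect', '/right/image_rect'),
                            ('right/camera_info', '/right/camera_info')],
            ),
        ],
        output='screen',
    )
    return LaunchDescription([container])
```

Benchmarking then happens against this graph as a whole (and, in the grey-box case, against the individual hops inside it), rather than against an isolated kernel.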

So, answering your questions above: the algorithms benchmarked are those that correspond to this particular graph (or to any other benchmark’s graph). I agree that many of these could be further integrated into larger pipelines, which is why we’re currently working on building larger graphs (benchmarks) that help demonstrate, compare and visualize this.

I wrote about this here. Essentially, most of the roboticists I know build robot behaviors in the form of graphs, with different frameworks, but mostly using ROS. Accordingly, to avoid reinventing the wheel, we adopt a graph-centric approach in RobotPerf. To build, maintain, launch and benchmark these graphs, we use ROS’ infrastructure.

All benchmarks are ROS packages (each individually, across perception, control, manipulation and localization). RobotPerf is actually just a ROS meta-package, wherein each one of its packages launches a graph meant for performance testing.
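
As a concrete example of what "each package launches a graph" means in practice, a benchmark can be started through the regular ROS 2 launch tooling. The package and launch file names below are hypothetical placeholders; only the launch mechanics are standard:

```python
# Minimal sketch, hypothetical names: including one benchmark's launch file
# through the regular ROS 2 launch infrastructure (the same thing
# `ros2 launch <package> <file>` does under the hood).
from launch import LaunchDescription, LaunchService
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare


def generate_launch_description():
    # 'a3_stereo_image_proc' and 'trace_a3.launch.py' are placeholders for
    # whatever the concrete benchmark package and launch file are called.
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                PathJoinSubstitution([
                    FindPackageShare('a3_stereo_image_proc'),
                    'launch', 'trace_a3.launch.py',
                ])
            )
        ),
    ])


if __name__ == '__main__':
    service = LaunchService()
    service.include_launch_description(generate_launch_description())
    service.run()
```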

There’s much to be done and we welcome contributions.
