Complexity / Performance Analyzers and Considerations

Hi,

I would like to open a discussion about sensitivity to complexity and performance, including analyzers or tools for measuring them.
This is related to A timer that can be triggered a finite number of times. · Issue #1990 · ros2/rclcpp · GitHub, not that issue specifically, but just as an example.

Complexity vs Feature/Interface

There are always trade-offs between feature/interface support and complexity/performance when we add something to the current code. Adding any code to the existing implementation is likely to increase complexity unless we intentionally optimize or reduce it. This becomes a burden for applications that do not need the specific feature, especially performance-sensitive applications and resource-constrained platforms.

I think,

  • Client libraries such as rclcpp are supposed to be thin wrappers that provide user-friendly classes and interfaces (see the sketch after this list).
  • It all depends on the application or use case.
  • Defining a complexity threshold will be difficult and tricky, because it depends on each feature and its implementation. (Sometimes we must take a security fix even if it degrades performance.)
  • That is why we discuss this for each PR, to see what is reasonable for us.
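
To make the thin-wrapper point concrete, here is a minimal sketch of how an application could build the "finite-trigger" timer requested in Issue #1990 on top of the existing rclcpp API, without adding anything to the client library. This is only an illustration; the node name, period, and trigger limit are arbitrary assumptions of mine.

```cpp
#include <chrono>
#include <memory>

#include "rclcpp/rclcpp.hpp"

// Illustrative only: a timer that stops itself after a fixed number of
// triggers, implemented purely in application code with the existing
// rclcpp::TimerBase::cancel() call (no client-library change needed).
class FiniteTimerNode : public rclcpp::Node
{
public:
  FiniteTimerNode()
  : Node("finite_timer_example")
  {
    timer_ = create_wall_timer(
      std::chrono::milliseconds(500),
      [this]() {
        RCLCPP_INFO(get_logger(), "trigger %zu", ++count_);
        if (count_ >= 3) {
          timer_->cancel();  // stop after the third trigger
        }
      });
  }

private:
  rclcpp::TimerBase::SharedPtr timer_;
  size_t count_{0};
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<FiniteTimerNode>());
  rclcpp::shutdown();
  return 0;
}
```

Whether something like this belongs in user code or behind a convenience interface in rclcpp is exactly the kind of complexity-vs-feature judgment call I mean above.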

Please share your thoughts or comments if you have any; I think that will do the community good.

Benchmark / Performance Analyzer

I think it would make more sense to look at actual statistics once a complexity concern or discussion comes up.
Usually I use Linux perf to see CPU consumption in general, but I am curious what tools or profilers are available or suitable for ROS / ROS developers, preferably platform-agnostic ones.
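
As a platform-agnostic baseline, one option before reaching for an external profiler is simply to time the code path in question in-process with std::chrono. Below is a minimal sketch; the helper name and the workload are made-up placeholders, and for a whole-process view perf (e.g. perf record -g followed by perf report) covers the same ground as mentioned above.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical helper (measure_us is not an existing API):
// measure the wall-clock time of a single callable, in microseconds.
template <typename Callable>
long long measure_us(Callable && callable)
{
  const auto start = std::chrono::steady_clock::now();
  callable();
  const auto end = std::chrono::steady_clock::now();
  return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}

int main()
{
  // Placeholder workload standing in for a callback or code path under review.
  auto do_work = []() {
    volatile long sum = 0;
    for (long i = 0; i < 1000000; ++i) {
      sum += i;
    }
  };

  std::printf("do_work took %lld us\n", measure_us(do_work));
  return 0;
}
```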

If you have experience in this area, please do share your thoughts as well :smile:

Best,
Tomoya


@tomoyafujita the Hardware Acceleration Working Group spent a decent amount of time reviewing this topic over the last year. Have a look at REP-2008 - ROS 2 Hardware Acceleration Architecture and Conventions by vmayoral · Pull Request #324 · ros-infrastructure/rep · GitHub, which summarizes the resulting guidelines for benchmarking and performance testing of ROS 2 graphs.

The approach documented there was used in a perception case study and was reported here. In a nutshell, it builds on top of the ros2_tracing and LTTng projects, both of which were covered and used in past ROS-related work authored by @Ingo_Lutkebohle, @christophebedard, and others.

On the LTTng side, you may find lttng-analyses interesting and a good alternative to the tools you seem to be using already. Happy to chat if you need further help.
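
For readers who have not used LTTng before, here is a rough, hypothetical sketch of what a user-space trace event looks like at the application level with lttng-ust's tracef() convenience API. This is separate from the rclcpp/rcl/rmw instrumentation that ros2_tracing already provides, and the session commands in the comments should be checked against the LTTng documentation.

```cpp
#include <lttng/tracef.h>  // from lttng-ust; link with -llttng-ust

#include <chrono>

// Rough illustration only. To record these events, an LTTng session is
// created and the tracef events are enabled, roughly:
//   lttng create; lttng enable-event -u 'lttng_ust_tracef:*'; lttng start
// (then stop/destroy the session and inspect the trace, e.g. with
// babeltrace or lttng-analyses).
int main()
{
  const auto start = std::chrono::steady_clock::now();

  // ... placeholder for the code path being measured ...

  const long elapsed_us = static_cast<long>(
    std::chrono::duration_cast<std::chrono::microseconds>(
      std::chrono::steady_clock::now() - start).count());

  // tracef() is lttng-ust's printf-style "quick and dirty" tracepoint.
  tracef("example_section done, elapsed_us=%ld", elapsed_us);
  return 0;
}
```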
