Towards a ROSdoc integration testing tutorial

Hi all!
The ROS doc site currently has tutorials on unit testing but none yet on integration testing, even though integration testing is a key part of software development and ROS already provides the tooling for it. As discussed here on GitHub, I’d like to contribute a tutorial based on a recent blog article I wrote.

Before turning the blog article into a ROSdoc tutorial, I’d appreciate feedback on it, especially on whether I’m missing anything important or covering something incorrectly.

Other questions:

  • What is the current purpose of the ros_testing repository, with its packages ros2test and ros_testing? Should it be covered in the ROSdoc integration tutorial? The repository is maintained but not included in the desktop distributions, and is close to inactive. It offers the add_ros_test CMake macro; does that offer any benefits over the four-line add_ros_isolated_launch_test macro defined here? @wjwwood @hidmic

  • The current launch_testing package is based on Python’s unittest framework. There’s an effort (the launch_pytest package) to also support pytest for integration testing. Should the tutorial cover both? @wjwwood @adityapande

  • How should the test output be visualized? In my blog post I recommended the tool xunit-viewer, which worked well and offers both terminal and web output. Are there other relevant tools that achieve similar results (i.e. a clear overview of which tests passed and which failed)?

  • Is there anything you’d really like to see in the tutorial?
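On the visualization question: a quick terminal summary can also be produced by parsing the xUnit/JUnit XML result files directly with the standard library. A minimal sketch (the glob pattern is an assumption; adapt it to wherever your build writes its test results):

```python
# Summarize pass/fail counts from xUnit/JUnit XML test result files.
# Stdlib-only sketch; the glob pattern below is an assumption and
# should be adapted to your workspace layout.
import glob
import xml.etree.ElementTree as ET

def summarize(paths):
    passed, failed = [], []
    for path in paths:
        root = ET.parse(path).getroot()
        # A result file may contain a single <testsuite> or a
        # <testsuites> wrapper; iter() handles both.
        for case in root.iter("testcase"):
            name = f"{case.get('classname')}.{case.get('name')}"
            if case.find("failure") is not None or case.find("error") is not None:
                failed.append(name)
            else:
                passed.append(name)
    return passed, failed

if __name__ == "__main__":
    passed, failed = summarize(
        glob.glob("build/*/test_results/**/*.xml", recursive=True))
    print(f"{len(passed)} passed, {len(failed)} failed")
    for name in failed:
        print("FAILED:", name)
```

This obviously doesn’t replace a proper viewer, but it shows how little is needed to get the “which tests passed, which failed” overview from the XML files that the test runners already produce.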

All feedback welcome!

Update: I wrote a draft of the tutorial and opened a PR; feel free to share your feedback.

Awesome of you to put this together. The ROS domain ID approach to conflicting tests is really great! Currently, we just run our tests with a sequential executor, but it’s SLOW. Do you know how the approach works when you have more than 255 tests?

Do you know how the approach works when you have more than 255 tests?

I haven’t tried or investigated that yet. Running tests in parallel does bring its own perils. In my opinion, a preferable approach would be to run only as many tests in parallel as the host can handle, and to allocate each test only a share of the system resources (to keep tests from influencing each other). Practically: when defining an integration test, you would specify how many CPU cores and how much RAM it may use, and a central coordinator would only start a new test once sufficient resources are free. If there are already any efforts in this direction, feel free to bring them up.
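On the “more than 255 tests” question: since only the tests running *concurrently* need distinct domains, one option is to recycle a small pool of ROS_DOMAIN_ID values as tests finish, while capping parallelism at the host’s CPU count. A stdlib-only sketch of that idea (the test commands and the ID range are assumptions, not a recommendation; valid domain IDs depend on your DDS configuration):

```python
# Sketch: run integration test commands in parallel, at most
# max_workers at a time, handing each running test a unique
# ROS_DOMAIN_ID via its environment and recycling IDs as tests
# finish. Test commands and the default ID range are assumptions.
import os
import queue
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_all(test_cmds, domain_ids=range(1, 102), max_workers=None):
    max_workers = max_workers or os.cpu_count()
    # Only as many IDs are needed as tests that can run concurrently.
    ids = queue.Queue()
    for d in list(domain_ids)[:max_workers]:
        ids.put(d)

    def run_one(cmd):
        domain = ids.get()  # blocks until a domain ID is free
        try:
            env = dict(os.environ, ROS_DOMAIN_ID=str(domain))
            return cmd, subprocess.run(cmd, env=env).returncode
        finally:
            ids.put(domain)  # recycle the ID for the next test

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_one, test_cmds))
```

Extending this to CPU/RAM budgets per test would mean replacing the single ID queue with a coordinator that tracks free resources, which is essentially the scheme described above.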

The tutorial has been merged and is now available on the ROS doc site. Thanks a lot to everyone who provided feedback!