Best Practices for Robot Testing

Same as the ROS branching strategy: a distro branch like `kinetic-devel`, with feature branches off that on per-developer forks. These are rebased onto e.g. `kinetic-devel` regularly to keep the history clean.
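
To make that concrete, the day-to-day flow on a fork looks roughly like this (remote and branch names are just examples):

```bash
# branch off the distro branch on your fork
git fetch upstream
git checkout -b my-feature upstream/kinetic-devel

# ...hack, commit...

# rebase regularly onto the moving distro branch to keep history linear
git fetch upstream
git rebase upstream/kinetic-devel
git push --force-with-lease origin my-feature
```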

Each developer has an account on the development robots with a catkin overlay on top of the default installation, which has all passed Q/A and just works. That way, when another developer logs in for their testing, the base they work on isn't borked by someone else's stuff.
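
Setting up such a per-user overlay is just the standard catkin workflow (paths are examples):

```bash
# per-user workspace overlaying the vetted system install
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
source /opt/ros/kinetic/setup.bash   # the QA'd underlay everyone shares
catkin_make                          # or `catkin build`
source devel/setup.bash              # your overlay now shadows the underlay
```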

For simulation, that is just an arg to the launch files, which simply pulls in different launch files underneath. If you need different branches for simulation, I'd consider that bad practice.
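
Something along these lines (package and file names are made up):

```xml
<launch>
  <arg name="sim" default="false"/>

  <!-- same top-level launch file for the real robot and simulation -->
  <include file="$(find my_robot_bringup)/launch/drivers.launch" unless="$(arg sim)"/>
  <include file="$(find my_robot_bringup)/launch/gazebo.launch" if="$(arg sim)"/>
</launch>
```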

My team really dislikes submodules, so we use .rosinstall files to tie our workspaces together. These are also used by Travis and ROS-Industrial's industrial_ci to build the dependencies of a package.
industrial_ci also runs the pylint checks for Python 2/3, all the tests, etc.
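
A .rosinstall file is just a list of repositories to check out into the workspace (names and URIs below are placeholders):

```yaml
- git:
    local-name: my_robot_common
    uri: https://github.com/example/my_robot_common.git
    version: kinetic-devel
- git:
    local-name: my_robot_navigation
    uri: https://github.com/example/my_robot_navigation.git
    version: kinetic-devel
```

Hooking that into industrial_ci from Travis looks roughly like this (variable names from memory, check the industrial_ci docs for the current ones):

```yaml
language: generic
services:
  - docker
env:
  - ROS_DISTRO=kinetic UPSTREAM_WORKSPACE=file ROSINSTALL_FILENAME=.travis.rosinstall
install:
  - git clone --quiet --depth 1 https://github.com/ros-industrial/industrial_ci.git .industrial_ci
script:
  - .industrial_ci/travis.sh
```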

In terms of testing and Q/A: we have a really good Q/A dude who is very critical. It takes time to get changes past him, but that keeps the standard high.

We're also using ATF (https://github.com/floweisshardt/atf) to keep tabs on performance in some simulated scenarios, e.g. can the robot still navigate through a given environment within X time.
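
ATF drives this from its own test configs; purely to illustrate the kind of budget we encode (this is not ATF's format), a bare-bones rostest check might look like:

```python
#!/usr/bin/env python
# Hypothetical stand-in for an ATF-style check: fail if navigating to a
# goal takes longer than the allowed time budget.
import unittest
import rospy
import rostest
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

class NavWithinBudget(unittest.TestCase):
    def test_nav_time_budget(self):
        budget = rospy.get_param('~time_budget', 60.0)  # seconds
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        self.assertTrue(client.wait_for_server(rospy.Duration(30.0)))

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.pose.position.x = 5.0  # example goal
        goal.target_pose.pose.orientation.w = 1.0

        client.send_goal(goal)
        finished = client.wait_for_result(rospy.Duration(budget))
        self.assertTrue(finished, 'navigation exceeded the %.0fs budget' % budget)

if __name__ == '__main__':
    rospy.init_node('nav_budget_test')
    rostest.rosrun('my_robot_tests', 'nav_budget_test', NavWithinBudget)
```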

For robot behaviors, I've developed a little framework that can be used in scripts to steer the state machine/behavior tree into nasty corners and edge cases: https://github.com/ipa320/cob_command_tools/tree/indigo_dev/scenario_test_tools. Not yet as nice as I want it to be, but hey…
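
The idea, in a nutshell: the test script owns mocks of the services/actions the behavior depends on and queues up their replies, so it can push the behavior into the branch it wants to exercise. A minimal sketch of that idea (this is NOT the scenario_test_tools API, just an illustration):

```python
#!/usr/bin/env python
# Minimal sketch of a scriptable mock service: the test queues replies,
# so the behavior under test can be steered into failure branches on demand.
import rospy
from std_srvs.srv import Trigger, TriggerResponse

class ScriptableMockService(object):
    def __init__(self, name):
        self._replies = []  # (success, message) tuples queued by the test
        self._srv = rospy.Service(name, Trigger, self._handle)

    def queue_reply(self, success, message=''):
        self._replies.append((success, message))

    def _handle(self, _req):
        # Default to success if the test scripted nothing
        success, message = self._replies.pop(0) if self._replies else (True, '')
        return TriggerResponse(success=success, message=message)

if __name__ == '__main__':
    rospy.init_node('scriptable_mock')
    mock = ScriptableMockService('grasp')  # hypothetical service name
    mock.queue_reply(False, 'simulated grasp failure')  # steer into the retry branch
    rospy.spin()
```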
