What quality metrics do we need to make package quality visible?

As discussed previously, the ROS package pages already list several of these data points. It would be great to gather them under a single topic rather than scatter them everywhere. I've made a list of items that could be used to define the metrics; suggestions are welcome. This list will be dynamic: I'll edit it (and repost if required) based on future discussions. I've also created some groupings. Corrections welcome.

CI

  • Build [Pass/Fail] --> Basic data from the CI tool
  • Unit Tests [Pass/Fail] --> This might require more granularity, because some tests matter more than others. Doing this might be feasible for some core packages
  • Unit Test Crash --> Apparently CI tools can already detect and report this; we just need to surface it
  • Unit Test Coverage [%] --> A diagram showing code test coverage with pass/fail spots, like a heatmap (Are there any free options? Codecov? CppDepend has a sample of what it could look like)
  • Static Analysis
    • Code Quality (https://wiki.ros.org/code_quality)
    • Number of Coding Standard errors
    • Cyclic includes (for C++, e.g. using cppdep)
    • Cyclomatic complexity (possible tool: Lizard) --> There might be an existing tool already; need to check
  • Dynamic Analysis
    • Clang sanitizers (Address, UB, Memory, Leak, Thread) --> reference; the multiple builds required are a CMake hassle
  • Testing
    • Integration tests, maybe model-based testing --> Discussion in progress
    • Fuzz testing by a "chaos node" --> Being discussed along with contracts in ROS. Maybe use pyros-dev or similar tools?
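To make the cyclomatic-complexity bullet concrete: the number a tool like Lizard reports is essentially 1 plus the count of decision points in a function. The sketch below is only an illustration of that counting idea (the regex and function names are mine, and a real tool uses an actual parser, not regexes):

```python
import re

# Rough sketch: cyclomatic complexity = 1 + number of decision points
# (if/for/while/case, &&, ||, ternary ?). Tools like Lizard compute this
# properly with a tokenizer; this regex is only illustrative.
DECISION_RE = re.compile(r"\b(?:if|for|while|case)\b|&&|\|\||\?")

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe number for a single function body."""
    return 1 + len(DECISION_RE.findall(source))

body = """
    if (x > 0 && y > 0) {
        for (int i = 0; i < n; ++i) total += i;
    }
"""
print(cyclomatic_complexity(body))  # 1 + if + && + for = 4
```

A per-package report would then be the maximum or average of this number over all functions, which is the kind of summary a metrics page could display.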

Documentation

  • Status (Maintained, Orphaned, etc.)
  • README (not all packages have one; repository != package)
  • Wiki (GitHub/GitLab, etc., if the content isn't on the ROS wiki)
  • Getting Started (Tutorials & Debugging Common Bugs)
  • Sphinx/Doxygen links
  • Link to tagged questions from answers.ros.org (as an FAQ)
  • Other resources like photos, hand-drawn or generated (https://www.planttext.com/) UML diagrams, etc.
  • User rating/review (maybe for tutorials too, e.g. "How helpful is this?")

For issues, we'll need to access the host's (GitHub, Bugzilla) API for:

  • Number of open issues
  • Time to close issue
  • Activity on issues
  • Other statuses (e.g. won't-fix, etc.)
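For the GitHub case, the repository endpoint of the REST API already exposes an open-issue count (note it includes open pull requests), so the first bullet needs no scraping. A minimal sketch, assuming we fetch the JSON ourselves (the repository name below is just a placeholder, and `open_issue_count` is my own helper name):

```python
import json
from urllib.request import urlopen  # used in the commented live call below

def open_issue_count(repo_json: str) -> int:
    """Extract the open-issue count from a GitHub /repos/{owner}/{repo} response.

    Caveat: GitHub's open_issues_count includes open pull requests as well.
    """
    return json.loads(repo_json)["open_issues_count"]

# Live usage (placeholder repository, not run here):
# with urlopen("https://api.github.com/repos/ros/ros_comm") as resp:
#     print(open_issue_count(resp.read().decode()))
```

Time-to-close and activity would need the issues endpoint with timestamps, which is paginated, so a real collector would cache results rather than query per page view.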

Meta information at package level:

  • Efferent coupling: parse the package.xml file
  • Afferent coupling: get reverse dependencies from the list of packages on the wiki
  • Quality summary data à la HAROS
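The efferent-coupling bullet can be sketched directly: count the distinct packages named in the dependency tags of package.xml. A minimal example, assuming the REP 140 tag names (the `efferent_coupling` helper is my own name for illustration):

```python
import xml.etree.ElementTree as ET

# Dependency tags from package.xml format 1/2 (REP 127 / REP 140).
DEP_TAGS = {"depend", "build_depend", "build_export_depend",
            "exec_depend", "run_depend", "test_depend"}

def efferent_coupling(package_xml: str) -> int:
    """Number of distinct packages this package depends on."""
    root = ET.fromstring(package_xml)
    deps = {el.text.strip() for el in root if el.tag in DEP_TAGS and el.text}
    return len(deps)

sample = """<package format="2">
  <name>demo</name>
  <depend>roscpp</depend>
  <exec_depend>std_msgs</exec_depend>
  <depend>roscpp</depend>
</package>"""
print(efferent_coupling(sample))  # 2 distinct dependencies
```

Afferent coupling is the inverse count (who depends on *this* package), which is why it needs the global package list rather than a single package.xml.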

NB: This isn't an exhaustive or final list; I've compiled it from past discussions.

Current questions:

  • How to visualize the results? (Use the CI tool, or use raw data and other tools to visualize; depends on the CI offering)
  • Integration tests
  • HAROS: Which data should we show? Dependency graph, dependencies of a package, packages that depend on it, etc. There is lots of data, but it is limited to the workspace.
  • Coverage for documentation?
  • Low Priority: Model-in-loop or hardware-in-loop tests
  • File bugs after running HAROS? Makes sense only for categories with zero false positives