ROS Quality Assurance Working Group meeting minutes April 2018 Meeting

Time: 3 p.m. UTC

Agenda:
We are still working on the first initiative, “Making ROS Packages quality visible”. We identified a list of quality metrics to use (to display for each package) and defined most of them in a Google Doc. The list currently contains over 20 metrics, so we need to prioritize which ones to implement first. During this meeting, we went through the metrics and assigned each an implementation priority (High, Medium, or Low).

Outcome:
During the meeting, participants voted on a priority for each metric. Please refer to the Quality Metrics document for the vote results. The list of metrics and their voted priorities is given below; a sketch showing how a few of these metrics could be computed follows the list:

  1. CI Badge

    1. Build: High
    2. Unit Tests: High
    3. Unit Test Coverage [%]: High
  2. Static Analysis

    1. Code style violations: Medium
    2. Logic errors and warnings: Medium
    3. Cyclomatic complexity: Low
    4. McCabe complexity: Low
    5. Afferent coupling: Medium
    6. Efferent coupling: Medium
  3. Dynamic Analysis

    1. Clang AddressSanitizer and LeakSanitizer: High
  4. Testing

    1. Fuzzy testing by “chaos node”: Low/Medium
  5. Documentation

    1. Comment-to-code ratio: Low
    2. Documentation coverage: Medium
    3. Package/Library Status (lifecycle): High
    4. Existence of a roadmap: Medium
    5. Existence of a readme: Low/Medium
    6. Wiki page quality/completeness: Medium
    7. Tutorial availability: Medium
  6. Open issues report

    1. Number of closed issues: Low
    2. Time to close an issue: Low/Medium
    3. Activity on issues: Medium
    4. Other statuses (e.g. wont-fix): Low
    5. Number of open issues: Low
  7. User Rating: Medium/High

  8. Other

    1. Maintainability Index: Low/Medium
    2. Depth of Inheritance: Low
    3. Class Coupling: High/Medium
    4. Lines of Code: Low
    5. Cyclic includes: Medium/Low
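
To make the definitions above concrete, here is a minimal sketch of how a few of the voted metrics (cyclomatic/McCabe complexity, Maintainability Index, Lines of Code, and the comment-to-code ratio) could be computed for a single Python source file. It assumes the third-party radon package and is only an illustration, not the working group's agreed tooling:

    # Hypothetical sketch: computing a few of the voted metrics for one
    # Python source file with `radon` (pip install radon).
    from radon.complexity import cc_visit
    from radon.metrics import mi_visit
    from radon.raw import analyze

    def report_metrics(path):
        with open(path) as f:
            code = f.read()

        # Cyclomatic (McCabe) complexity per function/method.
        for block in cc_visit(code):
            print(f"{block.name}: cyclomatic complexity {block.complexity}")

        # Maintainability Index for the whole file (0-100, higher is better).
        print(f"maintainability index: {mi_visit(code, multi=True):.1f}")

        # Raw counts back the Lines of Code and comment-to-code ratio metrics.
        raw = analyze(code)
        ratio = raw.comments / raw.sloc if raw.sloc else 0.0
        print(f"lines of code: {raw.loc}, comment-to-code ratio: {ratio:.2f}")

    report_metrics("example_node.py")  # hypothetical file name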

We will now move to implementing this initiative, and we are seeking help: we are calling for volunteers to assist in the implementation of “Making ROS packages quality visible”. Please get in touch if you would like to help.

Hello Adam,
I’d like to participate, but I have some questions. How shall the work be done? Do we have any guidelines or repositories?
Are the owners of topics already working on them? All the high-priority ones already have owners; I wouldn’t mind working on lower-priority ones, but it’s not really clear where to start.

Hi David,

Thanks for asking; answering your questions should clarify things further.

Owners are not necessarily the people who will do the implementation, i.e. the development. I assigned an “owner” so that someone would investigate each metric and propose a definition … The ownership is temporary for now, until we find people who have the skills to do the implementation.

We are looking for people to help with the development work to display these metrics. Once we have enough volunteers, we will meet and kick off the work. The implementation team will meet outside the QA working group, as it has a specific mandate.

So far you are the first volunteer; others may join from the working group. Once we have another volunteer, we can meet sometime in May to kick off the work.

The work shall start with the high-priority metrics.

Regarding 4. Testing, 1. Fuzzy testing by “chaos node” (Low/Medium): there is the repository hypothesis-ros, which implements the lowest-level functionality for property-based testing (and allows fuzzy testing as well). If you have already used Python’s hypothesis and want to contribute, you could have a look into it. You can find the contribution guidelines in rospbt/CONTRIBUTING.md.
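
For anyone new to property-based testing, here is a minimal, self-contained sketch using plain hypothesis (not hypothesis-ros itself); the clamp_velocity function and the speed limit are invented for illustration:

    # Property-based-testing sketch with `hypothesis` (pip install hypothesis).
    # Run with pytest. clamp_velocity() stands in for real node logic.
    from hypothesis import given, strategies as st

    MAX_SPEED = 1.0

    def clamp_velocity(linear_x):
        """Toy node logic: limit a commanded velocity to [-MAX_SPEED, MAX_SPEED]."""
        return max(-MAX_SPEED, min(MAX_SPEED, linear_x))

    @given(st.floats(allow_nan=False, allow_infinity=False))
    def test_velocity_always_within_limits(linear_x):
        # Property: whatever value a "chaos node" might publish, the clamped
        # output must stay within the configured limits.
        assert -MAX_SPEED <= clamp_velocity(linear_x) <= MAX_SPEED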

Hello, Adam,

Like David, I also have some questions about the metrics, such as: which coding standards are you considering for code style violations? Have these details been discussed already, or are they open for a future meeting?

Hi,

We haven’t discussed those granular details yet. You have been in touch with Andrzej in this regard. If you are willing to assist, we should get together sometime in May to kick off the discussion.

Thanks
Adam

I have been looking into the list of metrics, and I would like to point out that some metrics are listed under different names, but have the same meaning in practice (i.e. they are repeated).

McCabe Complexity is usually a synonym for Cyclomatic Complexity.

Class Coupling is often equivalent to Efferent Coupling.

If these entries have different meanings within this project, I would like to know.
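
For reference, efferent coupling counts a module’s outgoing dependencies, while afferent coupling counts its incoming ones. A rough sketch of counting the efferent side for a Python file using only the standard library (the file name is invented):

    # Counting efferent coupling (outgoing imports) with the stdlib ast module.
    import ast

    def efferent_coupling(path):
        """Return the set of modules that the file at `path` imports."""
        with open(path) as f:
            tree = ast.parse(f.read())
        deps = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        return deps

    # Efferent coupling of planner.py is len(efferent_coupling("planner.py"));
    # afferent coupling of a module is how many other modules' sets contain it.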

We discussed the same thing, but we were not sure! So we left the duplicates in until implementation, when we will decide which ones to descope.

Please strike through the ones you think are duplicates and add a note explaining your decision.

  4. Testing
    1. Fuzzy testing by “chaos node”: Low/Medium

I deployed the initial release of hypothesis-ros v0.1.0 on PyPI. Refer to the package announcement.
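
For anyone who wants to try it, installation is the usual pip install hypothesis-ros. The module and strategy names in the sketch below are assumptions based on the project README and should be checked against the actual documentation:

    # pip install hypothesis-ros  (v0.1.0 from PyPI)
    from hypothesis import given
    from hypothesis_ros.message_fields import float64  # assumed API

    @given(float64())
    def test_node_handles_any_float64(value):
        # Feed randomly generated ROS-typed values into the logic under test;
        # replace the assertion with a call into the actual node code.
        assert isinstance(value, float)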

Awesome. Nice work. +1

Well done. Thanks for your effort.

@gavanderhoorn @Alami Thanks, it was my pleasure.