
Quality Assurance Working Group July Meeting Announcement

ROS Guys, Gals, and non-binary Pals,

The next ROS Quality Assurance Working Group will take place on: 2020-07-02T14:00:00Z

I will try to record the video and post it afterwards. Hopefully I’ll get the audio settings correct this time. For this meeting we have a great talk lined up out of Carnegie Mellon:

A developer perspective on the challenges of automated testing for robots

Afsoon Afzal, PhD student at the Institute for Software Research, Carnegie Mellon University

How do robotics developers test their systems? What are the main challenges of testing robots? Is it possible to automatically test robots in simulation? In this talk, I present the findings of several empirical studies with robotics developers, conducted by our research group, designed to identify both the state of the practice and challenges faced by developers when testing their systems. Considering the importance of test automation, and the potential of simulation-based testing, I will focus on presenting the challenges and bottlenecks of test automation for robotics, and how they are impacted by the capabilities of existing simulators.

Afsoon is a 5th-year PhD student in the software engineering program at Carnegie Mellon University under the supervision of Dr. Claire Le Goues. She is completing her thesis on automated quality assurance of robotics and cyber-physical systems, and is expected to graduate by the end of Spring 2021.
The agenda for this meeting is quite short this time, so please do suggest topics we should discuss.

Conference Call Details:
Time: 2020-07-02T14:00:00Z

Phone: (US) +1 617-675-4444 PIN: 456 561 685 9668#


I would love to start seeing the QAWG meetings showing up in the ROS Events Calendar. It would make it much easier to find them and remember to attend.


Oversight on my part. I copied over the wrong calendar event from last time. Should be fixed now.


It would be great if the video is made available. That would be really appreciated. Thanks.


I am bumping this and locking the other topic to prevent confusion. Resurrecting posts over two weeks old causes all sorts of problems.

Thanks Afsoon for the talk, it was very insightful.

To all: I leave a reflection here; it may be interesting to discuss, if you want.
In general, I got the feeling that the presented challenges tend to relate to the simulator itself or to the developers of simulation engines. This is surely valuable. However, it is also true that the way ROS developers design the software of their robots can strongly facilitate simulation-based testing. In other words, I think robot developers also have a responsibility to design their own software in such a way that their systems facilitate simulation. Is this related to the way robotics software has historically been developed, or to something else?


Slightly off-topic: should this announcement be in the #quality category?

I missed it, as I could not find it anymore.

My bad. The topic was tagged, on the front page, and on the events calendar. The Discourse search feature generally works reasonably well if you sort by date.


Video has been uploaded here.

Afsoon’s publications are located here. There is also this more recent work on simulation on arXiv.


Thank you very much for sharing the video! :clap: :trophy:

That’s a very interesting point, Ivano. I briefly mentioned the lack of guidelines in the field on how to properly incorporate better testing methods. In fact, both in the interviews and the survey we had responses such as “I really want to do this but I don’t know how”. So I guess one of the things software engineers can help with is developing guidelines or tutorials for, say, ROS developers on how they can design their software in a way that is more suitable for automated simulation-based testing.


Once again, thank you for the talk. I am also happy to testify that my experience working on the topic confirms your findings :slight_smile:

I will agree with you, Ivano, in that the simulator itself presents a great challenge. From my experience developing a simulation platform, half of my work revolved around (0) stabilizing the simulation environment, as well as dealing with the issues of the simulator itself. The other half had to do with (1) simulating the missing parts that enabled me to replicate the hardware interface of the robot, (2) developing a testing framework that supports spawning arbitrary test cases and evaluating the test results, and finally, (3) building the infrastructure for performing continuous testing.

It’s relatively easy to put a quick simulation together and start playing with it, but this is far from having a platform for automated testing, or even a practical simulation for any use. It takes considerable effort to get there, and unless your project scales to a point where the benefits can outweigh the costs, it’s not always an option.

Working with an automated simulation platform for some time now, I can honestly say I cannot imagine myself going back to the earlier times. As already mentioned during the talk, not all bugs involve special cases. Many of them concern the operation of a system under normal conditions. Even in a limited scenario, the benefits can be huge. Testing in simulation continuously provides immediate feedback, prevents bugs from propagating within an organization and affecting other people, and makes the testing on real hardware further down the line a sane process.

Even if we do not have any open standards in robotics (to my knowledge) that facilitate testing (see (2) and (3)), making sure we have access to a stable simulator and a wide variety of resources and tools (see (0) and (1)) would, I believe, go a long way toward encouraging and motivating people to use simulation.


Hi Afsoon, indeed. It could be interesting to see how the “standard” guidelines for software testability may fit in here; for example, the ones that help developers write software that is more amenable to unit testing. I am pretty sure some of those guidelines apply to robotics software, while additional ones might be needed to cover the cyber-physical nature of robots.
I will keep this mental thread open and do a quick pass over the literature to see if this is something new also from a scientific perspective (I believe so, but I want to be sure). If you find something related or you want to further discuss this, please let me know!


Hi Nick, nice to meet you :slight_smile:
From the “academic side”, I can tell you that in general we are also very happy when our studies match or are useful for engineers working in the trenches :slight_smile:

Out of curiosity, when you work on the structure of your robots’ software (that is, on the computation graph: how nodes communicate with each other), do you already have in mind how you are going to test it? For example, if you have two alternative structures in mind for your nodes, is testability a decision factor?

I cannot bring to mind a scenario where I would be restricted in terms of testing a component. Everybody probably thinks about testing one way or another, but I’m not sure it could be a deciding factor. Any unit testing framework should allow you to mock the interfaces and test your component.
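To illustrate the point about mocking interfaces: here is a minimal sketch in Python using the standard `unittest.mock` module. The `ObstacleMonitor` component and its `cmd_publisher` interface are hypothetical, not from any real ROS package; the idea is just that injecting the interface makes the component testable without a simulator or hardware.

```python
from unittest.mock import Mock

# Hypothetical component: publishes a "stop" command whenever a
# range reading falls below a safety threshold.
class ObstacleMonitor:
    def __init__(self, cmd_publisher, threshold=0.5):
        # The publisher is injected, so a test can substitute a mock.
        self.cmd_publisher = cmd_publisher
        self.threshold = threshold

    def on_range(self, distance):
        if distance < self.threshold:
            self.cmd_publisher.publish("stop")

# Test the component by mocking the publisher interface.
publisher = Mock()
monitor = ObstacleMonitor(publisher)

monitor.on_range(1.2)   # safe distance: no command expected
publisher.publish.assert_not_called()

monitor.on_range(0.3)   # too close: expect exactly one stop command
publisher.publish.assert_called_once_with("stop")
```

The same dependency-injection pattern applies whether the injected object is a real publisher, a mock, or a simulation stub.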

Going beyond that and thinking about system testing, components are responsible for recording important events. Some data come from normal operation, and others are needed for debugging purposes. With that in place, testability of a system usually comes naturally. The problem of testing then reduces to generating a variety of inputs, subjecting the system to them, and checking that everything still works.
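The "generate inputs, subject the system, check the result" loop described above can be sketched as a small randomized test. The `clamp_velocity` function is a hypothetical stand-in for a system under test; the invariant checked (output stays within limits) is likewise just an example.

```python
import random

# Hypothetical system under test: clamp a velocity command to limits.
def clamp_velocity(v, v_min=-1.0, v_max=1.0):
    return max(v_min, min(v_max, v))

# Generate a variety of inputs, subject the system to them, and check
# that the invariant ("output stays within limits") still holds.
random.seed(42)  # fixed seed keeps test runs reproducible
for _ in range(1000):
    v = random.uniform(-10.0, 10.0)
    out = clamp_velocity(v)
    assert -1.0 <= out <= 1.0, f"invariant violated for input {v}"
```

In practice the "system" would be driven through a simulator rather than a pure function, but the structure of the loop, random input generation plus invariant checks, stays the same.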
