Survey on Open-Source and Benchmarking for Robotics

We are inviting you to take a survey to provide feedback on the current state of open-source assets and benchmarking resources for robotic manipulation, as well as on future activities for improvement.

Participation in this study is completely voluntary. You will not be compensated for filling out the survey, but you can enter a raffle to receive a YCB Object Set free of charge. Completing the survey should take no more than 15 minutes. To be eligible for this study, you must be at least 18 years of age and able to read and write in English.

If you are interested in participating, you can access the survey here: https://umasslowell.co1.qualtrics.com/jfe/form/SV_2ou6yOTqE57U18O

Responses must be received by February 17th to be eligible for the YCB raffle. The survey will remain open after this date, too.

The data from this survey will be used in the scoping and development of an open-source ecosystem, in support of the COMPARE project (Collaborative Open-source Manipulation and Perception Assets for Robotics Ecosystem). For more information, see the project website: http://robot-manipulation.org/

We look forward to your participation. Thank you.

Project organizers:

  • Adam Norton, University of Massachusetts Lowell
  • Holly Yanco, University of Massachusetts Lowell
  • Berk Calli, Worcester Polytechnic Institute
  • Aaron Dollar, Yale University

Welcome to the community @adam.norton.uml,

Exciting to see more benchmarking efforts. This appears (some questions below would help clarify this, though) to overlap to a large extent with other well-established projects in the community, particularly the activities of the ROS 2 Hardware Acceleration Working Group, including the RobotPerf project (source code) and the REP 2014 standardization effort. My first reaction, then, is to encourage you to join this WG, participate, and try contributing. The next meeting is very soon: Hardware Acceleration WG, meeting #15 (LinkedIn event).

RobotPerf is an open reference benchmarking suite used to evaluate robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems. We surveyed the community in the past, disclosed all results during the HAWG meetings, and summarized them in the 2022 Hardware Acceleration Report in Robotics. You can probably use this information for your own purposes.
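
For a flavor of what "computing performance with ROS 2 as the common baseline" looks like in practice, here is a minimal sketch of my own (not actual RobotPerf code; the node name, topic, and rates are arbitrary, and it assumes a sourced ROS 2 installation providing rclpy and std_msgs) that estimates mean intra-process pub/sub latency on a topic:

```python
# Minimal sketch: estimate mean pub/sub latency over a ROS 2 topic.
# Illustration only -- not RobotPerf code. Assumes a sourced ROS 2
# installation with rclpy and std_msgs available.
import time

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float64


class LatencyProbe(Node):
    def __init__(self, samples: int = 100):
        super().__init__('latency_probe')
        self.samples = samples
        self.latencies = []
        # Publish and subscribe on the same topic, so each message
        # carries its own send timestamp in the payload.
        self.pub = self.create_publisher(Float64, 'probe', 10)
        self.sub = self.create_subscription(Float64, 'probe', self.on_msg, 10)
        self.timer = self.create_timer(0.05, self.send)  # 20 Hz

    def send(self):
        msg = Float64()
        msg.data = time.perf_counter()  # stamp with the send time
        self.pub.publish(msg)

    def on_msg(self, msg):
        # Same-process clock, so the timestamps are directly comparable.
        self.latencies.append(time.perf_counter() - msg.data)


def main():
    rclpy.init()
    node = LatencyProbe()
    while rclpy.ok() and len(node.latencies) < node.samples:
        rclpy.spin_once(node, timeout_sec=0.1)
    if node.latencies:
        mean_ms = 1000 * sum(node.latencies) / len(node.latencies)
        node.get_logger().info(f'mean pub/sub latency: {mean_ms:.3f} ms')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The actual suite is of course far more rigorous than this toy probe, but the principle of measuring against a common ROS 2 baseline is the same.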

I went through your website and survey, and I have a few questions:

  • Were you aware of RobotPerf? If so, how is your project different and, more importantly, is there a need to keep two separate efforts, or could we merge? (Note that RobotPerf is also driven by academics, with Harvard spearheading that group, particularly the very experienced team behind MLPerf.)
  • It’s not at all clear to me whether or not you plan to use ROS for benchmarking purposes. Can you clarify this?
  • Why are you not sharing the data collected from this community back with the community? It seems (based on your survey) that you’d only be sharing data with other researchers. Why should industry get involved, then?
  • The headline above is generic; however, your project seems to focus only on perception and manipulation. Can you clarify the scope here?
  • Most importantly, how is your project going to benefit the ROS community? There’s no mention of expected contributions in either the text or the survey. You’re querying the ROS community here, so there should be some clear expectations in this regard.

Some resources worth considering for you and your group:


Hello @vmayoral!

I appreciate your pointers to related efforts; the RobotPerf project in particular seems quite relevant. I will spend more time catching up on everything you’ve provided. Before I respond to your questions, I want to first mention that the COMPARE project is currently in its scoping phase, so our efforts are being spent investigating existing work, interfacing with users, and finding where our ecosystem can best fit. If my answers appear vague or high level, that’s because we’re still figuring out what we want to do. All feedback is welcome and appreciated!

  • No, I was not previously aware of RobotPerf; I’ll check whether my collaborators were. Based on my initial review, RobotPerf seems largely focused on the computing power, speed, efficiency, etc., of robotic systems (I will do more reading to confirm this; please correct me if I am wrong). The ecosystem we are aiming to develop will be inclusive of all components of robotic manipulation: robotic hardware, software, physical and virtual benchmarking tools, benchmarking protocols, standards, dataset generation, and the sharing of benchmarking data for many types of robotic manipulation tasks, including grasping/end effector performance, in-hand manipulation, pick and place, kitting, assembly, etc. The COMPARE project intends to develop the infrastructure and mechanisms to improve how open-source assets across all of these categories are developed, distributed, and used for benchmarking, and to spur large-scale activities within the community (e.g., competitions, distributed benchmarking, new conference tracks and journals). With RobotPerf as part of the open-source landscape, we certainly want to stay abreast of its activities and ensure it is incorporated into the ecosystem.

  • Yes, with ROS being a vital component of the existing open-source landscape, it is very much an integral part of the project. But we will also consider open-source solutions for robotic manipulation that are outside of ROS.

  • I did not intend to suggest that we would not be sharing back any data. As an open-source effort, all data will be shared back with the community, researchers, academia, and industry alike. If there is particular language that suggests otherwise, I would appreciate a pointer so we can revise it.

  • Our focus is on manipulation and perception (manipulation is ultimately the main domain we are attempting to impact, but it inherently involves perception in many applications). We intentionally scoped our survey promotion language to be more generic, as the survey is still relevant to those outside of manipulation and we may be able to leverage best practices or lessons learned from other domains.

  • As part of our scoping efforts, we want to understand where the development of such an ecosystem and the activities we are planning could have the most benefit. If there are particular aspects of ROS development where we can contribute (such as some of the working groups you’ve mentioned), then that could be our contribution. At a minimum, we know how prevalent ROS is in robotics and benchmarking, so we figured the perspectives of ROS users on these topics would be especially relevant.

Thank you again for your thorough review of our materials and the questions posed.


Thanks for sharing your thoughts @adam.norton.uml!

I believe there’s an overlap here, but I’d need to learn more about your project to be sure. RobotPerf is very focused on non-functional, grey-box performance testing [1]. Is your focus more on the functional testing side of things?


  1. See definitions of the various types of performance testing here.

Our focus is on the functional testing side of things, so there is probably less of an overlap than might be perceived. But lessons learned and best practices are still a good thing to discuss!
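
To make the distinction concrete for readers following along, here is a toy sketch of my own (the `Grasp` type, `plan_grasp` planner, and 50 ms budget are hypothetical stand-ins, not part of COMPARE or RobotPerf): a functional test asserts that the planner’s *output* is correct, while a non-functional performance test asserts how *quickly* it runs, without caring which internal algorithm produced the answer.

```python
# Toy contrast between functional and non-functional (performance)
# testing. Grasp, plan_grasp, and the 50 ms budget are hypothetical
# stand-ins for illustration only. Runnable with pytest.
import time
from dataclasses import dataclass


@dataclass
class Grasp:
    position: tuple  # (x, y, z) in meters, in the robot base frame

    def is_kinematically_feasible(self) -> bool:
        # Stand-in check: within a 0.8 m reach envelope of the base.
        return sum(c * c for c in self.position) ** 0.5 < 0.8


def plan_grasp(object_pose):
    # Stand-in planner: grasp directly at the object pose.
    return Grasp(position=object_pose)


def test_grasp_is_feasible():
    # Functional: is the planner's output correct?
    grasp = plan_grasp((0.4, 0.0, 0.1))
    assert grasp.is_kinematically_feasible()


def test_grasp_meets_latency_budget():
    # Non-functional (grey-box): does it run fast enough, regardless
    # of how the answer was computed internally?
    start = time.perf_counter()
    plan_grasp((0.4, 0.0, 0.1))
    assert time.perf_counter() - start < 0.050  # 50 ms budget
```

Both kinds of tests matter for benchmarking; they just answer different questions.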


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.