ROS 2 TSC Meeting Minutes 2023-08-17

And I disagree once again. I’ve presented various examples above, including the fact that the TSC has increasingly been making things very difficult: it over-ruled the WGs and went beyond alignment, into policy creation and into contradictions. The TSC changed the policies for creating a WG multiple times. The TSC changed its mind multiple times about WG categorization (who gets to be official, community, etc.), and as of today I still don’t understand why this is needed (as if TSC-created WGs were of more relevance). TSC members listed and unlisted things at their discretion at About Working Groups — ROS 2 Documentation: Rolling documentation without even pinging the corresponding WG leads. The ROS 2 Hardware Acceleration Working Group isn’t even listed there today.

Finally, the whole current WG-TSC reality is very convoluted. After successfully running the ROS 2 Hardware Acceleration Working Group (HAWG) for two years (2021 activity report, 2022 activity report), again, two years :boom:, and having brought hundreds of developers (both ROSers and externals) together, following what’s established at

“Community working groups can become TSC working groups by a simple vote of the ROS 2 TSC”

About Working Groups

I applied early this year for the ROS 2 HAWG to become a TSC Working Group (and let’s highlight that there are existing TSC WGs led by non-TSC members: observers, not official members). I did so, only to get pointed to the back door and asked to “apply to the TSC as a company for an official membership”. Of course, this got rejected. I got the percentages, but no further input. Nothing.

So no, Ingo, the TSC is not primarily about alignment. Aligning would have meant recognizing the ROS 2 HAWG’s work and interest, as well as its numbers and contributions. What happened is very different from alignment, and that’s precisely why I started this rant: because something’s very wrong with this TSC.

I understand that you may feel frustrated and believe that I am not directly acknowledging the suggestions to advance the matter technically. I stated my disagreement with the rejection, and that such responses/alternatives are not the right path, as they’ll lead to benchmarking misunderstandings. It’s my right to do so, and I blame the current TSC. This is the forum to complain and discuss it.

I recognize that emotions can run high when we are passionate about a subject (and everyone knows that I am when it comes to ROS). It is everyone’s right to feel upset, but that’s not a reason to get personal, as you did. Personal attacks hinder effective communication and prevent us from collecting valuable feedback that could lead to a better outcome for everyone in this community.

Additionally, while I have been accused of making ad hominem criticisms, my initial comments were carefully generalized. It was only after @gbiggs’ rundown of TSC members requesting further clarification that I particularized my comments, providing some examples directed at their positions in the TSC (not that I dislike any of them or hold a personal opinion; I just don’t care). So no, there is no “making myself out to be the victim here”, Ingo. I’ve been taking responsibility for each and every one of my comments, responding with clarifications. Taking responsibility for my words is the opposite of playing victim.

And that is precisely the problem, Jaime, that I’ve remarked on above. We need TSC members that are involved in the technology, to steer it in the right direction. You are not involved in the technology anymore, so ideally, and in my opinion, you should get someone from your side who is involved in the technology and can help steer the project. Otherwise, you could hire another CEO tomorrow and have her/him replace you in the TSC. After all, she’d be representing eProsima.

Wrong. Acceleration Robotics is a company producing semiconductor building blocks for robots using hardware acceleration. We make faster ROS components using hardware acceleration, so of course it’s important for us to have an accepted consensus on how to measure performance. This helps us drive our product development and services. REP-2014 was an attempt to elevate the already existing consensus in the ROS community (using ros2_tracing) to a standard (yes, an informational one, but a standard nonetheless, because the consensus already exists: the whole ROS stack is instrumented with it). Trying to reflect this (again, already existing consensus in the ROS 2 codebase) in a REP was the most logical step.

And just for clarity, again, the REP-2014 and RobotPerf (an implementation of the REP) initiatives aimed to establish a common ground for robotics benchmarking, not to promote any specific company. They are built vendor-neutral (which is what’s scary for some companies), and that starts with ourselves getting out there, requesting feedback for multiple years (within the HAWG), then writing down the general consensus (REP-2014) and implementing it (RobotPerf). Btw, representatives of the TSC participated in some of these meetings, so you should be aware of this (though you may choose to ignore it).

Also, note that, similar to MLPerf, RobotPerf aims to become an industry standard by adoption. Placing ROS 2 at the center of this approach encourages roboticists to use ROS for performance evaluation, benefiting the entire robotics ecosystem. Attempting to leverage ROS’ successful community standardization efforts to generate an industry standard (RobotPerf) is not wrong or bad for ROS; quite the opposite. The informational and non-enforceable nature of REP-2014 was intended as a guideline, not a strict rule. Its rejection by the ROS 2 TSC is a missed opportunity to set common conventions that could have positively impacted the community.

Let me give you a direct example of the relevance of a “consensus in benchmarking”: in the last RobotPerf release we disclosed that we’re observing a very concerning behavior with Fast DDS when it comes to analyzing the latency of certain perception computational graphs.

This is not new, and if you were more involved in the technology, you would probably be a bit more concerned by problems like this. Having REP-2014 out there would help us all align and agree on a general way to collect metrics, benchmark, and compare.
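To make concrete what an agreed-upon convention would cover, here is a minimal sketch in plain Python of summarizing end-to-end latency samples so that two middleware configurations can be compared with the same statistics. The function name, configuration names, and all numbers are hypothetical illustrations, not RobotPerf data or ros2_tracing APIs; the point is only that everyone computing the same summary statistics over the same kind of samples is what makes results comparable.

```python
# Illustrative sketch: comparing end-to-end latency distributions of two
# hypothetical middleware configurations using a shared set of statistics.
from statistics import mean, quantiles

def latency_stats(samples_ms):
    """Summarize a list of end-to-end latencies (milliseconds)."""
    cuts = quantiles(samples_ms, n=100)  # 99 percentile cut points
    return {
        "mean": mean(samples_ms),
        "p50": cuts[49],   # median
        "p99": cuts[98],   # tail latency, where spikes show up
        "max": max(samples_ms),
    }

# Made-up samples: one configuration with an occasional latency spike,
# one that is slightly slower on average but steady.
config_a = [1.9, 2.1, 2.0, 2.2, 2.1, 2.0, 9.5, 2.1]
config_b = [2.4, 2.5, 2.6, 2.4, 2.5, 2.6, 2.5, 2.4]

for name, samples in (("config_a", config_a), ("config_b", config_b)):
    s = latency_stats(samples)
    print(f"{name}: mean={s['mean']:.2f} ms  p50={s['p50']:.2f} ms  "
          f"p99={s['p99']:.2f} ms  max={s['max']:.2f} ms")
```

Comparing only the means would hide exactly the kind of tail-latency behavior described above, which is why a common convention on *which* statistics to report matters.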

Fair enough, Phil. Here’s an improved version of what I conveyed before, now including actionable proposals:

| TSC Issue | Comment | Suggested action |
| --- | --- | --- |
| Lack of Transparency | I criticized the lack of transparency in the TSC’s decision-making process, particularly in the evaluation of REP-2014 and other WG-related matters. I called for more openness and accountability. | All decisions by the TSC should be disclosed, including voters and votes. |
| Non-Technical Rejection | I expressed frustration that the feedback for the rejection of REP-2014 was mostly non-technical, despite the proposal having been discussed, reviewed, and re-iterated based on initial feedback, which we addressed in detail. | TSC comments and feedback should remain technical. If TSC members don’t have the expertise, they should either defer to the corresponding WGs or invite external independent experts (without conflicts of interest) to advise accordingly. |
| Political Nature of TSC | I criticized the TSC for being more political and less technological, and questioned the value of the current TSC setup. | TSC membership shouldn’t be voted on by the TSC; it should be voted on by the community as a whole (an open poll to all community members), preferably in an open/transparent manner (again, to facilitate accountability). TSC memberships should also be renewed and voted on periodically, e.g. every year. Politics and lobbying within the TSC should be rejected. |
| Lack of Strong Contributors | I criticized the TSC for missing strong contributors, key players, and individuals with a strong track record in the community. | The TSC should be diverse in both gender and representation. Companies (big companies, smaller ones, and startups) as well as community (non-affiliated) representatives should get similar representation counts. The overall ROS community should have avenues to raise concerns against TSC members, and methods to replace them if appropriate. |
| Anonymous Lobbying | I expressed concern about anonymous lobbying (mostly corporate, and under confidentiality) and the confidential tone of ROS TSC discussions. | Confidentiality should be an exception, and never used for TSC decisions. |
| Need for Community-Centric Governance | I called for a change in the current leadership towards a more community-centric governance model. | TSC community representatives should increase in number to match the count of company representatives and (regardless of their affiliation) should act as non-affiliated where TSC matters are concerned. |
| Inconsistency in Standardization | I criticized the inconsistency in the TSC’s decision to reject standardizing a benchmarking approach in ROS 2, despite existing ROS codebase consensus, existing accepted implementations, and the consensus reached among those involved in its development. | Accept community standards as community standards. Don’t reject what’s obvious (e.g. the adoption of ros2_tracing for tracing and metrics collection). |
| Lack of Accountability | I criticized the opacity and lack of accountability in the governance structure, and called for proper mechanisms to replace inactive or non-contributing representatives. | All votes should be publicly disclosed, with the corresponding voters. Voters should be allowed to provide comments clarifying their position. All TSC members should be elected by the community, held accountable for their acts, and/or replaced given the right circumstances. |
| Slowing Down of Innovation | I expressed concern that recent policies and events are slowing down innovation in the ROS ecosystem and discouraging many from continuing to contribute. | Proactive contributions aligned with ROS interests should be encouraged, not discouraged or stopped. The TSC should carefully consider the technical value of each new initiative and its implications for the ROS ecosystem. |