
Proposal: Moderation Guidelines on Surveys

Hi All,

We’re seeing fairly regular requests to the community for surveys. While I think gathering data is generally helpful, I worry that at times the relationship can be repetitive, poorly planned, and transactional. I would like to have a survey moderation policy in place that ensures that when we do surveys, the data is open, avoids personally identifiable information, is non-repetitive, and is broadly applicable.

My proposed policy for surveys is as follows:

(1) The survey poster must have a trust level of 2 (member).
Rationale: the submitter must regularly engage with the ROS 2 community.

(2) The survey must have prior approval to verify:
(a) The survey doesn’t duplicate recent prior work.
(b) The results will be opened up to the broader community.
(c) The intent and scope are relevant and correct for the ROS community.
(d) No personally identifiable information is collected. If there is a compelling reason to collect this information, it is redacted when released.
(e) The poster has a plan to collect enough information for the survey to be representative.
(f) The poster will follow up with the resulting data and a thorough analysis and summary.
(g) The survey is correctly scoped to be broadly useful. (i.e. it benefits everyone’s project).

Thoughts? Opinions? Let us know what you think.

  • I support this policy
  • I do not support this policy


We’re actively trying to engage and foster collaboration with the broader scientific software engineering community and this rule seems like it will make it difficult for them to do their work if they depend on things like surveys.

It’s not uncommon for people to register on Discourse specifically with the intent to reach out to the ROS community, which up till that point they’ve only been observing.

This seems a bit vague: when is something broadly useful? If researchers have a specific research question, by definition a survey serves their goals, not necessarily those of the ROS community. In addition, the usefulness may not be apparent immediately.

My point with requiring a minimal trust level is to explicitly prevent drive-by surveys and to formalize the requirement of a certain level of lurking prior to making a post. One could do this fairly easily by making a post along the lines of, “We would like to survey X, what should that survey look like to be useful to other groups?” I don’t think it is particularly onerous to ask that people engage first with the community they wish to survey and understand.

To the second point, broadly is probably not the best qualifier. What I would like to see is that surveys about a particular topic be scoped in such a way that the data collected is useful to a wider audience than just the survey author. It may be the case that most surveys are like this; but it may also not be the case. We can’t really determine that unless we have a discussion about the work prior to the survey going out. The counter narrative is that many groups may have their own one-off questions that would be easy to append to a survey, and add a lot of value, without a lot of work.

The take-home on my proposal is not to curtail research, but instead to be more mindful of how and where we use people’s time. The reality of the situation is that many surveys are actually business and market research masquerading as “research.” I think it would be beneficial to have a process where we ensure that the research benefits the community and is conducted in a productive fashion.

This always brings up the question of “from whom?” and “what metrics is it based on?” The process stinks, but I’m sure it’ll come up later in this discussion if not now.

Agreed, but @gavanderhoorn brings up a good point. I think if we make a requirement like that, there should be built in exceptions or a clear contact point for outside people to be able to still move forward if deemed appropriate.

This overall seems overly regulated for a community message board. If you’re interested in regulating surveys, just go full bore: outright ban external surveys and take responsibility for running an annual survey with questions sourced from the community members who would have otherwise submitted surveys themselves. Example: Jack wants to ask ROS users about some new sensor he wants to make. Jack would post a survey that would be immediately shot down, but his questions, or the intent of that work, would be represented in the annual survey.

To make things clear here: I don’t want to get in the way of valid, well-thought-through, and well-discussed efforts. What I am trying to prevent are drive-by / one-off / off-the-cuff surveys. If you have a single question, Discourse has a handy poll feature. I do have to look at every single survey posted, primarily to determine whether it is for marketing purposes. We’ve already had to moderate a couple of non-ROS-specific surveys. I want to have a formal policy around surveys so that they are explicitly moderated.

It’s really easy to get to the “member” trust level. I don’t think that’s too much of an ask.

“From who” is, I assume, @Katherine_Scott, as she is the community manager.

The metrics should probably be clarified, but I have no good ideas for what they should be.

Member, aka Trust level 1, only requires reading 30 posts across 5 topics and spending 10 minutes on the site.

Trust level 2 currently has the default requirements of reading 100 posts across 20 topics, 60 minutes spent on the site, 15 days visited, 1 like received, 1 like sent, and 3 topics replied to.

It’s definitely a higher bar. There are 2068 vs. 296 users in the respective trust levels. And since less than half of the users have reached “member” status by spending the 10 minutes, maybe requiring 10 minutes is at least a bit of a barrier that will avoid drive-by posting.

It would be great if we can have a moderately consistent policy that then can be administered by any of the site moderators w/o having to bottleneck on one specific person’s availability.

OK, I had mixed up the trust levels. Trust level 1 might be more sensible then?

Relevant would be knowing what level the people who have posted surveys in the past have had when they posted. Probably not information that’s available, though.

+1

I think that the intent here is good, but if you’re going to have a policy that depends on X, then you have to measure X and have a threshold for passing the test. For example, if a survey isn’t meant to duplicate recent prior work, does that mean no overlapping questions? One? Two? 50%? What if they’re worded differently? I ask not to be awkward, but because someone pitching their proposal is going to be awkward in exactly this way, and moderators will have to engage with them.

I’ve spent a lot of time in the trenches with surveys, and arguing over IRB stuff, and I worry about moderator time being sunk and people arguing endlessly. What’s the intent of the rules? If it’s to disallow commercial entities doing surveys, then make that a rule, with an exception if they ask. Maybe there’s a way to do this where people are free to do surveys (by and large), but are encouraged to post their raw data and analysis on a wiki page (kind of like fivethirtyeight.com), or something like that.

On the flip side, if you want the moderators to apply a sniff test, then just make a single rule: No surveys, unless approved in advance by a moderator. And give a mechanism to get the permission.

Some specific thoughts

The survey doesn’t duplicate recent prior work.

This is vague and people will argue about what “duplicate” means.

The intent and scope are relevant and correct for the ROS community.

Not really sure what that means.

No personally identifiable information is collected. If there is a compelling reason to collect this information it is redacted when released.

This is the right thing to do, and anything approved by a university will come with this baked in if it’s blessed by an IRB. However, not allowing personally identifiable information means I can’t do linked follow-up studies. Also, by the strict definition of “personally identifiable information” used by my IRB, this includes IP addresses and the like, which is hard to verify unless you have direct access to the survey engine (like Qualtrics).

The poster has a plan to collect enough information for the survey to be representative.

How much is “enough”? Do you need a power analysis? How do you predict this before the survey is run?

The survey is correctly scoped to be broadly useful. (i.e. it benefits everyone’s project).

How do we measure this?
