OSRF Adopts Policy on Use of Generative AI in Contributions

As the use of Generative AI tools becomes increasingly prevalent in software development, open-source projects face unique challenges and opportunities. These tools can enhance productivity, foster creativity, and streamline workflows, but they also raise important questions regarding ownership, licensing, attribution, and ethical use of generated content. As the OSRF’s open-source community embraces the capabilities of Generative AI, developing a comprehensive policy has become crucial. This will not only safeguard the integrity of the projects but also foster an inclusive and innovative environment for all contributors.

In response to requests from the community, the Open Source Robotics Alliance’s Technical Governance Committee (TGC) chartered a Technical Committee (TC). This TC investigated how other open source foundations and projects are approaching the unique challenges of Generative AI, and drafted a policy for the OSRF. The policy has now been reviewed and approved, and has been publicly posted in the OSRF’s Policies and Procedures repository. It is available in PDF form, and is also summarised below.

The OSRF allows the use of Generative AI tools in contributions (code, docs, etc.), but contributors are responsible for:

  • Understanding the tools and their limitations.
  • Ensuring the output is high-quality, original, and doesn’t violate copyright.
  • Clearly disclosing the use of AI tools in their contributions (see the example below).
  • Verifying the accuracy and appropriateness of AI-generated content.

Basically, use AI responsibly, be transparent, and make sure your contributions are still top-notch!
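For illustration, disclosure can be as simple as a note in the commit message or pull request description. Here is a hypothetical example; the policy PDF defines the actual disclosure requirements, and the `Generated-by:` trailer shown here is just one possible convention, not a format mandated by the OSRF:

```text
Add parameter validation to the joint state publisher

The initial boilerplate for this change was generated with the
assistance of ChatGPT and then manually reviewed, corrected, and
covered by new unit tests before submission.

Generated-by: ChatGPT
```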

I’m glad we’ve taken a more pragmatic approach. Just yesterday I had a conversation with ChatGPT that almost certainly saved me a lot of time. I was thinking it’s kind of sad that I wouldn’t be able to upstream anything like it, but I guess I can work something like it into L-turtle.

Well, can you guarantee [the contributor responsibilities listed above]?

Isn’t that our job to figure out as contributors? As a reviewer, I always clone the repo locally and run the tests before providing feedback.

FWIW, there is limited documentation on the task I wanted to achieve (other than reading the source code). The fact that the LLM simplified the boilerplate saved me a lot of time.

The output is small enough that I can read it, audit it, and write my own unit tests for it. It’s not without errors, but I know how to fix those.

Is it copyrighted? I’m not a lawyer, but most of the code is boilerplate. Has the parent company that trained the LLM violated copyright? Probably. If we don’t like this, then we should be explicit in banning it.

But at this point I’d also ask: how do you know a contributor has not been using an LLM? Should we subject contributors who are honest about LLM use to more scrutiny than those who aren’t?

This is the reason for this middle-of-the-road policy. It gives contributors a way to declare that they used an LLM without having their contribution rejected because of that. And in the future, if an LLM is found to produce copyright-violating content, we will be able to identify and replace any contributions that used it. Transparency is the goal.
