What are AI System Cards for ROS?

“System Cards” are to AIs somewhat what baseball cards are to players: a summary matrix of key parameters, useful for tracking rapidly changing versions. The term is barely a year old, but events in AI are moving so fast that “System Card” is already a standard term, like much other AI jargon that hardly existed until recently. It’s up to the ROS developer community to apply AI System Cards and to define any ROS-specific parameters that could become part of the evolving standard.

Further, it may be apt to develop Robot System Cards to manage robot fleet operations at the top level of abstraction.

The original paper laying out AI System Cards:

arXiv:2203.04754 (arxiv.org)

I’m finding this paper fairly dense to read. Since you seem to be up-to-speed on the concepts here, can you propose what a system card would look like for ROS?

This paper appears to have only 6 citations, hardly a standard as far as I can tell from afar. Are you sure this has the buy-in that you believe it does?

My super quick googling implies Meta and OpenAI seem to be buying into it… but I’ve never heard of this before (and I get the sense other folks in here haven’t either). Given the difference between AI architectures and robot fleets, I’m also thinking that @mjcarroll’s suggestion to provide an example would be helpful.

We are defining Robot System Cards for the first time here, so nothing much is set, except for the historical context of the ongoing transition to AI-assisted coding.

Let us consider an RSC as something like a summarized spec with API-like parameters. Instead of formal code, it will tend toward key descriptors in semantic natural language. There will be standards like ISOxxxx, and specific descriptors for processing, CPUs, software suite versions (including LLM/LWM compatibility), multi-sensing, actuation, power spec, and so on. There will be links to documentation, narrative tech notes, and so on.
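To make that concrete, here is a minimal sketch of what such a card might look like, expressed as a plain Python dictionary. Every field name, value, and the `summarize` helper are hypothetical illustrations for discussion, not part of any existing standard; the ISO number is only an example of the kind of reference a card could carry.

```python
# Hypothetical Robot System Card (RSC) sketch -- all fields are illustrative.
robot_system_card = {
    "name": "warehouse-amr-example",        # placeholder robot identifier
    "standards": ["ISO 10218"],             # example safety standard reference
    "compute": {
        "cpu": "quad-core ARM (example)",
        "accelerator": "none",
    },
    "software": {
        "ros_distro": "humble",             # example ROS 2 distribution
        "llm_compatibility": ["text-in/text-out APIs (unspecified)"],
    },
    "sensing": ["lidar", "imu", "rgb_camera"],
    "actuation": ["differential_drive"],
    "power": {"battery_wh": 480},           # illustrative figure
    "documentation": ["https://example.org/docs"],  # placeholder link
    "notes": "Narrative tech notes would be linked or embedded here.",
}

def summarize(card: dict) -> str:
    """Render a one-line, baseball-card-style summary of an RSC."""
    return (f"{card['name']}: ROS={card['software']['ros_distro']}, "
            f"sensors={len(card['sensing'])}, "
            f"standards={','.join(card['standards'])}")

print(summarize(robot_system_card))
# -> warehouse-amr-example: ROS=humble, sensors=3, standards=ISO 10218
```

A real RSC would more likely live as YAML or JSON alongside the robot's package metadata; the dictionary form is just the easiest way to show the kinds of fields under discussion.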

RSCs need some time to develop and mature, and generative AI may well be best suited to define and write them.

Nothing is truly standard in generative AI yet, only the reasonable expectation that standards will be developed as needed.

The cited paper is influential (hence the “buy-in”), and the suggested standard is clearly provisional rather than suspect.

We might just as well call the “card” here a “data sheet”, much as a ROS “topic” could be called a “bus”.