Use ROS messages in the parameter interface

Hi,
we would like to propose enhancing the parameters to support ROS msgs.
The goal here is to have structured parameters; their absence is a
major roadblock for us in the transition from ROS 1 to ROS 2.

I have already dug a bit into the code and would suggest the following changes:

Add a new type in rcl_interfaces/msg/ParameterType.msg

  • PARAMETER_YAML_MSG

In this case, the ParameterValue.msg would contain a string_value that
contains a yaml representation of the msg that should be set.

On the C++ side, we can use the rosidl_typesupport_introspection_cpp framework
to parse/serialize the given yaml into a msg. I have already written a short prototype
and this works fine.
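
To make that concrete, here is a rough sketch of how such a yaml → msg conversion could look. This is not the actual prototype; it assumes yaml-cpp and only handles top-level double fields, with nested messages, sequences, and the other primitive types handled analogously:

#include <cstdint>
#include <yaml-cpp/yaml.h>

#include "rosidl_typesupport_introspection_cpp/field_types.hpp"
#include "rosidl_typesupport_introspection_cpp/message_introspection.hpp"

// The generated introspection header for MessageT must also be included at the
// call site, since it declares the specialization used below.
template<typename MessageT>
MessageT yaml_to_msg(const YAML::Node & yaml)
{
  using rosidl_typesupport_introspection_cpp::MessageMembers;
  const rosidl_message_type_support_t * ts =
    rosidl_typesupport_introspection_cpp::get_message_type_support_handle<MessageT>();
  const auto * members = static_cast<const MessageMembers *>(ts->data);

  MessageT msg;
  auto * base = reinterpret_cast<uint8_t *>(&msg);
  for (uint32_t i = 0; i < members->member_count_; ++i) {
    const auto & member = members->members_[i];
    if (!yaml[member.name_]) {
      continue;  // a real implementation would reject missing or extra fields here
    }
    if (member.type_id_ == rosidl_typesupport_introspection_cpp::ROS_TYPE_DOUBLE) {
      *reinterpret_cast<double *>(base + member.offset_) =
        yaml[member.name_].as<double>();
    }
    // ... other primitive types, nested messages and sequences would recurse here
  }
  return msg;
}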

From here, the next steps would be to look into rclcpp::Node and how to extend the
API for the parameter interface.
E.g.
my_node::msg::Parameter param = this->get_parameter("Foo").as_msg<my_node::msg::Parameter>();

Before going further with this, I would like to have feedback on this approach and
if it has chances to be accepted upstream.

@JM_ROS thanks for posting the idea, I have a question.

So this was supported in ROS 1 but not in ROS 2? I have mostly worked on ROS 2 recently, so I am not sure about this.

ROS1 supported parameters in the XMLRPC format, so you could set/use arbitrarily structured parameters.

There was no support for parsing these automatically into ros messages though, so this would be new.

So when I first started porting things over to ROS2, I also ran into this several times, where we had parameter structures that could not be represented in ROS2 style. Here’s a concrete example of how I “fixed” the robot_calibration parameters to be able to load them in ROS2:

In ROS1, I had an array of models to load:

models:
 - name: arm
   type: chain
   frame: wrist_roll_link
 - name: camera
   type: camera3d
   frame: head_camera_rgb_optical_frame

In ROS2, the common pattern is to make the old array a list of names (you’ll also see this quite a bit in MoveIt and other packages), and then have each name be a block of parameters:

models:
- arm
- camera
arm:
  type: chain3d
  frame: wrist_roll_link
camera:
  type: camera3d
  frame: head_camera_rgb_optical_frame

While it is a bit more verbose, one of the major advantages here is that you get much better documentation/introspection. If the whole “models” section were a YAML blob, you wouldn’t automatically know how to fill it in - but in the format I show above, when you do a “ros2 param list” you can actually see all the parameters (whether they were set or left as default) for each of the arm/camera blocks.

The code I use to parse the ROS2 block can be seen here robot_calibration/params.cpp at ros2 · mikeferguson/robot_calibration · GitHub
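
For reference, a minimal rclcpp sketch of that pattern (this is not the robot_calibration code itself; the node name and defaults are placeholders):

#include <memory>
#include <string>
#include <vector>

#include "rclcpp/rclcpp.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("calibration_node");

  // "models" only lists names; each name then gets its own block of parameters.
  auto models = node->declare_parameter<std::vector<std::string>>(
    "models", std::vector<std::string>{});
  for (const auto & name : models) {
    auto type = node->declare_parameter<std::string>(name + ".type", "");
    auto frame = node->declare_parameter<std::string>(name + ".frame", "");
    RCLCPP_INFO(node->get_logger(), "model %s: type=%s frame=%s",
      name.c_str(), type.c_str(), frame.c_str());
  }

  rclcpp::shutdown();
  return 0;
}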


I just realized that I should explain more of my vision of the API / tooling side.

If you declare a parameter you normally give a type. E.g.
node->declare_parameter<my_node::msg::Parameter>("super_duper_param");

This type can/will be exported to the external tooling, e.g. ros2 param.
The external tooling itself can then do all the nice things, like bash completion, that rostopic echo can do.

This should also allow us to do schema checks on the yaml blob even before applying it.

You would know that the parameter is of a defined message type. Therefore only the fields that are present in the message would be allowed. You would also have to give all fields of the message, as the yaml → msg parser would otherwise reject the input. In my opinion this would make the parameter interface much more obvious, especially if, as noted in my previous post, you have tool assistance for setting the parameters.

Yes, I see. I got hung up on the YAML part and missed the fact that once you parse it into a message, it would be well defined (although I think you would still miss out on the ROS2 ability to add things like a description for the parameter - however that doesn’t seem to be widely used).

This isn’t quite a message type used for parameters; however, I have this library that makes the structure of your parameters much more declarative and adds validation and structure for dynamic parameter handling: GitHub - PickNikRobotics/generate_parameter_library: Declarative ROS 2 Parameters

Hi, I found your library during my initial research into the topic, but as far as I could tell, it only provides support for parameter validation; it does not solve the problem that lists of mixed types are not allowed. We have a lot of parameters that look like this:

border_polygons:
    - pose:
        orientation: 0.0
        x: 1.0
        y: -0.22
      rect:
        size:
          x: 2.0
          y: 0.01
    - pose:
        orientation: 0.0
        x: 1.0
        y: 1.45
      rect:
        size:
          x: 2.0
          y: 0.01

So our primary goal here is to support this use case by adding support for messages in parameters.

Similar to what @mikeferguson said above, one thing that is possible with my library is to do this (small excerpt from the ros2_controllers joint_trajectory_controller parameter config):

joint_trajectory_controller:
  joints: {
    type: string_array,
    default_value: [],
    description: "Names of joints used by the controller",
    validation: {
      unique<>: null,
    }
  }
  gains:
    __map_joints:
      p: {
        type: double,
        default_value: 0.0,
        description: "Proportional gain for PID"
      }
      i: {
        type: double,
        default_value: 0.0,
        description: "Intigral gain for PID"
      }
      d: {
        type: double,
        default_value: 0.0,
        description: "Derivative gain for PID"
      }

This would result in a parameter structure like this:

test_joint_trajectory_controller:
  joints:
    - joint1
    - joint2
    - joint3
  gains:
    joint1:
      p: 1.0
      i: 4.2
      d: 0.1
    joint2:
      p: -1.0
      i: -0.2
      d: -0.1
    joint3:
      p: 0.3
      i: 0.3
      d: 0.1

The library generates a struct called Gains with the PID values and places them in a map from strings to structs. There is no reuse from one config to the next, but using this you can create structures of parameters similar to what ros2_control does.
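
For completeness, a rough usage sketch of the generated code; the header name and field names below (ParamListener, get_params(), gains.joints_map) are assumptions based on the YAML above and the library’s conventions, not verified against the actual generated output:

#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "joint_trajectory_controller_parameters.hpp"  // generated header, name assumed

void read_gains(const std::shared_ptr<rclcpp::Node> & node)
{
  auto param_listener =
    std::make_shared<joint_trajectory_controller::ParamListener>(node);
  auto params = param_listener->get_params();

  for (const auto & joint : params.joints) {
    // __map_joints yields a map from each joint name to a generated gains struct.
    const auto & gains = params.gains.joints_map.at(joint);
    RCLCPP_INFO(node->get_logger(), "%s: p=%.2f i=%.2f d=%.2f",
      joint.c_str(), gains.p, gains.i, gains.d);
  }
}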

It is certainly possible to consider expanding the parameter API to support more complex data types. However, it’s important to keep in mind the tradeoffs of those sorts of changes.

Parameters in ROS 2 were designed not just as pure data storage, but also with specific semantics as well as introspection and discoverability. You can read more about the design here.

You can see in that document that we specifically designed the parameters to be easily set and validated in configuration files (read: launch files). With support for more complex datatypes we would have to add the ability to embed full messages in launch files. There is technically no problem serializing any message into json, but validation and error checking become much more of a problem. We would have to add logic to the launch mechanisms to have fallback behaviors if the complex datatypes cannot be parsed correctly, whereas with the non-complex datatypes a yaml or xml linter is sufficient to know a priori that the config is parsable.

And similarly the parameter server adds several more layers, including parameter events and parameter descriptions, which can currently be parsed fully generically on any system, and we can make generic tooling that can interact with any parameter on any node. Adding custom datatype support would mean that every layer of the introspection, discovery, and description would also have to be made completely generic and fully implemented in all client libraries.

There are also guarantees available about the atomicity of parameters and the ability to accept or reject parameter updates.

The design works hard to make parameters as convenient and flexible as possible. However, they’re not optimized for generic data storage; they’re focused specifically on being configuration parameters for nodes that can be tuned in launch configurations and at launch time.

I believe that extending parameters to become fully arbitrary data storage would be detrimental to the overall experience. Instead I would suggest that an additional data storage system be designed for the use case(s) that you’re looking for. When designing a storage and retrieval system it will be important to take into consideration what sort of data you want to process. For example, what is the data size? What is the bandwidth? What is your expectation for persistence? What level of introspection do you want to support? Does it need guarantees or other semantically meaningful behaviors?

Parameters are integrated into launch, but under the hood they’re actually built on top of the messages and services you can see in rcl_interfaces. The same sort of design process can be followed to support other use cases.
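
For illustration, setting a parameter on another node is just a call to its set_parameters service defined in rcl_interfaces (the target node name "/other_node" below is made up):

#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "rcl_interfaces/msg/parameter_type.hpp"
#include "rcl_interfaces/srv/set_parameters.hpp"

void set_remote_double(const rclcpp::Node::SharedPtr & node)
{
  auto client = node->create_client<rcl_interfaces::srv::SetParameters>(
    "/other_node/set_parameters");

  auto request = std::make_shared<rcl_interfaces::srv::SetParameters::Request>();
  rcl_interfaces::msg::Parameter p;
  p.name = "foo";
  p.value.type = rcl_interfaces::msg::ParameterType::PARAMETER_DOUBLE;
  p.value.double_value = 1.5;
  request->parameters.push_back(p);

  // The response carries one SetParametersResult per parameter (accepted/rejected).
  client->async_send_request(request);
}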

When I think of supporting arbitrary datatypes beyond the non-complex ones, the very common examples are things like images and point clouds. These datatypes won’t work well in configuration files such as launch files.

In ROS 1 we basically delegated everything to XMLRPC for parameters and let the user do anything in there. The most backwards compatible way to add something would be to potentially add a GetXMLRPCValue.srv and a SetXMLRPCValue.srv in a package with a complement of language bindings. However, this is taking a shortcut in the design and leaves out a lot of potential value, such as leveraging the reuse of our already defined message formats, as you’ve highlighted.

But as a tl;dr, my suggestion is that instead of trying to shoehorn arbitrary data into the parameter API, you propose a data storage and retrieval interface that meets the semantic data requirements for your use case(s). With the modularity of ROS you can prototype this independently, and in the future, once it has been validated and shown to be in high demand, we can consider promoting it into the core. During development, the ability to rapidly iterate and evolve from feedback and testing will be much more productive than trying to directly evolve the much more well established API that many people are already relying upon.

Hi Tully, thanks for your detailed response.

I don’t get your point here. The proposal is to add a parameter that contains yaml/json or otherwise structured data. The validation of the contained data is still up to the node. Basically, as long as it’s valid yaml, it will just be forwarded to the node.

Apart from this, I would see it as beneficial if the ParameterDescriptor could be enhanced to give hints about the expected format of the parameter, so that external tooling can be developed to do schema validation on the parameter yaml file. I would also see this as a great improvement to the semantics of a parameter.

The parameter itself would stay atomic; you just need to set it as a whole every time.

This is an interesting point, especially the convenience part. In my opinion, and that of others I have talked to, it would be more convenient if parameters supported structured data. In my use cases, parameters are often more complex and cannot be expressed easily without structured data.
The workaround proposed by mikeferguson & tylerweaver works somehow, but it leads to code bloat and a poor user experience when trying to figure out which parameters to set without looking into the code.

E.g. using the workaround, the workflow comes down to: you start the node, do a ros2 param list on it, fill in the parameter ‘model’, and restart. Then the node crashes because you did not set some parameter that only appeared based on the value you entered in ‘model’. From there it is rinse and repeat…

This is not what I am aiming for at all…

I totally agree, I would not put images or point clouds into a parameter.yaml. Funnily enough, you could already do that using the binary option…
Again, our use case is structured data, like a list of points for the footprint polygon of our robot.

The proposal of adding generic yaml to msg parsing is built on top of the structured parameter support and can be seen as an independent thing that can live in some unrelated library.
The goal of the generic parsing is just to reduce code bloat by reusing the existing infrastructure.

I don’t understand your proposal. Are you suggesting a fully separate infrastructure for loading parameters, built on top of the normal ROS API?

On the high demand part: seeing that there are common patterns to work around the shortcomings of the current parameters, and open issues on this, seems to be a clear indicator that there is demand.

Sorry, in my previous reply I didn’t catch that you were proposing to actually embed the data as a json string rather than proposing support for embedded structured data.

This illustrates exactly my point. It could not be validated until it gets to the running node. It breaks the strong typing of datatypes used throughout ROS. A lack of strongly typed data prevents the system from providing introspection without resorting to string parsing and json parsing.

So the ParameterDescriptor would end up needing to effectively support a full JSON Schema. And every single tool such as launch and introspection tools, command line tools and linters would have to understand that too. Even if you were say using an xml file for the launch configuration, you’d have to parse json somewhere inside of the xml.

Overall we’ve made a commitment to keep data in the ROS messaging systems strongly typed. This allows the system to provide introspection, logging, and lots of other features without requiring custom coding.

I don’t see a significant difference between a point cloud and a list of points for a polygon. Why do you consider one structure with a list of points more reasonable than another structure with a list of points?

To this end, if you want to load different robot configurations that may include a footprint, it sounds to me more like you should be using a parameter that is your robot type, and then loading the robot’s footprint polygon and other information as a resource instead of embedding the raw polygon data into the parameter. This is the sort of approach I was suggesting when I said there are other design patterns to follow.

I was suggesting that you create specific capabilities outside of the parameter interface. For your robot footprint polygon, for example, you could have a service setRobotFootprint(polygon) which has clear semantic meaning, instead of trying to shoehorn it into a parameter. With this more concrete case I think my above suggestion of a parameterized resource makes more sense, but that is what I was suggesting.

As an aside, if you want to just abuse datatypes you could slip json inside a string parameter. But I’m also going to strongly recommend against that. This is again doing an end run around the strong typing that has helped ensure compatibility between ROS nodes even as the ecosystem grows with only loosely coupled development cycles.

Reading through your reply, especially the part about strong typing and the yaml schema, I think we still have a misunderstanding going on.

My proposal is to allow parameters that are of a message type. Therefore the ParameterDescriptor would get a new type ‘MSG’ and, if the type is set to ‘MSG’, must contain the name of the msg.

Inside of ros2 there is already a schema for how messages get converted to yaml.
For the way back, we can use the information present in the ros msg introspection system and infer the type of each yaml entry.

If your point is that you don’t want to stuff yaml into the ParameterValue, then I would propose to:

  • modify ParameterValue by adding a name field
  • add a new parameter type ‘CONTAINER’
  • add a new member to the msg: member_array_value

Afterwards one could still use the introspection system to convert ros messages to / from ParameterValue(s).

I am confused by this point; this is how it currently works (at least on the C++ side). The tooling is dumb: it just passes a yaml file to the node to be loaded from disk. Internally this is parsed and put into the overrides, and any entry in this data will only be evaluated if you actually declare a parameter of some specific type. There is no strong typing involved as far as I can see.

As pointed out above, my proposal is to deduce a type for every entry of the given yaml by using the introspection features of the ros msg format. Therefore we would have strong typing on the evaluation side inside of the node.

I think this is a misunderstanding of what I meant. My proposal is to export a msg name in the case that the parameter is of type yaml / msg. This is information that external tooling MAY use.
For example, you could create a tool that starts a node, lists all of its parameters, gets all of their types, and, armed with this, validates whether all parameters are set in a given yaml config file and whether there are any extra entries in the file that are not used by the node at all.
Like an optional roswtf for params that you can run on your config and that will tell you: the node expects ‘parameter’ but the yaml file contains ‘prmateter’.

This is a bit philosophical; for me the difference is that a structured parameter could still be written and understood by hand. The polygon I am referring to in my example would only contain a few points and be a rough shape of the robot (perhaps a bad example). This would not be the case for a full pointcloud / map of the environment, etc.

Indeed, a lot of this is somewhat philosophical. But that’s not a good reason to dismiss it. A metric of “could still be written by hand” is not a good way for us to communicate to people when they ask whether they should do A or B.

Likewise, as I understand it, you’re proposing to extend the parameter struct such that you can serialize ROS messages into json and send them as strings inside the parameter structure, which when sent over the wire will be turned into a serialized parameter message with a field that is a string that contains a message already serialized to json. You could very similarly just use a string parameter and embed the json struct {type: std_msgs/Point, value: {x: 1, y: 2, z: 3}}. But this is specifically not something that we recommend. It breaks the many layers of tooling that we’ve built up to help developers work. This is sending data over the wire that doesn’t declare its type such that it’s introspectable. It doesn’t compress down in the same way as the binary format. It is much more flexible, but at the same time it is much more fragile. For example, there is no more hashsum checking whether side A and side B have the same std_msgs/Point definition. The sender might be sending a Point and the receiver might be wanting a PointStamped, but there’s no way to know. The receiver might not have the type support for the data type of the sending node installed.

In general we don’t want to encourage playing fast and loose with types. I am strongly pushing back on your proposal to embed this fast-and-loose typing into some of our core primitives, such as the parameters, because there is significant value in keeping these core principles strong. The enforced simplicity enables many things and tools that we use regularly.

Now at the same time I understand that you need to be able to send arbitrary data, and my understanding from your proposal is that you can put it into an existing message defined as a msg or into a custom-written msg. As such, I return to my previous recommendation that you consider creating your own service by which you can set and get your arbitrary datatype captured in the msg (a rough sketch follows the list):

  • Define MyParameters.msg
  • Create setMyParameters.srv using the above msg
  • Create a simple tool that will call setMyParameters from command line arguments, load from a file, or use whatever other source you want.
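
A hedged sketch of the node side of that recommendation; my_pkg::msg::MyParameters and my_pkg::srv::SetMyParameters are hypothetical interfaces you would define yourself (the .srv carrying a MyParameters field and returning a success flag):

#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "my_pkg/srv/set_my_parameters.hpp"  // hypothetical generated header

class ConfiguredNode : public rclcpp::Node
{
public:
  ConfiguredNode()
  : Node("configured_node")
  {
    srv_ = create_service<my_pkg::srv::SetMyParameters>(
      "~/set_my_parameters",
      [this](const my_pkg::srv::SetMyParameters::Request::SharedPtr request,
        my_pkg::srv::SetMyParameters::Response::SharedPtr response)
      {
        config_ = request->parameters;  // the full, strongly typed message
        response->success = true;
      });
  }

private:
  rclcpp::Service<my_pkg::srv::SetMyParameters>::SharedPtr srv_;
  my_pkg::msg::MyParameters config_;  // hypothetical message holding the structured config
};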

The parameters are basically set in exactly the same way. However, they are specifically restricted to a subset of data formats so that they can remain simpler and generically available, and so that additional tooling can be built up taking advantage of those constraints. In particular, they don’t have to worry about typesupport for custom messages existing on their system if the parameter was sent from another system.

This will unblock you and your use cases and does not require a revamp of the core capabilities of the parameters with the addition of general typesupport compiled in. Doing things like serializing to json and then sending it as a string is a neat trick for prototyping, but it is really taking a shortcut to getting where you want to go and deferring the problem onto the receiving node.

I get your point there; hence my ‘updated’ proposal to enhance ParameterValue with a container type.
A container ParameterValue would contain a vector of ParameterValues and would therefore allow support for structured parameters.
From our point of view this would solve all of our problems with the parameter system, and we could build everything else we want on top of it.

We could go down that road and basically reimplement the parameter support completely, ignoring the built-in system. Or we could use the same manpower to improve the built-in system for everyone.
This is more about improving the ecosystem than taking the fastest way to the goal for us…

Something along that line would potentially work. The inputs can be easily handled, and a form of container could be implemented. I think the biggest challenge will actually be to provide a simple user-facing interface (see Class Parameter — rclcpp 22.0.0 documentation). This would potentially force another layer of indirection onto the user interface to handle a vector of containers, and some extra types would be required in the proposal.

This is a great sentiment, but I think that you’re underestimating the effort to build out and extend the parameter support. It’s nowhere near an equivalent amount of work. There are a lot of other components that need to be touched for first-class parameters, and added complexity increases maintenance and learning costs in the long run. For example, all parameter interaction tools and GUIs will have to support this extension as well, and every node that processes parameters will have to add a case to its parameter type parsing. If we’re still only exposing a list of mixed primitives, then it still doesn’t necessarily unlock the full potential of a service which can store arbitrarily large and complex messages. It’s not clear how many use cases will fall into this extra set of capabilities, which should be traded off against the added complexity for all users.

This evolved proposal would be reasonable to put together. But when you do, please make sure to include the broader impacts on users’ code and on the different tools that interact with parameters.

Hi,

Interesting discussion; having strongly typed, custom structures as parameters could come in handy.
Supporting it directly seems like quite a lot of work, but a wrapper between messages and standard parameters could do the job.
For example, a geometry_msgs/Point parameter called “p” can easily be wrapped around three params (p.x, p.y, p.z).
This is easy to do in Python with message introspection; I am not sure how to do it in C++ if the message is not known at compile time.

A package could offer such a wrapper, plus a bit of tooling around setting params from a full message. Indeed, existing tools will only see three doubles (p.x, p.y, p.z) without any semantics behind the scenes.
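
For the case where the message type is known at compile time in C++, a minimal sketch of such a wrapper (the helper name is illustrative):

#include <string>

#include "rclcpp/rclcpp.hpp"
#include "geometry_msgs/msg/point.hpp"

// Declares p.x, p.y, p.z as double parameters and returns them as one Point.
geometry_msgs::msg::Point declare_point_parameter(
  rclcpp::Node & node, const std::string & prefix)
{
  geometry_msgs::msg::Point p;
  p.x = node.declare_parameter<double>(prefix + ".x", 0.0);
  p.y = node.declare_parameter<double>(prefix + ".y", 0.0);
  p.z = node.declare_parameter<double>(prefix + ".z", 0.0);
  return p;
}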

Hey,
I started putting up a merge request in order to implement advanced parameters
(fix: Generate correct code for nested arrays of own type by jmachowinski · Pull Request #748 · ros2/rosidl · GitHub).
Sadly I am not getting any feedback on the merge request; what is the way forward here?

Is there a separate PR for the client libraries so this can be tested through the parameter interface in the client libraries?