Process for third-party client libraries to be incorporated into the core ROS 2 distribution

Hi,

One of the goals of ros2-rust is tight integration with ROS 2, with the end goal of Rust becoming one of the officially supported languages.

Right now ros2-rust is still lacking many features that other client libraries include by default, such as actions, parameters, logging, and node composition. We do have support for pub/sub, clients, and services, and the Rust code generator is almost complete (it only lacks support for constants).

Given that the only two client libraries and their code generators (C++ and Python) that are part of the standard distribution were developed in conjunction with ROS 2, I believe there’s no formal process for third-party client libraries to be promoted into the core.

As such, I’d like to ask what the requirements would be for a client library and a code generator to be included. From ros2-rust we’d be happy to be the guinea pigs for this process. Would this topic be of interest for the next TSC meeting?

Thanks.

11 Likes

So before we talk about getting ros2-rust into the core, we need to make sure it is releasable into the ROS ecosystem at all. That is currently not the case for any Rust packages.

To get there, we need to have someone go through and do the work to make Rust packages releasable and buildable on the ROS buildfarm (https://build.ros2.org), as well as on CI (https://ci.ros2.org). That may involve work in:

(this list is likely incomplete)

If you are interested in pushing forward on this, then I’ll point you to @nuclearsandwich , who has been thinking about this problem and probably has a much better idea of what would need to be done.

Once we can release and CI ros2-rust at all, we can consider whether it makes sense to put it into the core.

3 Likes

Of course! That’s why I started this thread, so we can figure out what something like this would involve; it’s never too early to talk.

Moreover, I’m interested not only in what the buildfarm infrastructure needs, but in what the expectations for a third-party package would be before it’s considered for inclusion into the core. For example, what minimum features it should have, whether or not OR would be involved in it, whether the members of the team behind it must belong to different organizations, etc.

For reference, the incubation process for an Apache project (e.g. Incubation Status File · apache/superset Wiki · GitHub) has certain requirements at the project management level for a project to be accepted into Apache.

Awesome! @nuclearsandwich do you have a document or a ticket, or could you perhaps just post here what you’ve found so far? Thanks.

I’d add, though, that we follow pretty much all the ROS conventions for building packages. We have colcon extensions for the cargo and ament_cargo build types, so that ros2-rust (and packages that depend on it) can be built via colcon. We could start by releasing these two as part of python3-colcon-common-extensions.

ros2-rust is divided into a client library and a code generator. The code generator follows a similar process to the C++ and Python generators, i.e. it’s written in Python and leverages rosidl and all the related packages. One difference here is that, unlike the Python generator, there’s no C layer in between rosidl_typesupport and the Rust messages (the generator only spits out .rs files), and these are only compiled at the very last stage (i.e. when an application depends on them, but not when the message packages are built).

1 Like

We’ve imported third-party repositories into the core, usually in response to a new feature or a need in the core. We don’t have a formal process for what it needs to have, but at the minimum:

  1. The package must be releasable on https://build.ros2.org so we can distribute binary packages for it.
  2. The package must be buildable on https://ci.ros2.org so we can run nightly CI on it.
  3. The package has to have Tier 1 support. That means that when it breaks, we will absolutely fix it. This also means the package becomes release-critical, i.e. we cannot do a release without it.
  4. I’d say that if we are going to import a new client library, at the minimum it must have support for all of the basic features of rclcpp/rclpy: topics, services, actions, parameters, timers, executors, etc. We don’t have a formal list here, but for most normal uses of ROS 2 things should work.

There’s probably more, but that’s at least a starting point. Regardless, I think we are pretty far away from being able to release Rust things on the farm, so we should probably hold off on the details of this conversation until we get there.

Quite the opposite, IMHO: the more we discuss this, learn what the steps are, and iron out the kinks, the closer we’ll get there.

I’m with you that we’re quite far from adding Rust as a supported language in the core distribution. If there’s interest from the TSC in adding Rust (which I’d say there is, given that there’s always a specific section about Rust in the TSC meeting minutes), we on the ros2-rust project are more than happy to shape our roadmap to meet the needs of the TSC. But without any conversation, we won’t know what it is that we should prioritize, and it’ll surely take everyone much longer.

Anyway, happy to hear from @nuclearsandwich about what the missing pieces are so that we can collaborate.

2 Likes

While reviewing Debian’s Rust Packaging Policy when extending tools like bloom to support Rust-based ROS packages, it may also be worth considering William Brown’s talk at RustConf 2022 about shipping Rust packages for another Linux distro, coincidentally published just last week. Just an FYI:

2 Likes

I’ll post here for now. The Open Robotics Infrastructure team has an internal roadmap of features and improvements we want to add to the infrastructure supporting Gazebo and ROS 2, and getting the ball rolling on Rust packaging for ROS 2 is on that roadmap. There isn’t an overarching issue tracker for ROS infrastructure, but rather than start one or put a meta-issue on ros-infrastructure/ros_buildfarm or ros/rosdistro, I think it is worth actually putting together a REP, now that we’re in this age of language-level packaging ecosystems, to set down some expectations for how they interoperate with ROS. We’ve often answered questions about ROS and PyPI, but we’ve never, to my knowledge, written out our position and formalized it.

As far as the mechanical things that need to happen for Rust packages in ROS 2, here’s a list I made earlier this year for an internal project. As you alluded to, it’s just a draft and certainly not fully detailed or comprehensive:

  • Rust package support in ROS 2 tooling
    • cargo and crates.io support in rosdep
    • Support using a workspace-level cargo registry in colcon-cargo
    • Add rust library for ament_index handling
    • Add platform release templates to bloom supporting Rust packages
  • Platform support for Rust
    • Create workflow for creating deb packages of a common set of Rust dependencies
    • Update Windows CI images to install Cargo and Rust
    • Update Windows setup instructions to install Cargo and Rust

In addition to the distribution questions I think Rust raises some other fundamental questions of the ROS ecosystem which are worth raising out loud for projects that have ROS 2 core aspirations.

Rust does not bring with it just a new build tool frontend but an entirely distinct compiler toolchain (on most, if not all, of our currently supported platforms, since it’s LLVM-based rather than GCC-based), which adds quite some surface area to the dependencies required for building the core. I’m not saying that’s a showstopper, but it should not pass without remark. I’m assuming that the Rust frontend in GCC and the GCC codegen backend are neither mature enough nor really considered as part of the Rust story for ROS by the existing Rust users.

Given that this is an explicit difference from the behavior of the core packages, it’s worth determining if that difference is compatible with core inclusion. I could see situations where generated code needs to be reviewed along with other code in a certification context. This is a small enough nitpick that I hesitate to raise it out of context, but does that mean that each Rust package will regenerate messages, instead of all packages that use the same message sharing the generated code? (I know Rust compile times aren’t fast, but it doesn’t seem like doing work over and over again will help that :love_letter:)

Those are the things that jump out to me first, but I think of a few new ones each time it comes up. I think that there are broadly two different categories of technical hurdles to resolve: precise implementation details about how the systems will interact, and larger technical questions about what the restrictions/standards/expectations/whatever-you-want-to-call-them are for packages in ROS 2 overall (where we have historically had firm ideas but only enforced them via encouragement) and for the ROS 2 core in particular, where we would be reviewing everything very closely.


I’d like to draft up a REP which defines what I consider to be the status quo (which I’m sure will get elaborated and adjusted by others on the ROS 2 core team), and either an additional REP focusing specifically on Rust or a set of patches on top of the first one doing the same.

I haven’t given that RustConf talk a watch yet, although I am looking forward to winding down my Friday workday with it, so I don’t want to straw-man what I think the subject will be and I want to give it a chance.

Yes, as the author of the colcon integration, I think it’s hard to get cargo and colcon to be friends :smile:

Sounds like you already are aware, but compiled libraries don’t really exist in Rust for our purposes. Libraries are distributed as source code. That’s because Rust libraries (.rlib files) do not have a stable ABI; you can’t distribute them to different machines or use them from different rustc versions. Even if you didn’t have to worry about this, I think working with rlib files directly would mean giving up cargo and using rustc directly – which would be a gigantic amount of work.

No, it will not regenerate messages – recompile, yes, but the .rs files are created once, when the message package is built. The recompilation is a broader problem than just messages; every library included in another package will get rebuilt each time.

I believe the canonical solution for that is cargo workspaces, but that clashes with how colcon-cargo works. I think using CARGO_TARGET_DIR might also help a bit, but I haven’t tried that out yet.
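To illustrate, the config-file equivalent of setting CARGO_TARGET_DIR would be something like the following (again, untried; the path is made up):

```toml
# .cargo/config.toml at the workspace root (sketch, untested).
# Shares one target directory across packages so build artifacts
# can be reused instead of rebuilt per package.
[build]
target-dir = "/path/to/ws/build/cargo_target"
```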

If I understand correctly, if we set CARGO_REGISTRIES_MY_REGISTRY_INDEX, all packages in Cargo.toml would be fetched from there. How would projects work that have a mix of ROS-related and non-ROS-related dependencies? I.e. it sounds like you would need to mirror all crates from crates.io, right?

Currently, colcon-cargo/colcon-ros-cargo uses a different approach: If my_pkg depends on my_other_pkg (both Rust) and my_other_pkg exists in the workspace, it uses that one instead of looking for it on crates.io.

There is ament_rs for read access to the ament index. Do you think it should also support writing?

I’ve not got the time to reply to everything today (I’m busy enjoying the rare rainy day in California) but I wanted to acknowledge a couple of points. Thanks very much for the info!

Yeah, following the lead of Debian and Fedora, we’re likely planning for Rust crates to be packaged as source code, and during build farm packaging jobs only the crates in the platform distribution or distributed in the ROS repositories will be available for use.

Ah I see. I had misunderstood from Esteve’s post that the Rust codegen would be run within each Rust package, but generating the Rust source in the message packages makes sense. Thanks for the correction.

This makes me wonder if a .cargo/config.toml approach could work. The implementation of colcon-ros-cargo works by placing a cargo config file in the root of the colcon workspace. That config file provides a map from crate names to the path where their source code can be found. When cargo is run, it will automatically find that config file and redirect any use of the listed crates to use the source code at the path that it maps to.
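If I understand the mechanism correctly, the generated file might look roughly like this (crate names and paths are made up, and the exact tables colcon-ros-cargo writes may differ):

```toml
# Workspace-root .cargo/config.toml as generated by colcon-ros-cargo (sketch).
# Each entry redirects a crate name to its source code in the workspace.
[patch.crates-io]
my_other_pkg = { path = "/path/to/ws/src/my_other_pkg" }
rclrs = { path = "/path/to/ws/src/ros2_rust/rclrs" }
```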

Perhaps if the ROS distro could provide an environment variable that points at the path of a config.toml, or even just some .yaml file that maps crate names within the distro to source code paths, then colcon-ros-cargo could read that in at build time and add those entries to the .cargo/config.toml that it generates.

That same strategy could be used to support colcon underlays for cargo packages. I’m not sure if underlay support has already been implemented @nnmm ?

On second thought, the ROS distro wouldn’t be able to provide one single file that maps crate names to source code paths, because we’d want to support partial installations. I.e. a user should be able to install ros-<distro>-valuable-crate without installing ros-<distro>-useless-crate, and I assume it would be onerous if not impossible to have a file inside of /opt/ros/<distro>/ that gets updated whenever packages are installed or removed.

Maybe instead we could use ament_index to expose which packages contain cargo crates and then have colcon-ros-cargo collect that information into the .cargo/config.toml for the workspace.

1 Like

Just chiming in as the Ada client library maintainer. I’m not currently looking to have rclada become officially supported, but I would like to have it working with the build farm and so on. So any roadmap/documentation that results from this effort will also be very interesting to me.

1 Like

I’m not sure what underlay support implies. But at least for the case where you have a ROS 2 install sourced and are building a local workspace with Rust packages, everything works fine.

Sorry for the vagueness. When I said “underlay support” I specifically meant underlaying a colcon workspace that has some cargo crates installed in it so that crates in the top level workspace will have their crate dependencies redirected to the crates in the underlay (if present).

I’ve started to collect the “requirements” for a new client library (but also a new programming language ecosystem) in a draft REP: [REP-2013] ROS 2 Rust Client Library Integration by nuclearsandwich · Pull Request #363 · ros-infrastructure/rep · GitHub. As I mentioned in that pull request, at the moment I’m primarily looking for feedback from other members of the ROS 2 core team about what needs to go in this document, and once the structure is a little more mature I’ll expand on various things and solicit more feedback from my primary audience: y’all!

2 Likes

There is some discretion here if we extrapolate from the Python and C++ behavior, and I know that neither ecosystem maps 1:1 onto what Rust and Cargo do. I’m very grateful for the corrections I’ve received thus far and will be further grateful for any that follow.

There are several different cases that we’ll eventually have to cover, and given my role on the ROS infrastructure team I find it easier to work “backwards” from the eventual goal of allowing Rust packages to be built on the ROS build farm with all of the constraints required for that. That’s how I started drafting this novella of a post, but I’ve since tried to turn it all right-side-up, and I don’t actually think we need to compartmentalize quite as much as I initially thought we ought to.

I’ll use Ubuntu for specific examples but it’s worth keeping in mind that RHEL is a supported platform as well. Luckily I think they’re pretty similar for our purposes here.

The dependencies of packages in your workspace are something that colcon is explicitly hands-off with. It does not have any facility for enforcing or resolving system-level dependencies, whether they be “ROS packages”, packages installed via pip, or your run-of-the-mill packages installed via apt. It sounds like you’ve already done some work to make sure that in-workspace crate dependencies are successfully found by subsequent builds. I think that’s the most important functionality required.

Cargo is able to fetch package dependencies from crates.io as you need them, just like pip is able to do for packages from PyPI (1); this means that rosdep isn’t strictly required to support cargo or pip. But I think there’s a general expectation that ROS package dependencies are expressed in a package.xml file and that those dependencies are resolvable with rosdep. I know cargo install is used to download and install application crates. Is it also used to install library crates in a per-system or per-user location, and is there any extra plumbing required to use those crates from a build once installed?
That functionality would be sufficient to give cargo packages parity with Python. I’ve got a half-baked idea that it would be interesting if it was also possible to prevent or restrict just-in-time fetching and installation of packages from crates.io during a build (even as an optional behavior), in order to help ROS Rust developers vet whether their package.xml has comprehensive coverage of their Cargo dependencies, which would have already been installed using rosdep. Colcon’s isolation mechanisms don’t extend to systemwide packages, though, so this would not really guarantee that dependencies are fully described unless you used separate cargo workspaces for each package, which without some kind of hard-link caching would be really storage intensive and probably not worthwhile, and even that would not cover non-cargo dependencies for those packages.
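For what it’s worth, cargo does have a built-in offline mode that could be a starting point for that half-baked idea; a sketch of how it might be expressed in configuration (whether and where colcon or the build farm would set this is an open question):

```toml
# .cargo/config.toml (sketch). With offline mode on, cargo errors out
# instead of fetching missing dependencies from crates.io, which would
# surface any Cargo dependency not already installed (e.g. via rosdep).
[net]
offline = true
```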

The colcon support you’ve got gets you building in local workspaces and adding rosdep support for installing crates with cargo would allow you to specify crate dependencies in your package.xml alongside other dependencies.

When prepping a package for a build farm release using bloom, all of that package’s dependencies must use the system package manager. That is, neither pip nor cargo can be used directly to install dependencies.
In addition to using the system package manager, ros/rosdistro describes which repositories dependencies can come from for a given platform.
Dependencies can also come from the current ROS distribution itself. If a dependency isn’t available upstream, sometimes it will be packaged directly in a ROS distribution, especially if it’s something that would be of use to the general ROS community. (2)

For Rust packages on the build farm, the same requirements would apply.
Even though getting packages directly from crates.io is possible (and as far as I know is the default behavior in cargo), doing so wouldn’t be acceptable on the build farm because it circumvents the distribution expectations for ROS packages, which are that their dependencies come from either the ROS repositories directly or from the platform-specific expected rosdep sources linked above.
rosdep currently supports adding dependencies that are installed by pip and other non-default package managers in order to aid in bootstrapping with packages that may not yet be released or on platforms that are not yet otherwise supported, as well as for developers who don’t intend their packages to be released on the build farm and therefore do not have that constraint.

Python has an advantage in that its ecosystem is relatively mature and many commonly used Python packages have been packaged for Debian and RHEL, so this limitation most often affects very niche or very new dependencies.

I think there’s also something to be said about the composition of Rust projects. Because the robustness of the package management makes incorporating dependencies a more streamlined process, packages generally have more dependencies, which in turn pull in still more transitive dependencies, which means that it’s going to be harder still for us to keep up with packaging dependencies. I think that we’re going to see quite a lot of need for the “vendoring” of common Rust crates within ROS until those crates make their way into our upstream platforms. The Debian team has developed debcargo to automate the creation of Debian packages from Rust crates, and for a while at least they were also maintaining a centralized list of crates they were packaging with debcargo. I’m a little concerned by the potential volume of vendored crates that will be requested, but I haven’t actually looked at the transitive set of dependencies for the ros2-rust project. How big is it? Of those packages, how many are already being packaged in Debian?

Another major issue, which is discussed in the talk linked above, is that the culture among Rust maintainers is not one which favors maintaining strong compatibility guarantees over long support cycles (this is not meant as an admonishment of any Rust package maintainers; I think it is not an unreasonable position to take). Semantic Versioning is common in Rust, but it is likely that security fixes in a Rust crate will be made in the latest releases only, or even potentially in a new major release.

This is something that packages in ROS can generally roll with but it’s not clear to me yet how long term stable distributions like Debian, RHEL, and Ubuntu plan to handle it as they generally do what they can to avoid breaking changes during a distribution’s lifecycle even if it means introducing their own patches to implement fixes in a compatible way.

The SUSE talk touches on this as well, and they’ve integrated tooling into their fantastic Open Build Service infrastructure (3), but absent from that talk as far as I can tell is how they resolve conflicts between breaking changes and security-motivated fixes. Did I miss a reference in the talk, or do any of the Rust peeps have more context? Maybe my idea of the breakneck pace of Rust development is stale (I was last very active writing Rust in the pre-std::future era and stepped away during the apex (or nadir, depending on how you look at it) of the async-std and tokio discussions), but I remember having a very hard time trying to keep anything stable while the foundations were still molten.

(1) Actually this might be the first time I’ve realized that pip may sneakily install a missing dependency if you’re not paying attention during a colcon build. I think this is avoided much of the time because ROS packages tend not to specify dependencies in setup.py / setup.cfg at all since they’re instead in the package.xml. I’d have to test this out to be sure but I’m fairly certain colcon does nothing to block this from happening.

(2) This process is sometimes referred to as vendoring and REPing up a set of recommended practices for that is also on my wishlist.

(3) Find me with some good tea or gin at ROSCon and ask me how tempted I am to try and rebuild the ROS build farm on OBS.

2 Likes

Thanks for your excellent write-up @nuclearsandwich!

No, cargo install doesn’t work for libraries. When you have a library on your machine that you want to use, you write my_local_library = { path = '../src/my_local_library' } in Cargo.toml and run cargo build.

In colcon-ros-cargo we used a hack so that you can also write my_local_library = <version>, i.e. without giving the explicit path, as if it were a package in the registry. The hack is implemented with [patch] entries in .cargo/config.toml containing the correct path, generated by colcon-ros-cargo. We did that to conform to (what we think is) a principle of colcon: to reference dependencies only by name, not by their path, so that you can move packages around in your workspace as you like.
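Concretely, the dependent package’s Cargo.toml might contain something like this (name and version are illustrative), with the generated [patch] entry supplying the actual path at build time:

```toml
# Cargo.toml of a dependent package (sketch). The dependency is declared
# by name and version, as if it came from crates.io; the [patch] entry
# generated in .cargo/config.toml redirects it to the in-workspace path.
[dependencies]
my_local_library = "0.1"
```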

I’m calling it a hack because it has some drawbacks:

  • You will get lots of warnings about unused entries in .cargo/config.toml
  • Switching branches will often result in errors because of a stale .cargo/config.toml
  • It’s not clear from Cargo.toml whether a dependency is intended to come from a local package or from crates.io. If you are mistaken about whether or not a package is in your workspace, you might accidentally get the version from crates.io.

So, if you download (with curl or something) a library somewhere on your computer, the necessary “plumbing” you mentioned would be to use such a hack, or to rewrite the Cargo.toml files to have path entries instead of registry entries, before you call cargo build.

Using the Rust libraries shipped as Debian packages (not those that may in the future be released by ROS, but those currently in Debian) is pretty foreign to most Rust users who are not Debian maintainers. I’ve never done it or heard about anyone doing it, and I’m honestly not sure how you would use them – probably Source Replacement - The Cargo Book is worth looking into. Still, for bloom and the build farm it might be possible to use them somehow. I’m really fuzzy on what bloom and the build farm do, though, and on packaging in general. That said, the dependencies of the Rust client library are currently minimal and they’re apparently all available in Ubuntu 22.04.
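My rough guess at how that source replacement could look, pointing cargo at crate sources installed by the system package manager (completely unverified, and the directory is an assumption based on what Debian seems to do):

```toml
# .cargo/config.toml (sketch, unverified). Replaces crates.io with a
# local directory source, e.g. where Debian's Rust packages install
# crate sources; the exact path is an assumption.
[source.crates-io]
replace-with = "system-crates"

[source.system-crates]
directory = "/usr/share/cargo/registry"
```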

Maybe I can take a step back and ask a really naive question here though – do you intend for the package produced by the build farm/bloom to be the primary way of distributing the Rust client library, as opposed to just releasing it on crates.io like we’ve done so far? Could we do both, maybe? Not trying to be negative, I’d just really like to understand everyone’s overall perspective better.

The context for this question is that for a Rust package that’s intended to be used in a Rust program, most users will probably find that using the version from crates.io is the most attractive option. This way they can use pure Cargo, which is generally well liked by Rustaceans, and they don’t have to modify their system.

For Rust packages intended to be used by anything other than a Rust program, in contrast, you will always need colcon. For instance, a package that produces a shared library for use by C++ or Python. So should this kind of package be our primary motivation for the bloom and build farm integration? I hope that made at least a bit of sense, please tell me why I’m wrong though :smiley:

Also a second dumb question: Why can’t bloom install pip and cargo dependencies directly, as you mentioned above, @nuclearsandwich?

Does this imply that there is a chance that, with rclrs released on crates.io, you might be able to write pure Rust code that does not need an installed version of ROS on the system, and that the crates.io dependency will contain everything you need to develop a Rust application that uses ROS?

I, for one, would like to write rust code that can be consumed by python or c++ applications and vice-versa. At the same time, if I were writing a pure-rust project, it would be nice to use the native tooling for specifying and getting dependencies as the Rust system is ergonomic and automatic.

As an aside, does the buildfarm prevent cmake packages from using add_custom_command to shell out to cargo or pip in any way?

Hrm, that definitely makes it harder for us to map into rosdep in any kind of consistent way since as far as I can tell that means there is no way to request the installation of a Rust library outside the context of a cargo build.

I don’t think that colcon is principled about referencing dependencies by name only. But I think that this may be a secondary conclusion of another expectation (I don’t want to say an expectation of colcon; I think it’s nearer to say that it’s an expectation of colcon users): that a package’s dependencies can either be “siblings” of the package in the current workspace or available on the current system / in the current environment.

You’ll see discussions around colcon mention “overlay” and “underlay” workspaces, and I think it’s worth checking out the documentation on using multiple workspaces if you have not already done so. But when looking at a single colcon build or test, colcon is only aware of packages that are in the workspace. Everything else, whether it be in an underlay workspace, installed using pip or conda, or a package installed using a system-wide package manager, is to colcon just “the environment” or “the system” (these terms are mine).

I think that colcon only has one strong position about dependencies: during a build, colcon will make sure that any dependencies of your package which are detected in the current workspace are built before your package, and the environment setup scripts for those dependencies are sourced before your build is run. Likewise, during the test phase, colcon will again make sure that your dependencies’ setup scripts are sourced in the environment your tests run in. Other than that, colcon is not doing anything but letting the environment colcon was run in pass through to the packages being built/tested.

It sounds like there might be something of an inversion of control in cargo compared to the Python and CMake-based systems that colcon already works with. In both current cases it is generally up to the installed dependency to make itself discoverable by a downstream package. Most Python packages installed via pip are installed into a common location that the system is already configured to check for modules, but it’s also possible to install packages in an uncommon location and use the PYTHONPATH environment variable (as well as APIs within Python itself) to instruct the module import system to check additional places for Python modules; that’s the mechanism used by colcon’s default environment scripts to make installed Python packages available. Current versions of CMake include the ability to configure package registries so that information about installed dependencies can be discovered (these are the package_nameConfig.cmake files), or CMake can use custom scripts within the current package to find dependencies manually (named Findpackage_name.cmake).

Based on what you describe, it seems like there isn’t really a way for a Rust library that’s already been “installed” to declare “find me here” without touching a solitary file: .cargo/config.toml. And while it is possible for colcon to modify the process environment for each individual package, it is probably harder to create per-package cargo configuration files and keep those up-to-date. It looks like cargo config files are hierarchically merged, but it doesn’t appear that we can control that hierarchy in any way in order to support multiple workspaces without having to manually unify all underlay configuration into the current workspace’s .cargo/config.toml.

Even with all of the downsides you describe, to me the current hack is still preferable to modifying Cargo.toml files inside the source directories of a workspace. Modifying files that are version controlled with changes that are highly specific to the current location on disk seems like a recipe for frustration.

Does colcon-cargo run cargo builds inside the source directories or is it copying/linking packages into the build/ directory before it generates any intermediate artifacts?

Here’s an extremely underbaked idea: if we were using out-of-tree builds (a recommended CMake practice and something colcon does with CMake packages, albeit for different reasons than we’d do it for Rust packages), then the Cargo.toml in the build directory could be ephemeral and thus modifiable by colcon-cargo, without worrying about interfering with or confusing the source Cargo.toml, and you could use whatever hard-coded paths you want there to route in-workspace dependencies where they need to go. Since you’re only wiring up the dependencies needed by the current package, you would eliminate the unused-entry warnings. As long as you’re only running builds via colcon (1), it would be able to vet before each build that workspace dependencies are still present when altering the in-build Cargo.toml, preventing the stale-dependency problem. This would not do anything to address the third listed drawback, but the ambiguity of a dependency’s source could be considered an acceptable trade-off for the flexibility of that dependency’s source.

(1) This is the expected workflow when using colcon even when you’re iterating on rebuilding just a single package because colcon is doing work to manage the environment beyond simply invoking the cmake / python build and installation commands. For working on a package foo_rs in a workspace you might first run colcon build --packages-up-to foo_rs and then as you modify foo_rs periodically rebuild it with colcon build --packages-select foo_rs.


Just today, while looking more into this, I found this document for the Debian Rust team. There is some jargon specific to Debian packaging in there, but reading through that tips section gave me a lot of hints about the work they’re doing to package Rust libraries in general. Toward the end they make explicit reference to using source replacement in order to develop using libraries installed via apt.


I don’t see any negativity in asking. It’s a very fair question. I do want to try and break it down, since there are several questions in there.

do you intend for the package produced by the build farm/bloom to be the primary way of distributing the Rust client library

So, for the ROS 2 core team, packages in the ROS repositories are the primary way of distributing ROS 2. The topic of this thread is “process for third-party client libraries to be incorporated into the ROS 2 core distribution”, so from a purely mechanical standpoint, if the Rust client library is going to be part of the ROS 2 core, its primary distribution should be the same as the rest of ROS 2.

as opposed to just releasing it on crates.io like we’ve done so far? Could we do both, maybe?

In short, yes. I don’t think we would discourage distributing the project in any manner the project maintainers see fit (within ethical and legal bounds), and having multiple distribution mechanisms will help enable folks to work how they want to work. The drawback to multiple distribution channels is that you then need to manage, document, and support various combinations of configurations. But depending on how we manage shared responsibilities between the Rust WG / ros2-rust maintainers and the core team, I expect that we can find ways to support each other. I do think it’s worth emphasizing that for packages that are formally part of the ROS 2 core, the distribution mechanism that would be covered under our support tier policy, development workflows (such as feature, API, and code freezes), and QA procedures would be the packaged releases on the official build farm, irrespective of other distribution mechanisms. So this sort of blends in with the previous answer, but to us, recognition as officially part of the core would require adopting the core’s distribution methods, though not to the exclusion of other methods, as long as the Rust team is prepared to support multiple methods.

I understand where you’re coming from. I think that there’s probably going to be some tug-of-war here as we discuss whose ecosystem/community should have to buckle slightly to accommodate the other.
I can take my own turn at asking a naive question: how is it currently possible to use ros2-rust purely with cargo? Looking at these instructions, it seems that a ROS 2 installation is required, and while the interface may be “pure Cargo”, you’re still working with the system package manager or a manually installed build of ROS 2 behind the scenes. What system modifications do you think they’d want to avoid?

You won’t need colcon specifically (see “The longest footnote”). If you have a Rust package with dynamic libraries or applications that you want to make available to the wider ROS community, the buildfarm is the way to do that. In the same way that asking a Rust developer to change their workflow is something you’re trying to avoid, asking a ROS developer to set up a secondary system in cargo in order to install or build dynamic libraries is going to grate similarly. The ROS distribution database ros/rosdistro contains source code and release information for every package that could be considered officially “in ROS”, and I think that part of what’s being discussed in this thread is what the expectations would be for Rust packages to join them there. I can certainly anticipate that general ROS users would want to be able to run apt install ros-rolling-cool-rust-node upon learning about it, rather than setting up an entirely parallel workflow using Cargo directly. So I wouldn’t necessarily consider cross-language interoperation the reason for pursuing the official build infrastructure, but rather being distributed and available in the existing ROS repositories, as well as fetchable and buildable from source using the existing ROS tooling.

The longest footnote

We use colcon as the primary “build tool” for ROS 2 but I tend to think of colcon as just a frontend, albeit one that is very hard to replace due to the complexity of the work it is fronting. There’s an article on the old ROS 2 design website which distinguishes between build tools and build systems. Quoting from that article:

A build tool operates on a set of packages. It determines the dependency graph and invokes the specific build system for each package in topological order. The build tool itself should know as little as possible about the build system used for a specific package. Just enough in order to know how to setup the environment for it, invoke the build, and setup the environment to use the built package.

The build system on the other hand operates on a single package. Examples are Make, CMake, Python setuptools, or Autotools

The reason I bring up this distinction, and the reason I’m so partial to thinking of colcon as a frontend, is that it is technically possible to build ROS from source and use it entirely without colcon. But that “technically” is incredibly load-bearing, and, lest I be accused of confusing the issue, there is no recommended alternative to colcon for general ROS development, nor do I think there ought to be one. But the build farm infrastructure for creating binary packages does not use colcon. Instead it does the laborious process of separating out each package, building it independently, creating binary packages, and making them available for downstream packages. If anyone would like more details on that process, I can use this as a chance to plug my upcoming ROSCon 2022 talk “The ROS build farm and you: How ROS packages you release become binary packages.” :smiley:


Honestly this is probably deserving of its own REP, but I’ll be as succinct as I’m able. This limitation is inherited purposefully from the policies of many Linux distributions, which require that packages in the distribution be reproducible entirely using packages in the distribution. Among other advantages, this allows the distribution’s security team to review packaged software for published security vulnerabilities and update shared library dependencies in a single pass, rather than having to inspect each individual package, which may be bundling or vendoring dependencies.

Allowing the installation of packages from “outside” our distribution system makes it that much more difficult to manage the distribution as a whole and ensure compatibility between all software within the distribution.
If packages could arbitrarily install packages from third-party sources, there would be no guarantee that the distribution as a whole was coherent and compatible with itself.

This is, inherently, a trade-off between what I’ll term the “distribution integration” model and the “application integration” model. The distribution integration model is more restrictive in what it allows, but in exchange, software within that distribution is reliably mutually compatible. Whereas if each application is responsible for integrating its own dependencies based on its individual needs, the results of that integration work are no longer mutually compatible, and while the integration surface of an individual application is likely smaller than that of a whole distribution, the work needs to be done by each and every application integrator at each stage. There are ROS users out there for whom I’m sure the “application integration” model is the preferred approach: you get to integrate exactly the versions of whatever software you wish, however you wish to acquire it, and you are comfortable enough with that process to maintain it individually. But part of the value provided by ROS distributions is that distribution integration model. If you set up the ROS 2 apt repositories on a basic Ubuntu system and install a stable ROS 2 distribution, you can be fairly confident that the applications in that distribution, and any applications you build on top of software in the distribution, are going to be mutually compatible.

We do not actively inspect or restrict this globally. The ROS infrastructure team does caution against it when we see it being used for that purpose, and we do our best to help package maintainers integrate recommended alternatives instead. I would strongly discourage anyone from trying this, as packages which knowingly circumvent distribution restrictions can be removed from the distribution by the core team.

There is a relevant line in the Debian Policy handbook on build restrictions:

For packages in the main archive, required targets must not attempt network access, except, via the loopback interface, to services on the build host that have been started by the build.

We don’t block network access, since it is used knowingly by some packages designed to vendor, but that’s not to say I wouldn’t prefer to enforce some of these restrictions generally and lift them for packages that we expressly integrate.


At least going by the current setup instructions, the ros2-rust project requires a working ROS 2 installation on the local system. So this is at least not the case currently.

A “pure” Rust ROS 2 stack would essentially be an entirely independent implementation of ROS 2 since libraries like rcl and rmw are implemented in C.


At the risk of sounding trite while also shamelessly saying some trite stuff: I’ve really appreciated your patience and thoroughness talking me through the way things currently work, and I hope that I’m doing at least an adequate job reciprocating. I’m finding this discussion both productive and enjoyable. Thanks!

3 Likes

Yep, colcon-ros-cargo supports building a package in workspace A, sourcing the install space of workspace A, and then building a package in workspace B that depends on the former package.