
Splitting the Autoware.AI repository and changing the organisation

#33

Note: I do not really understand what benefit splitting and re-organising the Autoware.AI repo will have. Since the plan is to transform it into a sandbox 12-18 months from now, I do not think we would be spending our resources very wisely here.

However, since this topic will also be applicable to Autoware.Auto, and since at Apex we have 220 packages in a monolithic repository and everybody loves it, I will provide some input.

You also need:

  1. core_localization
  2. core_decision_making
  3. examples/demos/tutorials
  4. thirdparty

What I suggest is to have the above as subfolders inside the autoware root folder of a monolithic repo.

@sgermanserrano I do not fully understand how the repository structure is connected to where you install/run Autoware nodes. As mentioned further down, you can easily create binaries for just subparts of the repo (e.g. visualization) and install them on the same machine as, or a different machine from, e.g. the real-time nodes.

Do we really plan to write our own visualization code rather than use e.g. rviz, xviz, rqt_plot, etc.? If we use those, all we would need to keep are visualization config files.

This is a) certainly not applicable to Autoware.AI, and b) whether a package is safety-critical or not could also be asserted via a CMake macro, and depending on that you can decide which CI rules and checks apply to that particular package.
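
To make that concrete, here is a minimal sketch (not anything that exists in Autoware today) of how a CI script could select per-package rules. The `<export><safety_critical/></export>` marker in package.xml and the check names are invented for illustration; the same marker could equally well be set via the CMake macro mentioned above.

```python
#!/usr/bin/env python3
"""Hypothetical CI helper: choose which checks to run per package.

The <export><safety_critical/> tag and the check names are illustrative only.
"""
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

# Checks run for every package vs. the stricter set for safety-critical ones.
BASELINE_CHECKS = ["build", "unit_tests", "linters"]
SAFETY_CHECKS = BASELINE_CHECKS + ["static_analysis", "coverage_gate", "style_audit"]


def is_safety_critical(package_xml: Path) -> bool:
    """Return True if the manifest carries the (hypothetical) safety marker."""
    export = ET.parse(package_xml).getroot().find("export")
    return export is not None and export.find("safety_critical") is not None


def checks_for(package_xml: Path) -> list:
    return SAFETY_CHECKS if is_safety_critical(package_xml) else BASELINE_CHECKS


if __name__ == "__main__":
    repo_root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for manifest in sorted(repo_root.rglob("package.xml")):
        print(f"{manifest.parent.name}: {', '.join(checks_for(manifest))}")
```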

@amc-nu all of your requests can be done in one repo. I claim that it is actually even easier to design and define cleaner, more minimal interfaces there. To check that no unintended dependencies creep in, colcon build builds the packages in isolation.
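
As an illustration of the kind of additional isolation check a monorepo allows, here is a small hypothetical sketch. The folder names and the allowed-dependency map are made up, and this is not something that ships with colcon or Autoware; it just shows that layering rules can be asserted in CI without splitting repositories.

```python
#!/usr/bin/env python3
"""Hypothetical dependency-layering check for a monolithic repo."""
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

# Illustrative rule set: a package under a given top-level folder may only
# depend on in-repo packages living in the listed folders.
ALLOWED = {
    "core_planning": {"core_planning", "core_perception", "messages", "common"},
    "core_perception": {"core_perception", "messages", "common"},
    "visualization": {"visualization", "messages", "common"},
}


def main(repo_root: str) -> int:
    root = Path(repo_root).resolve()
    manifests = list(root.rglob("package.xml"))
    # Map package name -> top-level folder ("layer") it lives in.
    layer_of = {
        ET.parse(m).getroot().findtext("name"): m.relative_to(root).parts[0]
        for m in manifests
    }

    errors = 0
    for m in manifests:
        tree = ET.parse(m).getroot()
        pkg = tree.findtext("name")
        allowed = ALLOWED.get(layer_of[pkg])
        if allowed is None:
            continue  # no rules declared for this layer
        for dep in tree.iter():
            # Matches <depend>, <build_depend>, <exec_depend>, etc.
            if dep.tag.endswith("depend") and dep.text in layer_of:
                if layer_of[dep.text] not in allowed:
                    print(f"{pkg}: unexpected dependency on {dep.text} "
                          f"({layer_of[dep.text]})")
                    errors += 1
    return 1 if errors else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Run from CI against the repo root; a non-zero exit code fails the build.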

I would also add:

  1. you do not need separate feature and code freezes when doing releases; they can be one and the same
  2. a single entry point for developers; all development work happens in one place
  3. much easier to co-host documentation and code in one repo and actually run checks against changed code (e.g. changed executable names, APIs, …)
  4. much better traceability from feature request, design document, code, tests, PR and CI. E.g. the OSRF folks have to manually link GitHub issues with PRs and PRs with CI/CD, e.g. https://github.com/ros2/rclcpp/pull/639#issuecomment-470160609. I am not 100% certain whether this is just a GitLab feature, but in GitLab this linking happens automatically in our monolithic repo
  5. easier to extract activity reports (commits, branch graph, analytics, …)
  6. easier user/organization management

I think that all 3 points above can be addressed:

  1. if you split packages as proposed above you can build modules independently and then re-use artifacts from previous builds => we are doing this
  2. the number of issues/PRs can be managed by someone like you, @gbiggs
  3. as @davetcoleman mentioned this is a limitation of bloom

We are more than happy to share what we developed for our internal mono repo in terms of layout and CI.

D.

1 Like

#34

It will aid with the transition between Autoware.AI and Autoware.Auto, although it is not essential for this.

Part of perception, as I understand it.

Part of planning.

Yes, this is potentially necessary.

Depends on what is in there. I’d prefer not to be managing someone else’s code if we can find a better way to use it.

Not to my knowledge, but I wouldn’t be surprised to see, for example, custom renderers for custom data.

I don’t think the repository organisation affects how minimal the interfaces are. I do think that if interfaces are in a separate repository it is mentally enforced that interface declaration is separate from implementation, but I wouldn’t separate the repositories solely for this.

I think this applies to any number of repositories. It’s a function of the branching model.

Achievable with an organisation, although I agree it is not as straightforward.

If we split the repository, my goal is that each can be treated as an individual black-box unit with its own API and documentation.

Can you provide some examples? I believe it is possible to achieve traceability across multiple repositories as well, but I am concerned about the amount of manual work that may be involved. It’s also possible that much of this may be a tooling problem (especially the CI).

I don’t think this really changes if you have one or multiple repositories.

Organisations fix this, too. Plus, as @dirk-thomas said, if you want to restrict permissions to just part of the code, it is easier with multiple repositories.

Yes, as I said above I want to get nightly binary building going eventually. I don’t see this as an argument either way, but it is easier to do with split repositories if you can treat each repository as a single unit.

The rest of my life just became startlingly clear.

On the other hand, with 131 packages, releasing all of them individually would suck, so splitting repositories and then being able to release each repository as a unit is nice.

Please do so, it will contribute to the discussion and help us make a better-informed decision.

0 Likes

#35

You should try running bloom on a repo with that many packages. You will be waiting a pretty long time to do that one release… (I am bothered every time by how long it takes to do a release of ros_comm, which has less than a quarter of the packages.)

And if you then notice that one of the 131 packages needs a bug-fix release for a single-line change, you have to release all 131 packages again (all but one with an empty changelog), and every user will have to download 131 new Debian packages :wink:

Of course you could tweak bloom to support releasing subsets of packages from a single repo. But realistically, is anyone interested in putting in the effort to do that?

0 Likes

#36

@Dejan_Pangercic the comment was a consideration to keep in mind when deciding whether to split the repository. But as it is today, if I want to run Autoware on a Synquacer (which is Arm-based) and have RViz running on x86, I would need 2 full copies of Autoware that need to be compiled separately.
A split repository would mean a higher degree of control over what to compile and install on a particular machine without deeper knowledge of Autoware; otherwise, for a new user, I do not see an easy way to decide which nodes are needed.

0 Likes

#37

Just a few clarifications from me, which I'll do inline, but in general I'd say that splitting things up into repositories can be helpful, especially for consumers of the project (assuming it's a framework or SDK rather than a standalone application). However, it definitely comes at a cost, and I would buy the argument that it may not be the best use of resources depending on your schedule. On the other hand, that argument can be used to justify an indefinite amount of technical debt, so I always hesitate to follow it blindly.

So on balance, I don’t have a recommendation for your group, just comments :slight_smile:

My impression of why this was the case is that reviewing code for the two cases is wildly different, so you need to look at each pull/merge request to first determine how you should review the code (to what standard), and you have to make sure during iteration that you don't "leak" into more critical parts of the code. With separate repositories it is clear: if you need to change the safety-critical code it will require a separate pull request, and that is easily noticed and audited.

Perhaps I miss the point, but as @gbiggs mentioned, I don’t think this has any impact on code freeze versus feature freeze.

I think you're mistaken here. GitHub has the same features as GitLab w.r.t. reporting the status of CI/CD. We do not use this feature because we have custom infrastructure and have not taken the time to use the GitHub API to automatically report the build status. We do have this in ROS 1 and ROS 2, but we don't use it as much in ROS 2 yet; e.g. here's a PR picked at random:

https://github.com/ros-visualization/rviz/pull/1292 (look at the “All checks have passed” section or for the green check marks next to the commits)

As others pointed out, I think it’s actually easier to be granular with permissions (a good thing imo) if you use multiple repositories within an organization.

This is frustrating for me because I have explained this so many times :upside_down_face:

It is not a limitation of bloom, but instead a limitation in our process or maybe a limitation of distributed VCS (i.e. git) depending on how you look at it. Basically it boils down to the requirement that each release has a tag, which I think is a reasonable and good thing to have for people consuming your software, and is reinforced by things like GitHub tying your tags to releases (not every tag is a release but every release is a tag).

If you keep that requirement, then you cannot (in my opinion) realistically tag the repository in such a way that more than one version of a package can be represented easily. For example, suppose your repository has foo and bar packages in it, and on your first release you release both at version 1.0.0, so you can use the tag 1.0.0 for the release. But then you want to release foo and not bar, so you set the version of foo to 1.1.0 and keep bar at 1.0.0; but then what tag do you use? foo-1.1.0?

What if later I update foo and bar to 2.0.0? Do I then use 2.0.0 again, or two tags, one for each? If the user wants to get the latest version of the software that works together, which tag do they choose? Remember that bar-1.1.0 could be newer than foo-2.0.0. Also, bar-1.1.0 might not have a released version of foo as its peer, but instead some in-between state of versions.

I could go on for a while, but the point is that it’s a conceptual issue, not a limitation of bloom.

You can already do this using the ignore files. It would be trivial to have this as a command line option to bloom.

However, the conceptual issue still stands: if you find a small bug after a big release and fix it upstream, you still need to do a new release, and then you're back to the tagging issue I mentioned above. Even if you tag new releases for all packages and only want to actually bloom one package's new version, you could do that, but you'd have a mismatch between binaries (Debian packages) and what's in the source tree, which is confusing for contributors and consumers of the software.

The right answer, if that’s your concern, is to split the repository up. Otherwise, I think you need to live with the “useless version increments” in unchanged files and the slow release process and the redundant updating of basically unchanged binaries that you mentioned.

4 Likes

#38

I’ve never actually seen the explanation before, but it makes sense to me and I’m happy to be better informed!

0 Likes

#39

@gbiggs (CC: @aohsato) Should we rename core_control to core_actuation? The figure in the Overview has sensing, perception, decision, planning, and actuation; it does not have control. I'm wondering which repository should hold the actuation layer discussed in #1677.

0 Likes

#40

Although “actuation” is an often-used term in robotics, so is “control”. In this case I feel that “control” better describes what is going on, i.e. controlling the car to follow a planned motion, with actual actuation (controlling the wheel and steering actuators) being a subset of that.

The overview figure needs to be updated anyway.

0 Likes

#41

@gbiggs It seems planning might include Control, according to the discussion in #1719. I think we need some consensus before renaming the top-level diagram.

@shinpei0208 @aohsato @Dejan_Pangercic Are there any comments?

0 Likes

#42

Then the first thing we should do is come to a consensus on what constitutes planning and what constitutes control.

In my experience in the manipulation world, once you have a set of joint states over time (a trajectory) to achieve, anything after that is control; creating that trajectory is planning. My experience in mobile robotics is that planning produces a path to follow to the goal and a shorter path to follow in the immediate vicinity of the robot, while deciding what velocities to drive and turn at to follow that path is control. I think the two are fairly similar, but I don't know if autonomous driving follows the same convention or not.

1 Like

#43

@gbiggs I misunderstood your proposal. Mission Planning (and maybe Actuation) in Autoware is similar to the Control you mention. I think both ideas make sense once we can reach consensus :slight_smile:

In your proposal, which repository should vehicle interface packages such as ymc and as be in? The drivers repository, for interacting with vehicle hardware?

0 Likes

#44

@kfunaoka I think any third-party driver/binary should be in the drivers repository, so that we can avoid future problems when a new binary is needed

0 Likes

#45

If it's a driver for specific hardware, especially something we can eventually graduate out of Autoware (e.g. have the hardware maker maintain it), then it should go in drivers.

0 Likes

#46

There is some debate about the naming of core_control going on, and also about the separation between control and planning and actuation. Speak up if you have an opinion.

0 Likes

#47

I know I'm a bit late to the party on this one, but I wanted to put in a couple of items regarding the repo naming. I agree with @Dejan_Pangercic that core_localization is separate from core_perception. Localization is the process of determining your estimated position in a reference coordinate frame - in our case, using inputs from multiple sensors.

The best definition I can give for perception is the processing of raw input data gathered from environmental sensors into usable, logical representations of those data (which is why I, personally, would include the drivers under perception). For example, I would say that the NovAtel driver and any post-processing on individual pieces of data would be under "perception", but the combination of data from multiple input sources and the comparison of the estimated position output from those combined data against a reference coordinate frame would be considered "localization".

Similarly, if we were doing camera-based localization, the camera driver and any neural networks which process the raw images into representations of real-world objects would be "perception", while the comparison of the images against a known set of images for the purposes of estimating position would be "localization". At their core, I think "perception" is about representing objects outside the vehicle and "localization" is about representing the position of the vehicle relative to the outside.

Regarding core_decision_making vs core_planning, I think it's a little less clear. My opinion would be that planning pretty much covers both the path-planning and decision-making processes, but I'm not as solid on this one.

1 Like

#48

@JWhitleyAStuff thanks, this is a great explanation.

Regarding

Control (algorithms) are separate from planning (algorithms), so @gbiggs had it right.

Regarding the separation of the decision_maker and planning packages, maybe this breakdown helps:

  1. Global planning (= decision_maker) => an on-demand service, with interfaces to a priori map and traffic information
  2. Behaviour planning => open problem; DARPA Urban Challenge: rule-based, other companies: search-based methods, our favourite: robust POMDP
  3. Local planning => some deterministic sampling-based method (GMT) or a form of MPC, a pre-recorded path, RRT

As you can see, 1 is an on-demand, non-real-time package that can interface with many different sources of data, while 2 and 3 are online, real-time algorithms with a minimal set of dependencies. Hence the split.

0 Likes

#49

Not too late. We can still change!

I feel that localisation and detection could both be classed as perception if you take perception as being "understanding the state of the world". I'm not religiously devoted to dividing things up like this, though; it's more that it gives us one repository instead of two or three. Nevertheless, functionality and binary releases may make us consider splitting them into two.

We want the drivers separate because:

  • The idea is that any new drivers will eventually get pushed upstream, so the drivers repository should always be aiming to be empty.
  • Drivers should become relatively stable, which means we don't have to make releases of that repository very often, whereas for perception we would like to see a good cadence of new features and algorithms.
  • Drivers are almost certainly going to be safety-critical software, whereas much of perception might not be.

I feel the same way on this one. However we recently had a long discussion here about where to draw the lines between planning, control and actuation. There was no easy place to put it. In general I think that the breakdown @Dejan_Pangercic gave is the right one from both the interfaces point of view and in terms of thinking of things as stages.

I think it is important to remember that repositories are in many senses a release unit, rather than a strict delineation of “these must be used together” or “these form a single application/pipeline”.

1 Like

#50

I agree with @Dejan_Pangercic and @JWhitleyAStuff that core_localization is separate, because perception is too big a category.
What do you think of dividing perception into localization and object_recognition?

0 Likes

#51

In my studies, “localization” == “ego perception” and was considered a subset of perception for the same reason @gbiggs described. The overall goal is to perceive the state of the environment, your ego position within that environment is just another aspect of the state of the environment.

Either way we decide I’m fine with, just wanted to provide some additional experience!

0 Likes

#52

After moving the packages to the new repositories, I can make a couple of observations:

  • There would be almost nothing in core_control, only one or two packages.
  • core_perception would be huge.

Based on these observations, I think it is worth considering merging core_control and core_planning, and splitting localisation out from core_perception to try and get that repository down to a more manageable size.

utilities is also huge, so there is some thinking that needs to be done there. Some of the packages, such as the one providing build flags, are used everywhere as well, which is a nasty dependency to have.

drivers is full of stuff we can get rid of, like unnecessary forks of existing ROS packages.

0 Likes