ManyMove – C++/Python Behavior Trees for Robot Manipulators (Looking for Feedback and Tips)

Hi everyone!
I’m Marco from Italy, I’m a roboticist but I’m quite new to ROS: I usually develop manipulation projects using native robotic programming languages (e.g. RAPID and KRL).
I’m creating some new projects using robots from a manufacturer I’ve never used before, and since they offer ROS drivers natively I thought I’d go with those instead of learning yet another language. ROS has always fascinated me, but I never found the time to brush the dust off my C++ and Python studies and dig into it.
Honestly, I found it quite challenging to come up with a complete project that included motion planning, signal and object management, and logic design, but by patching together some great packages I think I’m close to achieving something usable.
This is probably a very basic project, but I couldn’t find anything that covers all the various aspects of programming an industrial robot, so I thought I’d share it.
I’ll be using it in production within the next month, but I still consider it an alpha version: please be very careful if you try it, it’s still in an experimental state!

Currently this only works on ROS 2 HUMBLE!!! Do not try the Jazzy version, it won’t work.
I will be porting it to Jazzy as soon as I can, but first I need to deploy a project in production in March…


The project name is ManyMove, here’s the github link:
ManyMove - github

You can find a description and some instructions in the main README, and more in the READMEs of the individual packages.


To sum up what this is:

  • ManyMove: a general manipulator framework for ROS2 Humble, MoveIt2, Behavior Trees.
  • Aims to standardize I/O, object management, motion planning, etc., to program cobots and industrial robots.

To sum up the structure:


Feedback/Tips needed!

In this repo I tried to use the most basic and general functionality possible in every package. Here are some issues I encountered where I could use some tips:

  • For now I’m using just OMPL/RRTConnect, since it’s fast and versatile. I added Cartesian speed control to every kind of move, since the cobots I’m working with perform an emergency stop if the Cartesian speed limit is not respected. I noticed that OMPL is fast but takes several tries to get a trajectory close to optimal, so I implemented a mechanism that collects a number of valid plans and then chooses the shortest one based on a mix of Cartesian and joint distance. This is still faster than CHOMP most of the time in MoveItCpp, though I admit I still have to dig deeper into both CHOMP and the Pilz planner. About the Pilz planner: I tried to avoid it because I think most future projects will be based on vision systems and AI, and I didn’t want to rely on a deterministic planner like Pilz. I still like it, since it’s similar to how I’m used to programming robots, but implementing it may mean adding some complexity to the project. On the other hand, it allows chaining several PTP, LIN and CIRC moves without stopping between them, which is often important to achieve optimal cycle times. So, versatility vs. efficiency:
    What planner would you experiment with in a project like this?
  • In manymove_planner I implemented one version for move_group and one for moveit_cpp. The first is more complete but slower when used on the real robot, probably due to the messages going through the Ethernet network. In simulation the performance is just a bit slower, but on the real robot it’s hardly usable. I still have to tweak the network settings; any suggestions on how to do that? Is this solvable, or do I have to factor in a much slower speed when using move_group?
  • The moveit_cpp version is faster both in simulation and with real robots, but it seems to freeze the planning scene while the robot is moving. This wouldn’t be a huge problem since the current project doesn’t require a dynamic scene, but future ones will. Moreover, when I have two robots moving, one of them can miss a small detail, for example an object being attached to it while the other robot is moving. Not fun. I didn’t find a way to force MoveItCpp’s internal planning scene to update. I was thinking about running both move_group and moveit_cpp and using the scene from move_group just to check for collisions, but that seems to add unnecessary overhead. I can give instructions on how to reproduce the problem with the examples in the repo. Any suggestions on how to solve this?
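To give an idea of the "plan N times, keep the shortest" mechanism from the first bullet, here is a minimal Python sketch. The function names and cost weights are hypothetical, purely for illustration; the real implementation lives in manymove_planner:

```python
import math

def plan_cost(joint_path, cartesian_path, joint_weight=1.0, cartesian_weight=1.0):
    """Weighted mix of joint-space and Cartesian path length (weights are placeholders)."""
    joint_len = sum(
        sum(abs(b - a) for a, b in zip(q1, q2))
        for q1, q2 in zip(joint_path, joint_path[1:])
    )
    cart_len = sum(
        math.dist(p1, p2) for p1, p2 in zip(cartesian_path, cartesian_path[1:])
    )
    return joint_weight * joint_len + cartesian_weight * cart_len

def best_of_n(plan_once, n_plans=5):
    """Call the planner n_plans times and keep the valid plan with the lowest cost."""
    candidates = [p for p in (plan_once() for _ in range(n_plans)) if p is not None]
    if not candidates:
        return None
    return min(candidates, key=lambda p: plan_cost(p["joints"], p["cartesian"]))
```

The idea is simply that a sampling planner like RRTConnect is cheap enough to run several times, so filtering by a combined joint/Cartesian length gets you close to an optimized trajectory without paying CHOMP-level planning times.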

Sorry for the long post, I’ve been designing this alone and I wanted it to be at least close to usable before sharing. I have hundreds of questions, design choices to discuss, and possible new features to implement!

Please keep in mind that this is highly experimental, be very careful if you try it on a real robot!!

Also consider that I’ll be updating it very frequently until the first complete project is live, possibly in March, and I may also make very deep changes to it.

PS: the BT repo using py_trees is just a draft right now; I’m planning to complete it once I have some time!


Hi,
I just added a couple of launchers to start minimal examples using py_trees; you’ll find them in the main README.md.

I also wanted to share the details of one of the many design choices I’m evaluating about how to build a fast and reliable planning subtree for a sequence of moves:

The above is an example of a 3-move sequence built with the buildParallelPlanExecuteXML function in manymove_cpp_trees.

In the ComposedDropSequence branch I start with a ResetTrajectories child: this ensures that the validity flag of every trajectory stored for this branch is set to false, so all trajectories will be recalculated later.
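As a rough sketch of what ResetTrajectories does: in the real tree the trajectories and their validity flags live on the BT blackboard, but the idea can be faked with a plain class (all names here are illustrative, not the actual node API):

```python
class TrajectoryStore:
    """Toy stand-in for the blackboard entries holding one trajectory per move."""

    def __init__(self, move_ids):
        self.trajs = {mid: {"traj": None, "valid": False} for mid in move_ids}

    def set(self, move_id, traj):
        # A successful planning node stores its result and marks it valid.
        self.trajs[move_id] = {"traj": traj, "valid": True}

    def reset_all(self):
        # What ResetTrajectories does: invalidate everything so each move replans.
        for entry in self.trajs.values():
            entry["valid"] = False

    def get_valid(self, move_id):
        entry = self.trajs[move_id]
        return entry["traj"] if entry["valid"] else None
```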

The Parallel node ParallelPlanExecute may be a bit confusing: it has a sequence of planning nodes on the left and a sequence of execution nodes on the right. These two sequences run in parallel.

Why plan and execute in parallel if there are no valid trajectories stored?

Because this turns most of the planning time into masked time during the execution of the first move, reducing the impact of planning on overall execution time. To make this work, there are some functionalities I had to implement in planning and execution:

  • In a sequence of moves, each move after the first automatically reads the last position of the previous move’s latest planned trajectory. This ensures that the whole sequence is coherent and can be executed without unexpected behavior from the robot.
  • The action server that executes a trajectory validates it both beforehand and during execution to avoid collisions. This is because the scene may change, for example when two robots share the same workspace, as in the app example.
  • If the stored trajectory is not flagged as valid, the execution node waits indefinitely, or fails automatically if the execution has been resumed. If the execution was stopped from the HMI, or if a collision was found during execution, the execution node fails and invalidates the current trajectory (as it always does on exit, regardless of failure or success). The fallback node then brings execution to a duplicate plan node that plans a fresh trajectory, validates it, and returns failure, so the RetryPauseAbortNode can retry and restart the execution from where it left off.
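The first point, chaining each plan’s start state to the previous plan’s end state, can be sketched like this (hypothetical names; plan_fn stands in for the planning action server):

```python
def plan_sequence(moves, current_joints, plan_fn):
    """Plan each move starting from the end of the previous plan, so the
    planning branch can run ahead while the execution branch runs in parallel."""
    start = current_joints
    plans = []
    for move in moves:
        traj = plan_fn(start, move)  # stand-in planner: returns joint waypoints
        plans.append(traj)
        start = traj[-1]             # next move starts where this plan ends
    return plans
```

This is what lets planning of moves 2..N overlap with the execution of move 1: the planner never needs to wait for the robot to actually reach a state before planning the next segment.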

I designed it like this because a stop may have left the robot in a position different from the start position, so we can’t safely reuse the old trajectory. Moreover, we can’t just restart the whole sequence of moves: this would impact execution times if the robot has to go back to the start, and it may even be impossible for the robot to perform a move of the type requested by the first move to get back to the start point. The safest and fastest route is to restart the execution from where it left off in the sequence, and this also matches what a robot operator would expect when stopping and resuming execution.

There are some downsides to this approach, and I still have to cover some edge cases that make the tree fail. But this behavior should feel quite familiar to someone used to programming robots with the classical logic found in RAPID, KRL and similar languages. Those languages are much more deterministic, though: while this whole branch could be written there in a few move command lines, the rest of the tree would require quite a few lines and would be much harder to follow, at least in the complete application.

Anyway, I wanted to share these details partly to keep track of one of the core parts of this package. I’m trying different approaches, and the next one will be to reuse old trajectories after validation, without replanning: this would be slower at the beginning but faster if the scene doesn’t change.

In scenarios with no variance in the scene, like the examples, using the Pilz planner would also make sense since it’s very fast. It would also give the chance to plan a sequence of moves linked with a blend radius, which would help a lot with cycle times. On the other hand, this wouldn’t be as versatile, since the Pilz planner’s moves don’t avoid obstacles: they just fail if the path is in collision. Moreover, if a sequence of moves needs to be stopped, I would have to keep track of the current waypoint reached and eventually replan from there, which isn’t trivial.
Another reason not to use Pilz is that I’m aiming to deploy bin-picking solutions with a vision system, and having a versatile planner and logic that can handle dynamic scenes is much better in that scenario. If the logic can handle that, it can also handle picking an object from the same place and placing it in the same spot over and over again, but that simpler scenario must be almost as efficient in terms of execution time.
To be clear, the Pilz planner could work just fine in a bin-picking application, it just needs a different approach and a lot more structure in the scenario definition. I’ve already deployed some bin-picking applications with ABB cobots and a vision system, and the hardest part was making them both versatile and reliable. I think the upcoming AI models may work better with a less structured planner, but we’ll see how they evolve.

As an example of another approach, you can check the tree built with py_trees in the example for the lite robot. There are a couple more plan-and-execute branch builders that can be activated, but they are currently commented out. Here it is:

In this tree I first plan the first move, then I execute it while planning the moves of the next sequence in parallel, and so on. This is about as efficient, but it’s not as manageable if you want to stop and resume the execution coherently.

Let me know if you find these thoughts useful or interesting. I like to share some of these details, as it also helps me clear my mind on what to try next!

I created a new tree branch builder and refined the tree logic a little for the old one too. Now you can choose between buildSequentialPlanExecuteXML and buildParallelPlanExecuteXML.

buildParallelPlanExecuteXML remains mostly unchanged, but I made it always reset the calculated trajectories on execution: this is how I intended it to be used, and I don’t see many advantages in reusing old trajectories in this kind of branch. Let me know if you find that option desirable and I’ll test it to see if it’s already usable or if the logic needs some tweaking.

buildSequentialPlanExecuteXML is the new function, here’s the structure:

As you can see, the planning prior to execution has completely disappeared. Instead, the branch always tries to execute the trajectory first; if no trajectory is available, it fails and the planning node calculates a new one. After the new trajectory is planned, the branch goes back to execution, and if execution succeeds the trajectory is not deleted but kept for the next execution.
The first run will be slower and will stop before each move, but from the next iteration the logic will have already validated and executed the stored trajectory, so we can be confident that the robot will act more predictably.
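In pseudo-Python, the execute-first fallback of buildSequentialPlanExecuteXML boils down to something like this (a sketch with hypothetical names, not the actual BT nodes; the store is just a dict mapping move ids to trajectories):

```python
def tick_sequential_plan_execute(store, move_id, plan_fn, execute_fn, max_retries=3):
    """Fallback logic: try to execute the stored trajectory first; if none is
    valid, plan one and retry execution, keeping the trajectory on success."""
    for _ in range(max_retries):
        traj = store.get(move_id)
        if traj is None:
            store[move_id] = plan_fn(move_id)  # planning branch of the fallback
            continue
        if execute_fn(traj):
            return True        # success: the trajectory stays stored for reuse
        store[move_id] = None  # execution failed: invalidate, replan on retry
    return False
```

Run twice on the same move id, the second run skips planning entirely and reuses the stored trajectory, which is exactly the behavior described above.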
Some details worth noting:

  • There is no performance gain in grouping the moves into a sequence in this implementation. The only advantage is that we can create a group of moves designed to run in a certain order. When we create the final tree structure, this helps keep the sequence more readable and understandable, and also reduces errors if we need to modify the tree structure.
  • I kept the variable to reset the trajectories, as some moves may be reused in different parts of the branch. For example, in the preparation sequence of the lite example I first go to the rest position (the toRest move). This move is done once while the scene has no obstacles, so the trajectory is straightforward. Later, in the main logic branch, I use the toRest move again, this time in a different sequence and with an object left on purpose in the trajectory. If we don’t tell the first move to reset the trajectory, the move in the other branch will find an already valid trajectory and try to execute it. This should fail due to collisions found even before the trajectory is sent to the controller, but I noticed that execution may start anyway. I’ll try to find out why, as it shouldn’t happen. In the meantime, we can avoid this problem either by creating a new move with the same targets, or by using this move in a sequence with reset_trajs set to true, especially if we don’t plan to reuse it later.
  • After the first planning, the trajectories are kept until something happens to disrupt execution. This means that if the first planning generates a suboptimal trajectory, it will be repeated forever. One way to have a higher chance of getting good trajectories is to increase the number of successful plans required by the planning action, so we are more likely to end up with a trajectory close to optimal. Also consider that stopping the execution while the robot is performing a move forces a replan on resume: since planning will start from a point different from the original start point, the next execution will get yet another new trajectory, possibly closer to optimal.

I’ll test this new structure for a bit, then I think I’ll focus on moveit_cpp’s problem with the planning scene freezing during robot movements. I’m curious to see what happens if I start a move_group node and use it to get the planning scene info for moveit_cpp… I don’t think this is the ideal approach, but right now the moveit_cpp version is not as reliable as I’d like, so I need to find an alternative. If you have suggestions, they are more than welcome!

For your work with py_trees you may be interested in the XML parser that I created: py_trees_parser. In the future I plan to build a GUI that will work with it.


Your project sounds very interesting. I would like to invite you to the deliberation community group. We are meeting later today and plan to talk about behavior trees, if you are interested: ROS Deliberation Community Group Meeting - Mar 3, 2025

@erichlf Seems interesting indeed; when I start working on the py_trees version again I’ll give it a try, thank you!

@ct2034 Sure, I’d be glad to. See you later then!

Over the last few days I worked to hunt down a bug that was making collision checks unreliable in the moveit_cpp version; it should be fixed now.

I updated the launch files that use moveit_cpp: those for a single robot are just renamed for consistency, but those for two robots also use a more modular version of the action server. I made it so there is only one underlying instance of MoveItCpp, which spins separately from the action servers of the manymove package.

I also added standalone launchers for the Panda robot examples, both with move_group and moveit_cpp.

Now I think I’ll focus on refining it so I can deploy a simple application in production ASAP, but I’m really tempted to:

  • Try porting to Jazzy
  • Try the repo on NVIDIA Jetson

Let’s see how much time I can put into it in the next few weeks!

Hi everyone!
Since the latest logic for trajectory reuse and the sequential plan/execute seems to be working well, I simplified it further and created the MoveManipulatorAction, which includes the previous functionalities built into a single, simpler node. The tree should be more readable now.
I would have simplified further by removing the RetryPauseResetNode as well, but I’m having some trouble reproducing the logic without it, so I’ll settle here for now.
Here’s how a single move looks now. A sequence of moves will just be a repetition of this branch, starting from RetryPauseResetNode.

I’d like to clean up a lot of functions that I’m not using right now. I’ll probably save all the current code in a branch on GitHub, but I won’t be updating that branch. If someone thinks the separation between planning and execution is needed, please let me know: that’s one of the biggest parts I’m going to remove in the next updates.

Ok, big code cleanup! Over 2600 lines removed against fewer than 600 added in one go. Now the whole repo should be more understandable, and the minimal Python version is working with the new logic too.
This time I saved the previous logic in a legacy branch, in case it’s ever needed in the future or for reference.

I added an example using the Isaac ROS cuMotion planner; you’ll find a quickstart in this topic on the NVIDIA Developer Forums.

I’ll try to share some of the design choices and features I used in this repo when I can. To begin, one of the things I missed when transitioning from industrial robot languages was a simple way to define a move or series of moves. Here’s how the move sequences are defined in the manymove_cpp_trees bt_client.cpp file:

    // Each Move is: {robot_prefix, move_type, pose_target_key, joint_targets, named_target, move_config}
    std::vector<Move> rest_position = {
        {robot_prefix, "joint", "", joint_rest, "", move_configs["max_move"]},
    };

    std::vector<Move> scan_surroundings = {
        {robot_prefix, "joint", "", joint_look_sx, "", move_configs["max_move"]},
        {robot_prefix, "joint", "", joint_look_dx, "", move_configs["max_move"]},
    };

    // Sequences for Pick/Drop/Homing
    std::vector<Move> pick_sequence = {
        {robot_prefix, "pose", "approach_pick_target", {}, "", move_configs["mid_move"]},
        {robot_prefix, "cartesian", "pick_target", {}, "", move_configs["cartesian_slow_move"]},
    };

    std::vector<Move> drop_sequence = {
        {robot_prefix, "cartesian", "approach_pick_target", {}, "", move_configs["cartesian_mid_move"]},
        {robot_prefix, "pose", "approach_drop_target", {}, "", move_configs["max_move"]},
        {robot_prefix, "cartesian", "drop_target", {}, "", move_configs["cartesian_slow_move"]},
    };

    std::vector<Move> home_position = {
        {robot_prefix, "cartesian", "approach_drop_target", {}, "", move_configs["cartesian_mid_move"]},
        {robot_prefix, "named", "", {}, named_home, move_configs["max_move"]},
    };

As you can notice, it resembles how moves are defined in languages like KRL or RAPID. Yes, it still misses many features available there, give me some time! But I also feel that with ROS and the AI wave we may have to explore new approaches to move management, so keeping it as minimal as possible could be better right now.
All the targets used in these moves are defined earlier in the file. Joint moves are defined through a vector of double values, and once defined they are fixed. Pose/Cartesian moves are associated with a blackboard key, and they can be modified at runtime by modifying the key’s value.
This was designed to let me obtain the pose of an object dynamically through an AI vision system, but it can also be useful if two or more robots need to interact, or if I need to grab a component on a moving transport system, knowing its position at a certain time and its speed.
One thing I don’t like is the number of lines of code I need to define a pose. I’ll try to reduce it or create a helper function in the future. Is this something you feel is needed, or am I overthinking it?
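For what it’s worth, the kind of helper I have in mind would be something like this (a hypothetical sketch in Python; make_pose is not in the repo, and the C++ version would fill a geometry_msgs::msg::Pose the same way):

```python
def make_pose(x, y, z, qx=0.0, qy=0.0, qz=0.0, qw=1.0):
    """Build a pose in one line instead of setting each field separately.
    Defaults to an identity orientation, which covers most pick/place targets."""
    return {
        "position": {"x": x, "y": y, "z": z},
        "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
    }

# e.g. a pick target 0.3 m forward, 0.25 m up, tool pointing straight down
# would become a single call instead of seven assignment lines:
pick_target = make_pose(0.3, 0.0, 0.25)
```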