
Is "Twist" (still) a good velocity command interface?

I’ve always thought that the geometry_msgs/Twist message was supposed to be interpreted as relative to the base frame of the robot, and was decoupled from the actuator commands. Therefore, geometry_msgs/Twist commands should go through the proper kinematics before actually producing the appropriate actuator command. I’d imagine that it would be this body->actuator module that would do the sanity checks on the geometry_msgs/Twist command that you’re talking about.

It seems like people may be abusing this notion to use geometry_msgs/Twist as an actuator command?

I do think it would be beneficial for the community to have a standard way of adding base type actuator commands like we have ros_control for joint type actuator commands.


The step between a Twist and actuator commands can be relatively small: things like differential drive controllers apply two (or three?) relatively simple transformations.
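For a differential drive, those transformations really are just a couple of lines. A minimal Python sketch (the wheel radius and track width are made-up parameters, not from any particular controller):

```python
def twist_to_wheel_speeds(vx, wz, track_width=0.4, wheel_radius=0.1):
    """Map a body-frame Twist (vx [m/s], wz [rad/s]) to left/right
    wheel angular velocities [rad/s] for a differential drive."""
    v_left = vx - wz * track_width / 2.0   # linear speed of left wheel
    v_right = vx + wz * track_width / 2.0  # linear speed of right wheel
    return v_left / wheel_radius, v_right / wheel_radius

# Pure rotation: the wheels spin in opposite directions at equal speed.
wl, wr = twist_to_wheel_speeds(0.0, 1.0)
```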

Perhaps that has contributed to this notion.

What @Ingo_Lutkebohle may be referring to is that for (for instance) Ackermann kinematic configurations, this transformation cannot really be made faithfully. The question then arises: is sticking to Twist not a sort of “lying”, and would it not be better to express this impossibility by not using a Twist any more?

Why can’t it be made? You can still describe the motion of an Ackermann vehicle in terms of the body frame. It’s just that you can’t arbitrarily command it.

I think this discussion boils down to:

  1. How should we be describing the constraints of a mobile base? A full twist message is capable of describing the motion of any arbitrary base, but has no way of enforcing the actual constraints of the system.
    This is a good question, and I think historically this has been done in a bit of an ad-hoc way. Maybe it makes sense to put these type of constraints into the URDF (or something else) and make a REP around it?

  2. How useful is it to have an arbitrary description of motion?
    I think this is part of your argument for using different command types. If a robot can’t actuate the commands coming from the Twist message, isn’t it better to have a dedicated type so that the motion can be guaranteed?
    I think the answer depends a lot on the upstream modules you envision interfacing with your system. Some planning algorithms can be used across several different types of mobility bases (with some configuration). I get the argument that if it needs some amount of configuration, why not also configure the output type? The trade off is that you’ll forego any common tooling that could otherwise be leveraged. It would also make the planning modules a bit more difficult to implement, since they’d have to have special cases for each output type as opposed to using more generic constraints.

  3. Should we have standardized interfaces to “common” mobility types?
    I think the answer to this is yes. But I think it’s also good to couple this with a standardized abstraction (which is what Twist is to me), that way “custom” types can be easily added.

I guess my argument would be to keep a generalized abstraction layer, and have the kind of command checking you’re suggesting at a lower interface layer.


I’m not @Ingo_Lutkebohle, but iiuc, the idea is to not expose certain interfaces if you don’t have the capability to actually offer the associated services.

This makes a lot of sense to me and we do it elsewhere in ROS as well.

As an example in the “other direction”: we typically try to make people avoid using std_msgs and std_srvs for their topics and services. And with good reason: no/very low semantics attached to those messages, which makes it possible to connect components together which shouldn’t be allowed to be connected.

Then the question becomes: should this “not allowed” be enforced by the application designer, or would it be nice to use the typing system to help us avoid such system designs and help us make better ones?

Of course you can, but I think the question is, why should you?

In my experience, the standard use case is to follow a trajectory using a controller. This could be done directly, but in most cases we have at least a two-level controller hierarchy: one controller has (traditionally) a fairly general notion of the vehicle, e.g., that an omni-directional vehicle can drive forwards and sideways. This controller is usually an optimizer. The lower-level controller knows the exact configuration and maps vehicle speeds to wheel speeds. In ROS, this is often a fixed mapping plus an underlying PID controller.

So, what we are talking about is the interface between those two controllers, would you agree?

Now, this simple approach breaks down very rapidly. For example, the classical DWA formulation can switch instantly between a left turn and a right turn. Even for a differential drive this can be hard on the gears, and once you add a physical steering it becomes a recipe for rapid hardware failure. One way to avoid this is to use things like jerk filters, but these degrade performance.

Because the upper-level controller is usually an optimizer, it is possible to integrate vehicle specific information there, and this is what I see control engineers doing for both simple and complex algorithms. The various cost-function components in the move_base DWA implementation are an example of this, and it usually leads to much better performance.

It can often be beneficial for the optimizer not to work in vehicle-velocity space, but in some other space. For example, for Ackermann kinematics, this could be forward-velocity and steering angle, since this makes it very easy to take into account the sideways force, which is relevant to avoid slippage…

If we have a smart optimizer like that, and we force its output through the “twist” needle-hole, then we have two transformations in play: one forward and one reverse in the lower-level controller, and these have to match. This violates the information hiding principle, and when the transformations change at run-time it can also require additional parameters, which splits the interface.
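To make the "two transformations have to match" point concrete: for a single-track (bicycle) Ackermann model with wheelbase L, squeezing a (speed, steering angle) output through a Twist and reconstructing it on the other side would look roughly like this. This is only a sketch; the wheelbase value is made up, and the failure mode is exactly that both sides must agree on it:

```python
import math

WHEELBASE = 1.0  # must be identical on both sides of the interface

def ackermann_to_twist(v, delta):
    """Forward transform: (speed [m/s], steering angle [rad]) -> (vx, wz)."""
    return v, v * math.tan(delta) / WHEELBASE

def twist_to_ackermann(vx, wz):
    """Reverse transform inside the lower-level controller."""
    if vx == 0.0:
        raise ValueError("pure rotation is not reachable for Ackermann")
    return vx, math.atan(wz * WHEELBASE / vx)

# The round trip only recovers the command while both sides share the model.
v, delta = twist_to_ackermann(*ackermann_to_twist(2.0, 0.3))
```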

Moreover, there is also double-Ackermann steering and other things, where it really breaks down. This is rare, but not as rare as you might think (just in the past two years I worked on two vehicles like that, for urban and industrial applications).

This is all my rationale for looking at other interfaces that are more expressive and/or more direct.

Now, in addition to all that, static typing prevents more errors than dynamic typing and thus, while it’s very general, I don’t think an array together with a type field is ideal.

I like the type field idea in general, though. It could be a good compromise between expressiveness and interoperability. Since most people agreed that we don’t need to skimp on the fields, we could have something similar to the Twist (e.g., with separate fields for the various velocities) with an added type field. Rather than change the interpretation of the fields, this would only specify how many of them are actually used (e.g., a diff-drive would only use x and theta, omni would use x, y and theta).

So this would become

Header header
uint8 DIFF_DRIVE = 0
uint8 OMNI = 1
uint8 type
float32 vx
float32 vy
float32 rx
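A consumer of a message like that could then validate which fields are allowed to be non-zero before acting on them. A sketch (the field and constant names come from the message above; the check itself is hypothetical):

```python
DIFF_DRIVE, OMNI = 0, 1

# Which velocity fields each type actually uses.
USED_FIELDS = {
    DIFF_DRIVE: {"vx", "rx"},
    OMNI: {"vx", "vy", "rx"},
}

def validate(msg_type, vx, vy, rx):
    """Reject commands that set fields the declared type cannot use."""
    values = {"vx": vx, "vy": vy, "rx": rx}
    for name, value in values.items():
        if value != 0.0 and name not in USED_FIELDS[msg_type]:
            raise ValueError(f"{name} is unused for type {msg_type}")
    return values

validate(OMNI, 0.5, 0.2, 0.1)          # fine
# validate(DIFF_DRIVE, 0.5, 0.2, 0.0)  # would raise: vy unused
```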

I would still prefer to use something more capable for Ackermann and for aerial/underwater vehicles, but in principle the above could accommodate a simple Ackermann steering in the same way that the TEB driver currently does it.


Well, why shouldn’t you? It’s a compact representation that generalizes to many systems and embeds the same information. It’s not as much of a hack as I think you believe it to be. You’re calculating the base frame (usually the center of the rear axle) while computing a path and simply leaving it in the base_link frame for transmission to the actual controller, which, as you suggest, is aware of the robot-specific characteristics. I understand your discussion above on DWA but I fail to see how that changes things – you can still accomplish all of that given a 6DOF representation. There is no information lost.

Agreed. It’s the “outer loop” to give the controller a new reference signal to track. It’s not what’s directly spinning at 1 kHz+. I also align with your other comments. Having a message capable of 6DOF motion is valuable and generalizes to many types of systems allowing the ROS tools built on top of it to support arbitrary robot types.

A REP or a set of headers to convert Twists into specific commands at the lower level could be created. I think a REP would do the job fine.

more capable for Ackermann and for aerial/underwater vehicles

It seems like what you want now is Twist then.

  • inconsistent usage in practice
  • error possibilities without a back-channel

I’ll leave it at that. I don’t have a major stake in this interface and merely intended to make note of some concerns I had from a software quality point of view, but if people are happy with the status quo, I’ll acquiesce.

And I think those are very legitimate issues. It seems that the core of your concerns surrounds use of the message to decode information, and what to do when that information is nonsensical for a system. Is that a relatively accurate summary? If you’re less concerned with message size and more with the use of it, I think that’s something altogether different from what I was thinking above.

Perhaps then we actually just need a twist_converter.hpp-like header. Something to take in a Twist message with the characterization of the base and give back out only sensible fields, and throw exceptions if you try to access fields that are nonsense (or I suppose also if fields are filled out that shouldn’t be).

Maybe we create a new message type: [Something]Stamped which looks a little like

Header header
geometry_msgs/Vector3 linear
geometry_msgs/Vector3 angular
string encoding

and a set of headers that will take it in and output some struct of data that it may actually want to use. Or simply an object that allows access to all information but throws exceptions when you try to access things your encoding type thinks is unreasonable.
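A Python sketch of that accessor idea (the post proposes a C++ header; the encoding names and class here are entirely made up for illustration):

```python
class GuardedTwist:
    """Wraps a twist-like command; accessing a field that the declared
    encoding does not support raises instead of silently returning 0."""

    ALLOWED = {
        "diff_drive": {"linear_x", "angular_z"},
        "omni": {"linear_x", "linear_y", "angular_z"},
    }

    def __init__(self, encoding, **fields):
        self._encoding = encoding
        self._fields = fields

    def get(self, name):
        if name not in self.ALLOWED[self._encoding]:
            raise AttributeError(
                f"'{name}' is meaningless for encoding '{self._encoding}'")
        return self._fields.get(name, 0.0)

cmd = GuardedTwist("diff_drive", linear_x=0.5, angular_z=0.2)
cmd.get("linear_x")    # 0.5
# cmd.get("linear_y")  # would raise AttributeError
```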


I believe that Alison’s breakdown identifies the key points in this discussion.

From this I’m seeing three different things that need to be expressed: first, an abstract description of the motion; second, a concrete description of the motion; and third, the constraints on the motion. I believe that all three of them need independent representation.

And I agree with @Ingo_Lutkebohle that we’re talking about the interface between the high-level controller, with a generalized sense of the vehicle, and the low-level controller which is going to be executing the actions.

The Twist message on the cmd_vel topics has been our abstract representation of the desired movement of the vehicle. The Twist message has the ability to represent arbitrary rigid body motions, but no vehicle is capable of executing arbitrary rigid body motions; they are all subject to certain constraints.

Most current implementations of lower level controllers take in the abstract command and then project it down onto the envelope that they can achieve. For example something like a diff drive controller typically just ignores any y or z velocities and non-z rotations.

The abstract representation of the commands is very powerful, as we can have many different tools that interact. All the different planners can have the same output; there are many safety functionalities that do things like limit velocities and accelerations, as well as things like joystick controllers and tools to mux different inputs with different policies for teleoperation from a joystick or other remote controller.

At the other end, once we’re inside the differential drive controller or Ackermann controller etc., clearly things need to operate with representations that map onto the actual actuators. Historically this has been internal to the controller, but as ROS gets pushed to lower-level computation, providing an abstraction at this level becomes more valuable.

I think that the drone community has a good model for this that we can leverage to improve our abstraction. They have an architectural component called a mixer, which is where they do the mapping from the abstract commands to the concrete commands. All representations before the mixer are abstracted away from the hardware, and the mixer then supports converting to hardware-specific commands. This approach allows you to switch airframes; you just have to match your mixer settings to the hardware and the same autopilot can fly.

This works across multiple airframe types with different numbers of rotors etc. This same architecture works for planes and copters. Clearly the vehicles have different envelopes. And the mappings go to completely different actuators, servos for ailerons vs motor controllers for propellers. Planes vs drones have similarly different envelopes as ackermann vs holonomic bases.

The reason this works for both is that the planners output the same type of message but with completely different constraints. So now we need to make sure that our constraints are met. It would be possible for us to push the more concrete types further up the stack to make sure that every planner will output only valid commands for the hardware implementation. However, this would mean that every planner needs to be customized for every potential hardware type. This is expensive in terms of parallel implementations, and it still does not actually communicate the appropriate constraints to the planner. Our basic drive types are straightforward, but once you start trying to list all possible ones it gets infeasible: diff drive, holonomic, powered caster (pseudo-holonomic like the PR2, with 12 independent actuators), front powered caster, Ackermann, double Ackermann, skid steer/tank drive, center articulated.

Capturing all of the potential variations gets even more challenging when you think about the subtleties such as powered casters that may or may not be able to do continuous rotation, steering limits, individual actuator speed limits, suspension parameters such as toe in for Ackermann or compensation for the slip angle. For arbitrary hardware configurations there’s an arbitrary number of constraints that might need to be applied in addition to just picking the high level geometric type of the drive unit. And some vehicles may use hybrids of the above options etc. And we’re not even getting into the possibility of legged locomotion.

To that end, what the planners really care about is constraints that they can feed back into their optimizations. And I think that creating a standard way for this to be parameterized by the hardware implementation and then fed into the planner would make the most sense. This is clearly a whole new area that there has not been a lot of discussion on, but I think it would be valuable to develop a good abstraction here that could keep our planners generalized and support arbitrary vehicle architectures in a more standardized way.

The constraints could even be set up to be dynamically updated as a vehicle experiences system degradation or is reconfigured (like a tilt rotor). For this initial scope I would suggest limiting this to the local planners that are focused on immediate execution, and not high-level planners that are producing full trajectories through time etc. These are the same controllers that already produce Twists on cmd_vel topics. There are likely separate constraints representations at that higher level that could be expressed too.

As a first pass simply putting in limits on velocity, acceleration, (optional jerk and snap) in each dimension would seem to be a reasonable first step towards a potential message along these lines. If this makes sense we can iterate on making a new message to represent this.
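If it helps to make the first pass concrete, such a message might look something like the following (the field names and the reuse of geometry_msgs/Twist for per-dimension limits are just my guesses, not a proposal anyone has made):

Header header
# Per-dimension limits; an unset/NaN field could mean "unconstrained"
geometry_msgs/Twist max_velocity
geometry_msgs/Twist max_acceleration
geometry_msgs/Twist max_jerk
geometry_msgs/Twist max_snap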

So in summary my suggestion is to consider planning to work towards the following structure.


This will keep Twist as the abstract interface. And when it’s applicable we can split the mixer functionality away from the actuator functionality and provide common messages for classes of vehicles such as diff drive, Ackermann, or otherwise. And we can look at adding a new message to support communicating the constraints back to the controller. These constraints are likely currently either implicit in the controllers or captured in parameters or other settings. If this makes sense we could consider moving this over to a REP proposal.

The abstraction proposed in REP 147 actually does this at the slightly higher level of acceleration and might be a good reference model for specifying the abstraction.


My short answer to the question in the title is yes, Twist is the velocity command interface as it defines a 6DoF velocity of a rigid body.

Whether a controller or a planner is compatible with this is a different question. I think that as a standard interface, Twist has been sufficient and I would refrain from any sort of splitting up or extending it.

The question of whether mobile bases should stick to Twist as their primary interface is a very valid one. Especially now, with the advent of many Ackermann vehicles (aka consumer cars) entering the robotics domain, there is room for improvement on this front.

First, I would strongly advise against dropping structure from messages and relying on client code to reverse engineer what data on a certain topic means (even if it’s done through provided libs). Yes there is precedent for this with certain sensor data types but I’d argue that this flexibility in size/shape is native to those sensors, it would not be possible to define camera messages for every single imaginable resolution and encoding setting.

Second, instead of aiming for “one size fits all”, I suggest sticking to well-defined interfaces and extending the nav stack and relevant controllers to operate using these interfaces. ROS messages - for the most part - provide a statically typed interface to nodes which define whether two components can talk to each other or not without having to assess what goes on in runtime. This is a very powerful tool which we shouldn’t risk in favour of simpler graphic charts and a false illusion of universal compatibility.

An intuitive example is that if you have a local planner that natively plans with Ackermann constraints, it could output both Ackermann command messages and Twist (a holonomic vehicle can perform all commands kinematically, although suboptimally). If one wants to smarten up an Ackermann controller to reverse engineer what to do from a Twist message, so be it, but at the same time they may choose not to provide this input interface and only take Ackermann commands.
In short: message interfaces should define which pieces of the system fit together before ever trying to run anything.

We are very happy to host and provide feedback for these new messages over at control-msgs as well as new controllers in ros-controllers and ros2-controllers.

I agree that Twist is highly problematic as a message type for a whole host of reasons. A lot of the problems raised in this thread arise because Twist is in units of velocity.

Let me explain.

In the beginning, people started building robots with electronic speed controllers (ESCs). They used ESCs because that’s what was available, and that’s what was available because that’s what is used in factories. Factories tend to want motors to meter amounts of product per hour, so they care about speed and use ESCs.

Because people used ESCs, they gave commands in velocity - because that’s what ESCs understand.

However, you have a real problem with this: mobile robots don’t live in velocity space, they live in the real world, which is measured in terms of distance. Generally you want to tell the robot to go forward 1 meter; you don’t want to tell the robot to go at 1 m/s for 1 second. The best ESCs and robots in the world can’t go from 0 m/s to 1 m/s instantly, and they can’t stop instantly either. The notion of a velocity is and can only ever be a target - it’s not a precise notion and can’t easily be made to be.

Because of the choice of unit space a number of other nasty workarounds emerged too - for example, we all put deadman timers in our robots (well, at least I hope we do!). What is a deadman timer? Well, it’s really a nasty way of converting speed to distance. What we say is: we can’t trust the robot to go more than 10 cm, because we can’t be sure that there isn’t an obstacle in the way, so we are going to tell it to stop if it doesn’t get a new velocity command every 0.1 seconds while it’s going at 1 m/s.

Normally your sensor suite can clear an area in front of the robot as safe in terms of distance, but how long before the robot stops is going to be a function of speed. Really what you ought to do is have a different deadman timer for different speeds (a shorter deadman for higher speeds), but as we all know that would be nasty nasty nasty nasty, so no one does it - we just fudge it by having a super short deadman even if that isn’t really appropriate.
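The speed-dependent version is actually simple arithmetic: the distance covered before the robot is stationary is the travel during the deadman window plus the braking distance, so the timeout for a fixed safety clearance shrinks as speed grows. A sketch (the numbers are made up):

```python
def deadman_timeout(v, safe_distance, decel):
    """Largest deadman window such that travel during the window plus
    the braking distance v^2 / (2*decel) stays inside safe_distance."""
    braking = v * v / (2.0 * decel)
    if braking >= safe_distance:
        return 0.0  # already too fast to stop inside the safe zone
    return (safe_distance - braking) / v

# At 1 m/s with 0.5 m clearance and 2 m/s^2 braking: a 0.25 s window.
t = deadman_timeout(1.0, 0.5, 2.0)
```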

Frankly, issuing commands in terms of velocity is bad too, because it forces certain control functions to be higher up in the stack than they need to be. For example, consider the problem of driving towards a wall to dock with it. Your sensors tell you that you are 0.5 m from the wall - you want to be 0.1 m from the wall. Your sensor knows how far away the wall is to 0.001 m precision. You have odometry that is accurate to 0.0001 m. You should just tell the motor controller that you want to move forward 0.4 meters and then check to make sure you got to the right place. But oh no! We have to tell the motor controller that we want to go at 0.1 m/s, and we have to keep telling the motor controller that this is what we want every 0.1 seconds, and repeat this 40 times or more. Because none of the systems really knows how fast the robot is accelerating, you have to have a high-level control loop that constantly reads the sensor data to see if you’ve done it right, etc. etc. What a god-awful mess!

My suggestion is that you simply issue commands in Cartesian space. The motor controller (which will have to be specific to each robot anyway) has to understand this command and execute it for that robot. This will be much better, because invariably the control loop on the motor controller is super fast and much more able to deal with variations in the dynamics of the robot than higher-level stacks.

I’d suggest that you have a command that defines the arc that you’d like the robot to follow and the distance relative to the current position. Maximum velocity and maximum acceleration would be settable parameters. The arc could be defined in a number of ways - it could be defined in radians - so, for example, a command of

d=1 meter
arc = 0 radians

Would cause the robot to drive forward 1 meter. A command of

d = 0 meters
arc = pi radians

Would cause the robot to turn in place.

A command of

d = pi/2 meters
arc = pi radians

Would cause the robot to drive in a semi-circle.

For a differential drive robot this would be easy - the motor controller would execute the arc.

For an Ackermann steering type robot - well, it obviously can’t turn in place, but execution would be easy too.
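For the differential drive case, the (distance, arc) command really is a two-line transform to per-wheel travel distances. A sketch (the track width is a made-up parameter):

```python
def arc_to_wheel_distances(d, arc, track_width=0.4):
    """Convert a (centerline distance [m], heading change [rad]) arc
    command into left/right wheel travel distances for a diff drive."""
    left = d - arc * track_width / 2.0
    right = d + arc * track_width / 2.0
    return left, right

# d=1, arc=0: drive straight, both wheels travel 1 m.
# d=0, arc=pi: turn in place, wheels travel equal and opposite distances.
```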


Thanks for the additional insights, but aren’t you ignoring the fact here that there are different levels of control, abstraction and use-cases?

It seems like you are ignoring the fact that there are definitely vehicles and levels of control for which it makes sense to express desired state in terms of velocity.

The majority of your comment seems to consider position based mobile robots only.


Sorry was called away to a meeting before I could fully finish the description of what I am proposing.

You would set up your controller to be distance based - with a maximum / target velocity and a maximum acceleration. Then you would feed the motor controller new position commands.

Essentially you would plan a path for the robot in arcs and then pass those arcs directly to the motor controller. The motor controller would have a stack of commands that it would have to execute. If it ran out of commands the motor controller would be responsible for safely bringing the robot to a halt within the defined acceleration limits.

It’s a great comment. The way to resolve your question is to think about specific use cases where we suspect velocity would make more sense. (By the way, I’ve added a little more detail in a reply to the original post.)

So what are the applications where a velocity controller would make more sense? Self-driving cars maybe? Even then I struggle. The self-driving car will get obstacle data in meters, not in speeds. Once you have the sensor data you will then be able to say where the safe zones are and plan your arc accordingly. The car will seek to go at the target velocity - so what’s the problem?

Maybe where you don’t have a single motor controller controlling all motion (e.g. an Ackermann steering arrangement with a separate steering system from the drive system) - but even then, to interpret cmd_vel messages you need some process handing instructions to the steering system and the drive system separately.

I am struggling to think of a case where this situation is worse than cmd_vel - and I can think of plenty of cases where it is better. Maybe you can help me think of an actual counter-example?

The only time I can think of is if you have a standard, off-the-shelf velocity controller and you don’t want to screw with it - but I’d have thought that by now, in our joint journey of having robots conquer the universe, we would be far beyond that parlous situation.

As an aside this is not a theoretical concept. We at Ubiquity Robotics moved our control stack partly in this direction and we got much better results.

To be clear, I think what we’re discussing is potentially creating a new message to serve the role that Twist serves now. I don’t think the actual Twist message itself would be modified or removed; a new one would be made to replace it. I don’t want to give off the wrong impression.

To me, that’s the thing to be absolutely avoided. To go down that line of thinking: now someone has an omnidirectional base, and creates new messages and modifies their local planners to use them. Then someone else has an XYZ base and needs an ABC new message for their awesome-local-planner. Now we have a fragmented ecosystem where nothing talks to anything else properly without adapters everywhere or loss of information. There’s no end of base configurations, and it’s a slippery slope that could have unintended consequences. Most local planners in the mobile space can create permissible differential and omnidirectional velocity commands. A few can also make permissible Ackermann commands. This is going to get real complex, real fast. I’d advocate for a new message, but from my viewpoint I don’t want the creation of a series of messages. Ackermann is not somehow so “special” over the other types of base commands that it requires something unique that separates it massively.

Publishing multiple topics for “whoever might be listening” seems more of a hack to me than sending the hardware interfaces rigid-body velocity commands. Not saying that’s not a viable alternative, it just feels “icky” to me.

The impression I had from control_msgs is that it’s more suited to sending actuator-level commands, not robot-level commands (where a robot might have 2-N actuators working in concert). Do you think that would be the appropriate place for this? It seems to me a new message, if we deem it necessary, would live in something like trajectory_msgs or similar, since that seems sufficiently robot-level abstracted.

@davecrawley I think we’re having a slightly different discussion. We’re discussing the representation of targets going into the client node of the robot (e.g. my autonomous stack says the target of my robot is to do X) for which the client node robot_base_node may transmit over its connection to the robot base/controller whatever it likes. That can be your position space if you choose, or some derivative of the velocity / torque space. I don’t know of any local or global planners where the output would be sensibly “go X forward” and be sufficiently smooth as to track your targets effectively.

Agreed - and I don’t know how typical that is. I can say from my experience I haven’t run across one. There have been safety mechanisms that convert and project velocity commands into the future to anticipate and stop collisions, but I’ve never sent commands to a motor like “go forward X cm”.

Below is off topic but related:

We have odometry, and you should be aware of the acceleration characteristics of your robot. It is important to create a velocity command that is smooth in acceleration space, to ensure that you’re not grinding gears and that motion stays smooth. That’s part of the responsibility of your base_node, or of using a smoothing technique.

What you described is the canonical form of closed-loop control, which is critical for accurately tracking targets in the presence of disturbances, changes in friction, wheels not quite straight, new actors in the scene, etc. I don’t know that you could create a safe system without periodically checking sensor readings. The formulation I’m familiar with from academic work is cascade control, which this aligns with.

Just to quickly respond: yes, it’s closed-loop control - and you need it. But given that you will always have multiple closed-loop control systems in a robot like this, which control loop are you going to set up the coordinate system for? Clearly the lowest-level control loop with the highest bandwidth. The current “standard” arrangement that you describe doesn’t do that. Actually it’s worse than that: we constantly convert from the coordinate system that these systems operate in (position for most sensors, and position change for most odometry), change that measurement into speed using non-real-time systems, and then, after we’ve done that, convert it back into position again. If you take a step back, doesn’t that seem slightly bonkers to you? You lose information and bandwidth, and thus performance. Some people try to make up for it by getting more expensive sensors and odometry, and thus reduce the discretization errors that I will talk about in a minute - essentially in an attempt to improve the information and bandwidth of their system - but thinking clearly about coordinate systems is probably cheaper.

As to position-based mobile robots: yes, they exist - that’s how Ubiquity builds its mobile bases. Why? Because we found we got much better performance that way. More precisely, we got much better performance that way with inexpensive hardware. So how do we link up to the rest of the stack that, as you rightly point out, is mostly velocity based? Well, we convert from velocity to position and back when needed - it’s trivial to do, and you do it all the time in your velocity-based systems - it’s just that you lose information when you do. If you think about it, saying “I am going to go at a target velocity of 1 m/s and stop when the deadman timer runs out in 0.1 seconds” is mathematically equivalent to saying “I am going to go forward 0.1 meters in 0.1 seconds”.

It’s just that, in practice, you lose less information computing the latter compared to the former, because you don’t need to measure differentials of x all the time (an operation that’s subject to discretization problems and lots of information loss). Not only that, but you can also make very high-speed loops, because you don’t need to measure differentials, which usually requires two position measurements separated in time. Thus you can build very high-bandwidth control loops.
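The discretization point is easy to see with encoder ticks: estimating velocity by differencing a quantized position over a short window amplifies the quantization error, while working in position space never pays that cost. A small illustration (the tick size and windows are made up):

```python
import math

def measured_speed(true_speed, dt, tick=0.001):
    """Worst-case velocity estimate obtained by differencing a position
    quantized to `tick` meters over a window of `dt` seconds."""
    ticks_moved = math.floor(true_speed * dt / tick)
    return ticks_moved * tick / dt

# True speed 0.1 m/s with 1 mm encoder ticks:
fast = measured_speed(0.1, 0.010)  # 10 ms window: roughly correct
slow = measured_speed(0.1, 0.001)  # 1 ms window: no tick seen yet -> 0
```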

Now that we come back (or perhaps I should say looped back 🙂) to the subject of control loops - perhaps you can see why I am keen on the idea of building good ones.

That’s obviously my main motivation. However, given that this arrangement also solves the problem Ingo pointed out (the difficulty of having a single command that works for different robot types, specifically Ackermann and diff drive), I thought it would be a productive way forward.


@davecrawley You know I’m a fan of being able to use position control as well. However, as others have also said, different control types are appropriate for different situations. Completely replacing velocity control is not likely to be appropriate.

@athackst @tfoote @smac It appears we’re in agreement that some way of expressing the constraints is important, and I think that @smac’s suggestion of adding a type field and @tfoote’s suggestion of adding constraints are both useful, in that they address different parts of the problem. A constraint model would obviously be a very powerful help for controllers, while a type field can be useful for tools as well, to know which fields to work on without having to understand a full constraint model.

A constraint model would also solve an issue I had with type fields: You need to take care to check the field, and individual driver writers can easily forget that, or make mistakes. If you have an explicit constraint model, you can have a general library that checks the constraints, which everybody just uses as a filter in their code (or even as an explicit filter node in the architecture). This also makes checking more advanced constraints feasible.
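As a rough sketch of what such a shared filter could look like (placeholder limits and hypothetical names, not an existing ROS library):

```python
def clamp_twist(vx, wz, max_vx=1.0, max_wz=2.0):
    """Clamp a commanded planar twist (forward velocity vx, yaw rate wz)
    to the base's velocity limits. The limit values here are placeholders
    that a real node would read from the robot's constraint description."""
    clamped_vx = max(-max_vx, min(max_vx, vx))
    clamped_wz = max(-max_wz, min(max_wz, wz))
    return clamped_vx, clamped_wz

print(clamp_twist(1.5, 0.5))  # (1.0, 0.5)
```

Every driver (or an explicit filter node in front of it) could then reuse the same checking logic instead of reimplementing it, which is what makes the shared-library approach less error-prone than per-driver type checks.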

Just off the top of my head, I would say it would be useful to have:

  • velocity constraints, of course
  • acceleration constraints, maybe as a set of “preferred” and “possible” (the latter could be useful to communicate what kind of deceleration would be possible for a crash stop, for example, whereas the former would be what we should use in normal situations)
  • binary motion constraints (can move in the direction of some axis or not)
  • control rate, maybe?
  • control delay (how long it takes for the robot to react to a command, once received)
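Put together, such a constraint description could be quite small. A hypothetical layout, just to make the list above concrete (field names and values are invented):

```python
from dataclasses import dataclass

@dataclass
class BaseConstraints:
    """Hypothetical constraint model covering the fields listed above."""
    max_linear_vel: float    # m/s, velocity constraint
    max_angular_vel: float   # rad/s
    preferred_decel: float   # m/s^2, used in normal situations
    possible_decel: float    # m/s^2, e.g. available for a crash stop
    can_move_lateral: bool   # binary motion constraint (False for diff drive)
    control_rate: float      # Hz
    control_delay: float     # s, from receiving a command to reacting

diff_drive = BaseConstraints(1.0, 2.0, 0.5, 3.0, False, 50.0, 0.02)
```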

I’ve had a look at REP 147, and it’s confusing me a bit. In the section “Rate Interface”, they first write “The message to send a velocity command is geometry_msgs/TwistStamped”, and then they say “The command is a body relative set of accelerations in linear and angular space” (my emphasis). In general, I would advise against calling a message that communicates accelerations “Twist”, because AFAIK the term is generally defined for velocities in screw theory.

btw, I do know that at least in some autonomous driving applications the low-level interface is in terms of accelerations, but in others it is in velocities. I would suggest we stick with velocities for now, since that seems to be what most people are using. We can always have a second discussion later on (or in parallel) about other interfaces.

As a final note, I also like @tfoote’s architectural sketch, and agree that it would be useful to have more concrete messages for the mixer step. This would be at least one level below what we’re currently discussing, right?


I also like having a constraint model as opposed to a type field.

I imagine there are two main types of constraints that any planner cares about:

  1. kinematic and other hardware-related constraints (max possible accel, axis of possible actuation, max curvature, etc)

  2. behavior constraints/costs (i.e. software limiting of max velocity, acceleration, jerk, etc.), which may change over time, like when in different modes or domains.

Since we are discussing the interface between a “higher” level controller and the “lower” level actuation, I’m going to assume that it’s only the kinematic/hardware constraints that are relevant. In your example, I would say that a “preferred” acceleration belongs in a behavior constraint list.

So a couple of questions that pop out are:

  1. Should all constraints be together?
  2. Should constraints be set through a configuration file or a message?

It appears the REP sets these in a goal message? My initial reaction was “the URDF sounds like a good place to put kinematic constraints” but I can see use-cases where a separate configuration would be desired. The URDF would also only work for kinematic/hardware constraints and a separate message would be needed to describe desired behavior constraints.
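For the separate-configuration route, the kinematic/hardware part could stay quite simple. A sketch of what such a file might contain, shown as JSON purely for concreteness (format and keys are hypothetical):

```python
import json

# Hypothetical stand-alone constraint configuration, kept outside the URDF.
config_text = """
{
  "drive_type": "diff_drive",
  "max_linear_vel": 1.0,
  "max_angular_vel": 2.0,
  "max_possible_accel": 3.0
}
"""

config = json.loads(config_text)
print(config["drive_type"])  # diff_drive
```

A planner could read this at start-up to learn both the drive type and the hardware limits, without having to traverse a kinematic chain.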


Hi Everybody,

sorry for the late reply – I’ve been on forced vacation the last 10 days, due to the Corona situation.

I agree in principle, but it has to be said that handling a URDF is quite a lot of overhead, particularly for simple cases. Moreover, not out of necessity but in current practice, URDF is often parsed at run-time, whereas planners for mobile bases are usually configured during node start-up.

Now that I think of it, this would be even more of a limitation when using topics. With URDF you could at least in principle parse it before run-time.

Anyway, I would argue that as long as we can make it easy to use, it doesn’t matter much where we put the information. I surmise that handling all sorts of kinematic chains just to figure out whether we have a differential drive or an omnidirectional drive might be much harder than just configuring that somewhere, but then, I don’t know very much about kinematics, so this is really just a hunch.

In general, if we have to introduce new information that is not already possible to represent in URDF, I would argue for creating a separate configuration file rather than changing the URDF.
