
Execute code within a Node/Callbackgroup context

Currently a node lives in an executor, and callbacks from timers and subscriptions are executed by that executor, taking callback groups into account.

However, sometimes I have a trigger from somewhere else, not related to ROS, that I want to handle in that same context, to avoid having to fiddle with mutexes, etc.

Basically something like this:
rclcpp::Node::execute(CallbackT && callback, rclcpp::callback_group::CallbackGroup::SharedPtr group = nullptr);
I then expect the callback to be executed as soon as possible in the same way as a subscription or timer callback.

Now I do it by creating a one-off timer with a duration of 0, but it’s not the nicest thing in the world. Would it be useful to have something like that in ROS 2?


Yeah, that would be nice, and we’ve thought about it before, but have always just used a timer with a small or zero timeout instead.

Other similar systems have methods like call_soon(callback), which just calls the callback as soon as it is convenient.

Yeah, that works too; it’s just that to make a one-off timer you need to store the timer handle and make it accessible to the callback so it can cancel itself.

Just for reference, I now use the following utility:

struct RunOnNodeTask {
  rclcpp::TimerBase::SharedPtr timer;  // keeps the one-off timer alive
};

void exec_on_node(rclcpp::Node& node, std::function<void()> callback, rclcpp::callback_group::CallbackGroup::SharedPtr group) {
  using namespace std::chrono_literals;
  auto task = std::make_shared<RunOnNodeTask>();
  std::function<void(rclcpp::TimerBase&)> cb = [task, callback](rclcpp::TimerBase& timer) mutable {
    timer.cancel();  // cancel so the callback runs only once
    callback();
  };
  task->timer = node.create_wall_timer(0s, cb, group);
}

I solved a similar issue by creating a custom object that inherits from rclcpp::Waitable.
This is registered similarly to subscription callbacks and it can be triggered externally. In my case it’s triggered whenever data is added to a buffer.

That’s actually a cleaner solution that should also introduce less overhead. It requires a bit more effort, but I’ll look into it! Thanks!