What is your remote robot workflow?

Hi!

I bet lots of us here write code on our computers and then push changes to the robot for execution/testing. What is your workflow for pushing your changes to the robot?

So far on my list I have:

  • Connect the robot to the Internet so that it can pull the latest changes (deploy keys, etc.)
  • Keep the parent git repo on your laptop and set it as a remote on the robot side, so the robot can pull directly from your laptop
  • Copy the whole workspace over for every change
  • Prototype directly on the robot

Every one of these options has its pros and cons. Do you have any other methods I could look into?
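For the laptop-as-remote option, the flow can be sketched end to end with plain git. The demo below uses two local directories to stand in for the laptop and the robot; on real hardware the clone/remote URL would be something like `ssh://user@laptop.local/home/user/ws/src` (all names here are hypothetical):

```shell
set -e
tmp=$(mktemp -d)

# "Laptop" side: the parent repo where development happens.
git init -q "$tmp/laptop"
cd "$tmp/laptop"
echo "print('hello robot')" > node.py
git add node.py
git -c user.email=dev@example.com -c user.name=dev commit -q -m "add node"

# "Robot" side: clone once (on real hardware: ssh://user@laptop.local/...),
# then just pull whenever the laptop has new commits.
git clone -q "$tmp/laptop" "$tmp/robot"
cd "$tmp/robot"
git pull -q origin
cat node.py
```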

1 Like

I’ve seen a competition team build and deploy .deb binaries to their robot for each change. I wouldn’t necessarily recommend it for quick iteration, though.

EDIT: If you go this way, be sure to include the current git hash in the .deb’s metadata, as well as a flag indicating whether there were uncommitted local changes when the .deb was built. This approach worked quite well when I was making firmware for ‘embedded’ Linux on 3D printers, where build times were just seconds.
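A minimal sketch of deriving such a version string; the `1.2.0` base and the `.dirty` suffix are just example conventions, and the throwaway repo only exists to make the snippet self-contained:

```shell
set -e
# Demo repo; in practice you'd run this inside your real source tree
# as part of the .deb build step.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo demo > main.c
git add main.c
git -c user.email=dev@example.com -c user.name=dev commit -q -m "demo"

hash=$(git rev-parse --short HEAD)
# Append a marker if the working tree differs from HEAD.
git diff-index --quiet HEAD || dirty=".dirty"
version="1.2.0+git${hash}${dirty}"
echo "$version"
```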

1 Like

We are using ngrok to create a temporary tunnel and Ansible to update the codebase. I would say it works, but it’s not ideal.
I’d prefer VPN + Docker, but I haven’t tried that on a real robot.
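For reference, an Ansible inventory entry for a host reached through an ngrok TCP tunnel might look roughly like this; the hostname and port are hypothetical and are reassigned each time the tunnel comes up:

```ini
# inventory.ini -- the tunnel endpoint changes on every ngrok restart
[robots]
robot1 ansible_host=0.tcp.ngrok.io ansible_port=12345 ansible_user=robot
```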

1 Like

If I’m just prototyping, and the robot is functionally a development machine, then I might just use the VS Code Remote plugin to build and run on the robot.

But if the robot’s hardware is resource constrained, or the connectivity to the robot is slow or intermittent (say, spotty Wi-Fi), or if it’s more of a shared development/production resource with other teams, then I’ll just push new Docker images to it. If you’re smart about your Docker layer caching, you’ll end up only having to push the layers that include the recompiled binaries, and not necessarily the entire runtime dependency environment. So, an initial GB or so push at first, and then a few MB for subsequent pushes.
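The layer-caching trick is mostly about Dockerfile ordering: put the slow-changing runtime dependencies in early layers and the rebuilt binaries last, so only the final layers travel on each push. A sketch, with the base image, package names, and paths all as placeholders:

```dockerfile
FROM ros:noetic-ros-base
# Runtime dependencies: these change rarely, so this layer stays cached
# on the robot and is not re-pushed for ordinary code changes.
RUN apt-get update && apt-get install -y --no-install-recommends \
    ros-noetic-robot-state-publisher && rm -rf /var/lib/apt/lists/*
# The compiled workspace goes last: on a typical iteration this is the
# only layer that changes, so it's the only layer pushed.
COPY install/ /opt/my_ws/install/
```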

Of course, you could mix and match Docker volumes for the workspace’s install folder with rsync, for even more efficient synchronization, provided you can deterministically guarantee the compiled executables will run within the same Docker image you use on your workstation.

You could also run a local Docker registry on your workstation, so you don’t even need Internet bandwidth for an external registry like Docker Hub.

1 Like

rsync the install directory of your development workspace to a directory sourced on the robot. It’s as fast as copying the binaries!

I wouldn’t use this method for deploying assets, but for rapid in-house prototyping it can save time.

1 Like

Part of my dev setup is to run nodes locally on my machine but using the robot’s ROS Master. Since that configuration has surprisingly many pitfalls, I’ve created a little script for my own sanity: https://gist.github.com/chfritz/8c2adab45a94e091be77c55b0432ad2e

Example usage:

use_robot.sh myrobot.local
roslaunch my_package cool_new_feature

Clearly this is only a good idea if the network has enough bandwidth for the data the node will subscribe to and publish. But when that is the case, there is hardly anything faster in terms of iteration speed.
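For context, the heart of such a setup is two environment variables; the linked script deals with the surrounding pitfalls, and the hostname and IP below are placeholders:

```shell
# Point locally-run nodes at the robot's ROS Master...
export ROS_MASTER_URI=http://myrobot.local:11311
# ...and advertise an address the robot can reach back to; if this is
# wrong, topics appear connected but no data flows (a classic pitfall
# on machines with multiple network interfaces).
export ROS_IP=192.168.1.42
echo "$ROS_MASTER_URI"
```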

For production deployment, I second the use of Debian packages, together with auto-upgrades.
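If auto-upgrades here means unattended-upgrades, the relevant apt configuration might look roughly like this; `origin=my-robot-repo` is a placeholder for your own package repository’s origin:

```
// /etc/apt/apt.conf.d/50unattended-upgrades (sketch)
Unattended-Upgrade::Origins-Pattern {
    "origin=my-robot-repo";
};
```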

1 Like

We use Kubernetes + WeaveNet + Docker containers for orchestration.

See Robotics Distributed System based on Kubernetes for more details.

I think the following are good reasons to use container orchestration:

  • Application engineers don’t need to do any operation other than docker push. (operations cost)
  • Application engineers can specify and label the target if needed (e.g. my debug robot).
  • WeaveNet supports Layer 2 emulation, so we can construct a distributed system/application using DDS.

We’ve been using the Raspberry Pi 4B and it works okay so far.
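The labeling idea above might look like this in practice; the label key/value, node name, and image name are all hypothetical:

```yaml
# First tag the target robot:  kubectl label node my-raspi robot=debug
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  nodeSelector:
    robot: debug            # only schedule onto nodes labeled robot=debug
  containers:
  - name: my-app
    image: registry.example.com/my-app:latest
```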

1 Like