I bet a lot of us here write code on our computers and then push the changes to the robot for execution/testing. What is your workflow for pushing your changes to the robot?
So far on my list I have:
Connect the robot to the Internet so that it can pull the latest changes (deploy keys etc.)
Keep the parent git repo on your laptop and set it as a remote on the robot side, so that you can pull directly from your laptop (rough sketch below)
Copy the whole workspace over for every change
Prototype directly on the robot
Every single one of these options has its pros and cons. Do you have any other methods I could look into?
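For the laptop-as-remote option, a rough sketch of what I mean (hostnames and paths are placeholders):

```bash
# On the robot: register the laptop's checkout as a git remote, reachable over SSH.
cd ~/catkin_ws/src/my_robot_code
git remote add laptop me@laptop.local:~/dev/my_robot_code
# Pull work-in-progress straight from the laptop, no Internet access needed:
git pull laptop my-feature-branch
```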
I’ve seen a competition team build and deploy .deb packages to their robot for each change, though I’m not sure I’d recommend it for quick iterations.
EDIT: If you go this way, be sure to include info about the current git hash in the .deb’s metadata, as well as a flag indicating whether there were uncommitted local changes when the .deb was built. This approach worked quite well when I was making firmware for ‘embedded’ Linux on 3D printers, where the build time was just seconds.
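Something along these lines, just as a sketch (the version string and package layout are made up):

```bash
# Stamp the package version with the current git state;
# "git describe --dirty" appends "-dirty" when there are uncommitted changes.
GIT_STATE=$(git describe --always --dirty | tr '-' '.')
VERSION="1.0.0+git${GIT_STATE}"
sed -i "s/^Version:.*/Version: ${VERSION}/" pkgroot/DEBIAN/control
dpkg-deb --build pkgroot my-robot-stack_${VERSION}_amd64.deb
```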
We are using ngrok to create a temporary tunnel and Ansible to update the codebase. I would say it works, but it’s not ideal.
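Roughly like this (tunnel address, user, and playbook name are placeholders):

```bash
# On the robot: expose SSH through a temporary ngrok TCP tunnel.
ngrok tcp 22   # prints a forwarding address such as tcp://0.tcp.ngrok.io:12345

# On the dev machine: run the playbook against that tunnel endpoint.
ansible-playbook -i '0.tcp.ngrok.io,' \
  -e 'ansible_port=12345 ansible_user=robot' \
  deploy_codebase.yml
```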
I’d prefer VPN + Docker, but I haven’t tried it on a real robot.
If I’m just prototyping, and the robot is functionally a development machine, then I might just use the VS Code Remote plugin to build and run on the robot.
But if the robot’s hardware is resource-constrained, or the connectivity to the robot is slow/intermittent (say, from spotty Wi-Fi), or if it’s more of a shared development/production resource with other teams, then I’ll just push new Docker images to it. If you’re smart about your Docker layer caching, you’ll end up only having to push the layers that include the recompiled binaries, and not necessarily the entire runtime dependency environment. So, an initial GB or so push at first, and then a few MB for subsequent pushes.
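For instance, something along these lines (base image, packages, and registry host are just illustrative):

```bash
# Sketch: order the Dockerfile so the heavy, rarely-changing layers sit below
# the layer that changes on every build.
cat > Dockerfile <<'EOF'
FROM ros:noetic-ros-base
# Runtime dependencies: large, but cached and only re-pushed when they change.
RUN apt-get update && apt-get install -y ros-noetic-navigation \
    && rm -rf /var/lib/apt/lists/*
# Freshly built workspace: small, re-pushed on every iteration.
COPY install/ /opt/my_ws/install/
EOF
docker build -t workstation.local:5000/robot-app .
docker push workstation.local:5000/robot-app   # only the changed layers get transferred
```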
Of course, you could mix and match Docker volumes of the workspace’s install folder with rsync for even more efficient synchronization, provided you can deterministically guarantee the compiled executables will run within the same Docker image you use on your workstation.
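Something like this, for instance (paths and names are placeholders):

```bash
# Sync only the install space, then bind-mount it into the same image on the
# robot so the binaries run against an identical dependency environment.
rsync -az --delete ~/catkin_ws/install/ robot@robot.local:/opt/my_ws/install/
ssh robot@robot.local "docker run --rm --net=host \
  -v /opt/my_ws/install:/opt/my_ws/install:ro \
  workstation.local:5000/robot-app \
  bash -c 'source /opt/my_ws/install/setup.bash && roslaunch my_pkg bringup.launch'"
```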
You could also run a local Docker registry on your workstation, so you don’t even need internet bandwidth for an external registry like Docker Hub.
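E.g., roughly (hostname is a placeholder; a plain-HTTP registry has to be whitelisted on the robot):

```bash
# On the workstation: run a throwaway registry and push to it.
docker run -d --restart=always -p 5000:5000 --name registry registry:2
docker tag robot-app workstation.local:5000/robot-app
docker push workstation.local:5000/robot-app

# On the robot: add {"insecure-registries": ["workstation.local:5000"]} to
# /etc/docker/daemon.json, restart the docker daemon, then:
docker pull workstation.local:5000/robot-app
```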
Part of my dev setup is to run nodes locally on my machine but using the robot’s ROS Master. Since that configuration has surprisingly many pitfalls, I’ve created a little script for my own sanity: https://gist.github.com/chfritz/8c2adab45a94e091be77c55b0432ad2e
Clearly this is only a good idea if the network has enough bandwidth for the data the node will subscribe to and publish. But when that is the case, there is hardly anything faster in terms of iteration.
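For reference, the core of that setup boils down to something like this (hostname and node names are placeholders; the gist takes care of the less obvious pitfalls):

```bash
# ROS 1: run a node locally, but register it with the master on the robot.
export ROS_MASTER_URI=http://robot.local:11311
# Advertise a routable address instead of localhost, otherwise nodes on the
# robot cannot connect back to your publishers.
export ROS_IP=$(hostname -I | awk '{print $1}')
rosrun my_pkg my_node
```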
For production deployment I second the use of Debian packages, together with auto-upgrades.
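As a rough sketch (the allowed origin is a placeholder for your own apt repository):

```bash
# Let the robot pull package updates on its own.
sudo apt-get install -y unattended-upgrades
# Whitelist your repo in /etc/apt/apt.conf.d/50unattended-upgrades, e.g.:
#   Unattended-Upgrade::Allowed-Origins { "my-robot-repo:stable"; };
sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic apt timer
```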