Introducing Teleop for ROS

Hi everyone,

I’d like to share Freedom Pilot, a set of teleoperation and human intervention tools that works out of the box with ROS1 & ROS2 for controlling your robot from anywhere in the world. We are giving away this suite of teleop tools for free for one year.

The agent installs with a single line. It lets you reliably control your ROS robot from anywhere, allowing you to drive, navigate, or inject commands.

It's got WebRTC infrastructure with STUN and TURN servers, out-of-the-box support for the ROS nav stack, thumbpad joystick control, maps, GPS, gamepad support, multiple cameras, topic data visualization, and many other configurable things for building out a robot operations center or just getting your hobby robot driving cleanly from anywhere (including from your cellphone when you are away from home).

I would love feedback! To try it out, create an account at freedomrobotics.ai (coupon for free: ROSPILOT):

  1. In the app, click GET STARTED (on the left-hand side panel) and curl the install script.
  2. Launch your ROS nodes.
  3. In SETTINGS -> PILOT, set the velocity topic that the joystick publishes on and follow the directions to configure max speeds, etc. (see the relay sketch after this list if you want to adapt or limit commands on the robot side).
  4. All your image topics will be automatically detected.
  5. Then click PILOT and TAKE OVER and you are driving the robot!
  6. Next, you can enable nav stack control with waypoints, maps, etc. in SETTINGS -> PILOT.
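
If your base driver listens on a different topic than the one the joystick publishes on, or you want a hard speed limit independent of the app settings, a tiny relay node is an easy option. This is a minimal ROS1 / rospy sketch; the topic names and limits are placeholders, not anything Freedom Pilot requires:

```python
#!/usr/bin/env python
# Hypothetical relay: clamp incoming teleop velocities before they reach the base driver.
# Topic names are placeholders -- adjust to whatever you configured in SETTINGS -> PILOT.
import rospy
from geometry_msgs.msg import Twist

MAX_LINEAR = 0.5   # m/s, pick something safe for your robot
MAX_ANGULAR = 1.0  # rad/s

def clamp(value, limit):
    return max(-limit, min(limit, value))

def on_cmd(msg):
    out = Twist()
    out.linear.x = clamp(msg.linear.x, MAX_LINEAR)
    out.angular.z = clamp(msg.angular.z, MAX_ANGULAR)
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("teleop_clamp_relay")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/teleop/cmd_vel", Twist, on_cmd, queue_size=1)
    rospy.spin()
```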

Detailed docs are here for teleoperation and navigation setup.

Here are just a few examples of how Freedom Pilot is used in the wild:

If you have a second after trying it, I would love to hear:

  • Is this useful for you?
  • Any features you would like to see added? Any rough edges?
  • How would you want to use it?

If you hit a snag getting it running, click SHOW CHAT in the app and I can hop on to help you or drop me a DM!

STOKED,
Steve!

20 Likes

Amazing tool!

I saw the earlier post on one-click logging for ROS and really love the tool and the simple UI for complex tasks.

Quick questions:

  • Are you able to track video streaming metrics (such as latency and how many seconds it takes to set up a P2P connection)?
  • Can you send waypoints through the video, e.g. trigger certain actions based on the position of a click on the video?
  • Do you have any tools for doing local path planning?
  • Any idea on how I could put custom routes on the maps view?
  • How computationally intensive is the video transmission on the robot side? Which parameters do I have control over for video transmission?

Thanks!

3 Likes

I have a lot of custom commands I need to send to my robot. How much flexibility do I have in what commands I can send through your platform? Do you support custom serialization methods?

2 Likes

Hi Steve, this looks great! Is it possible to use any perception / vision pipelines with this? And do you have any examples of how I could run, for example, image recognition? Thanks!

2 Likes

Thanks for the kind words and great questions, @David_Cardozo!

Are you able to track video streaming metrics (such as latency and how many seconds it takes to set up a P2P connection)?

Latency is shown in the heads-up display. Setting up a connection takes a few seconds as the shortest possible path between devices is negotiated.

Can you send waypoints through the video, e.g. trigger certain actions based on the position of a click on the video?

Yes, right now we pipe through the location where you clicked, which you can convert on the robot into a waypoint. In the video that @sjhansen3 posted, you can see how clicking on the screen makes the Fetch robot move its pan-tilt head to look around.
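
To make that concrete, here is a rough rospy sketch of consuming a click on the robot side and turning it into a pan/tilt target. The click topic, the message layout (normalized image coordinates in a PointStamped), the controller topics, and the field-of-view numbers are all assumptions for illustration, not part of Freedom's API:

```python
#!/usr/bin/env python
# Hypothetical example: turn a click on the video into a pan/tilt target.
# The click topic name and message layout are assumptions -- adapt them to
# however you choose to pipe the click coordinates onto the robot.
import rospy
from geometry_msgs.msg import PointStamped  # x, y assumed normalized to [0, 1]
from std_msgs.msg import Float64

H_FOV = 1.0  # horizontal field of view of the camera in radians (assumption)
V_FOV = 0.6  # vertical field of view in radians (assumption)

def on_click(msg):
    # Map a click at the image centre to zero offset, edges to +/- half the FOV.
    pan = (0.5 - msg.point.x) * H_FOV
    tilt = (msg.point.y - 0.5) * V_FOV
    pan_pub.publish(Float64(pan))
    tilt_pub.publish(Float64(tilt))

if __name__ == "__main__":
    rospy.init_node("click_to_pan_tilt")
    pan_pub = rospy.Publisher("/head/pan_position_controller/command", Float64, queue_size=1)
    tilt_pub = rospy.Publisher("/head/tilt_position_controller/command", Float64, queue_size=1)
    rospy.Subscriber("/teleop/video_click", PointStamped, on_click, queue_size=1)
    rospy.spin()
```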

Do you have any tool for doing local path planning?

Right now, the local path planning is done on the robot. We don't currently do the planning ourselves, so users can use the navigation stack of their choice (or use it as a click-to-point feature, or even for their ML pipeline), but if you have ideas on what you would like to see in such a feature, let me know!
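
For example, if you run the standard ROS navigation stack, forwarding an incoming waypoint to move_base is only a few lines with actionlib. In this sketch the /teleop/waypoint topic is an assumption for wherever you pipe clicked goals:

```python
#!/usr/bin/env python
# Hypothetical bridge: forward an incoming waypoint to the standard ROS nav stack
# (move_base) so the robot's own local/global planners do the path planning.
import rospy
import actionlib
from geometry_msgs.msg import PoseStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def on_waypoint(pose):
    goal = MoveBaseGoal()
    goal.target_pose = pose  # PoseStamped in the map frame
    client.send_goal(goal)   # move_base plans and executes the path

if __name__ == "__main__":
    rospy.init_node("waypoint_to_move_base")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    rospy.Subscriber("/teleop/waypoint", PoseStamped, on_waypoint, queue_size=1)
    rospy.spin()
```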

Any idea on how I could put custom routes on the maps view?

That’s a feature in beta mode, more on this soon :slight_smile:

How computationally intensive is the video transmission on the robot side? Which parameters do I have control over for video transmission?

There are two ways you can transmit video: 1) a low-FPS JPEG stream, which is OK for monitoring but not recommended for teleoperation; 2) WebRTC (p2p video streaming), where you have full control to tune the video transmission. You can specify the resolution, frame rates, bandwidth thresholds, etc. Depending on the configuration it will consume more or fewer resources, but with the right settings you can have a WebRTC connection even in low-compute environments (like a Raspberry Pi 3).
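
One generic way to keep the robot-side cost down, regardless of which transport you use, is to downscale and rate-limit the camera topic before it reaches any streaming agent. This is a rough rospy sketch; topic names, target rate, and scale are placeholders, and ROS's own `topic_tools throttle` also works if you only need rate limiting:

```python
#!/usr/bin/env python
# Hypothetical republisher: downscale and rate-limit a camera topic so there is
# less data to encode on the robot.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

TARGET_HZ = 5.0   # output frame rate
SCALE = 0.5       # output resolution as a fraction of the input

bridge = CvBridge()
last_pub = rospy.Time(0)

def on_image(msg):
    global last_pub
    now = rospy.Time.now()
    if (now - last_pub).to_sec() < 1.0 / TARGET_HZ:
        return  # drop frames above the target rate
    last_pub = now
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    small = cv2.resize(frame, None, fx=SCALE, fy=SCALE)
    out = bridge.cv2_to_imgmsg(small, encoding="bgr8")
    out.header = msg.header
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("camera_downscaler")
    pub = rospy.Publisher("/camera/image_small", Image, queue_size=1)
    rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1, buff_size=2**22)
    rospy.spin()
```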

Let me know if that helps.

2 Likes

Pretty cool stuff. Getting WebRTC set up and working well is a colossal pain, so having it baked in is pretty cool.

I know that there is a lot that can be done from the topic stream system with the API. How much can be done with the API and the WebRTC stream? How well could I integrate this into my system if I get tired of handling my own WebRTC stack?

The joystick is clearly very low latency. What is the connection methodology to the robot there?

2 Likes

Awesome tool!
I've had some experience using this system to teleoperate robots in new environments, and it made remote monitoring/debugging significantly more convenient. Remote SSH and the ability to easily run launch files are a big plus for me.
A couple of questions:

  • Is there support to send commands to control a manipulator through Freedom's UI?
  • Can the costmaps generated by the ROS navigation stack be layered onto the 2D map shown in the video above?

2 Likes

Thanks for the questions, @slessans!

How much flexibility do I have in what commands I can send through your platform?

Any command you can put in a ROS message :slight_smile:

Do you support custom serialization methods?

By custom serialization, do you mean custom messages? If so, we do!
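
As a sketch of one common pattern for "any command you can put in a ROS message", here is a simple string-command dispatcher on the robot side. The topic name and command names are made up for illustration; the same idea applies to custom message types:

```python
#!/usr/bin/env python
# Generic pattern: publish simple string commands from the platform and
# dispatch them to handlers on the robot. Topic and command names are made up.
import rospy
from std_msgs.msg import String

def start_mission(arg):
    rospy.loginfo("starting mission: %s", arg)

def stop_all(arg):
    rospy.loginfo("stopping everything")

HANDLERS = {
    "start_mission": start_mission,
    "stop_all": stop_all,
}

def on_command(msg):
    # Expected format: "<command> <optional argument>", e.g. "start_mission field_3"
    parts = msg.data.split(None, 1)
    handler = HANDLERS.get(parts[0])
    if handler is None:
        rospy.logwarn("unknown command: %s", msg.data)
        return
    handler(parts[1] if len(parts) > 1 else "")

if __name__ == "__main__":
    rospy.init_node("command_dispatcher")
    rospy.Subscriber("/robot/command", String, on_command, queue_size=10)
    rospy.spin()
```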

1 Like

@eratner thanks!

Hi Steve, this looks great! Is it possible to use any perception / vision pipelines with this? And do you have any examples of how I could run, for example, image recognition? Thanks!

We have a blog post which outlines how to generate or label a dataset here.

You could, for example, use the pixel coordinates of a pick point in real time to guide/adjust/retry a picking algorithm, then save those adjustments into a dataset along with the images for offline training. More on that process here.
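
A rough rospy sketch of that logging step, pairing each incoming pick point with the most recent camera frame and appending both to a simple on-disk dataset. Topic names and the output directory are assumptions:

```python
#!/usr/bin/env python
# Hypothetical dataset logger: save (image, pick point) pairs for offline training.
import os, csv
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped

OUT_DIR = os.path.expanduser("~/pick_dataset")

bridge = CvBridge()
latest_frame = None

def on_image(msg):
    global latest_frame
    latest_frame = msg  # keep only the most recent frame

def on_pick_point(msg):
    if latest_frame is None:
        return
    stamp = "%d" % msg.header.stamp.to_nsec()
    frame = bridge.imgmsg_to_cv2(latest_frame, desired_encoding="bgr8")
    cv2.imwrite(os.path.join(OUT_DIR, stamp + ".png"), frame)
    with open(os.path.join(OUT_DIR, "labels.csv"), "a") as f:
        csv.writer(f).writerow([stamp, msg.point.x, msg.point.y])

if __name__ == "__main__":
    if not os.path.isdir(OUT_DIR):
        os.makedirs(OUT_DIR)
    rospy.init_node("pick_point_logger")
    rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
    rospy.Subscriber("/teleop/pick_point", PointStamped, on_pick_point, queue_size=10)
    rospy.spin()
```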

1 Like

@mjsobrep thanks!

Pretty cool stuff. Getting WebRTC set up and working well is a colossal pain, so having it baked in is pretty cool.

I know that there is a lot that can be done from the topic stream system with the API. How much can be done with the API and the WebRTC stream? How well could I integrate this into my system if I get tired of handling my own WebRTC stack?

The joystick is clearly very low latency. What is the connection methodology to the robot there?

We have built our WebRTC implementation to have built-in bandwidth tuning, automatic retry on failures, the ability to send the subset of topic data you need in real time over WebRTC, multiple cameras that you can swap between so you can use them on limited bandwidth, and super-stable infrastructure for transport. In addition, it drops back to cloud-based image streaming if many people want to watch, so that it doesn't increase the CPU load.

All of the joystick and other commands run through the API and, in parallel, over WebRTC data channels, and are then de-duplicated on the other side, so you get guaranteed delivery but also low latency.
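
The de-duplication idea is easy to approximate on the receiving side with a small seen-ID cache. This is only an illustration of the concept, not Freedom's actual implementation; the JSON-with-id message layout and the topic names are made up:

```python
#!/usr/bin/env python
# Rough illustration of "send over two channels, de-duplicate on arrival":
# the same command arrives on two topics and is executed only once.
import json
from collections import deque
import rospy
from std_msgs.msg import String

seen = set()
order = deque()    # remember insertion order so the cache stays bounded
MAX_CACHE = 1000

def handle_once(msg):
    cmd = json.loads(msg.data)       # e.g. {"id": "42", "type": "drive", ...}
    if cmd["id"] in seen:
        return                       # duplicate from the slower channel -- drop it
    seen.add(cmd["id"])
    order.append(cmd["id"])
    if len(order) > MAX_CACHE:
        seen.discard(order.popleft())
    rospy.loginfo("executing command %s", cmd["id"])

if __name__ == "__main__":
    rospy.init_node("command_deduplicator")
    rospy.Subscriber("/commands/from_api", String, handle_once, queue_size=10)
    rospy.Subscriber("/commands/from_webrtc", String, handle_once, queue_size=10)
    rospy.spin()
```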

What data are you wanting to have come over in real-time? Would love to understand your use case to help us iterate and tune :slight_smile:

1 Like

Oh interesting. So you have a media server that sits in the middle of WebRTC streams when needed? I hadn't tried multiple users at once, but that would definitely be neat.

Ahh, very neat. From the perspective of the API, is that transparent?

Essentially I have a small teleop humanoid on a telepresence base (https://youtu.be/OHybatsjzog). I have my own stack (https://github.com/Rehab-Robotics-Lab/LilFloSystem). But there are still a few connectivity bugs that I just don't feel like fixing. @Achllle and I had discussed porting my entire back end over to Freedom. I'm in academia, so not sure that makes sense long term financially, but it would be a cool exercise.

1 Like

Glad Freedom is making development easier for you, @mrunaljsarvaiya! To your questions:

Is there support to send commands to control a manipulator through Freedom's UI?

Yes, the web app automatically recognizes when a joystick is connected and pipes through Twist commands, which you can interpret however you want on the robot side. We're also working on piping through the entire joy message so you can customize and use all the buttons. When you do this, make sure that, just as with a mobile base, there are timeouts on velocity commands on the robot side in case the internet connection drops.
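
A minimal rospy watchdog along those lines, assuming placeholder topic names (the same pattern applies to arm jog commands): if no command arrives within the timeout, publish zero velocity.

```python
#!/usr/bin/env python
# Safety watchdog sketch: if no velocity command has arrived within TIMEOUT
# (e.g. the internet connection dropped mid-teleop), publish zero velocity.
import rospy
from geometry_msgs.msg import Twist

TIMEOUT = 0.5  # seconds without a command before we stop the robot

last_cmd_time = None

def on_cmd(msg):
    global last_cmd_time
    last_cmd_time = rospy.Time.now()
    safe_pub.publish(msg)  # pass the command straight through

def watchdog(_event):
    if last_cmd_time is None:
        return
    if (rospy.Time.now() - last_cmd_time).to_sec() > TIMEOUT:
        safe_pub.publish(Twist())  # all zeros: stop

if __name__ == "__main__":
    rospy.init_node("velocity_watchdog")
    safe_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/teleop/cmd_vel", Twist, on_cmd, queue_size=1)
    rospy.Timer(rospy.Duration(0.1), watchdog)
    rospy.spin()
```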

Can the costmaps generated by the ROS navigation stack be layered onto the 2D map shown in the video above?

In Settings under the Environment tab, you can add additional maps, so you can add in a global and a local costmap. When you do so, make sure you set the bandwidth on the costmaps to at least 1 Hz or so, so that any changes are reflected in the app.
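
If your costmap is latched or only published on change, a tiny re-publisher that echoes the latest grid at a steady ~1 Hz keeps the remote view current. A rospy sketch; the input topic is the usual move_base default, but check your own configuration:

```python
#!/usr/bin/env python
# Hypothetical re-publisher: re-send the latest costmap at ~1 Hz so the remote
# map view stays current even if the nav stack only publishes on change.
import rospy
from nav_msgs.msg import OccupancyGrid

latest = None

def on_costmap(msg):
    global latest
    latest = msg

def republish(_event):
    if latest is not None:
        pub.publish(latest)

if __name__ == "__main__":
    rospy.init_node("costmap_republisher")
    pub = rospy.Publisher("/costmap_for_teleop", OccupancyGrid, queue_size=1, latch=True)
    rospy.Subscriber("/move_base/local_costmap/costmap", OccupancyGrid, on_costmap, queue_size=1)
    rospy.Timer(rospy.Duration(1.0), republish)
    rospy.spin()
```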

1 Like

Looks like an awesome tool!

It seems this is targeted mostly towards AMRs and mobile bases - would this apply to a large piece of equipment like an agricultural robot with an arm?

1 Like

@mjsobrep Good questions! We use a combination of STUN (for NAT traversal) and TURN (relay/proxy) servers, which identify the most efficient path between your robot and computer. When you have local connectivity, it is directly p2p. When a direct connection isn't possible, it goes through the TURN proxy, which is very lightweight.
For the API: currently, you will want to use our app for controlling it. The connections are clean, but WebRTC has a lot of edge cases around connecting, reconnecting, setting bandwidth, etc., which the browser code also needs to handle correctly. We will be working to make it more transparent over time!
We can help you test it out and would love to understand how we can improve it.

Also, for Academia, we can do educational pricing that makes it work for you in the long term - let me know.

1 Like

@bcontino Would love to hear more about your robot - are you saying it’s a fixed robot used in agriculture?
We’ve seen uses for our platform beyond what we could’ve imagined. I’ve seen it used for streaming IoT data from hundreds of sensors and for massive industrial robots. The more complex the robot, the higher the need for a tool that can monitor all systems and provide quick introspection when a component goes down.
Specifically for Pilot, let's imagine an autonomous robot in the field picking grapes. At one point, you get a smart alert from our platform telling you that your Cartesian or free-space plan failed because of a controller tolerance issue. You inspect and see that the arm is bumping into a wire or branch that wasn't picked up by the sensors. With Pilot, you can take manual control of the robot for this edge case and un-stick it remotely using small motions. Once it's cleared, you trigger autonomous execution again using the injectable commands.
Even though remotely operating a robot arm at the lower levels might not be feasible for continuous execution, it does allow you to take over in edge-case scenarios where human-level intelligence is required.
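
One simple way to wire that "take over, then hand back" flow on the robot side is a small command mux: a mode topic decides whether autonomous or teleop velocity commands reach the base. Everything here (topic names, mode strings) is made up for illustration; packages like twist_mux do this more robustly if you want a ready-made solution.

```python
#!/usr/bin/env python
# Sketch of a minimal command mux for switching between autonomy and teleop.
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

mode = "auto"  # or "teleop"

def on_mode(msg):
    global mode
    mode = msg.data
    rospy.loginfo("control mode set to %s", mode)

def make_forwarder(source_mode):
    def forward(msg):
        if mode == source_mode:
            pub.publish(msg)
    return forward

if __name__ == "__main__":
    rospy.init_node("cmd_vel_mux")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/control_mode", String, on_mode, queue_size=1)
    rospy.Subscriber("/auto/cmd_vel", Twist, make_forwarder("auto"), queue_size=1)
    rospy.Subscriber("/teleop/cmd_vel", Twist, make_forwarder("teleop"), queue_size=1)
    rospy.spin()
```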

1 Like

Not a real expert here as I'm still trying to learn… but it does look useful. I'll try to learn how to operate it. May I ask you questions if anything comes up? Thanks again!

THANK YOU!

As soon as my robot is finished I'm going to give this a shot. I only performed the first test of the motors this morning, so I'm not quite ready yet. This particular bot is just a household bot, but I see a better use case for the one I'm building at work (if I'm ever allowed to go back to work, that is).

So, thank you. This looks awesome!

1 Like

Of course - you can ask questions on our app if they are about our product or here if they are about Teleoperation for ROS. Let us know how we can improve and help!

@Spyder - let us know when you get this up and running! We’d love to see a picture or robot video :slight_smile: Who doesn’t like a good robot video :slight_smile:

1 Like

The last video I took is still from the preliminary testing phase. Basically testing functionality via the computer, not even implementing ROS yet. Just using Python to make the motors do things to make sure I didn't break anything while I bolted it all together.

The shell is an old Omnibot 2000 that I gutted and printed a bunch of new parts for, like the riser and the left arm, which used to be nothing but a dummy arm.

He's got a Jetson Nano in the head as the "main" computer, a Pi 3B for the fingers and wrists, another Pi 3B for the arms and head, a third Pi 3B for the voice, and an Arduino for the ultrasonic sensors (of which only one is connected so far). I still have yet to install the LIDAR.

I did get proper batteries for it after this video was taken, but quickly discovered I'd overestimated the capacity of them. The 4-inch wheels @ 313 RPM come out to approximately 16.14 MPH, which, I found, is FAR too fast for my living room. I now have a pair of 106 RPM motors that I'm about to dread installing.

It’s embarrassing, but, here’s what I’ve got so far…

2 Likes