Thanks for the kind words and great questions, @David_Cardozo!
> Are you able to track video streaming metrics (such as latency, and how many seconds it takes to set up a P2P connection)?
Latency is shown in the heads-up display. Setting up a connection takes a few seconds as the shortest possible path between devices is negotiated.
> Can you send waypoints on the video, e.g. trigger certain actions based on the position of a click on the video?
Yes, right now we pipe through the location where you clicked, which you can convert to a waypoint on the robot. In the video that @sjhansen3 posted, you can see how clicking on the screen makes the Fetch robot move its pan-tilt head to look around.
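To make that concrete, here is a minimal sketch of what the robot-side conversion could look like. It assumes you receive the click as normalized (x, y) image coordinates and that you know the camera's fields of view; `HFOV_DEG`, `VFOV_DEG`, and the function name are hypothetical, not part of the platform's API.

```python
# Hypothetical sketch: convert a normalized click position on the video
# (x, y in [0, 1], origin at the top-left corner) into pan/tilt angle
# deltas for the robot's head, assuming known camera fields of view.
import math

HFOV_DEG = 60.0  # assumed horizontal field of view of the camera
VFOV_DEG = 40.0  # assumed vertical field of view

def click_to_pan_tilt(x: float, y: float) -> tuple[float, float]:
    """Map a click at (x, y) to (pan, tilt) deltas in radians.

    A click at the image center (0.5, 0.5) means "don't move";
    a click at the right edge pans by half the horizontal FOV.
    """
    pan = math.radians((x - 0.5) * HFOV_DEG)   # positive = pan right
    tilt = math.radians((0.5 - y) * VFOV_DEG)  # positive = tilt up
    return pan, tilt
```

You would then feed those deltas into whatever head controller your robot exposes.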
> Do you have any tool for doing local path planning?
Right now, local path planning is done on the robot. We don't do the planning ourselves, so users can use the navigation stack of their choice (or use it as a click-to-point feature, or even feed it into their ML pipeline). If you have ideas about what you'd like to see in such a feature, let me know!
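The "nav stack of your choice" pattern boils down to a thin adapter: the platform delivers a clicked waypoint, and you forward it to your own planner. A minimal sketch, where the `Goal` type and the `send_goal` callback are hypothetical stand-ins for your stack's interface (e.g. a move_base action client):

```python
# Sketch of forwarding a click-to-point waypoint to a user-chosen
# navigation stack. Goal and send_goal are placeholders for your own
# planner's API; nothing here is platform-defined.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    x: float            # meters, in the map frame
    y: float
    frame: str = "map"  # assumed reference frame name

def on_waypoint(x: float, y: float, send_goal: Callable[[Goal], None]) -> None:
    """Called when a clicked waypoint arrives; hands it to the planner."""
    send_goal(Goal(x, y))

# Usage: here we just collect goals instead of driving a real planner.
received: list[Goal] = []
on_waypoint(1.5, -0.5, received.append)
```

The same callback could just as easily log the point for an ML pipeline instead of navigating to it.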
> Any idea on how I could put custom routes on the maps view?
That’s a feature currently in beta; more on this soon!
> How computationally intensive is the video transmission on the robot side? Which parameters do I have control over for video transmission?
There are two ways you can transmit video: 1) a low-FPS JPEG stream, which is fine for monitoring but not recommended for teleoperation, or 2) WebRTC (peer-to-peer video streaming), where you have full control to tune the transmission. You can specify the resolution, frame rate, bandwidth thresholds, etc. Depending on the configuration it will consume more resources, but with the right settings you can run a WebRTC connection even in low-compute environments (like a Raspberry Pi 3).
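As a rough illustration of how those parameters trade off, here is a hypothetical helper that picks the best (resolution, fps) profile fitting under a bandwidth budget. The profile table and the 0.1 bits-per-pixel compression factor are illustrative assumptions, not platform constants.

```python
# Hypothetical WebRTC tuning helper: choose the highest-quality video
# profile whose rough bitrate estimate fits a bandwidth threshold.
PROFILES = [  # (width, height, fps), best first
    (1280, 720, 30),
    (640, 480, 30),
    (640, 480, 15),
    (320, 240, 15),  # low-compute fallback, e.g. a Raspberry Pi 3
]

BITS_PER_PIXEL = 0.1  # assumed compression efficiency for encoded video

def estimated_kbps(width: int, height: int, fps: int) -> float:
    """Crude bitrate estimate in kilobits per second."""
    return width * height * fps * BITS_PER_PIXEL / 1000.0

def pick_profile(max_kbps: float) -> tuple[int, int, int]:
    """Return the best profile within budget, else the smallest one."""
    for profile in PROFILES:
        if estimated_kbps(*profile) <= max_kbps:
            return profile
    return PROFILES[-1]
```

In practice you would map the chosen profile onto whatever resolution/frame-rate/bandwidth knobs your WebRTC configuration exposes.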
Let me know if that helps.