Let's do a quick RoboRant

Quick brainstorming here:

  1. Imagine you’re about to build a robot that would automate sidewalk cleaning at Disneyland.
    What would be the single most annoying thing to get through to do that?

  2. Imagine you want a robot that would stream video to your VR goggles and be mobile, so you could, for example, drive around the office and spy on your coworkers. What would be the three most annoying things to get through to do that?


My reply, based on the fact that I personally already have a mobile base to use:

  1. Finding an off-the-shelf (OTS) cleaning/sweeping effector, or a partner who makes one, so I could focus on the implementation itself rather than reinventing something that is already done well.

  2. One: Finding non-proprietary software for streaming a 360° camera to a VR headset over the Internet.
    Two: Most probably building that software myself, because I won’t find it (sic!).
    Three: Setting up the robot’s web interface to work over the Internet so I could access it. (I could use Formant or Freedom Robotics to do that, but most probably the camera-to-VR software won’t allow me.)
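As a starting point for the web-interface point, the robot side can be a plain HTTP endpoint; here is a minimal sketch using only Python's standard library (the status fields are hypothetical). Making it reachable over the Internet is then a separate tunneling/relay problem, which is exactly where services like Formant or Freedom Robotics come in.

```python
# Minimal sketch of a robot status endpoint (hypothetical payload fields),
# standard library only. This only serves on localhost; exposing it over the
# Internet would still need a tunnel or relay service in front of it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RobotStatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical status payload; a real robot would report live telemetry.
        body = json.dumps({"battery_pct": 87, "state": "idle"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), RobotStatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url) as resp:
    status = json.loads(resp.read())
print(status["state"])  # -> idle
server.shutdown()
```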

What fun brainstorming topics! (-:

  1. One annoying thing would be dealing with the wide variety of things a sidewalk cleaner has to pick up at a place like Disney World. In addition to cleaning dirt and small debris, how do you deal with picking up things like a Mickey-ears hat or a popcorn bucket that were left behind? Or do you just detect and avoid those larger objects and let humans or another robot deal with them?

a. Smoothly streaming the images to your headset at a rate that wouldn’t make you sick from jitter or lag (assuming there is no out-of-the-box solution and you have to build that feature yourself).
b. Finding a VR headset with an open software ecosystem.
c. Since I suspect finding a usable 360° camera would be painful, I would probably first try stitching together images from a set of cameras, but that has its own pitfalls.
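On the stitching point: if the camera rig is rigidly mounted and calibrated, the fast path is a fixed-overlap stitch. A minimal sketch of that idea (the overlap width, frame sizes, and colors below are made-up numbers, not from any real rig); the pitfalls live in the general case, which needs feature matching, warping, and exposure blending:

```python
# Naive panorama stitch for a fixed multi-camera rig, assuming the horizontal
# overlap between neighboring cameras is known from calibration (here 32 px).
import numpy as np

def stitch_fixed_overlap(frames, overlap):
    """Concatenate frames left-to-right, linearly cross-fading each overlap."""
    pano = frames[0].astype(np.float32)
    for frame in frames[1:]:
        frame = frame.astype(np.float32)
        # Linear alpha ramp across the overlapping columns (0 -> 1 left to right).
        alpha = np.linspace(0.0, 1.0, overlap)[None, :, None]
        blended = pano[:, -overlap:] * (1 - alpha) + frame[:, :overlap] * alpha
        pano = np.concatenate(
            [pano[:, :-overlap], blended, frame[:, overlap:]], axis=1
        )
    return pano.astype(np.uint8)

# Three synthetic 120x160 RGB frames standing in for the camera set.
frames = [np.full((120, 160, 3), c, dtype=np.uint8) for c in (40, 120, 200)]
pano = stitch_fixed_overlap(frames, overlap=32)
print(pano.shape)  # each join reuses 32 columns: 3*160 - 2*32 = 416 wide
```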


Designing places to charge and empty waste that guests wouldn’t be able to see, hear, or smell.

I’d say:

Re. 1.: Working through Disney’s insane bureaucracy to get their buy-in. Their accelerator may help with that, but it’s still difficult.

Of course, I’d use Transitive to create the robot-to-web data connection and stream the video. I actually don’t think item 2 would be so hard. I’m assuming there are already good fisheye-image-to-VR renderers that will handle the “in-image” pan and tilt. I think that’s the only thing that needs to be smooth to avoid getting sick. Since the camera on the robot most likely won’t pan/tilt, there is no need for ultra-low latency, and 200 ms should be fine for remote teleop.
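The 200 ms claim can be sanity-checked with a back-of-envelope latency budget; all the component numbers below are illustrative assumptions, not measurements of any real pipeline:

```python
# Back-of-envelope glass-to-glass latency budget for the teleop video stream.
# All numbers are illustrative assumptions, not measurements.
budget_ms = {
    "capture": 17,   # ~1 frame at 60 fps
    "encode": 15,    # hardware video encoder
    "network": 80,   # Wi-Fi hop plus Internet transit
    "decode": 10,
    "render": 17,    # ~1 frame at 60 fps
}
glass_to_glass = sum(budget_ms.values())
print(glass_to_glass)  # 139 ms, under the ~200 ms teleop target

# In-headset pan/tilt just re-renders the already-received fisheye frame
# locally, so it only pays the render cost, not the network path.
pan_tilt_ms = budget_ms["render"]
print(pan_tilt_ms)  # 17 ms, comfortably smooth
```

This is the reason the split works: only the local re-render has to be fast, while the robot-to-headset path can tolerate an order of magnitude more delay.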
