Yes, it would be nice if the robot were fully autonomous. But actually I think a robot like this one would be teleoperated, that is, moved with a joystick. So there is a human driver using the vision system remotely. Still, there would be some lag, and human drivers are not perfect. I think it might be good enough that if the robot detected an imminent collision, it would simply apply the brakes hard and stop. The human driver would then figure out what to do.
I’m using the common HC-SR04 as a last-resort collision sensor. In theory my robot should never crash, as there are other sensors to prevent this; the HC-SR04 is a backup. Inside the base controller there is a loop that runs once for every twist message. The base controller looks at the distance reading from the ultrasonic sensors and decides if it “wants to” execute the twist message. This decision is made completely outside of ROS. The point is that we should only detect an obstacle if there is a failure in the ROS-based system, so the base controller, the process that actually commands the traction motors, does the “pinging” itself and will refuse to power into a fixed object.
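A minimal sketch of that “veto” logic, assuming the base controller has already read a distance from the HC-SR04 (the function name and threshold are illustrative, not from any real API):

```python
STOP_DISTANCE_CM = 30.0  # hypothetical hard-stop threshold

def veto_twist(linear_x: float, sonar_distance_cm: float) -> float:
    """Return the linear velocity the motors are actually allowed to use.

    Forward motion is refused when the ultrasonic reading says a fixed
    object is too close; reversing away from it is still permitted.
    """
    if linear_x > 0.0 and sonar_distance_cm < STOP_DISTANCE_CM:
        return 0.0  # refuse to power into the obstacle
    return linear_x
```

The key design point is that this check lives in the motor-commanding process itself, so a bug or crash anywhere in the ROS stack cannot bypass it.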
You may or may not want to use this same design, but if operating in a crowd, and the robot owner is not within arm’s length with a hand ready to punch an “e-stop” button, you may need TWO fail-safe checks, not one. In other words, if the ROS planner says “go,” it also must have an OK from TWO independent non-ROS-based systems. Perhaps the second one is a mechanical switch that detects physical contact between a bumper bar and the obstacle. For your robot I’m thinking of a ring that encircles the robot and is held by springs; if a spring moves, say, 1/4 inch, it means that your vision system, the human driver, and the ultrasonic sensors have all failed to detect the obstacle. So this system uses mechanical switches to disconnect the motors from power via a mechanical relay (no software in the loop).
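The interlock logic amounts to a simple AND of independent approvals. This sketch only models the decision; in the real design the bumper-ring relay is mechanical and cuts motor power with no software in the loop (all names here are illustrative):

```python
def motion_permitted(planner_go: bool, sonar_ok: bool, bumper_ok: bool) -> bool:
    """The planner's 'go' is necessary but never sufficient:
    both independent non-ROS checks must also agree."""
    return planner_go and sonar_ok and bumper_ok
```

Because any single False blocks motion, a failure in the planner, the sonar, or the bumper circuit each independently stops the robot.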
Safety is hard. The easy task is to design the machine to be safe when it is operating as designed. The harder task, which is 100% required to operate remotely in a crowd, is that the machine remains safe even after unanticipated failure modes. As an example, I had an electrical fire last week in a prototype. Lithium batteries have high power density, and I ended up vaporizing a power cable. My design was not fail-safe, in that obviously there was a failure mode that could cause a fire.
So your anti-collision system must work even if there is a bug in the software and even if there is a mechanical fault. Vision is CLEARLY too complex to be unconditionally safe, but it could be a very good primary system given enough redundant backups.
Back to my work: I’d like to be able to use one camera, but I’m undecided. Getting 3D data from one camera requires very good motion estimation. I don’t think my IMU and odometry will be good enough, so I may need stereo vision. I think in your mixed indoor/outdoor use case, moving in a crowd, you will need stereo vision to get usable depth. Your obstacles are all moving, so you will need to snap the pair of images simultaneously, not sequentially.
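For reference, the depth you can recover from a stereo pair follows the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A small sketch (the numbers in the usage note are made up for illustration):

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole-model stereo depth: Z = f * B / d."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700-pixel focal length, a 10 cm baseline, and a measured disparity of 14 pixels, the obstacle is at 700 × 0.1 / 14 = 5 m. This is also why simultaneous capture matters: if the two images are taken at different times while the obstacle moves, the disparity, and therefore the depth, is wrong.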
Summary: I think stereo vision is a great primary sensor. But for remote operation you’d better have multiple independent backups that are each so simple they can’t fail; being truly independent, the chance of both failing at the same time is the product of the probabilities, a tiny number.
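To make the “product of the probabilities” point concrete, here is the arithmetic with made-up numbers (the failure rates below are purely illustrative, not measured):

```python
# Assumed per-demand failure probabilities for each independent backup
p_sonar = 1e-3   # ultrasonic check fails to stop the robot
p_bumper = 1e-3  # mechanical bumper relay fails to cut power

# If the two systems are truly independent, both must fail together:
p_both = p_sonar * p_bumper  # 1e-6, three orders of magnitude safer
```

The caveat is the word “independent”: a shared power rail, shared wiring, or shared microcontroller creates a common failure mode, and then the product rule no longer applies.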
Telepresence seems easy at first: basically it is a remote-control car with a webcam glued to the top. But the problem is that telepresence, by definition, means the operator is not present. I think that implies extreme reliability and safety.