Does anyone know of existing work on localizing a VR/MR Quest headset with a robot that has cameras and LiDAR? I know it's doable with a HoloLens, but I don't have one on hand. I was thinking I could run ICP between the robot's LiDAR and the headset's depth feed, but the Quest doesn't seem to expose its point cloud for streaming use. I've also thought about strapping a Quest Pro controller to my robot, though I imagine getting good accuracy that way would be harder.
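For reference, if the Quest ever did expose its depth cloud, the ICP step I have in mind would look roughly like this Open3D sketch. Everything here is hypothetical: the input files are placeholders, the correspondence distance would need tuning, and the identity initial guess would in practice have to come from somewhere else, since ICP only converges from a rough initial alignment:

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: Nx3 point arrays. The Quest doesn't currently
# stream its depth point cloud, so headset_depth.npy is a placeholder.
lidar_pts = np.load("lidar_scan.npy")       # robot LiDAR scan, map frame
headset_pts = np.load("headset_depth.npy")  # headset depth points, headset frame

target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(lidar_pts))
source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(headset_pts))

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.1,  # meters; tune to sensor noise
    init=np.eye(4),                   # placeholder; needs a rough initial guess
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
)
T_map_headset = result.transformation  # 4x4: headset frame -> map frame
print(result.fitness, result.inlier_rmse)
```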
Check out my project AR-RViz. It doesn't use a localization server; instead, it maps the two reference frames together using a known world pose, from either a QR code or manual placement.
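For anyone wondering what that frame mapping boils down to: once the same known pose (say, a QR code) is expressed in both the robot's map frame and the headset's world frame, the alignment is a single matrix computation. A minimal sketch (the function and frame names here are mine, not AR-RViz's API):

```python
import numpy as np

def align_frames(T_map_marker: np.ndarray, T_hmd_marker: np.ndarray) -> np.ndarray:
    """Return the 4x4 transform taking headset-world coordinates into the
    robot map frame, given the same marker's pose observed in each frame."""
    return T_map_marker @ np.linalg.inv(T_hmd_marker)

# Usage:
#   T_map_hmd = align_frames(T_map_marker, T_hmd_marker)
# then any homogeneous headset point p_hmd maps into the robot's world as
#   p_map = T_map_hmd @ p_hmd
```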
Quest 3 and Apple Vision Pro support are under active development, and if you'd like to help out, let me know.
This looks great! Do you have any leads on getting the Quest working? I've noticed that Meta doesn't seem to expose raw sensor data in many cases, so I'm trying to figure out how localization would work. (I guess you could set up the robot base as a table in spatial setup and use that for an initial pose, but that would be a bit janky.) I'd be happy to help where I can.
I have the Quest partially working, but I never got to do much testing, and the controls are all a bit limited. Right now the QR code isn't working, so you just have to select any world frame you know. Usually the map frame or the base frame of the robot is the easiest to select manually using the headset. Similar to this
This requires (1) that the robot's localization works well and (2) that the Quest's localization stays consistent.
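Once both hold, the alignment only needs to be established once and can then be broadcast as a static transform. This isn't AR-RViz's internal API, just a rough ROS 2 sketch with made-up frame names and a placeholder pose:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import StaticTransformBroadcaster

class HeadsetAnchor(Node):
    """Publishes a one-time static transform from the robot map frame
    to the headset's world frame (frame names are hypothetical)."""

    def __init__(self):
        super().__init__("headset_anchor")
        self._broadcaster = StaticTransformBroadcaster(self)
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = "map"         # robot localization frame
        t.child_frame_id = "quest_world"  # headset's world origin
        # Pose from the manual selection / QR alignment step (placeholder):
        t.transform.translation.x = 0.0
        t.transform.translation.y = 0.0
        t.transform.translation.z = 0.0
        t.transform.rotation.w = 1.0      # identity rotation as placeholder
        self._broadcaster.sendTransform(t)

def main():
    rclpy.init()
    rclpy.spin(HeadsetAnchor())

if __name__ == "__main__":
    main()
```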
To make communication easier, I've created a Slack workspace, in case you or anyone else is interested in working on this.