Interesting topics for student assignments

@smac, the visual fiducial-based approach that Tully linked continuously refines its approach to the dock for as long as it can see the marker(s). It progresses through multiple states along the way, with the initial “look” at the large fiducials being used to calculate an “approach point” that is intended to be orthogonal to the dock. Anecdotally, we found this to be important for diff-drive robots; spinning and driving towards this approach point seems a bit ugly/hacky at first, but we found that it improves docking reliability by ensuring that the final “guided approach” starts basically orthogonal to the dock.
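In case it helps anyone prototyping something similar, here is a rough sketch (not the autodock package’s actual code) of how an approach point in front of the dock can be computed from a dock pose estimate. The function name, offset distance, and the assumption that `dock_yaw` is the heading of the dock’s outward normal are all just placeholders for illustration:

```python
import math

def approach_point(dock_x, dock_y, dock_yaw, offset=0.8):
    """Place an approach point 'offset' metres out along the dock's
    outward normal (dock_yaw, radians), so the final guided approach
    starts roughly orthogonal to the dock face. Illustrative only."""
    ax = dock_x + offset * math.cos(dock_yaw)
    ay = dock_y + offset * math.sin(dock_yaw)
    # Heading to face back toward the dock once the approach point is reached.
    approach_heading = math.atan2(dock_y - ay, dock_x - ax)
    return ax, ay, approach_heading

# Example: dock at (2.0, 1.0) with its outward normal along +x
print(approach_point(2.0, 1.0, 0.0))  # -> (2.8, 1.0, pi)
```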

Once the robot has driven to the orthogonal “approach point,” the algorithm spins back towards the dock and approaches it head-on, continually refining the dock position estimate from TF frame estimates of the “small” fiducial. That marker is intended to be placed at the “eye level” of the robot’s camera so that it remains visible for as long as possible during the final approach to the dock.
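The refinement step doesn’t have to be anything fancy; for example, an exponential moving average over successive marker sightings would look something like the snippet below. This is illustrative only and not the package’s actual filter:

```python
class DockEstimate:
    """Keep a smoothed estimate of the dock position in some fixed frame,
    updated each time the small fiducial is sighted (illustrative only)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # weight given to each new sighting
        self.x = self.y = None    # no estimate until the first sighting

    def update(self, marker_x, marker_y):
        if self.x is None:
            self.x, self.y = marker_x, marker_y
        else:
            self.x += self.alpha * (marker_x - self.x)
            self.y += self.alpha * (marker_y - self.y)
        return self.x, self.y
```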

The “marker observer” functionality is provided by a separate node, which continually publishes TF frames for the detected features (aruco_detect in the example simulation provided in the repo). The autodock node itself just subscribes to TF frames and steers based on them.
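For anyone who wants to play with that pattern in isolation, here is a minimal ROS 2 (rclpy/tf2) sketch of “look up the marker transform each tick and steer toward it.” It is not the repo’s code; the frame names (`base_link`, `fiducial_0`), topic name, gains, and thresholds are all made-up placeholders:

```python
import math
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from tf2_ros import Buffer, TransformListener, \
    LookupException, ConnectivityException, ExtrapolationException

class DockSteer(Node):
    def __init__(self):
        super().__init__('dock_steer')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.step)  # 10 Hz control tick

    def step(self):
        try:
            # Where is the marker frame relative to the robot right now?
            t = self.tf_buffer.lookup_transform(
                'base_link', 'fiducial_0', rclpy.time.Time())
        except (LookupException, ConnectivityException, ExtrapolationException):
            return  # no usable marker sighting yet; do nothing this tick
        dx = t.transform.translation.x
        dy = t.transform.translation.y
        cmd = Twist()
        cmd.angular.z = 1.5 * math.atan2(dy, dx)   # turn toward the marker
        cmd.linear.x = 0.1 if dx > 0.05 else 0.0   # creep forward until close
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(DockSteer())

if __name__ == '__main__':
    main()
```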
