Fiducial Marker Based Localization System - Package Announcement

Hello Fellow ROS Users and Developers,

We are excited to announce our fiducial-based localization system, fiducials.

We love current LIDAR-based localization methods; however, they require an expensive LIDAR for good results. LIDAR methods are also subject to the “kidnapped robot problem”: the inability to unambiguously localize ab initio in spaces that have a similar layout (e.g. if you move your robot to one of many similar offices, it will get lost). Common LIDAR localization packages like amcl need to be initialized with a pose estimate on every run, something that can be difficult to do accurately. LIDAR-based methods can also be difficult to tune and set up.

Our fiducial localization system enables a robot with a camera to localize robustly and unambiguously using pre-placed fiducial markers. The node simultaneously maps and localizes with these markers, and it is robust against the movement of individual fiducials. This robustness comes from continuously recomputing both the map of fiducials and the error associated with each fiducial; the reliability of each fiducial is then derived from its estimated error. The required sensor is inexpensive and the method is relatively simple to set up. We use the Raspberry Pi Camera V2 ($25), but any calibrated camera with a ROS driver will work.
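To give a flavour of the weighting idea (this is a simplified sketch, not the node’s actual code; the function and variable names are made up for illustration), each visible fiducial can be treated as an independent pose observation weighted by the inverse of its estimated error:

```python
import numpy as np

def fuse_pose_estimates(poses, sigmas):
    """Fuse per-fiducial robot pose estimates (x, y, yaw) into one estimate.

    poses  : list of np.array([x, y, yaw]), one per visible fiducial
    sigmas : list of scalar error estimates (std. dev.) for those poses

    Fiducials with a small estimated error get a large weight; unreliable
    (recently moved or poorly observed) fiducials contribute little. Plain
    inverse-variance weighting, shown in 2D + yaw for clarity -- the real
    node works with full 6DOF transforms.
    """
    weights = np.array([1.0 / (s * s) for s in sigmas])
    weights /= weights.sum()
    poses = np.asarray(poses, dtype=float)
    # Average x and y directly; average yaw via its unit vector so the
    # result is safe around the +/- pi wrap-around.
    x, y = (weights[:, None] * poses[:, :2]).sum(axis=0)
    yaw = np.arctan2((weights * np.sin(poses[:, 2])).sum(),
                     (weights * np.cos(poses[:, 2])).sum())
    return np.array([x, y, yaw])
```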

Here is a screenshot of rviz visualizing the fiducial map:
[screenshot: fiducial map visualized in rviz]

This localization method may be used stand-alone, or it can complement more traditional LIDAR methods to provide unambiguous localization at all times, using a system like robot_localization.

For creating and detecting fiducial markers we use OpenCV’s ArUco module.
More about operation and usage can be found on the wiki page.
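For readers who have not used the ArUco module, detection and single-marker pose estimation in Python look roughly like this (OpenCV-contrib, pre-4.7 interface; the dictionary, marker size, and calibration values below are placeholders for illustration, not the package’s settings):

```python
import cv2
import numpy as np

# Intrinsics would normally come from camera_info / a calibration file;
# these numbers are placeholders for illustration only.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
if ids is not None:
    # 0.14 m edge length, matching the 14 cm fiducials mentioned later in the thread.
    result = cv2.aruco.estimatePoseSingleMarkers(corners, 0.14,
                                                 camera_matrix, dist_coeffs)
    rvecs, tvecs = result[0], result[1]
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        print(marker_id, tvec.flatten())  # marker position in the camera frame
```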

Have an issue, or an idea for improvement? Open an issue or PR on the GitHub repo.

This package will be part of the robots that we will release via crowdfunding on Indiegogo at 1 minute past midnight EST on March 10th 2018 (less than 2 weeks from now).

The Ubiquity Robotics Team
https://ubiquityrobotics.com


Hi guys, looks like you’ve done some great work.

I haven’t had a chance to go through your repository, but you might be interested in some work I did a little while back (https://github.com/qutas/marker_localization). More of a quick hack, but my method uses a refining tree structure to get a precise camera pose estimate across the whole map.

Looking forward to a chance to see how you approached the problem!

Looks like a nice package! RViz markers are always a great addition.

Have you quantified the position accuracy with the Pi camera? Presumably it depends on how many markers are visible, but are we talking cm or mm here? One other thing: the convention for visible/invisible markers seems flipped to me - I would think that visible markers should be green in RViz, and out-of-frame ones red.

I can see a lot of use cases for this. I’m excited to use it for quickly calibrating a manipulator to an environment.


Cool to see others working on similar systems.

Taking a quick look at your repo, I don’t see a license file. What is the code licensed under?

For the Pi camera with 14 cm fiducials mounted on the ceiling and a ground-based robot, in a well-lit room we get position data that is good down to a couple of cm. The biggest problem is noise in the pose estimates of single fiducials, as a couple of pixels of noise can change the angular estimate wildly.

We have a method in the works that, instead of computing individual marker pose estimates, takes a more sophisticated approach: a single pose estimate over all the detected marker vertices. We want to have the data set and testing to measure whether it is actually an improvement before merging it, though.
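Roughly, a joint estimate could look like the sketch below, using OpenCV’s generic solvePnP over the stacked corners of every detected marker. This is only an illustration of the idea, not the branch we have in the works; the helper name and data layout are made up.

```python
import cv2
import numpy as np

def camera_pose_from_all_corners(detections, marker_world_corners,
                                 camera_matrix, dist_coeffs):
    """Single PnP solve over every detected marker vertex.

    detections           : dict marker_id -> (4, 2) detected image corners
    marker_world_corners : dict marker_id -> (4, 3) corner positions in the
                           map frame (taken from the current fiducial map)
    Returns rvec, tvec of the map frame expressed in the camera frame.
    """
    object_points, image_points = [], []
    for marker_id, img_corners in detections.items():
        if marker_id not in marker_world_corners:
            continue  # marker not yet in the map
        object_points.append(marker_world_corners[marker_id])
        image_points.append(img_corners)

    object_points = np.concatenate(object_points).astype(np.float64)
    image_points = np.concatenate(image_points).astype(np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP solve failed")
    return rvec, tvec
```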

Rohan

You were right about the license; I don’t get many people looking at the work, so it was never much of a concern. I just updated it with the GNU General Public License, which seems about right, although the commercial side was never really my forte.

Either way, if you’re interested in my work, feel free to contact me directly. I don’t want to hijack your thread any more than I have.

I had a bit more of a look through your code, but it was a bit rushed, so forgive me if I missed it: do you make any assumptions about marker placement? I found that it really helped my estimates during the marker pose estimation step to have an option to align the marker with a plane.

As a lot of the use cases for your markers may be on a single ceiling of constant height, it’s relatively easy to project the marker estimate from the camera so that you take the pose at its intersection with the common plane. I’m not sure how the SLAM methods work with this, but it may be worth considering if you haven’t already!
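For what it’s worth, the plane projection I mean is just a ray-plane intersection. A minimal sketch, assuming a horizontal ceiling at a known height in the map frame (the names here are made up, and this is not something fiducials itself does):

```python
import numpy as np

def project_to_ceiling(marker_t_cam, ceiling_z, cam_origin_map, cam_rot_map):
    """Snap a marker's estimated position onto the plane z = ceiling_z.

    marker_t_cam   : (3,) noisy marker position in the camera frame
    cam_origin_map : (3,) camera position in the map frame
    cam_rot_map    : (3, 3) rotation taking camera-frame vectors to map frame

    Cast a ray from the camera through the estimated marker position and
    keep the point where it crosses the known ceiling plane; the depth
    component (usually the noisiest part of a single-marker estimate) is
    replaced by the plane constraint.
    """
    direction = cam_rot_map @ marker_t_cam        # ray direction in map frame
    if abs(direction[2]) < 1e-9:
        raise ValueError("Ray is parallel to the ceiling plane")
    scale = (ceiling_z - cam_origin_map[2]) / direction[2]
    return cam_origin_map + scale * direction
```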

We do not make any assumptions about marker placement. We have a “2d” mode that assumes that the robot is on a flat plane, and only moves in linear x, y and rotational z.

We used to use a system that made many more assumptions, such as a constant-height flat ceiling, but minor deviations from those assumptions made the pose estimates unstable. Moving to 6DOF solved this issue, and it also provides much more flexibility in marker placement (allowing sloped ceilings, and even markers on walls).

Yes, we have characterized the accuracy; as Rohan says, in normal conditions it’s good to a couple of cm. Our system dynamically computes the accuracy based on the accuracy of all the markers it can see and combines the poses from all the visible markers when making its estimate. If you increase the density of markers the accuracy will improve, but it is an n^0.5 process: the error shrinks roughly with the square root of the number of visible markers. Moving markers closer to the robot will also increase accuracy, as will increasing their size.

It’s important to note that the edges of a fiducial can be located, both in principle and in practice, to better than one pixel, by fitting lines to the edges of the ArUco marker square, a pattern that crosses many pixels. Good signal-to-noise and sensible lighting are important to get decent results here.
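(For reference, the ArUco detector already exposes corner-refinement options that do this kind of sub-pixel fitting; the snippet below shows the pre-4.7 Python field names, and the values are just examples, not our tuned settings.)

```python
import cv2

# Corner refinement settings for the ArUco detector (OpenCV 4.x field names;
# older releases expose a boolean doCornerRefinement flag instead).
params = cv2.aruco.DetectorParameters_create()
params.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_SUBPIX  # sub-pixel corner fit
# cv2.aruco.CORNER_REFINE_CONTOUR refines from the detected contour instead.
params.cornerRefinementWinSize = 5          # refinement window half-size (pixels)
params.cornerRefinementMaxIterations = 50
params.cornerRefinementMinAccuracy = 0.01

# params is then passed to cv2.aruco.detectMarkers(image, dictionary, parameters=params)
```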

The results that we are getting are good for most of the applications we use, but that doesn’t mean we aren’t going to continue improving it. The list of proposed improvements includes:

  • Estimate pose from all the fiducials simultaneously rather than independently computing pose and combining them (as already discussed by Rohan)
  • Automatically optimizing camera exposure parameters to maximize accuracy
  • Automatically optimizing aruco parameters to maximize accuracy given computational constraints (you can improve accuracy by doing more fitting computations at the expense of computer cycles)
  • Automatically refining the camera calibration using the large amount of data available from looking at many fiducials - each fiducial is a bit like a checkerboard (calibration board), so you should be able to get better results from driving around and looking at many fiducials than from the relatively small amount of data available from a normal camera calibration. This is really important for a whole host of reasons, not least that manufacturing variances between robots can be frictionlessly calibrated away without the user having to do anything other than navigate and drive. The improvement also spills over into other uses of the camera. (A rough sketch of this idea follows the list.)
  • and many many more…
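As a very rough illustration of the self-calibration idea in the last item above, accumulated marker detections can be fed to OpenCV’s generic calibrateCamera, with each fiducial treated as a tiny four-point calibration target. This is only a sketch under that assumption, not code from the package, and a real implementation would need to handle degenerate configurations and outliers:

```python
import cv2
import numpy as np

def refine_calibration(observations, marker_length, image_size):
    """Re-estimate camera intrinsics from accumulated marker detections.

    observations : list of (4, 2) arrays of detected image corners, one entry
                   per marker detection collected while driving around
    marker_length: fiducial edge length in metres
    image_size   : (width, height) of the camera image in pixels
    """
    half = marker_length / 2.0
    # Corner layout of a single marker in its own plane (z = 0), in the usual
    # ArUco order: top-left, top-right, bottom-right, bottom-left.
    marker_corners_3d = np.array([[-half,  half, 0.0],
                                  [ half,  half, 0.0],
                                  [ half, -half, 0.0],
                                  [-half, -half, 0.0]], dtype=np.float32)

    object_points = [marker_corners_3d] * len(observations)
    image_points = [obs.astype(np.float32) for obs in observations]

    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, camera_matrix, dist_coeffs
```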

We will of course continue to work on refining this package, but it’s always great to get input, ideas, and most importantly code commits from the broader community to make this software better.

First of all: great initiative, and I’d be happy to try this implementation.
I’ve had a short look through your code, so please correct me if I missed pieces.
Is it right that, once you’ve seen a marker, its position will not be updated again?
The fiducial marker SLAM problem strikes me as a perfect case for Graph SLAM. Is there any reason you are not using this?

Great question.

The positions are continuously updated based on the estimated error of the measurement, as well as the error estimate of the marker’s current position. The position error estimate is based on the estimated errors of previous measurements and on the errors of the other markers that are visible at the same time.
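A toy scalar illustration of that kind of update (this is the shape of the idea, not the node’s actual math): fuse a new measured marker position with the stored one according to their respective error estimates.

```python
def update_marker_estimate(stored_pos, stored_var, measured_pos, measured_var):
    """Blend a new measurement of a marker's position into the stored map entry.

    Both the stored estimate and the new measurement carry a variance; the
    fused estimate leans toward whichever is currently trusted more, and the
    stored variance shrinks with every consistent observation. Shown in 1D
    for clarity -- the real problem is a full 6DOF transform.
    """
    k = stored_var / (stored_var + measured_var)       # gain toward the measurement
    new_pos = stored_pos + k * (measured_pos - stored_pos)
    new_var = (1.0 - k) * stored_var
    return new_pos, new_var
```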

I believe that this approach is similar to many graph SLAM methods, except that we are not currently doing any global error minimization.

We would like to add bundle adjustment, or a similar global optimization system to both improve the mapping and even adjust the camera calibration to reduce the error in measurement, but we haven’t been able to put the time necessary into it yet. (We do accept PRs if you want to add this :slightly_smiling_face:)

Rohan
Ubiquity Robotics
ubiquityrobotics.com

How would one give the known pose of Aruco markers in an existing map and then use the fiducials localization system to localize the robot?

Background: We have an existing map (pgm) of our real-world environment. We would like to position specific Aruco markers in known locations in the real-world, record their poses relative to the map coordinates, and then somehow use the fiducials localization node to localize in the map.

Probably the best approach is to place the robot in a known location with a known pose (the easiest is the default starting location of 0,0,0), tell the algorithm that pose (or, if the robot is in the default position, do nothing), and allow it to build the map.

Guys, I am new to ROS and fiducial marker tracking, and I need some help. When tracking the fiducial, if I move the marker quickly, the 3D visualization lags and does not show the continuous movement of the marker; instead it jumps and only appears once the marker becomes static again. I would really appreciate it if you could help me solve this issue. Thanks.

Hi @aaryan

I would love to help, but please ask either by creating an issue here https://github.com/UbiquityRobotics/fiducials or on the Ubiquity Forums here https://forum.ubiquityrobotics.com/.

The ROS Discourse is for general announcements or discussion only.


Hey @rohbotics,

I was wondering if it would be possible to use fiducial_slam with multiple cameras?

Cheers,

Josh

Just wanted to chime in that I’ve wanted to do this as well! (Specifically, I wanted to make a low-cost fiducial-based AR version of laser tag.)

Funnily enough, one of our DepthAI customers reached out about the AprilTag support we’re building for DepthAI, and he’s apparently done a multi-camera VSLAM implementation here:

https://github.com/ptrmu/fiducial_vlam

So once we have AprilTags offloaded by DepthAI/megaAI, we’ll likely be directly compatible (plug and play) with this GitHub repo as well (old branch here… we’re optimizing it on the vector processors, so it will be higher-res and faster now).

Thoughts?

Thanks,
Brandon
Luxonis Embedded AI & CV