I wanted to share a repository I’ve written, based on some of my Ph.D. work and motivated by the fact that currently available generalized inverse kinematics solvers are fundamentally quite slow and unreliable. The novelty in this solver is that it uses third-order derivatives of the kinematic error function, which lets it converge 1-2 orders of magnitude faster, and more reliably, than the other algorithms I tested. When run in real-time, the previous solution gives a very good initial guess, and the algorithm converges in 5-6 microseconds with an effectively zero failure rate. The improvements were significant enough that I was able to reliably use this algorithm in real-time at 1 kHz to implement impedance control.
If you’re interested, check out the repository here: https://github.com/steffanlloyd/quik. The work has been published in IEEE-TRO, and a preprint is available here for the theoretical details. Feedback welcome!
This is really impressive.
IKFast – the typical go-to closed-form solver – has similar performance in most cases.
Very nice work 
Have you had any chance to benchmark against / compare with closed form solvers, such as IKFast and/or OPW kinematics?
Edit: your own readme gives a mean solve time of “21 μs” btw. That’s still impressive, but a bit slower than the 5-6 you write in your OP.
Sorry! I didn’t mean to mislead; those were different benchmarks. Full details are in the linked paper, but it’s 5-6 microseconds when the initial guess is good (e.g. average joint error within 0.1 rad, in the benchmarking I did). For example, if you’re running inverse kinematics in real-time, then your previous pose is a very good estimate of the next pose. The 21 microseconds was the average time to solve a randomly generated pose without an initial guess. This is on a 6-DOF robot, benchmarked on an i7 CPU from 2017.
I never benchmarked against any “automatic” closed-form solver generators, although I did compare against a closed-form solution for a 6-DOF robot that I wrote myself (for a KUKA KR6, in this case). It took 0.7 microseconds on average. My hand-written function is likely a bit more efficient than an automatically generated one would be. So, the closed-form computations are still a fair amount faster, but not too far off (depending on how good your initial guess is).
No need to apologise for anything. I was just curious about the difference in the numbers quoted.
I believe the biggest advantage optimisation-based solvers have is that they can work with ‘any’ kinematic configuration (within reason), not just the one they were written or generated for.
The paper already mentions this and mentions the main case where this is advantageous: calibrated mechanisms.
Again, great work and thanks for making this available.
How does QuIK relate to Levenberg-Marquardt with geodesic acceleration? The latter also requires explicitly computing second-order derivatives.
Would it be in principle possible to extend the underlying algorithm of QuIK to
- non-DH parametrizations, like specifying a full 6DoF transformation between each joint and
- to parallel kinematics?
I’ve never seen or looked into an inverse kinematics library that uses Levenberg-Marquardt with geodesic acceleration, but it would depend a lot on the implementation. The QuIK method is an application of Halley’s method, which uses third-order derivative information to improve the computation of the step. It is effectively a third-order extension of the second-order Levenberg-Marquardt (also called damped Newton) methods. The geodesic acceleration term (as I understand it) usually attempts to correct for curvature in the error surface, but it’s typically implemented through approximations (estimating curvature from subsequent Hessian evaluations, or through finite-difference calculations). The QuIK method computes this curvature exactly.
It’s a bit confusing because there are two classes of inverse kinematics methods: Jacobian-based methods and optimization-based methods. In Jacobian-based methods, the kinematic Jacobian plays a role similar to that of the Hessian in optimization methods, but the Jacobian is a first derivative whereas the optimization Hessian is second-order. The QuIK method computes the derivative of the kinematic Jacobian w.r.t. the joint angles; this is called the kinematic Hessian, and it is a third-order tensor (6 × DOF × DOF). It shouldn’t be confused with the optimization Hessian (DOF × DOF): it’s effectively one order of differentiation further. I explain this more in the paper; it’s a bit confusing when people say second-order or third-order because it can mean different things.
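To make the distinction concrete, here is a small Python sketch (not the QuIK library, which is C++) for a hypothetical planar 2R arm: the kinematic Jacobian is the first derivative of the forward kinematics, and the kinematic Hessian is its derivative in turn, a third-order tensor that here is approximated by finite differences. The link lengths and joint values are made up for illustration.

```python
import math

L1, L2 = 1.0, 0.5  # hypothetical link lengths for a planar 2R arm

def fk(q):
    """End-effector position [x, y] of the planar 2R arm."""
    c1, s1 = math.cos(q[0]), math.sin(q[0])
    c12, s12 = math.cos(q[0] + q[1]), math.sin(q[0] + q[1])
    return [L1 * c1 + L2 * c12, L1 * s1 + L2 * s12]

def jacobian(q):
    """Analytic kinematic Jacobian (2 x 2, position only)."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def kinematic_hessian(q, h=1e-6):
    """dJ/dq by central differences: a 2 x 2 x 2 tensor H[row][i][j]."""
    n = len(q)
    H = [[[0.0] * n for _ in range(n)] for _ in range(2)]
    for j in range(n):
        qp = list(q); qp[j] += h
        qm = list(q); qm[j] -= h
        Jp, Jm = jacobian(qp), jacobian(qm)
        for r in range(2):
            for i in range(n):
                H[r][i][j] = (Jp[r][i] - Jm[r][i]) / (2 * h)
    return H

q = [0.3, 0.7]
H = kinematic_hessian(q)
# The tensor is symmetric in its two joint indices: H[r][i][j] == H[r][j][i].
```

(QuIK computes this tensor analytically rather than by finite differences; the sketch only shows what object is meant by “kinematic Hessian” and its 6 × DOF × DOF shape in the full spatial case.)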
TLDR:
- Levenberg-Marquardt is a second-order method, and its geodesic acceleration corrections are usually approximations. Within the basin of convergence, the number of correct digits roughly doubles at each iteration; how much the acceleration terms help depends on how accurate they are.
- QuIK is a third-order method, and within the basin of convergence, the number of correct digits approximately triples at each step.
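The difference in convergence order is easy to see on a scalar toy problem. The following generic Python sketch (not from the QuIK codebase) compares a Newton step, which converges quadratically, with a Halley step, which converges cubically, on f(x) = x³ − 2; the starting point and tolerance are arbitrary choices.

```python
import math

def f(x):   return x ** 3 - 2.0
def df(x):  return 3.0 * x ** 2
def d2f(x): return 6.0 * x

def newton(x, tol=1e-13, max_iter=50):
    """Second-order method: error roughly squares each step."""
    for n in range(1, max_iter + 1):
        x = x - f(x) / df(x)
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter

def halley(x, tol=1e-13, max_iter=50):
    """Third-order method: uses f'' as well, error roughly cubes each step."""
    for n in range(1, max_iter + 1):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x = x - 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter

root_n, iters_n = newton(10.0)
root_h, iters_h = halley(10.0)
# From the same starting point, Halley needs noticeably fewer iterations.
```

QuIK applies the same idea to the vector-valued kinematic error, where the extra derivative is the kinematic Hessian described above.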
For your questions:
- Non-DH parametrizations are possible; all you would need to do is re-implement the forward kinematics function in the quik::Robot class. The computations of the kinematic Jacobian and Hessian all follow from that. The downside, however, is that the DH-based forward kinematics is very efficient, and since it needs to be called many times during an inverse kinematics call, replacing it would slow down the function. It could be worthwhile to do nonetheless.
- Parallel kinematics: definitely possible, but more work would be needed. I would add that in simple cases, parallel kinematics problems can be reduced to multiple serial inverse kinematics calls, or turned into a single serial kinematics problem by reframing. That’s not true in general, however.
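For reference, the standard-DH forward kinematics mentioned above is just a chain of 4×4 link transforms. This is a generic Python sketch of that math (the quik::Robot implementation is C++ and its actual interface differs); the DH table below describes a hypothetical planar 2R arm, with parameters ordered (a, alpha, d, theta).

```python
import math

def dh_transform(a, alpha, d, theta):
    """Standard Denavit-Hartenberg link transform (4x4, row-major)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_table, q):
    """Compose link transforms; q adds to each joint's theta (revolute joints)."""
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for (a, alpha, d, theta0), qi in zip(dh_table, q):
        T = matmul(T, dh_transform(a, alpha, d, theta0 + qi))
    return T

# Hypothetical planar 2R arm: link lengths 1.0 and 0.5, all alpha = d = 0.
dh = [(1.0, 0.0, 0.0, 0.0), (0.5, 0.0, 0.0, 0.0)]
T = forward_kinematics(dh, [0.3, 0.7])
# T[0][3], T[1][3] give the end-effector x, y position.
```

Swapping in a different parametrization amounts to replacing dh_transform with, e.g., a full 6-DoF transform per joint, at some cost in per-call efficiency, which matters because the forward pass runs at every solver iteration.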