Screencasts and slides by Kenny Erleben for teaching inverse kinematics

- Introduction to Inverse Kinematics
- Mathematical Symbols used in Inverse Kinematics
- How to use homogeneous coordinates
- How to compute the Jacobian of a Serial Chain End-Effector Function
- Inverse Kinematics as an Unconstrained Minimization Problem, showing how to compute the objective function value, its gradient, and its exact Hessian
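As a concrete illustration of the homogeneous-coordinates and serial-chain topics above, here is a minimal sketch (not the lecture code; a planar chain of revolute joints with made-up link lengths is assumed) of composing per-link homogeneous transforms to get the end-effector position:

```python
import numpy as np

def link_transform(theta, length):
    """3x3 homogeneous transform of one planar link: rotate by the joint
    angle theta, then translate by the link length along the local x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0.0, 0.0, 1.0]])

def end_effector(thetas, lengths):
    """Compose the link transforms along a serial chain; the end-effector
    position sits in the translation column of the product."""
    T = np.eye(3)
    for theta, length in zip(thetas, lengths):
        T = T @ link_transform(theta, length)
    return T[:2, 2]

# Two unit links stretched out along x reach (2, 0).
print(end_effector([0.0, 0.0], [1.0, 1.0]))
```

The point of the homogeneous form is that rotation and translation compose by a single matrix product per link.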

This material was used to give students in numerical optimization classes a case study on which to apply the methods they had implemented themselves.

The YouTube video 5/5 has an error in the final equation for the gradient (thanks to Gino van den Bergen for reporting it), while the slides have it fixed: use -J^T (the transpose of the Jacobian), not -J.
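For readers following along, here is a small self-contained sketch (not from the lectures; the two-link arm geometry and the goal are made up) that checks the corrected gradient -J^T (g - e(q)) against central finite differences:

```python
import numpy as np

L1, L2 = 1.0, 1.0            # hypothetical link lengths of a planar 2-link arm
goal = np.array([0.5, 1.2])  # hypothetical goal position g

def end_effector(q):
    """Forward kinematics e(q) of the planar 2-link arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic Jacobian J = de/dq."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def objective(q):
    """f(q) = 1/2 ||g - e(q)||^2."""
    r = goal - end_effector(q)
    return 0.5 * r @ r

def gradient(q):
    """The corrected formula: grad f(q) = -J^T (g - e(q)); note the transpose."""
    return -jacobian(q).T @ (goal - end_effector(q))

# Sanity check against central finite differences.
q = np.array([0.3, 0.7])
h = 1e-6
fd = np.array([(objective(q + h*e) - objective(q - h*e)) / (2*h)
               for e in np.eye(2)])
print(np.allclose(gradient(q), fd, atol=1e-6))  # True
```

A finite-difference check like this is a cheap way to catch exactly the kind of transpose slip the video had.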

Also, can you explain why we need the Hessian? Some intuition would help. Thanks!


In the numerical optimization classes we teach at UCPH, the students learn many different methods, such as steepest descent, Newton, Gauss-Newton, dogleg, Levenberg-Marquardt, and BFGS. The students also add linear constraints to the inverse kinematics problem and study projected line searches and active-set methods for handling such constraints. Many of these methods require a Hessian evaluation as part of how they work, so the students need an equation for the exact Hessian that they can implement.

The Hessian provides second-order (curvature) information about the objective function. This can be exploited by numerical methods such as Newton's method: the Hessian lets the method see how the objective function is "bending". Taking this into account, a method can "cut corners" and hence achieve a faster convergence rate than, say, steepest descent, which only needs the gradient for its update. Of course, the per-iteration cost of Hessian-based methods can be too high for some application contexts; it all depends on what one is doing. I must admit, though, that in practice the exact Hessian is rarely needed. Often one can do quite well with the Gauss-Newton approximation J^T J of the Hessian. It all depends on the context in which one solves the inverse kinematics problem (the accuracy and robustness requirements).
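To show the J^T J approximation in action, here is a minimal Gauss-Newton sketch for the same kind of planar 2-link arm (arm geometry, goal, and starting guess are made up for illustration):

```python
import numpy as np

L1, L2 = 1.0, 1.0            # hypothetical link lengths
goal = np.array([0.5, 1.2])  # hypothetical (reachable) goal

def end_effector(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

q = np.array([0.3, 1.5])     # start reasonably close to a solution
for _ in range(20):
    r = goal - end_effector(q)   # residual g - e(q)
    J = jacobian(q)
    # Gauss-Newton: approximate the exact Hessian J^T J - sum_i r_i H_i
    # by J^T J alone; since the gradient is -J^T r, the step solves
    # (J^T J) dq = J^T r.
    q = q + np.linalg.solve(J.T @ J, J.T @ r)

print(np.linalg.norm(goal - end_effector(q)))  # essentially zero
```

Near a solution the residual r is small, so the dropped term sum_i r_i H_i is small too, which is why the approximation works so well there.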

I can highly recommend the PhD thesis by Morten Engell-Nørregård (http://wp.me/pT0Cf-4S) for those interested in all the nitty-gritty details of inverse kinematics as an optimization problem and variations of the inverse kinematics model problem.


Some IK problems suffer from singularity or redundancy, where we cannot simply take the inverse (or rely on the transpose) of the Jacobian matrix J. That is where the Moore–Penrose pseudo-inverse, Levenberg-Marquardt, damped least squares, or the SR-inverse is used.
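For the record, here is a minimal sketch of the damped least-squares (SR-inverse) step at a singular configuration of a made-up planar 2-link arm (link lengths and damping value are assumptions for illustration):

```python
import numpy as np

L1, L2 = 1.0, 1.0   # hypothetical link lengths

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def dls_step(q, r, lam=0.1):
    """Damped least-squares (SR-inverse) step:
    dq = J^T (J J^T + lam^2 I)^{-1} r, well defined even when J loses rank."""
    J = jacobian(q)
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), r)

q_singular = np.array([0.0, 0.0])   # fully stretched arm: J has rank 1
print(np.linalg.matrix_rank(jacobian(q_singular)))   # 1
print(dls_step(q_singular, np.array([0.0, 0.1])))    # finite step
```

At this stretched-out configuration a plain (pseudo-)inverse step blows up or is undefined along the lost direction, while the damping term keeps the linear system invertible and the step bounded.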

It looks like you discuss this in other sessions of your Numerical Optimization classes.

Will you share slides or videos of those parts that discuss Levenberg-Marquardt as well?

>>I can highly recommend the PhD thesis by Morten Engell-Nørregård

Yes, that is a great reference reading. The Sugihara 2009 paper “Solvability-unconcerned Inverse Kinematics based on Levenberg-Marquardt method with Robust Damping” adds some more recent insights towards IK solving as well. Are you familiar with that paper?


It is exactly for those reasons that we picked inverse kinematics as an optimization problem for a course in numerical optimization. The problems you describe often pop up when one needs to solve a subsystem to find, for instance, a search direction, and yes, students learn to deal with those problems as we teach them to write robust optimization software. The textbook we currently use (Nocedal and Wright, http://users.iems.northwestern.edu/~nocedal/book/) explains many of the ways to deal with rank deficiency and ill-conditioning in great detail. The book covers many tricks: iterative solvers, pseudo-inverses, various factorizations, sub-space methods, etc.

The inverse kinematics problem has several advantages as a case study. One can easily draw a little 2D robot arm with an interactive goal, giving students instant feedback on the screen. The problem is ill-posed in the sense that it has multiple solutions, and in the case of unreachable goals no solution at all. It is highly non-linear but still smooth, and we can make it low-dimensional enough (2 or 3 links) that running times will not hurt students during an exercise class. The problem inherently suffers from sensitivity and scaling issues, as a small change to a joint near the root causes a large change in end-effector position. Further, it is easily extended to include constraints later in our course. Inverse kinematics really offers all the bad traits one would want to stress-test a solver with, except perhaps non-smoothness, although even this can be added through the joint constraints if one is creative.

As the screencasts and slides were created for a course on numerical methods, there are certain modeling aspects we do not cover at all: adding damping to the model, for instance a term that penalizes distance from the last known state of the inverse kinematics skeleton; weighting of goals; extension to branches and closed loops; using other norms; using data-driven terms; 3D representations of orientations; more general nonlinear joint constraints; and much more.
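The multiple-solutions trait is easy to demonstrate. Here is a minimal sketch (again with made-up link lengths and goal) using the standard closed-form two-link IK: the law of cosines fixes the elbow angle only up to sign, so every goal strictly inside the reachable annulus has two distinct configurations hitting it.

```python
import numpy as np

L1, L2 = 1.0, 1.0            # hypothetical link lengths
goal = np.array([0.5, 1.2])  # hypothetical reachable goal

def end_effector(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

# Law of cosines gives the elbow angle up to sign: elbow-up vs elbow-down.
cos_q2 = (goal @ goal - L1**2 - L2**2) / (2 * L1 * L2)
solutions = []
for sign in (+1.0, -1.0):
    q2 = sign * np.arccos(cos_q2)
    q1 = np.arctan2(goal[1], goal[0]) - np.arctan2(L2*np.sin(q2),
                                                   L1 + L2*np.cos(q2))
    solutions.append(np.array([q1, q2]))
    print(end_effector(solutions[-1]))   # both configurations hit the goal
```

A solver started from different initial guesses can land on either configuration, which is exactly the ill-posedness the paragraph above refers to.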
