# Understanding The Math

The math presented here for decomposing an affine transform is largely drawn from Graphics Gems IV and the Matrix Animation and Polar Decomposition paper.

Suppose we want to decompose the *affine transformation* stored in matrix \(M\). We can factor \(M\) into a linear transformation, followed by a translation.

Factoring out the translation is trivial. Assuming \(M\) uses the column-vector convention (translation stored in the last column), let \(T\) be an identity matrix with the translation column copied from \(M\), and let \(X\) be a copy of \(M\) with the translation entries of its last column zeroed. This leaves us with the factorization \(M = TX\), where \(X\) is the original matrix with no translation and \(T\) is only the translation part of the original matrix.
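As a minimal sketch, this step can be written with NumPy (an illustrative choice; the matrix values below are hypothetical):

```python
import numpy as np

# A hypothetical 4x4 affine matrix, column-vector convention:
# linear part in the upper-left 3x3 block, translation in the last column.
M = np.array([
    [2.0, 0.0, 0.0, 5.0],
    [0.0, 3.0, 0.0, 6.0],
    [0.0, 0.0, 1.0, 7.0],
    [0.0, 0.0, 0.0, 1.0],
])

# T: identity with the translation column copied from M.
T = np.eye(4)
T[:3, 3] = M[:3, 3]

# X: a copy of M with the translation entries zeroed.
X = M.copy()
X[:3, 3] = 0.0
```

Multiplying the factors back together, `T @ X`, reproduces the original matrix, confirming \(M = TX\).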

Next, we want to find the polar decomposition of \(X\) such that \(X = QS\), where \(Q\) is a rotation matrix and \(S\) is a symmetric matrix describing a deformation. \(S\) is going to contain both scale and skew information. Substituting \(X\) with its decomposition leaves us with \(M = TQS\).
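One common way to compute the polar factor is the averaging iteration \(Q \leftarrow \frac{1}{2}(Q + Q^{-T})\), which converges to the orthogonal factor for an invertible \(X\). A sketch, working on the 3x3 linear block (the function name and iteration count are illustrative):

```python
import numpy as np

def polar_decompose(X, iterations=50):
    # Repeatedly average X with its inverse-transpose; the iteration
    # converges to the orthogonal polar factor Q. The symmetric factor
    # is then S = Q^T X, so that X = QS.
    Q = X.copy()
    for _ in range(iterations):
        Q = 0.5 * (Q + np.linalg.inv(Q).T)
    S = Q.T @ X
    return Q, S

# Example: a rotation about Z combined with a non-uniform scale.
angle = 0.5
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
X = R @ np.diag([2.0, 3.0, 4.0])
Q, S = polar_decompose(X)
```

After the call, `Q` is orthogonal and `S` is symmetric, with `Q @ S` reconstructing `X`.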

At this point \(Q\) is either a rotation, or the negative of a rotation (a flip). \(Q\) can be factored into \(Q = FR\), where \(R\) is a rotation matrix and \(F\) is \( \pm I\) (positive or negative identity), encoding whether a flip is present. \(Q\) is a rotation if the determinant of \(Q\) is positive. If the determinant of \(Q\) is negative, the matrix contains a flip. This makes \(F\) easy to compute: \(F = det(Q)I\). Substituting \(Q\) with its decomposition leaves us with \(M = TFRS\).
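Continuing the sketch, \(F\) and \(R\) fall out directly from the determinant; note that for a 3x3 matrix \(F\) is its own inverse, so \(R = F^{-1}Q = FQ\) (`factor_flip` is a hypothetical helper name):

```python
import numpy as np

def factor_flip(Q):
    # F = det(Q) * I: +I for a pure rotation, -I when Q contains a flip.
    d = np.sign(np.linalg.det(Q))
    F = d * np.eye(3)
    # F is its own inverse (F @ F = I), so R = F^-1 @ Q = F @ Q.
    R = F @ Q
    return F, R

# Example: negate a rotation (90 degrees about X) to create a flip.
rot = np.array([[1.0, 0.0,  0.0],
                [0.0, 0.0, -1.0],
                [0.0, 1.0,  0.0]])
Qf = -rot
F, R = factor_flip(Qf)
```

Here `F` comes out as \(-I\), `R` is a proper rotation (determinant \(+1\)), and `F @ R` reproduces `Qf`.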

Matrix \(S\) contains both scale and skew (stretch) data. The Spectral Decomposition of \(S\) results in two matrices \(U\) and \(K\) such that \(U\) is an orthogonal matrix containing skew data and \(K\) is a diagonal matrix containing only scale information. The result of spectral decomposition is \(S = UKU^{T}\).
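As a sketch of this step, NumPy's `eigh` performs the eigendecomposition of a symmetric matrix; the axes and scales used to build the example \(S\) below are hypothetical:

```python
import numpy as np

# Build a symmetric stretch S from known skew axes and scales, then
# recover them with an eigendecomposition.
angle = 0.3
axes = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                 [np.sin(angle),  np.cos(angle), 0.0],
                 [0.0,            0.0,           1.0]])
S = axes @ np.diag([2.0, 3.0, 4.0]) @ axes.T

# eigh returns eigenvalues in ascending order and orthonormal eigenvectors.
w, U = np.linalg.eigh(S)
K = np.diag(w)  # diagonal scale matrix
```

The recovered factors satisfy \(S = U K U^{T}\): `U` holds the skew axes and the diagonal of `K` holds the scales.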

Another way to think of Spectral Decomposition is that it breaks the matrix down into eigenvectors and eigenvalues. We can factor a matrix into eigenvalues and eigenvectors using the QR Algorithm, which in turn uses QR Factorization.
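A minimal sketch of the unshifted QR Algorithm (production implementations add shifts for faster convergence; the function name and iteration count here are illustrative):

```python
import numpy as np

def qr_algorithm(S, iterations=100):
    # Repeatedly factor A = QR and re-multiply as A = RQ. For a symmetric
    # matrix, A converges toward a diagonal matrix of eigenvalues, and the
    # accumulated Q factors converge toward the eigenvector matrix.
    A = S.copy()
    U = np.eye(A.shape[0])
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
        U = U @ Q
    return U, A  # A is (approximately) the diagonal matrix K

# Example: a symmetric stretch with distinct eigenvalues 2, 3 and 4.
angle = 0.3
axes = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                 [np.sin(angle),  np.cos(angle), 0.0],
                 [0.0,            0.0,           1.0]])
S = axes @ np.diag([2.0, 3.0, 4.0]) @ axes.T
U, A = qr_algorithm(S)
```

Each step computes \(A_{k+1} = Q_k^T A_k Q_k\), a similarity transform, so the eigenvalues are preserved while the off-diagonal entries shrink toward zero.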

The Spectral Decomposition of **S** is not unique. Both **U** and **K** can be re-arranged and still be mathematically valid. We need to pick values for these variables that make geometric sense. To do so, consider interpolating across multiple transforms such that \(S_{1} = U_{1} K_{1} U_{1}^{T}\), then \(S_{2} = U_{2} K_{2} U_{2}^{T}\). We need to make \(U_{1}\) and \(U_{2}\) as similar as possible.

The transformation that takes \(U_{1}\) into \(U_{2}\) is \(U_{12} = U_{1}^{T} U_{2}\). To make sure that \(U_{1}\) and \(U_{2}\) are as close as possible, we need to minimize the rotation of \(U_{12}\). Since we already know \(U_{1}\), we just need to find \(U_{2}\). There are 24 valid candidates for \(U_{2}\): all axis permutations (6), multiplied by all axis sign combinations (8), keeping only those achievable by a rotation (divide by 2). Evaluate \(U_{12}\) for each candidate \(U_{2}\) and pick the result with the least rotation.
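A sketch of this search, assuming the rotation angle of \(U_{12}\) is measured via the trace identity \(\cos\theta = (\operatorname{tr} - 1)/2\) (`closest_variant` is a hypothetical helper name):

```python
import numpy as np
from itertools import permutations, product

def closest_variant(U1, U2):
    # Enumerate the re-arrangements of U2's columns: 6 permutations x 8
    # sign choices, keeping only determinant +1 (achievable by rotation),
    # which leaves 24 candidates. Return the one whose relative rotation
    # U12 = U1^T V has the smallest angle.
    best, best_angle = None, np.inf
    for perm in permutations(range(3)):
        for signs in product([1.0, -1.0], repeat=3):
            V = U2[:, perm] * np.array(signs)
            if np.linalg.det(V) < 0:
                continue  # a reflection, not reachable by a rotation
            U12 = U1.T @ V
            # Rotation angle from the trace: cos(theta) = (tr - 1) / 2.
            angle = np.arccos(np.clip((np.trace(U12) - 1.0) / 2.0, -1.0, 1.0))
            if angle < best_angle:
                best, best_angle = V, angle
    return best

# Example: U2 is U1 with two axes swapped and one sign flipped; the
# search should undo that re-arrangement.
U1 = np.eye(3)
U2 = U1[:, [1, 0, 2]] * np.array([1.0, -1.0, 1.0])
V = closest_variant(U1, U2)
```

Note that whichever column permutation is applied to \(U_{2}\) must also be applied to the diagonal entries of \(K_{2}\), so that \(U_{2} K_{2} U_{2}^{T}\) still reconstructs the same \(S_{2}\).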

We now know that the complete decomposition is: \(M = TQS = TFRS = TFRUKU^{T}\). Out of this decomposition, we want to build a new matrix using only the \(T\), \(R\) and \(K\) terms, ignoring the others. This yields a new matrix that does not contain any undesired skew.
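Rebuilding the skew-free matrix is then a matter of multiplying the kept factors back together. A sketch with hypothetical factor values (`embed` is an assumed helper that lifts a 3x3 block into a 4x4 homogeneous matrix):

```python
import numpy as np

def embed(A3):
    # Place a 3x3 linear block into a 4x4 homogeneous matrix.
    A4 = np.eye(4)
    A4[:3, :3] = A3
    return A4

# Hypothetical factors recovered by the decomposition:
T = np.eye(4)
T[:3, 3] = [5.0, 6.0, 7.0]                  # translation
angle = 0.5
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])  # rotation
K = np.diag([2.0, 3.0, 4.0])                # scale

# Keep only T, R and K; the flip F and skew axes U are discarded.
M_clean = T @ embed(R) @ embed(K)
```

The result translates by \(T\), rotates by \(R\), and scales by \(K\), with no skew term remaining.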