Inverse Transform without matrices?
Sat, 23 Mar 2019 22:27:57 +0000
Given an affine transform, expressed as a translation (or position), a scale, and a rotation, how do you compute its inverse?
Well, if you write the transform as a matrix (for column vectors, so the first operation is at the right hand side),
M = T * R * S
then the inverse is (I'm using a single quote instead of -1 to write the inverse),
M' = (T * R * S)' = S' * R' * T'
Can we write that as another affine transform? That is,
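To double-check the factor-reversal rule, here is a minimal pure-Python sketch (hand-rolled 3×3 helpers for 2D homogeneous transforms; the numbers are arbitrary) that builds M = T * R * S and verifies that S' * R' * T' really is its inverse:

```python
import math

# Hand-rolled 3x3 helpers for 2D homogeneous transforms (illustration only).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# M = T * R * S (arbitrary values)
T, R, S = translation(3, -2), rotation(0.7), scaling(2, 5)
M = matmul(T, matmul(R, S))

# The inverse of each factor is easy to write directly:
T_inv = translation(-3, 2)
R_inv = rotation(-0.7)
S_inv = scaling(1 / 2, 1 / 5)

# M' = S' * R' * T'
M_inv = matmul(S_inv, matmul(R_inv, T_inv))

# M * M' should be the identity (up to floating-point error).
I = matmul(M, M_inv)
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-9
           for i in range(3) for j in range(3))
```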
M' = S' * R' * T' = T_ * R_ * S_
Well, a person from the future wrote on math.stackexchange how to extract the translation, the scale, and the rotation of a given affine transform.
So far, so good. But if you look at the comments, someone points out that the extracted rotation matrix might actually be a combination of shear and rotation!
This might be a trivial problem, but I had never encountered it before. The issue is that, if the scaling is anisotropic, then there is certainly shearing going on in the inverse. That is, you can extract T, R, and S from M using the method described on math.stackexchange, but you can't extract T_, R_, and S_.
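A quick way to see the shear numerically: any product R_ * S_ has orthogonal columns, because each column is just a column of the rotation scaled by one factor. The columns of S' * R' lose that orthogonality when the scale is anisotropic. A small pure-Python check (hand-rolled 2×2 helpers, arbitrary numbers):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat2(a, b):  # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def col_dot(m):  # dot product of the two columns of m
    return m[0][0] * m[0][1] + m[1][0] * m[1][1]

R_inv = rot(-0.7)                  # R'
S_inv = [[1 / 2, 0], [0, 1 / 5]]   # S' for an anisotropic scale (2, 5)

A = mat2(S_inv, R_inv)             # A = S' * R'

# Any R_ * S_ has orthogonal columns, so its column dot product is zero.
# Here it is clearly non-zero, so A can't be written as R_ * S_.
print(col_dot(A))
```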
What can we do, then?
Well, the translation can still be computed the same. If t is the position in the original transform, then the new position is,
t_ = S' * R' * (-t)
and we don't have to use the matrix forms for this. If your rotation is a quaternion, simply invert the quaternion and rotate -t with it. Then, divide each component by the scale.
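Here's a sketch of that, with a minimal hand-rolled quaternion (Hamilton convention, stored as (w, x, y, z)); none of this is VidEngine code, and the transform values are made up. It computes t_ without matrices and round-trips a point to check it:

```python
import math

def q_conj(q):  # inverse of a unit quaternion is its conjugate
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):  # Hamilton product
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_rotate(q, v):  # rotate vector v by unit quaternion q: q * (0, v) * q'
    w, x, y, z = q_mul(q_mul(q, (0.0, *v)), q_conj(q))
    return (x, y, z)

# Original transform: translation t, unit rotation quaternion q, scale s.
t = (1.0, 2.0, 3.0)
half = 0.35  # half-angle of a rotation about the z axis
q = (math.cos(half), 0.0, 0.0, math.sin(half))
s = (2.0, 2.0, 0.5)

# t_ = S' * R' * (-t): invert the quaternion, rotate -t, divide by the scale.
neg_t = tuple(-c for c in t)
rotated = q_rotate(q_conj(q), neg_t)
t_inv = tuple(r / si for r, si in zip(rotated, s))

# Sanity check: the inverse transform maps M(p) back to p.
def apply(p):      # p -> T * R * S * p
    scaled = tuple(pi * si for pi, si in zip(p, s))
    return tuple(r + ti for r, ti in zip(q_rotate(q, scaled), t))

def apply_inv(p):  # p -> S' * R' * p + t_, without building a matrix
    rotated = q_rotate(q_conj(q), p)
    return tuple(r / si + ti for r, ti, si in zip(rotated, t_inv, s))

p = (0.2, -1.0, 4.0)
back = apply_inv(apply(p))
assert all(abs(a - b) < 1e-9 for a, b in zip(back, p))
```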
But can we convert (S' * R') into (R_ * S_)? Well, we can use the Singular Value Decomposition to see what it looks like,
S' * R' = U * Σ * V'
Σ is a diagonal matrix, so a scaling matrix. But unless V is the identity, I don't see how this could look like an (R_ * S_).
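One way to check this numerically: in the SVD, V diagonalizes AᵀA. If A = S' * R' could be written as R_ * S_, then AᵀA = S_ᵀ * S_ would already be diagonal (i.e. V would be the identity); with an anisotropic scale it is not. A small pure-Python check (arbitrary numbers):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

S_inv = [[1 / 2, 0], [0, 1 / 5]]   # S' for an anisotropic scale (2, 5)
R_inv = rot(-0.7)                  # R'
A = mat2(S_inv, R_inv)             # A = S' * R'

# If A were some R_ * S_, then A' * A = S_' * S_ would be diagonal and
# V in the SVD would be the identity. The off-diagonal entry is non-zero:
AtA = mat2(transpose(A), A)
print(AtA[0][1])
```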
So in the end I bit the bullet and used matrices whenever I need the inverse and the scaling may not be isotropic...
What was I trying to do? I was computing ray-object intersections in an AR sample app. These are computed on the CPU, and it would be costly to transform all the triangles of the object to test the intersection, so I convert the ray to model space instead. That's why I needed the inverse of the world transform of the object. You can see the final commit with the bug fix and several unit tests that exercise the different conversions: Fix transform.inverse (VidEngine)
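The ray conversion can be sketched like this (hand-rolled 4×4 helpers in Python for illustration, not the actual VidEngine Swift code; the transform values are made up). The key detail is that the ray origin transforms as a point (w = 1), while the direction transforms as a vector (w = 0) so translation doesn't affect it:

```python
import math

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(t):
    m = [[float(i == j) for j in range(4)] for i in range(4)]
    for i in range(3):
        m[i][3] = t[i]
    return m

def rotation_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def scaling(s):
    return [[s[0], 0, 0, 0], [0, s[1], 0, 0], [0, 0, s[2], 0], [0, 0, 0, 1]]

# World transform M = T * R * S (scale (2, 2, 4), rotation about z, t = (1, 2, 3)),
# built directly in inverse form: M' = S' * R' * T'.
M_inv = matmul(scaling([1 / 2, 1 / 2, 1 / 4]),
               matmul(rotation_z(-0.7), translation([-1, -2, -3])))

origin, direction = [0.0, 0.0, 10.0], [0.0, 0.0, -1.0]
# Point: w = 1 (picks up the translation). Direction: w = 0 (no translation).
model_origin = matvec(M_inv, origin + [1.0])[:3]
model_dir = matvec(M_inv, direction + [0.0])[:3]
# Now intersect (model_origin, model_dir) against the untransformed triangles.
```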
Please message me on Twitter @endavid if you have any comments or suggestions!