I have finally arrived at that fateful and painful point in engine development when I need to make objects "collide correctly". Both broad phase and narrow phase collision detection are working fine now, for both "convex" and arbitrarily irregular "concave" objects. That was enough challenge and work by itself, but now another annoying obstacle presents itself.
Having detected that a collision has happened, I need to "back up" the object-pair to somewhere after the previous frame (where the object-pair were not in collision) but before the current frame (where the object-pair are in collision). This may need to be done iteratively to converge on reasonably precise "first-contact" time and positions, so a good technique must be both precise and fast.
Here is what happens now. The engine still has the "local-to-world transformation matrix" from last frame, as well as the equivalent for this frame. The engine also has an "intermediate transformation matrix" that accumulates all object rotations and translations since the previous frame. This "intermediate transformation matrix" was multiplied by the previous-frame "local-to-world transformation matrix" to generate the current-frame "local-to-world transformation matrix". That current-frame matrix was then applied to the local-coordinate vertex positions [and surface vectors] to rotate and move the object to the new location, where it was just found to be "in collision" with another object.
I assume what I need is a way to create a "fractionalized" (fractional-frame) version of the "intermediate transformation matrix" by "multiplying" (in some sense) a "fraction-of-frame variable" by the "intermediate transformation matrix" to generate a new "intermediate transformation matrix" that contains only that fraction of the rotations and translations accumulated in that "intermediate transformation matrix".
I could then multiply that fractionalized "intermediate transformation matrix" by the previous-frame "local-to-world transformation matrix" to generate a new "local-to-world transformation matrix" for the desired fractional frame time, which could then be applied to the array of local-coordinate vertices in the object to generate a new array of world-coordinate vertices.
Once this is done to both objects, the engine could then run collision detection on that object-pair again, and hopefully find they are "very, very close to collision... or just barely in collision", at which point physics could begin (and generate a response for the rest of the frame time).
Now, my first question is... is this even possible? Is there a way to take that "intermediate transformation matrix" and "multiply" it (in some sense) by a fraction - to generate a matrix that would have the same effect as reducing the contributions of all individual rotates and translates applied to that matrix by a specific fraction?
If so, what kind of matrix operation does this?
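To answer my own question in part, after digging around: such an operation does exist. The principled version is the matrix logarithm, M^f = exp(f·log M), but for a matrix containing only rotation and translation you rarely need the general machinery. The rotation block has a well-defined axis and angle, so R^f is simply a rotation by f·θ about the same axis, and the translation can be scaled by f. (Scaling the translation linearly is an approximation when rotation and translation are coupled over the frame, i.e. this is not an exact "screw motion" interpolation, but for one frame of small motion it is the standard choice.) A minimal numpy sketch, assuming 4x4 matrices with the rotation in the upper-left 3x3; the function names here are mine, not from any particular engine:

```python
import numpy as np

def rotation_fraction(R, f):
    """Return R^f for a 3x3 rotation matrix via axis-angle (Rodrigues)."""
    # Angle from the trace, axis from the skew-symmetric part of R.
    cos_t = (np.trace(R) - 1.0) / 2.0
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    if theta < 1e-12:                      # no rotation this frame
        return np.eye(3)
    # Near theta = pi this extraction is ill-conditioned; per-frame
    # rotations are normally far smaller than that.
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    a = f * theta                          # fractional angle
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues formula: R(a) = I + sin(a) K + (1 - cos(a)) K^2
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def transform_fraction(M, f):
    """Fractional version of a 4x4 rotation+translation matrix.
    Rotation is interpolated on the rotation manifold; translation is
    linearly scaled (an approximation, fine for small per-frame motion)."""
    out = np.eye(4)
    out[:3, :3] = rotation_fraction(M[:3, :3], f)
    out[:3, 3] = f * M[:3, 3]
    return out
```

With this, `transform_fraction(intermediate, 0.5)` gives the half-frame version of the intermediate matrix, which can be multiplied onto the previous-frame local-to-world matrix exactly as described above. If quaternions are available in the engine, the equivalent is slerp from the identity quaternion, which is numerically the same operation.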
Perhaps viable alternatives to this approach exist, but I don't see them. The non-viable alternatives (due to much slower speed) that I can identify are:
#1: To queue up all the individual rotate and translate operations applied to every object in every frame, just in case a collision happens. Then I could apply fractional versions of these operations to an identity matrix to accumulate a new fractionalized version of the "intermediate transformation matrix", which I could then apply to the previous-frame "local-to-world transformation matrix" to generate a fractional-frame "local-to-world transformation matrix" to apply to the vertices, creating new vertex coordinates for re-running collision detection.
#2: To queue up all the individual forces applied to the object during the previous frame (which led the physics subsystem to update linear and rotational velocities and accelerations, which in turn produced the rotation and translation operations mentioned in #1 above).
My problem with both of these approaches is that they add a LOT more processing to a process that already threatens to become a "frame buster" (extend frame processing beyond the next vertical sync, and thereby "lose one frame and stutter"). The engine has already done all that work, so the "fractionalize the intermediate transformation matrix" approach seems a lot more practical, assuming such an operation exists, which intuitively it certainly seems it should.
Also note these unfortunate compounding facts. It may be necessary to "iterate" to "converge upon" a [sufficiently] precise first-contact time and position. That means whatever operations are performed here might need to be executed several times, including re-transforming all vertices and re-running narrow-phase collision detection on this object-pair. I'd like to believe we can compute the correct frame-fraction to find a precise first-contact time and position on the first try, without iteration. But with objects both rotating and translating, I am very dubious this is possible. And even if it is possible, when one or both objects are arbitrarily irregular "concave" objects my engine must execute its much slower "concave collision routine" instead of its blazing fast GJK "convex collision routine". I just can't see how there's enough time to allow any waste in this process, even if the whole shebang is offloaded to another CPU core. And if a number of object-pairs are in collision on a given frame... just how many CPU cores do we have to spare? Anyway, I think I've convinced myself a fast technique is crucial.
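On the iteration question: once a fractional transform exists, the engine can evaluate "in collision at fraction f?" for any f, and the simplest convergence scheme is a bisection on f, which converges geometrically (8 probes pin first contact to within 1/256 of a frame). A sketch, where `in_collision` is a placeholder of my own naming for "transform both objects to fraction f of their per-frame motion and run the narrow-phase test":

```python
def find_contact_fraction(in_collision, iterations=8):
    """Bisect on the frame fraction f in (0, 1] to bracket first contact.

    `in_collision(f)` is a placeholder predicate: transform both objects
    to fraction f of their per-frame motion and run narrow-phase detection.
    Assumes the pair was separated at f=0 and colliding at f=1.
    """
    lo, hi = 0.0, 1.0      # lo: known separated, hi: known colliding
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        if in_collision(mid):
            hi = mid       # contact happens at or before mid
        else:
            lo = mid       # still separated at mid
    return hi              # earliest fraction known to be in collision
```

Each iteration halves the uncertainty, so the cost is predictable, which matters for the frame-budget concern above. Note that plain bisection can still miss thin objects that pass entirely through each other between probes; the standard refinement is "conservative advancement", which uses the separation distance reported by GJK plus bounds on the objects' linear and angular velocities to compute a fraction that is provably safe to advance, then repeats until the distance is below a tolerance.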
One [hopefully] important note: my transformation matrices only contain rotations and translations. My engine does support skew, shear, and non-linear scale operations, but those operations are performed independently, without modifying the transformation matrix. This effectively separates "operations that modify the object" (skew/shear/scale) from "operations that do not modify the object, but just rotate and move it around" (rotate/translate). It also keeps the transformation matrices as clean and regular as possible, assures that transformations of surface vectors (normals, tangents, bitangents) always produce correct results, and eliminates any need to compute-and-carry-around (or compute-on-demand) inverse matrices. And maybe, just maybe (or hopefully), the fact that my transformation matrices are pristine, clean, and regular means creating "fractionalized" versions is easy (and all I need is to know how). What say the math wizards in this forum?
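For what it's worth, restricting matrices to pure rotation+translation buys exactly the properties described above: the rotation block is orthonormal, so surface vectors transform correctly with the same matrix that transforms positions (no inverse-transpose needed), and on the rare occasion an inverse is wanted it falls out almost for free, since for M = [R | t] the inverse is [Rᵀ | −Rᵀt]. A small numpy sketch of that last point (the function name is mine):

```python
import numpy as np

def invert_rigid(M):
    """Invert a 4x4 rotation+translation matrix without general inversion.

    For M = [R | t], the inverse is [R^T | -R^T t]: transpose the
    orthonormal rotation block and rotate-and-negate the translation.
    """
    inv = np.eye(4)
    R = M[:3, :3]
    inv[:3, :3] = R.T
    inv[:3, 3] = -R.T @ M[:3, 3]
    return inv
```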
I assume everyone who ever faces collision detection and response encounters this same problem. Out of curiosity, how is this process usually done?