A couple of weeks back I wrote about inverse kinematics for the arms. Since then I have spent some time studying the principles of inverse kinematics, and I used them to improve the behaviour of my virtual body by deducing the amount of forward bending of the body from the hand positions.
When I implemented the inverse kinematics for the hands in Unity3D Indie/free, I quickly noticed that this was quite limited. One situation in which I practised a lot was a room with a book on a table (see my video on grabbing things using physics). When the book was in the middle of the table, I was not able to grab it: my virtual hand could not reach it any more. However, I definitely had the feeling I should be able to pick it up.
The reason is that in reality you can bend forward to reach quite a bit further, while the virtual body cannot: its torso stays upright, frozen.
In this situation, the hand targets (the positions of the Razer Hydra controllers in space) were further away from the body than the virtual hands were, as the latter could not reach the targets. This difference is used to determine the angle by which the body should bend. The method I use for this is exactly the same as for the arms: inverse kinematics.
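As a standalone illustration (plain Python rather than Unity C#; the helper name `needs_bend` and the positions and lengths are made-up examples, not from my project), deciding whether the body has to bend at all comes down to comparing the distance from the shoulder to the hand target against the length of the stretched arm:

```python
import math

def needs_bend(shoulder, target, arm_length):
    """Return True when the hand target lies beyond the stretched arm's reach."""
    dx, dy, dz = (t - s for s, t in zip(shoulder, target))
    return math.sqrt(dx * dx + dy * dy + dz * dz) > arm_length

# Shoulder at the origin, a 0.7 m arm, target 0.9 m straight ahead:
print(needs_bend((0, 0, 0), (0, 0, 0.9), 0.7))  # True: the torso must bend
```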
In the image above you can see the similarity between arm bending and body bending. The triangles are the structures which represent the playing field. The only observation we need to make is that the arm should be completely stretched, otherwise the triangle on the right-hand side is no longer a triangle, which complicates things. But hey, we are trying to reach something far away: we will definitely stretch our arm in this case, so that is not an issue.
Then the math. Last time I stated I did not understand the mathematics, but by now I do, thanks to the explanation on this site. So here it comes. The inverse kinematics I use in Unity3D is a two-step method and relies on the LookAt function.
In the Update function, we first aim the Y axis of the torso by having it look at the target. Yes, it looks silly now, like the picture above, but as the update is not finished yet, we will never see this.
torso.transform.LookAt(targetForTorso, torsoTarget.position - torso.position);
torso.transform.Rotate(Vector3.right, 90); // I wanted to have the Y axis look at the target, not the Z axis
Then we should note that we have a triangle of which we know the lengths of all three sides: MN is the length of the torso, NP is the length of the arm, and MP is the distance from the hip to the target. Using the cosine rule we can now calculate the angle indicated in the figure.
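Written out, the cosine rule for the angle at M (between the torso MN and the hip-to-target line MP) is:

cos(angle) = (MN² + MP² − NP²) / (2 · MN · MP)

With MN = torsoLength, NP = armLength and MP = dHipTarget, this is exactly what the line of code computes, with Acos turning the cosine back into an angle.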
float shoulderAngle = Mathf.Acos((torsoLength * torsoLength + dHipTarget * dHipTarget - armLength * armLength) / (2 * torsoLength * dHipTarget)) * Mathf.Rad2Deg;
Then the second step is rotating the body along the X axis by the number of degrees we just calculated. This ensures that point P in the picture will move to point Q and the hand will have reached the target!
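To convince yourself that the two steps really close the gap, here is a small numeric sketch (plain Python instead of Unity C#; the lengths are made-up example values, not taken from my rig). It computes the angle with the same cosine-rule expression as above, tilts the torso forward by that angle, and checks that the stretched arm now exactly spans the distance from the shoulder to the target:

```python
import math

# Assumed example lengths: torso 0.6 m, arm 0.7 m,
# hand target 1.1 m from the hip along the line the torso was aimed at.
torso_len, arm_len, d_hip_target = 0.6, 0.7, 1.1

# Step 1 (the LookAt): the target now lies straight along the torso's aimed axis.
target = (0.0, d_hip_target)

# Step 2: the angle at the hip, from the cosine rule (in radians here).
angle = math.acos((torso_len ** 2 + d_hip_target ** 2 - arm_len ** 2)
                  / (2 * torso_len * d_hip_target))

# Tilt the torso forward by that angle; the shoulder moves from N to its bent position.
shoulder = (torso_len * math.sin(angle), torso_len * math.cos(angle))

# The stretched arm should now exactly span the shoulder-to-target distance.
reach = math.hypot(target[0] - shoulder[0], target[1] - shoulder[1])
print(round(reach, 6))  # → 0.7, i.e. exactly arm_len: the hand reaches the target
```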
Of course, things get a bit hairier when you try to apply this to a full 3D human body, but these are the basics.