Physics
Humanoid Control and InstantVR Edge support advanced physics, which enables fully realistic interaction with the environment. This is enabled by selecting the Physics option in the Humanoid Control or InstantVR script. When this option is disabled, the avatar and its hands will not collide with the environment.
Rationale
The goal of the physics mode is that the virtual world behaves as closely as possible like a real world. This is explicitly not the real world the player is actually in, but all feedback from the virtual world should be like feedback from a real world: you cannot move your hand through walls, and heavy objects behave like real heavy objects. Of course the player receives real sensory input at the same time, which may not match the virtual world; for example, you can hit a real wall. This should be avoided as much as possible (e.g. using the Chaperone system) because it breaks presence. However, some sensory input cannot be avoided, such as feedback from the muscles.
The fun part is that the human brain is able to reject this input when the other sensory input is convincing enough. The brain has a strong preference for visual input here.
I have done hundreds of demos using the physics implementation. I found that when people do not know about the implementation, they do not notice that there is no 1:1 tracking. I have seen examples where the real hand was about 1 meter higher than the virtual hand for a long time and people just did not notice. Why? Because they were trying to pick up a very heavy (mass > 10) object and they knew that it is hard to pick up. You should not be able to lift your hand high above your head in that case.
On the other hand, there were people experiencing 'lag' who noticed that the tracking was not 100% accurate. These people were explicitly paying attention to the difference between their real hands and the virtual hands. To me this is strange: why is the position of the real hands relevant in the virtual world? Real hands do not exist there. The only important thing is that you should be convinced that the movements you try to make with your body in the virtual world are actually executed exactly the way you intend. That is what I try to achieve.
Colliders
In Humanoid Control, hand and body colliders are generated automatically. In InstantVR Edge, only body colliders are generated automatically; hand colliders need to be added to the avatar manually.
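For InstantVR Edge, one way to attach a hand collider is to add it to the hand bone of the avatar, either in the editor or from a script. The sketch below only shows the general Unity mechanism; the bone lookup and collider size are assumptions and the exact setup may differ per avatar and rig.

```csharp
using UnityEngine;

// Minimal sketch: attach a simple collider to the avatar's right hand bone.
// The radius value is illustrative; adjust it to the size of the avatar's hand.
public class HandColliderSetup : MonoBehaviour {
    void Start() {
        Animator animator = GetComponent<Animator>();
        Transform rightHand = animator.GetBoneTransform(HumanBodyBones.RightHand);

        // A sphere collider roughly covering the palm.
        SphereCollider handCollider = rightHand.gameObject.AddComponent<SphereCollider>();
        handCollider.radius = 0.06f; // assumed size in meters
    }
}
```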
Collisions
When the hands of an avatar collide with the environment, the Unity physics engine determines what happens. In many cases this results in the real, physical hand and the virtual hand no longer having the same location. A good illustration is a virtual hand hitting a wall which is not present in the real world. In that case, the virtual hand is stopped by the physics engine, but the real hand can still move forward.
The difference between the real hand position and the virtual hand position is translated into a force which is applied to the object the hand is colliding with. In the case of static objects, like the wall, this has no effect, but non-static Rigidbodies may start to move under the influence of this force. This enables the player to push objects away, but also to experience differences in the weight of objects. A large distance between the real and virtual hands is needed to move a heavy object, whereas light objects start to move at much smaller distances. To the player it feels like heavy objects take more effort to move.
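A minimal sketch of this idea is shown below: the virtual hand is a non-kinematic Rigidbody that is driven towards the tracked (real) hand position by a force proportional to their separation. The class, field names and values are illustrative assumptions, not the actual Humanoid Control implementation.

```csharp
using UnityEngine;

// Minimal sketch: drive a non-kinematic hand Rigidbody towards the tracked
// (real) hand position by applying a force proportional to the distance.
// 'trackedHand', 'strength' and 'damping' are illustrative, not Humanoid Control API.
public class HandForceDriver : MonoBehaviour {
    public Transform trackedHand;  // position reported by the tracking device
    public float strength = 500f;  // force per meter of separation (assumed value)
    public float damping = 10f;    // reduces oscillation around the target

    private Rigidbody handRigidbody;

    void Start() {
        handRigidbody = GetComponent<Rigidbody>();
    }

    void FixedUpdate() {
        // The further the virtual hand lags behind the real hand,
        // the larger the force that is applied.
        Vector3 delta = trackedHand.position - handRigidbody.position;
        Vector3 force = delta * strength - handRigidbody.velocity * damping;
        handRigidbody.AddForce(force);
    }
}
```

When such a hand Rigidbody touches a non-static Rigidbody, the physics engine passes part of this force on to that object, which is why a heavier object needs a larger real-virtual separation before it starts to move.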
Collisions between players
It is possible that two hands (or objects held by the hands) collide with each other. Think for example of a sword-fighting game where the swords of the players are actually meant to collide. In this case the movements of the hands result in different forces, as described above. The net result of the forces from the two hands, as calculated by the physics engine, then determines how the hands or the grabbed objects move.
Grabbing objects with joints
It is possible to grab (parts of) non-static Rigidbodies which are connected to the world or to other objects with hinges. A good example is a door. When the player has grabbed the handle of the door, the movements of the hand are limited by the possible movements of the grabbed object. In the case of the door, it can only rotate around its hinges. The consequence is that the movements of the hand are limited by the door's movements. For instance, it is no longer possible to move the hand vertically. Moving the real hand vertically in this case will result in a vertical force, but most of the time this force will have no effect on the door.
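A minimal sketch of grabbing such a jointed object is shown below: the hand Rigidbody is connected to the grabbed Rigidbody with a FixedJoint, so the physics engine resolves the combination of the grab joint and the object's own HingeJoint. The class and method names are illustrative, not the Humanoid Control API.

```csharp
using UnityEngine;

// Minimal sketch of grabbing a hinged object: the hand Rigidbody is connected
// to the grabbed Rigidbody (e.g. a door with a HingeJoint) with a FixedJoint.
// 'Grab' and 'LetGo' are illustrative names, not the Humanoid Control API.
public class HandGrabber : MonoBehaviour {
    private Rigidbody handRigidbody;
    private FixedJoint grabJoint;

    void Awake() {
        handRigidbody = GetComponent<Rigidbody>();
    }

    public void Grab(Rigidbody grabbedObject) {
        // The joint transfers hand forces to the door; the door's own
        // HingeJoint then restricts the resulting motion to a rotation
        // around its hinge, which in turn restricts the hand.
        grabJoint = handRigidbody.gameObject.AddComponent<FixedJoint>();
        grabJoint.connectedBody = grabbedObject;
    }

    public void LetGo() {
        if (grabJoint != null)
            Destroy(grabJoint);
    }
}
```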
Physics vs kinematics
The hand interaction has two possible modes:
- kinematic
- with physics
These two cannot be unified. Kinematic mode provides 1:1 tracking: the virtual hand and the real hand are always at the same location, and every movement of the real hand is directly translated to the same movement of the virtual hand. Physics mode imposes restrictions on the possible movements and will therefore prevent 1:1 matching in certain situations.
When there are no collisions or other interactions with objects in the scene, both modes should behave exactly the same. When the hands collide, the behavior is different. When the hand has grabbed an object, what happens depends on the object (see the sketch after this list):
a. For free, light objects (mass < 1), I decided that these objects should appear weightless, so kinematic mode is used in that case.
b. Free, heavy objects (mass > 1) should behave like heavy objects. This is achieved by using physics mode.
c. Static objects can only be grabbed by a handle and will not move when grabbed; moving the hand will instead try to move the body in this case. This enables you to climb rocks, for example.
d. Objects which are connected to static objects with a joint. An example is the door described above. Joints are physics objects and only work in physics mode. If you used kinematic mode with the door, you would pull the door from its hinges and the door would appear weightless.
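A minimal sketch of this mode selection could look as follows. The mass thresholds match the values above, but the class and method names are illustrative and not the actual Humanoid Control API.

```csharp
using UnityEngine;

// Minimal sketch of the grab mode selection described above.
// 'OnGrab' is an illustrative entry point, not the Humanoid Control API.
public class GrabModeSelector : MonoBehaviour {
    public Rigidbody handRigidbody;

    public void OnGrab(GameObject obj) {
        Rigidbody objRigidbody = obj.GetComponent<Rigidbody>();

        if (objRigidbody == null) {
            // c. Static object (no Rigidbody): the hand stays fixed to the
            //    handle; moving the real hand moves the body instead
            //    (the body movement itself is handled elsewhere).
            handRigidbody.isKinematic = true;
        }
        else if (obj.GetComponent<Joint>() != null) {
            // d. Object connected with a joint (e.g. a door):
            //    joints only work in physics mode.
            handRigidbody.isKinematic = false;
        }
        else if (objRigidbody.mass < 1f) {
            // a. Free, light object: appears weightless, 1:1 tracking.
            handRigidbody.isKinematic = true;
        }
        else {
            // b. Free, heavy object: physics mode, so its weight is felt.
            handRigidbody.isKinematic = false;
        }
    }
}
```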