Virtual hands and static object collisions

In a previous article I wrote about grabbing objects and moving them around using Unity3D in a virtual reality environment, but not everything in the world can be moved around. Therefore I will discuss how you can deal with your hands colliding with static, non-moving objects.

My implementation needs to stop the virtual hand from moving through static objects like walls, but I also want to be able to move my hand along the surface of the object.

The first observation is that we now need two tracking objects:

  1. The first, the Input Target, will contain the position recorded by the input device, like a mouse or a Razer Hydra controller.
  2. The second, the Hand Target, will contain the position of the virtual hand.

Most of the time the two tracking objects will have the same coordinates, but when a static object is hit, they break apart. The Input Target will move through the wall, but the Hand Target should stop at the surface.
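
To make the snippets below easier to follow, here is a minimal sketch of how such a script could start. The class name HandTarget is a placeholder of my own; the field names match the code further below, and I assume the script sits on the Hand Target with the references assigned in the Inspector:

using UnityEngine;

public class HandTarget : MonoBehaviour {
	public Transform inputTarget;	// the position recorded by the input device
	public Transform shoulder;	// the origin of the raycast used further below

	private bool collided;		// true while the hand is touching a static object
	private Collider collider;	// the static collider we entered

	// OnTriggerEnter, OnTriggerExit and Update are shown below.
}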

To detect collisions between kinematic Rigidbodies (the hand) and static objects we cannot use normal colliders (see the table at the bottom of the Box Collider page). Instead we need to use a trigger collider. This should not be a surprise: rigidbodies with normal colliders implement physics. In our case, physics does not apply: the wall does not move and our hand moves only kinematically.

We add a Trigger Collider and a kinematic Rigidbody to the Hand Target: these will detect when the Hand Target collides with an object. Again, we will use a simple Sphere Collider, in this case with a radius of 5 cm (0.5 local units, given the 0.1 scale of the transform), because we want to be able to move the hand quite close to static objects.
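
For reference, here is a hedged sketch of the same setup done from code instead of through the Inspector (in the demo package the components are simply added in the editor; the values assume the 0.1 transform scale mentioned above):

void Reset() {
	// Trigger collider: detects overlap without physically pushing back.
	SphereCollider sphere = gameObject.AddComponent<SphereCollider>();
	sphere.isTrigger = true;
	sphere.radius = 0.5f;		// 0.5 local units = 5 cm at 0.1 scale

	// Kinematic rigidbody: required to receive trigger events, but not simulated.
	Rigidbody body = gameObject.AddComponent<Rigidbody>();
	body.isKinematic = true;
	body.useGravity = false;
}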

We add a script with implementations of OnTriggerEnter and OnTriggerExit. These functions simply register that we have collided using a boolean, and we store the Collider so that OnTriggerExit can determine that we left the original Trigger Collider. We also need to check that the object we collide with is marked ‘static’, because we want our hand to be stopped only by static objects:

public void OnTriggerEnter(Collider other) {
	// Only register the collision when the other object is marked static.
	if (!collided && other.gameObject.isStatic) {
		collided = true;
		collider = other;
	}
}

public void OnTriggerExit(Collider other) {
	// Only clear the flag when we leave the collider we originally entered.
	if (collided && other == collider) {
		collided = false;
		collider = null;
	}
}

The most important functionality is implemented in the Update function of the same script. If the hand hasn’t collided, it makes the rotation and position of the Hand Target equal to those of the Input Target. If the hand has collided, we need to adjust only the position of the Hand Target; we keep updating the rotation of the hand to that of the input.

To determine the position of the virtual hand with respect to the Input Target, we use a raycast. It ‘shoots’ a ray from the shoulder in the direction of the Input Target. We need to shoot 5 cm further (dir.magnitude + 0.05f: the Sphere Collider’s 0.5 local radius becomes 0.05 world units because of the 0.1 scale of the transform), because we want to reach the outer side of the Sphere Collider: that is where the Sphere Collider will meet the wall first.

However, when shooting the ray we only want to hit the right colliders. We need to exclude the hand colliders used for grabbing objects, because otherwise the hand would be stopped by its own colliders! In the previous article we already put the hand on a special layer, MyHands. We can now use this layer to exclude the hand from the raycast. In my demo setup, the MyHands layer is layer number 9, so we mask it out of the layer mask rcLayers.
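
As a hedged variant (assuming the layer is literally named "MyHands" in your project), you can look the layer up by name instead of hard-coding the number 9:

int handLayer = LayerMask.NameToLayer("MyHands");	// returns -1 if the layer does not exist
int rcLayers = Physics.DefaultRaycastLayers & ~(1 << handLayer);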

If the raycast hits an object, this will probably be the wall. The point it hits lies in the direction of the Input Target, but is limited by the static object. We now set the Hand Target to this point; we only need to correct for the Sphere Collider radius. We do this by subtracting 5 cm (0.05 world units, again because of the 0.1 scale of the transform) in the direction of the Input Target.

void Update() {
	if (collided) {
		// Cast a ray from the shoulder towards the Input Target.
		Vector3 dir = inputTarget.position - shoulder.position;
		Ray ray = new Ray(shoulder.position, dir);
		RaycastHit hit;

		// Exclude the hand colliders on layer 9 (MyHands) from the raycast.
		int rcLayers = Physics.DefaultRaycastLayers & ~(1 << 9);

		// Extend the ray by the 5 cm sphere radius to reach the outer side
		// of the Sphere Collider.
		if (Physics.Raycast(ray, out hit, dir.magnitude + 0.05f, rcLayers)) {
			// Pull the hand back by the sphere radius so the collider
			// rests on the surface instead of inside the wall.
			Vector3 contactPoint = hit.point - dir.normalized * 0.05f;
			transform.position = contactPoint;
		} else {
			transform.position = inputTarget.position;
		}
	} else {
		transform.position = inputTarget.position;
	}
	// The rotation always follows the input.
	transform.rotation = inputTarget.rotation;
}

And that is it: the hands will be stopped by the wall but will still follow the target as closely as possible.

Note that this script is not a full solution. The sphere is a very rough approximation of the hand shape: especially when the hand is holding a large object like a sword, you will see the sword still go through the wall, because it is outside the sphere collider. Also, when other colliders appear between the shoulder and the static collider during the raycast, they will stop the hand as well, even when they are not marked static.
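
As a hedged sketch of one way to soften that last issue (reusing the names from the Update function above), you could additionally check that the object the ray actually hit is marked static before stopping the hand:

if (Physics.Raycast(ray, out hit, dir.magnitude + 0.05f, rcLayers)
		&& hit.collider.gameObject.isStatic) {
	// Only a static hit blocks the hand; anything else is ignored.
	transform.position = hit.point - dir.normalized * 0.05f;
} else {
	transform.position = inputTarget.position;
}
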
I hope, however, that this shows you the basics of hand collisions with static objects and that it inspires you to further develop hand movements in virtual reality.

I have updated the free VR Hands Unity package so you can try out the script above. A second scene has been added, which includes a simple wall in addition to the cube that can be grabbed. Have fun with it!
