Thorne Brandt

Nothing Up My Sleeve

December 10th, 2016

I've spent a fair amount of time making tools for performance-driven animation. I was inspired by the Simpsons live call-in show and wanted to create a show built around a showcase of "Improv, Audience Interaction, and Tech."

I had an amazing time collaborating with Lyra Hill, who has spent years building a cult following by curating some of the best performance shows Chicago has ever seen. If you're unfamiliar, stop what you're doing and check out the impeccably documented Brain Frame. The theme we decided on for this art & tech show was 'an augmented magic show', which would come to feature projection mapping, VR, mixed reality, live photoshopping, sound-responsive animation, and Kinect body tracking for a vaudevillian good time.

It is my understanding that a great magician never reveals his secrets. It is also my understanding that I am not a great magician. That being said, nothing would make me happier than if details about my process inspired similar performances and installations. If you'd like to steal these ideas for your own projects, be my guest. I'm usually only making things because I want them to exist. I'm more than happy to share the technical details of the tricks in the show.

Aura Reading

Source Code

I made an installation that projected a unique, distinct 'aura' onto each person who walked across the front wall before the main stage show started. This allowed Hannah Simon Kim to improvisationally interpret the auras.

This trick was accomplished with some Unity scripts, a Microsoft Kinect hidden above the projector, and a lot of calibrating. The complete source code is here. A grave warning: it will require recalibration for a new location.

private Dictionary<ulong, GameObject> _Bodies = new Dictionary<ulong, GameObject>();
private Dictionary<ulong, Color> _Colors = new Dictionary<ulong, Color>();
private Dictionary<ulong, float> _offColors = new Dictionary<ulong, float>();

The important code takes place within three distinct dictionaries. The first holds the bodies detected, so that multiple individuals retain their distinct "spiritual identity" (represented as a ulong), which serves as the key for the other two dictionaries, "_Colors" and "_offColors." I'll explain what _offColors are later.

void Update (){

...

   foreach(var body in data)
   {
       if (body == null)
       {
           continue;
       }

       if(body.IsTracked)
       {
           if(!_Bodies.ContainsKey(body.TrackingId))
           {
               _Bodies[body.TrackingId] = CreateBodyObject(body.TrackingId);
               _Colors[body.TrackingId] = Random.ColorHSV(0f, 1f, 1f, 1f, 0.5f, 1f);
               _offColors[body.TrackingId] = Random.Range(.02f, .2f);
           }

           RefreshBodyObject(body, _Bodies[body.TrackingId]);
       }
   }
}


When a body is initiated, its tracking identity is used to create a random main Color for the _Colors dictionary, and likewise, a random float is assigned for _offColors. We're projecting on a black background, so we need the majority of main colors to be bright enough. We also don't want to bum most people out by giving them black auras. We're able to accomplish the color range we want via the handy Random.ColorHSV() method, which outputs a Color instance within a specified range.
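For reference, the six arguments to Random.ColorHSV() are the minimum and maximum for hue, saturation, and value. Annotating the call from the snippet above:

// Full hue range, full saturation, but value clamped to the upper half,
// so every aura reads as a bright color against the black background.
Color auraColor = Random.ColorHSV(
    0f, 1f,    // hue: anywhere on the wheel
    1f, 1f,    // saturation: fully saturated
    0.5f, 1f   // value: never darker than half brightness
);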

private void RefreshBodyObject(Kinect.Body body, GameObject bodyObject){
    for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
    {
        Kinect.Joint sourceJoint = body.Joints[jt];
        Kinect.Joint? targetJoint = null;

        if(_BoneMap.ContainsKey(jt))
        {
            targetJoint = body.Joints[_BoneMap[jt]];
        }

        GameObject jointObj = bodyObject.transform.FindChild(jt.ToString()).gameObject;
        jointObj.transform.localPosition = GetVector3FromJoint(sourceJoint);
        assignColor(jointObj, _Colors[body.TrackingId]);
        assignSomeRandom(jointObj, _offColors[body.TrackingId]);
    }
}


private void assignRandomColor(GameObject jointObj){
    Color randomColor = Random.ColorHSV(0f, 1f, 1f, 1f, 1f, 1f);
    assignColor(jointObj, randomColor);
}

private void assignSomeRandom(GameObject jointObj, float freq){
    if(Random.value < freq){
        assignRandomColor(jointObj);
    }
}

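GetVector3FromJoint() isn't shown in this excerpt. In the stock Kinect-for-Unity example it simply scales the joint's camera-space position into scene units, and it's the natural place for the per-venue calibration mentioned earlier to live. A minimal sketch, with hypothetical calibrationScale and calibrationOffset fields standing in for whatever numbers the venue actually needed:

// Hypothetical calibration values, tuned per venue so the projected
// skeleton lines up with the bodies walking in front of the wall.
public float calibrationScale = 10f;
public Vector3 calibrationOffset = Vector3.zero;

private Vector3 GetVector3FromJoint(Kinect.Joint joint){
    // Kinect camera-space positions are in meters; scale and offset
    // them into the Unity scene that the projector is rendering.
    return new Vector3(
        joint.Position.X * calibrationScale,
        joint.Position.Y * calibrationScale,
        joint.Position.Z * calibrationScale
    ) + calibrationOffset;
}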

Working on this felt very much like painting with code. I wasn't satisfied with one pass at random colors. No matter how I adjusted the thresholds and parameters, I found the aesthetic to be too artificial. So I eventually decided to add another filter called "assignSomeRandom" and the dictionary of 'offColors.' This is a threshold, distinct to each aura's id, that decides how many completely random, harsher, potentially dark colors get added on top of the predictable colors. This added a sense of organic individuality. One person's aura would elicit a peaceful lake, while the next aura would consistently resemble a calico cat vomiting up jelly beans.

These colors were used to tint variations of this grayscale texture, which I meticulously handpainted. /sarcasm

A particle emitter using this colored texture is attached to each joint of the body.
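assignColor() isn't included in the excerpt either. A plausible minimal version, assuming each joint GameObject carries its particle emitter directly, would just tint the emitter's start color:

// Sketch of assignColor(): tint the joint's particle emitter with the
// aura color so the emitted grayscale texture picks up the hue.
private void assignColor(GameObject jointObj, Color color){
    ParticleSystem particles = jointObj.GetComponent<ParticleSystem>();
    ParticleSystem.MainModule main = particles.main;
    main.startColor = color;
}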

Resolume's "trail" effect provided the final touch.


Sarah Squirm and the Crystal Vegas Snowglobe

The main gag for this act was to capture an image of an audience member in real time and then embarrass them with family photos. There was also a projection-mapped, crystal-ball-esque snowglobe, which would reveal that the subject's past and future mostly involved watching hentai.

The hardest part of this process was actually obtaining a clear enough photo of the volunteer and photoshopping them in a way that didn't disrupt the performance. Luckily, Sarah is a brilliant character actor and was able to gracefully occupy the presence of the annoying family member who is always pressuring everyone for family photos. She then discreetly handed off a memory card and pointed out the victim.

I'm always looking for opportunities to work live photoshopping into performance. I feel that it has a lot of potential as a performance tool, especially with scripting. After the photo was handed off via memory card, it was roughly 45 seconds before the masked, feathered, and resized face was correctly named in the special Resources folder that Unity used to texture a sprite living within the compositions that made up the embarrassing family photos.

public GameObject kevinObject;
public GameObject[] photos;
private Texture kevinTexture;
private int kevinIndex = 0;


Texture2D loadKevinTexture(){
    string filePath = "Assets/Resources/kevin.png";
    Texture2D tex = null;
    byte[] fileData;

    if(!File.Exists(filePath)){
    	filePath = "Assets/Resources/kevin_sample.png";
    }

    if(File.Exists(filePath)){
        fileData = File.ReadAllBytes(filePath);
        tex = new Texture2D(2, 2);
        tex.LoadImage(fileData);
    }
    return tex;
}

void showNextImage(){
	kevinIndex++;
	if(kevinIndex >= photos.Length){
		kevinIndex = 0;
	}
	showImage(kevinIndex);
}

void showImage(int index){
	for(int i = 0; i < photos.Length; i++){
		GameObject photo = photos[i];
		bool _active = false;
		if(i == index){
			_active = true;
		}
		photo.SetActive(_active);
	}
}


Pretty self-explanatory code here. loadKevinTexture() allows the executable to load new image assets from the Resources folder, falling back to a sample image if the real one isn't there yet. showImage() demonstrates how you can basically turn a series of positioned GameObjects inside Unity into a PowerPoint presentation. I eventually wanted to replace this function with an animation of a 3d page turning.
TODO: Web version of this trick.
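The excerpt doesn't show how the loaded texture actually ends up on the face object. A minimal sketch, assuming the face is a textured quad with a standard Renderer (rather than a Unity SpriteRenderer) and that reloading and advancing are triggered by hypothetical backstage keypresses:

// Reload the photoshopped face from Resources and apply it to the face
// object; advance the "slideshow" with another key.
void Update(){
	if(Input.GetKeyDown(KeyCode.R)){
		kevinTexture = loadKevinTexture();
		kevinObject.GetComponent<Renderer>().material.mainTexture = kevinTexture;
	}
	if(Input.GetKeyDown(KeyCode.Space)){
		showNextImage();
	}
}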

The semi-translucent snowglobe was a circular-mapped rear projection (via Resolume) onto $2 Home Depot dropcloth plastic wrapped around a hula-hoop. I gave Sarah a discreet MIDI controller with which she could fade in hentai whenever she felt her son had been thinking about it.

Eryka's Virtual Ventriloquism

Source code

This was one of the more abstract, conceptual, and elegant pieces of the show. It directly confronted the ways in which the intrinsically personal experience of VR can be presented to an audience, and what that means about the ideas of 'play' and 'performance.' Instead of a marionette being pulled by a string, Eryka's body was confined to the umbilical cord of the VR headset. When she tugged on her own string, it affected the head of the puppet that looked down upon her.

The technical process was for Eryka's physical position to manipulate the puppet on two separate axes, as well as guide its orientation. What this accomplishes is the eerie effect of a magic portrait that follows you around the room.

How was Eryka's face modeled? At first I captured a high-poly version of Eryka's face with 123D Catch. Eryka's vision involved a transition between an embryonic texture and a detailed adult self-portrait on one axis, and two different expressions (Lust and Rage) on the other axis. The high-poly 3d scan proved to be a little too cumbersome to actually animate the expressions, so I ended up throwing it away after using it as a size template and symmetrically hand-modeled the face in Maya.


This symmetrical model allowed me to make two blendshapes of two different expressions.

These blend shapes are attached to an .fbx and are accessible within Unity.


public GameObject faceObject;
private Vector3 mousePosition;
private SkinnedMeshRenderer faceRend;
private SkinnedMeshRenderer noFaceRend;
private Mesh faceMesh;
private Mesh noFaceMesh;
private Material faceMaterial;

void Start () {
	setupMeshes();
}


void setupMeshes(){
	GameObject face = faceObject.transform.Find("face").gameObject;
	GameObject noFace = faceObject.transform.Find("face_no_texture").gameObject;
	faceRend = face.GetComponent<SkinnedMeshRenderer>();
	noFaceRend = noFace.GetComponent<SkinnedMeshRenderer>();
}

...

Why are there two meshes in this code? I found that the most painless way to transition between the two textures was to write a custom shader with an alpha fade, and fade one mesh to be completely transparent while revealing an unchanging, static second version of the model.

Shader "Unlit/UnlitAlphaWithFade"
 {
     Properties
     {
         _Color ("Color Tint", Color) = (1,1,1,1)
         _MainTex ("Base (RGB) Alpha (A)", 2D) = "white"
     }

     Category
     {
         Lighting On
         ZWrite On
         Cull back
         Blend SrcAlpha OneMinusSrcAlpha
         Tags {Queue=Transparent}
         SubShader
         {
              Pass
              {
                  SetTexture [_MainTex] {
                      ConstantColor [_Color]
                      Combine Texture * constant
                  }
              }
         }
     }
 }

That's the transparent shader. Back in the script, blendTexture() drives the fade:

void blendTexture(Vector3 pos){
	float alpha = pos.x;
	Color color = faceMaterial.color;
	faceMaterial.color = new Color(color.r, color.g, color.b, alpha);
}

The alpha can then be manipulated through the alpha channel of the material's color property.


void blendExpression(Vector3 pos){
	float blendY;
	if(pos.y > .5f){
		blendY = (pos.y - .5f) * 200f;
		blendLust(blendY);
	} else {
		blendY = (.5f - pos.y) * 200f;
		blendRage(blendY);
	}
}

void blendLust(float blend){
	faceRend.SetBlendShapeWeight(0, blend);
	noFaceRend.SetBlendShapeWeight(0, blend);
}

void blendRage(float blend){
	faceRend.SetBlendShapeWeight(1, blend);
	noFaceRend.SetBlendShapeWeight(1, blend);
}


SetBlendShapeWeight() is how we blend in each of the mesh's two blend shapes. It takes a value between 0 & 100. Since Eryka wanted the middle of the screen to have no expression, we start outwards from the center with blendExpression(). I made a test that used mouse position before taking the VR tracking position as an input.

Vector3 getRelativePosition(Vector3 mousePosition){
	float newX = Input.mousePosition.x / Screen.width;
	float newY = Input.mousePosition.y / Screen.height;
	Vector3 relativePosition = new Vector3(
		newX,
		newY,
		mousePosition.z
	);
	return relativePosition;
}

The relative position calculated from the x & y mouse position was then easily translated to the x & z of the Vive headset.

void Update(){
	vivePosition = viveHeadset.transform.position;
	Vector3 relativePosition = getRelativePosition(vivePosition, groundPlane);
	lookAtVive(vivePosition);
	// blendExpression() reads the y component and blendTexture() reads the
	// x component, so remap the Vive's x and z into those slots.
	Vector3 blendInput = new Vector3(relativePosition.z, 1f - relativePosition.x, 0f);
	blendExpression(blendInput);
	blendTexture(blendInput);
}

Vector3 getRelativePosition(Vector3 vivePosition, Collider groundPlane){
	Bounds b = groundPlane.bounds;
	float xSize = b.max.x - b.min.x;
	float zSize = b.max.z - b.min.z;
	float relativeX = Mathf.Abs((b.max.x - vivePosition.x)/xSize);
	float relativeZ = Mathf.Abs((b.max.z - vivePosition.z)/zSize);
	Vector3 relativePosition = new Vector3(
		relativeX,
		0f,
		relativeZ
	);
	return relativePosition;
}
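lookAtVive() isn't included in the excerpt; assuming it does nothing more than orient the face toward the headset, a minimal sketch would be:

// Hypothetical sketch of lookAtVive(): rotate the face so it always
// stares back at the headset's world position.
void lookAtVive(Vector3 vivePosition){
	faceObject.transform.LookAt(vivePosition);
}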

This is also an installation that requires some calibration. I quickly made ad-hoc 'virtual floor tape' by creating a manual ground plane within the virtual environment that mapped to the specific dimensions of the physical stage. This ground plane wasn't visible to audience members because I made a stationary virtual camera that was focused on the head. We made the background green so we could chromakey it within Resolume to prepare for transitions between sets. I can only imagine what it was like to dance, technically blindfolded, within this field of white and bright green sky during the performance.
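For what it's worth, the keyable green background is just the stationary camera's clear color. A small setup sketch, assuming a reference to that camera and a hypothetical setupChromaKey() helper:

// Solid green clear color on the audience-facing camera so Resolume
// can chromakey the face out of the feed.
public Camera outputCamera;

void setupChromaKey(){
	outputCamera.clearFlags = CameraClearFlags.SolidColor;
	outputCamera.backgroundColor = Color.green;
}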