Live motion capture performance

I’m still hard at work on the Central High projection mapping project. However, I wrote 90% of this post months ago and just let it sit around, and I really wanted to get it published – so here it is.

Back in April, a student of mine, Anna Udochi, and I installed a live motion capture (mocap) artwork for her senior BA exhibit in the Baum Gallery on the UCA campus. The installation featured actor Jordan Boyett in a mocap suit performing a character created by Anna. Jordan would interact with the gallery audience as they found their way into the room dedicated to the installation. From an adjacent room, he could see and hear the people looking at a large TV in the gallery space. The TV showed Anna’s character, Class Gales, standing in a pastoral environment. Jordan interacted with his audience by speaking to them and making specific comments, such as mentioning what they were wearing, to let them know that he was a live character. He then had a short impromptu Q&A with the viewer, often about the biography of Class.

Gallery space

This project started in the fall semester of 2016, when I worked with Anna as her mentor for her Art department 4388 course. She wanted to dig into 3D animation, especially character design. After throwing different ideas around, I mentioned that I had a motion capture system and that, if she was interested, she could use it for her project. I also thought that doing a live performance of her character would let both of us learn a game engine, which would give us the best realtime rendering and would work with the mocap system. To my surprise she was very interested, so we developed a timeline to make it happen.

For the rest of the fall semester she learned to model and rig a character appropriate for realtime motion capture and export to the Unreal game engine; the character work was done in Blender. She also used Nvidia’s cloth tools to set up the character’s veil and cape. At the end of the semester we got Jordan to come in, wear the suit, and test the whole pipeline. By early spring, Anna had created a third-person game so Class could run around the countryside using a gamepad. This gave her installation something to run whenever Jordan was not there driving the character.
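
The hand-off from Blender to Unreal is just an FBX export of the rigged character. If you wanted to script that step, it would look something like the snippet below – it has to run inside Blender’s Python environment, and the file path and option choices are only an illustration, not our exact settings.

```python
# Minimal sketch of the Blender-side FBX export (run inside Blender's Python
# console or as a script); the same thing can be done from File > Export.
import bpy

bpy.ops.export_scene.fbx(
    filepath="/tmp/class_gales.fbx",   # hypothetical output path
    use_selection=True,                # export only the selected character
    add_leaf_bones=False,              # skip the extra "_end" bones Unreal doesn't need
    bake_anim=False,                   # mesh and skeleton only; motion comes live from the suit
)
```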

Class Gales in Blender

Class in his pastoral environment as seen in front of the Unreal Engine editor

To make the installation happen we ended up throwing a lot of gear at it. Jordan wore the suit to articulate the character. We needed audio and video of the gallery space for Jordan to see and hear in the adjacent room, and his voice then needed to be heard in the gallery. Last, we needed to send the realtime game engine output to the TV in the gallery and still be able to see it in the performance room. The setup was a mix of my own equipment and school equipment; I didn’t want to use any school equipment that would normally be checked out by Film students, since we would need the gear for most of April.

Here’s how we did it:

  • The mocap suit sends data to software created by the same company that makes the suit. That data is then broadcast over the network and picked up by a plugin running in Unreal Engine, which drives the character with the constantly incoming data (see the sketch after this list).
  • In the gallery we used a flat screen TV that was already hung rather than a projector.
  • A little Canon VIXIA camera was mounted under the TV and a speaker was set on top of the screen. The camera sent an image and sound of the people in the gallery to Jordan, who was in the storage/work room next door, and the speaker carried the actor’s voice to the audience.
  • The computer was an old Windows PC I built several years ago for rendering that I let Anna use to do the Unreal stuff. It barely ran the mocap software and Unreal at the same time, but it somehow made it happen and never crashed.
  • The computer was cabled to a projector via VGA and then out the projector’s pass-through to the TV. This worked, but unlike HDMI, VGA carries no audio, and the TV’s audio input paired with the VGA/RGB input was optical only, so I put in a separate speaker instead of using the TV’s.
  • Audio from Jordan: mic into an old signal processor we were getting rid of at school, which pitch-shifts his voice; from there into my old little Behringer mixer. I needed an amp since I was using a passive speaker, so I used an old Pro-Logic tuner I had…
  • The TV for Jordan was an old Dell monitor that had a million inputs, but the internal speaker wouldn’t work so the audio coming from the camera was going to a little Roland powered speaker.
  • AV wiring between the two rooms ran through three long BNC cables we had, with BNC-to-RCA adapters at each end.
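
For anyone curious what “broadcast and picked up by a plugin” looks like in practice, here is a toy Python sketch of the general idea. It is not the suit’s actual protocol or the Unreal plugin – the packet format below is made up, the real capture software streams its own binary format, and the vendor’s plugin does all of the parsing and bone mapping inside Unreal.

```python
# Toy UDP listener illustrating the "broadcast mocap frames, pick them up
# elsewhere" idea. The text packet format is hypothetical:
#   one line per bone, "<bone> <x> <y> <z> <rx> <ry> <rz>", one frame per datagram.
import socket

MOCAP_PORT = 7001  # hypothetical port the capture software broadcasts on

def listen_for_frames():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", MOCAP_PORT))                 # listen on all interfaces
    print(f"Listening for mocap frames on UDP port {MOCAP_PORT}...")
    while True:
        packet, addr = sock.recvfrom(65535)     # one frame per datagram (assumed)
        frame = {}
        for line in packet.decode("ascii", errors="ignore").splitlines():
            parts = line.split()
            if len(parts) != 7:
                continue                        # skip anything we don't understand
            bone, values = parts[0], [float(v) for v in parts[1:]]
            frame[bone] = {"position": values[:3], "rotation": values[3:]}
        # In the installation, the Unreal plugin applies a pose like this to the
        # character's skeleton every tick; here we just print a summary.
        if frame:
            hips = frame.get("Hips", {}).get("position")
            print(f"Frame from {addr[0]}: {len(frame)} bones, hips at {hips}")

if __name__ == "__main__":
    listen_for_frames()
```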

Jordan in the mocap room next to the gallery

What could have been better…

If the computer had been more powerful I would have run Jordan’s mic into it and set up Max or Pure Data to process his voice rather than using a dedicated piece of hardware (a rough sketch of the idea is below). Tracking down two long HDMI cables for the gallery screen and camera would also have simplified our cabling. We spent almost no money though – Anna bought a USB extension cable for the game controller, but that was it. Two other lessons learned: the Canon camera did not have a wide enough lens, so Jordan could not see the whole gallery space; and I should have had him wear headphones to hear the audience rather than a speaker – we fought a bit with the audience audio bleeding into his microphone.
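
To give a sense of what that Max or Pure Data patch would have boiled down to, here is a rough offline sketch in Python using librosa: take the actor’s voice and shift it by a few semitones. The live version would process the microphone in realtime, and the file names and shift amount below are only placeholders, not what the hardware box actually did.

```python
# Rough, offline sketch of the voice treatment: pitch-shift the actor's voice.
# In the installation this was done live by a hardware signal processor.
import librosa
import soundfile as sf

SEMITONES = -4  # placeholder shift; sign and amount depend on the character's voice

def shift_voice(in_path="jordan_mic.wav", out_path="class_voice.wav"):
    y, sr = librosa.load(in_path, sr=None)      # keep the original sample rate
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=SEMITONES)
    sf.write(out_path, shifted, sr)
    print(f"Wrote {out_path}: {len(shifted) / sr:.1f}s, shifted {SEMITONES} semitones")

if __name__ == "__main__":
    shift_voice()
```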

Next steps

Realtime facial mocap on a budget. I think I’ve got it figured out, but I did not have the time to implement it. I also see myself doing a lot more with Unreal once I learn it better. It can render some beautiful stuff in realtime and it is a great platform for VR. The mocap suit is also being used for VR, so your avatar can have a body when you look down.

Final thoughts

The project was very successful. It was seamless to the audience and we were able to see real responses from people of all ages as they were often startled (“who’s talking to me?”) into realizing that there was a virtual character interacting with them. The kids seemed to get into it with no problems, while several adults were freaked out or suspicious while talking to Class. I am also very proud of Anna’s work and Jordan’s ability to learn to improvise and drive the character.

The reason I did this project was that I originally purchased the mocap suit to do realtime character work, as opposed to using it to record human movement, but I hadn’t been doing much of that. I also wanted to start learning Unreal Engine. Luckily Anna was up for it, and I knew she could pull it off working mostly on her own for the character creation and world building in Unreal. Originally she just wanted to learn 3D and make a character she had designed, but she was really into the idea of making the character live. The downside was that we could only meet a few times during the spring semester since I was on sabbatical leave. I also wasn’t able to put as much time into helping with Unreal as I wanted to, because the Central High project needed most of my attention.

This was the first time since I came to UCA that I’ve worked with a student on a shared project and research. Technically it was her work, but she was also doing a lot for my own creative research interests, so rather than just being a mentor to her, she was also a research assistant for me. Until this project I had really only been in a mentor role with students at UCA, with them doing their work and me doing mine completely separately. The project with Anna also finally picked up where I left off with the realtime performance mocap work I did with my grad students and faculty collaborators at Purdue back in 2001–2004.

Late in 2016, this project got a lot of buzz. From their interviews one would think no one had ever thought of using a realtime mocap’d actor before them, but their work really was amazing in scope and bleeding edge. I think we could meet somewhere in between to make live mocap performances viable for lots of different events and art forms.

Thanks so much to Brian Young, Director of the Baum Gallery, for helping us with the installation!