LRCH Projection Mapping Show

LRCH_blueprints

Photo by Waid Raney

The show went very well overall with lots of positive feedback. Full wrap-up post forthcoming, but for now, here is the event.


LRCH Teaser

I needed to get a video to local media to tease the projection mapping project and to show something at a press conference on Monday, August 28th. I don’t really want to show any of it until it premieres on September 23rd, for a couple of reasons. First, the piece is a little over eight minutes long, so showing practically anything gives away a lot, IMO. Second, part of the power of projection mapping is the illusion, often the surprise, which can only be fully experienced at the event – on the building – live.

I know we need to build an audience for this, but I wish there were a better way than previewing the work. So, this video shows some of the production process and two short moments that are in the final piece. The rest are tests or shots that might make it into the final piece, but not necessarily the way they look in this video.

Live motion capture performance

I’m still hard at work on the Central High projection mapping project. However, I wrote 90% of this post months ago and just let it sit around, and I really wanted to get it published – so here it is.

Back in April a student of mine, Anna Udochi, and I installed a live motion capture (mocap) artwork for her senior BA exhibit in the Baum Gallery on the UCA campus. The installation featured actor Jordan Boyett in a mocap suit performing a character created by Anna. Jordan would interact with the gallery audience as they found their way into the room dedicated to the installation. He could see and hear the people looking at a large TV in the gallery space. The TV showed Anna’s character, Class Gales, standing in a pastoral environment. Jordan interacted with his audience by speaking to them and making specific comments, such as mentioning what they were wearing, to let them know that he was a live character. He then had a short impromptu Q&A with the viewer, often about the biography of Class.

Gallery space

This project started in the fall semester of 2016. I worked with Anna as her mentor for her Art department 4388 course. She wanted to dig into 3D animation, especially character design. After throwing different ideas around, I mentioned that I have a motion capture system and that, if she was interested, she could use it for her project. Additionally, I thought that doing a live performance of her character design would also let her/us learn a game engine, which would give her the best realtime rendering and would work with the mocap system. To my surprise she was very interested, so we developed a timeline to make it happen.

For the rest of the fall semester she learned to model and rig a character appropriate for realtime motion capture and export to the Unreal game engine. The character work was done in Blender. She also used Nvidia’s cloth tools to set up the character’s veil and cape. At the end of the semester we got Jordan to come in, wear the suit, and test the whole pipeline. By early spring, Anna had created a third-person game so Class could run around the countryside using a gamepad. This allowed her installation to have something running when Jordan was not there driving the character.

Class Gales in Blender

Class in his pastoral environment as seen in front of the Unreal Engine editor

To make the installation happen we ended up throwing a lot of gear at it. Jordan wore the suit to articulate the character. We needed audio and video of the gallery space for Jordan to see and hear in the adjacent room. Then his voice needed to be heard in the gallery. Last, we needed to send the realtime game engine output to the TV in the gallery and be able to see it in the performance room. It was a mix of my own equipment and school equipment. I didn’t want to use any school equipment that would normally be checked out by Film students since we would need the gear for most of April.

Here’s how we did it:

  • The mocap suit sends data to software created by the same company that makes the suit. That data is then broadcast and picked up by a plugin running in Unreal Engine, which drives the character with the constantly incoming data (a rough sketch of this data flow follows the list).
  • In the gallery we used a flat screen TV that was already hung rather than a projector.
  • A little Canon VIXIA camera was mounted under the TV and a speaker was set on top of the screen. The camera sent an image and sound of the people in the gallery to Jordan, who was in the storage/work room next door. The speaker carried the actor’s voice to the audience.
  • The computer was an old Windows PC I built several years ago for rendering that I let Anna use to do the Unreal stuff. It barely ran the mocap software and Unreal at the same time, but it somehow made it happen and never crashed.
  • The computer was cabled to a projector via VGA and out the projector’s pass-through to the TV. It worked, but VGA doesn’t carry audio the way HDMI does. The TV’s audio input that pairs with VGA/RGB was optical only, so I put in a separate speaker instead of using the TV’s.
  • Audio from Jordan: mic into an old signal processor we were getting rid of at school, which applied a pitch shift to his voice. From there into my old little Behringer mixer. I needed an amp since I was using a passive speaker, so I used an old Pro-Logic tuner I had…
  • The TV for Jordan was an old Dell monitor that had a million inputs, but its internal speaker wouldn’t work, so the audio coming from the camera went to a little Roland powered speaker.
  • AV wiring between the two rooms ran through three long BNC cables we had on hand, with BNC-to-RCA adapters at each end.
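For anyone curious about the data flow in the first bullet, here is a rough Python sketch of the general idea only: the suit’s software streams skeleton data over the network, and a client (in our case the Unreal plugin) reads it and applies it to the character every frame. The host, port, and frame format below are placeholders, not the actual Perception Neuron or Unreal plugin interfaces.

# Illustrative sketch only: a generic client reading streamed mocap frames.
# The real pipeline was the suit's software broadcasting to an Unreal plugin;
# the address and line format here are made up.
import socket

HOST, PORT = "127.0.0.1", 7001   # placeholder address for the mocap software

with socket.create_connection((HOST, PORT)) as conn:
    buffer = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break                 # the mocap software stopped streaming
        buffer += data
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            # Assume each line is one frame of joint values; a game engine
            # plugin would apply these to the character's skeleton.
            joints = line.decode(errors="ignore").split()
            print("frame with", len(joints), "values")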

Jordan in the mocap room next to the gallery

What could have been better…

If the computer had been more powerful, I would have run Jordan’s mic into it and set up Max or Pure Data to synthesize his voice rather than using a dedicated piece of equipment for that. We also would have tracked down two long HDMI cables for the gallery screen and camera, which would have simplified our cabling. We spent almost no money, though. Anna bought a USB extension cable for the game controller, but that was it. Two other lessons learned were that the Canon camera did not have a wide enough lens, so Jordan did not have a view of the whole gallery space; and I should have had him wear headphones to hear the audience rather than using a speaker – we fought a little with the audience audio bleeding into his microphone.

Next steps

Realtime facial mocap on a budget. I think I’ve got it figured out, but I did not have the time to implement it. Also, I see myself doing a lot more with Unreal once I learn it better. It can render some beautiful stuff in realtime and it is a great platform for VR. The mocap suit is also being used for VR so your avatar can have a body when you look down.

Final thoughts

The project was very successful. It was seamless to the audience and we were able to see real responses from people of all ages as they were often startled (“who’s talking to me?”) into realizing that there was a virtual character interacting with them. The kids seemed to get into it with no problems, while several adults were freaked out or suspicious while talking to Class. I am also very proud of Anna’s work and Jordan’s ability to learn to improvise and drive the character.

The reason I did this project was that I originally purchased the mocap suit to do realtime character stuff, as opposed to using it to record human movement, but I hadn’t been doing very much. I also wanted to start learning Unreal Engine. Luckily Anna was up for it and I knew she could pull it off working mostly on her own for the character creation and world building in Unreal. Originally she just wanted to learn 3D and make a character she had designed, but she was really into the idea of making the character live. The downside was that we could only meet a few times during the spring semester since I was on sabbatical leave. I also wasn’t able to put as much time into helping with Unreal as I wanted to due to the Central High project needing most of my attention.

This was the first time since I came to UCA that I’ve worked with a student on a shared project and research. Technically it was her work, but she was also doing a lot for my own creative research interests, so rather than just being a mentor to her, she was also a research assistant for me. Until this project I had really only been in a mentor role with students at UCA: they do their work and I do my work, completely separately. The project with Anna also finally picked up where I left off with the realtime performance mocap work I was doing with my grad students and faculty collaborators at Purdue back in 2001-2004.

Late in 2016, another realtime mocap performance project got a lot of buzz. From their interviews one would think no one had ever thought of using a realtime mocap’d actor before them, but their work really was amazing in scope and bleeding edge. I think we could meet somewhere in between to make live mocap performances viable for lots of different events and art forms.

Thanks so much to Brian Young, Director of the Baum Gallery, for helping us with the installation!

 

LRCH Update – BTS

It’s been a while since I’ve updated. I’ve spent most of my time working on the Central High project, of course, but in late May and early June I took some time to do some outside projects and personal projects. One of the main reasons for a lack of updates is that I just don’t want to show anyone the work – it’s a surprise!

Progress Report

I had to do a progress report for the NEA so some of the following is a little stiffer than usual, but it was easier to copy it and tweak than re-write.

The animation has four parts: an opening “construction” section, a “school life” section, the desegregation crisis, and a “future”-themed close. They are not equal in length since they follow sections of the music and carry different weight in the piece. The “school life” section is 98% complete, as are some transition pieces; together they make up about a third of the eight minutes of music. The opening section is well underway and will be the next to finish. Desegregation is also well underway, but it is the most reliant on outside resources, such as photos and films, which slows the process. The close is still in pre-production since it takes its identity from the other sections and from the input the artists are getting from the school, park, and community.

schoolLife-projection

A shot from the School Life section. Perimeter lighting is not finalized, nor is the blue color in the windows.

To create the school life and close sections, the artists interviewed LRCH student council representatives and researched historic school newspapers and yearbooks found at the Arkansas State Archives as well as the LRCH library. The librarian and Gayle Seymour were also able to find resources from alumni. The librarian produced a bound version of the book released at the opening of the school in 1927, which has been helpful for the opening section. The National Park also had several resources for the opening.

The desegregation section is the most important and will carry the most audience expectations, so it is getting the most attention. The artists have run into several copyright issues around using iconic photographs from the crisis, so they are compiling as much original media as is possible and legal, and will design the rest of the piece with a more artistic treatment rather than a documentary one, which fits the overall mission of the piece.

Coverings

The team worked with the LRCH Principal, Nancy Rousseau, to determine what could be covered for projection. The dark woodwork and clear glass on the doors and arched windows create virtual holes in the projection, so they must be covered with a lighter material that matches the surrounding stone. The doors and windows cannot be covered during daylight hours because visitors to the site take pictures of the building. It was agreed that the arched windows can be covered on the interior, which makes it possible to project onto the glass panels. The wood in the windows will still be visible, so the projection designs were changed to accommodate it. The doors will be covered by an exterior flat drape that can be hung minutes before the event. That drape is currently in the design phase to determine the best way to attach it to the building quickly. The exterior wall sconces will also be covered at the same time as the doors. Those covers are also in the design phase.

Logistics (re-written from the NEA report)

We decided to handle the generator and scaffold for the projectors locally and that’s turned into quite a drawn-out process of qualifying the equipment. Also, the funding source for these and the sound system wanted multiple quotes so we are drawing it out with three vendors… Hopefully this part will come to a close in the next day or so.

Something Unexpected

Thanks to this blog, I was contacted by a member of the Morehead Planetarium at UNC Chapel Hill. He was interested in using my LRCH model for their upcoming production that focuses on the American South. Pretty cool.

Geek Stuff

Software and hardware being used to do the project:

Software
Blender – 3D animation. Easily my favorite computer program; I wish I could use it for practically everything I do on a computer. I just like working in it because it is so customizable and responsive.

Affinity Designer – vector-based texture maps and working with vector art, such as logos. I hate Adobe Illustrator with a passion – just never liked it. I originally learned vector graphics on FreeHand, but it was bought and killed by Adobe years ago. I’m not much of a vector graphics person anyway, so if a program doesn’t behave like I expect, or is slow, I make it go away. Designer is fast and easy to use, and is especially friendly for cleaning up vector graphics.

Affinity Photo – raster-based texture maps and photo manipulation. I haven’t opened Photoshop in over 6 months. Photoshop is probably my favorite Adobe app, but Photo just feels nice. A few things that I think work well compared to Photoshop: great vector integration – it’s like a healthy marriage between a raster and a vector program; all adjustments, such as levels, are treated like adjustment layers that can affect either everything below them in the layer stack or a single layer; and it’s fast.

After Effects – compositing 3D rendered and 2D images. I’m using this more out of habit than anything else, and because I’ll be sharing projects with my fellow artists.

Apple Motion – I used this on a couple animated textures for the school life section. It’s super fast and I have always liked it. My only gripe is the timeline can get crowded.

Premiere Pro – using this to compile all of the rendered sequences and audio. I’m sharing it with my fellow artists; otherwise I would not use it.

DaVinci Resolve (14, latest beta) – used to create a photo sequence for the school life section. I really like 14. It finally has a fast and responsive UI. The sequence had three areas on the screen for photos, so I used three tracks and transformed the photos on each (position, scale): track 1 for left, track 2 for center, track 3 for right. I started doing this in Premiere, but it was sluggish with the high-res photos. Resolve handled them easily. I assumed FCPX wouldn’t be good for what I was doing, but I later found it would have done a great job and was faster than Resolve. Here’s why I know that.

I built the photo sequence in Resolve expecting to composite it over the 3D imagery in After Effects. I thought I would use the alpha channel to handle times when I needed pure transparency. Turns out Resolve can’t export a full timeline with transparency, but it can with individual clips. I needed the full timeline!!! I tried sending the sequence to Premiere via XML and AAF. The AAF failed to import, but XML did an okay job, except that the far left and right images were not in the correct places. Same issue with FCPX: left and right were moved outward. I started moving them back in FCPX just to see what would happen, and I found it to be super fast; though there technically aren’t tracks, they sure looked like it in the timeline. Since I was multiplying one of the sequences in AE, I decided to just change the background to white in Resolve – done. The other was on a hard light blend mode, so I had to make its background medium gray – done.

Lesson learned – Resolve rocks so far, but don’t expect an alpha channel on the full sequence.

Hardware

1st gen 5K iMac – Blender modeling and animation and all other apps. Best monitor ever – so crisp. Blender can handle custom DPI in its interface, so it looks smooth and is easy to read because I can push the text and buttons up to a little more than the 2x DPI that other apps use.

Downside – pixel doubling makes lower-res images look soft. The HD resolution we are using makes the images small when viewing 1:1 in graphics programs. This monitor is made for 4K production, so HD is small…

Custom Linux PC – Blender rendering using two GPUs for acceleration. Technically a faster computer than the iMac for multi-threaded apps because it has 6 hardware CPU cores. I should try After Effects on the Windows side of it, but I despise working in Windows – I only have it there to learn VR via Unreal Editor and use my mocap system.

 

LRHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back, and then project that animation back onto the facade model, which has a texture similar to the real building, to simulate what it will look like on site.

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that’s what the name means: render + walk, because you can’t do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statues’ arms will be quite a distance from the actual statues due to the facade’s depth. This isn’t much of an issue when looking at the building from front-center, especially near the projector, but off-axis I felt it might suck.

So I rendered a view off-axis to check.

I didn’t like it for two reasons. One, my original hypothesis was correct and the arms are pretty far away. This is only an issue for about a third of the crowd, thanks to the trees that force the audience towards the center of the viewing area, but I still don’t like it. The other reason is that any illumination on the actual statues makes them stand out as statues, so I feel like we won’t be able to really remove them like I had hoped. The side view does look cool, even without illumination on the sides of the pillars and arches. It’s possible to project onto those too, but that’s beyond this project’s budget.

So I created a new animation. This is better in terms of having the statues seen when I want them to be seen. However, there is a moment when I have the statue “niches” rise up behind them, and by then it’s too late – they can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette – subtlety will be lost as soon as there is any light on them.

I’ve left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model for now. It is our goal to cover those with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.

blueprint

Geek stuff warning

When I was preparing the model to simulate the projection on the building, I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get it right. I had to spend some time moving parts of the statues around until they properly aligned with the real statues. I also tweaked the building, windows, and doors a little. It was a one-step-forward, two-steps-back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation came to 4500 frames, plus some re-rendering of sections after deciding to make tweaks. I use two computers to render. One is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders using its CPU (4 CPU cores/8 hyperthreaded cores) and the PC renders using two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac renders because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration – including, BTW, developers at AMD working to make Cycles no longer limited to Nvidia GPUs. I say unfortunately because on one of the animations I found the PC’s Cycles render was crashing every 40 frames or so. It’s a sad morning when you see that the faster computer hadn’t been rendering for the last 6+ hours…

I don’t have time to troubleshoot the issue. It’s a mix of Blender/Cycles and Nvidia software and it’s not that bad in the grand scheme of things. To deal with it I decided to dust off a python script I wrote several years ago for a compute cluster we had at UCA. It created a job script for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file for you Windows weirdos) that I could run so that Blender would render each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells blender to start without a UI to save resources and then render the animation based on the project’s settings)

To these lines, listed in a shell script that I start by running render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2
(then render frame 2)
blender -b blendfile.blend -f 3
(then render frame 3)

Works like a charm. I could make the python script do a lot more tricks, but for now this is nice.
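If you’re curious, a stripped-down version of that generator looks something like this – not my original script, just the core idea, with a placeholder blend file name and frame range:

# Minimal sketch of a per-frame job generator (placeholder file name and range).
# Writes render.sh with one Blender command per frame so a crash only costs
# a single frame instead of the rest of the animation.
blend_file = "blendfile.blend"
start_frame, end_frame = 1, 750

with open("render.sh", "w") as f:
    f.write("#!/bin/sh\n")
    for frame in range(start_frame, end_frame + 1):
        f.write(f"blender -b {blend_file} -f {frame}\n")

Make it executable with chmod +x render.sh and it’s ready to run.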

Last, Blender has a simple method of allowing multiple computers to render the same animation without using a render management application. Set the output to not overwrite and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder), and if it sees it then it will look for frame 2, etc. When it finds a frame that hasn’t been rendered, it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer claims a frame as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
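Those two options live in Blender’s output settings, but if you’d rather flip them from a script, this is roughly what it looks like with Blender’s Python API (a sketch, run from inside Blender):

# Sketch: enable multi-computer rendering via placeholders from Blender's Python API.
import bpy

render = bpy.context.scene.render
render.use_overwrite = False     # don't re-render frames that already exist
render.use_placeholder = True    # claim a frame by writing an empty placeholder first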

Since I’m not using a management system, there is no check to make sure a frame actually gets rendered properly, so I also wrote a python script back in the day that looks for frames with zero bytes to tell me if there are bad frames. I might automate that with my other script, but I don’t want to dedicate the time to that right now. The macOS Finder does a nice job of showing “zero bytes,” which stands out in a list, or of listing by size, so I’ve manually deleted bad frames too. To render the bad ones after deleting them, I just run the first command with the “-a” again to find the missing frames and render them.
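That checker is only a few lines; a simplified version of the idea looks like this (the output folder path is a placeholder):

# Sketch of a zero-byte frame checker (simplified; the path is a placeholder).
import os

output_dir = "/path/to/rendered/frames"

bad_frames = [name for name in sorted(os.listdir(output_dir))
              if os.path.getsize(os.path.join(output_dir, name)) == 0]

for name in bad_frames:
    print("zero-byte frame:", name)
print(len(bad_frames), "bad frame(s) found")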

LRCHS – Projection Mapping – 1st Post

The 90th anniversary of the opening of the Little Rock Central High School building and the 60th anniversary of the Desegregation Crisis are coming September 18-25, 2017. It will be a week of activities that commemorates the anniversaries and culminates in an event that features a projection mapped animation on the façade of the high school building.

This first blog post is about a major milestone for the animation: a completed virtual 3D model of the facade, including its four statues. Now that the model is complete we can finally get to work. The majority of the animation we create will be based on the architectural structure of the facade. I can’t believe February is almost over! It took me over a week longer than I expected to finish this phase of the project due to distractions, including an illness that caused horrible headaches, as well as outside issues and projects and some personal goals beyond the projection mapping project. Hopefully the headaches are past – I can manage the rest.

Here’s the basic model:

4statues

We can add lighting that can make it appear as if we’ve hung actual lights near the building:

spotlights

We can also play around (this is just a test and not final imagery):

lightjade

And add stuff:

1927

Here’s what it should look like on campus. We intend to add some lighting around the central facade as well.

projectiontest

The Facade

The limestone part of the high school’s main entry has several nice 1920s Art Deco details and is sculptural in nature, with deep-set doors and windows and jutting pedestals for the four statues. I still need to add the letters for the statues. We will hopefully be able to temporarily cover the windows and doors so they won’t be so dark. We will also need to cover the lanterns so they will reflect the projections.

dsc00432

Ambition, Personality, Opportunity, and Preparation

When facing the building the four statues from left to right are Ambition (male), Personality (female), Opportunity (female), and Preparation (male).

I’ve been told that the four statues were “ordered from a catalog” and not unique to the building project. Their body styles are reminiscent of Michelangelo sculptures with their long muscular arms and Greek facial features. Preparation must have been the sculptor’s version of David – see his contrapposto stance, physique, lowered right arm (holding a scroll in this case), raised left arm holding a book instead of a sling, and a left-facing gaze.

ch-interior_110   ‘David’ by Michelangelo

Their dress is based on the ancient Greek chiton. The sculptural style is “wet drape,” where the cloth clings to the skin to reveal the figure’s body underneath. This is most obvious in Preparation, whose torso looks practically bare, and you can see it in Opportunity as well. I modeled these statues by starting with nudes so I could get the wet drape look right.

I think later blog posts will go on another website dedicated to this project. Geeky stuff will stay on this blog though.

Geek Stuff (most of you will want to skip this)

I modeled the facade by building basic geometric shapes and aligning them to a photograph I took last summer. I actually got most of this model finished by last fall. In January I added the smaller details and lanterns.

The statues were very time consuming and I knew they would be… I downloaded a few nude “base models” from Blendswap, which are designed to be a starting place for creating a character. For the females, I used the body of one and the hands and head of another. After splicing them together I pushed and pulled and extruded faces, edges, and vertices to make them match the sculpture. I also used sculpting tools to smooth and guide areas of the model. The models are considered low-poly, which makes them easy to animate and handle in the 3D software. When they are rendered they are smoothed using Pixar’s subdivision surface technology. It turns a blocky mess of polygons into flowing garments.

For the capes I essentially started with a line and extruded it and moved it to create the overlapping folds. For smaller details I just cut the larger polygonal faces into smaller ones that I could then push, pull, and sculpt into their final form.

Once a model seemed ready to go I aligned it with the main photo of the facade. I had closeups of the statues to do most of the work, but since the photos were taken from below, the proportions were not accurate so aligning with the main photo was key to getting them the correct overall size. Because of the proportion issues and a number of other things, I modeled them just looking at my photos rather than trying to align them to photos in the 3D viewport, which is common for character design.

While modeling, the virtual statue is standing in a T-pose. I used a T-pose because we will most likely apply some custom motion capture animation, and our motion capture system (Perception Neuron) requires a T-pose to start. Another common starting point for a character model is an A-pose, which is more relaxed, but that is not a good idea for our purposes.

After getting the proportions correct I added a skeleton to the model. The skeleton is based on the needs of the motion capture system. The model is bound to the skeleton, so whenever I move a bone the model will deform with it. I used the bones to pose the model to match the statues. I actually animated the movement so I could go back to the T-pose easily, as well as test the model deformations as the bones moved. Some of the dress is not driven by the skeleton at the moment; that will come later via cloth simulations.

opportunityposing
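For the curious, here is roughly what that binding and posing looks like from Blender’s Python side. It’s only a sketch – I did all of this through the regular interface, and the object and bone names below are placeholders:

# Sketch: bind a mesh to an armature and keyframe a simple pose (placeholder names).
import bpy

statue = bpy.data.objects["Opportunity"]    # the statue mesh (placeholder name)
rig = bpy.data.objects["StatueRig"]         # the mocap-friendly skeleton (placeholder name)

# Bind the mesh to the skeleton with an Armature modifier
# (vertex weights still need to be assigned or painted separately).
mod = statue.modifiers.new(name="Armature", type='ARMATURE')
mod.object = rig

# Keyframe the T-pose at frame 1, then rotate a bone and key the posed result
# so it's easy to scrub back to the T-pose later.
bone = rig.pose.bones["upper_arm.L"]        # placeholder bone name
bone.rotation_mode = 'XYZ'
bone.keyframe_insert(data_path="rotation_euler", frame=1)
bone.rotation_euler = (0.0, 0.0, 1.2)       # radians – just an example pose
bone.keyframe_insert(data_path="rotation_euler", frame=30)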

I modeled the statues this way because I knew we would be animating them and they needed a structure that would support animation. A more accurate alternative to modeling by eye would have been to scan the actual sculptures. Scanning could be done via LIDAR, but would have been prohibitively expensive. Or, it can be done with lots of photographs from multiple angles via photogrammetry. Shooting the sculptures with a drone and extracting frames from the video would have been a way to get the images needed.

The upside to scanning would be a very accurate model, but there are downsides. One is that the scan would have to be retopologized to make it animatable, which can be time-intensive. Another is that the models would not have a backside, and the arms would be stuck to the bodies, so they would need hand modeling to create the back and free the arms. I would have been up for these things had they been scanned last fall. Unfortunately, the statues are 22 feet above the ground, so logistically it is not a trivial issue to get to them.

From here it is a matter of lighting, creating cool surface materials, animating the statues, opening the doors, or whatever else we come up with. Even things that don’t directly change the facade, such as showing a photo, will be rendered against the virtual facade so the photo will appear to interact with the building.

Blender

screenshot

I used Blender to do all of this work. It is just a joy to use. Some things that came in handy (these aren’t necessarily unique to Blender BTW):

  • Used photos as a background in the camera viewport to help create a 3D environment similar in size to the actual building.
  • Changed one of my 3D panels into an image viewer so I could have a photo of a statue up at all times.
  • The Shift key – I use a Wacom Intuos 4 Medium when working with graphics software. It has a bad habit of moving during a click, or of not quite making the small mark I intended. When changing a parameter in Blender (practically no matter what it is), you can hold down the Shift key and it will increase the precision of the change by not allowing the value to jump drastically no matter how much you move the stylus. I can make big movements to make small changes. BTW, some graphics programs do have a similar function, just not all…
  • Matcaps – haven’t really used them before, but they make modeling organic forms much easier. They allow you to customize how the model is shaded in the viewport so you can see the curved surfaces easier.
  • Proportional Editing – Used when moving a vertex or small group of vertices and wanting surrounding vertices to move with them, but not as much. Super helpful when making proportion changes or needing to move parts of the model around to accommodate the posed body. Especially useful is the “Connected” mode where it will only move vertices connected to the one you are moving rather than ones that are just nearby. You can also change the falloff to control how the other non-selected vertices will change. BTW, this works on more than just vertices, just using that as an example.
  • Subdivision Surfaces – Blender can show the subd effect while editing the model, either by showing the base model and the smoothing separately or by bending the base model’s edges along the surface of the smoothed model. This really helps you see how changes to the low-resolution model will change the smoothed result.
  • Solidify modifier – I made the capes a single polygon thick and used this modifier to give them dimensional thickness. When sending the models out to Jim and Jonathan, who use Cinema4D and Maya, I will “Apply” this effect to make the geometry permanent (see the sketch after this list).
  • Cycles with two GPUs – it’s so fast! For doing test renderings and making the images in this blog post, it is amazing how fast Cycles can be. Each of the images here took about a minute and a half to render. It’s also crazy easy to make objects into light sources. I do most of the work on my iMac and then switch over to my Linux computer for rendering.
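Since a couple of the items above mention modifiers, here is roughly what the Solidify (plus a subdivision surface) setup looks like when scripted with Blender’s Python API. It’s a sketch with a placeholder object name and values – I normally just do this in the interface:

# Sketch: add Subdivision Surface and Solidify modifiers to a cape mesh, then
# apply Solidify before export (placeholder object name and values).
import bpy

cape = bpy.data.objects["Cape"]             # single-thickness cape mesh (placeholder)

subsurf = cape.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2                           # viewport smoothing
subsurf.render_levels = 3                    # extra smoothing at render time

solid = cape.modifiers.new(name="Solidify", type='SOLIDIFY')
solid.thickness = 0.02                       # example thickness

# Make the thickness permanent before handing the model to the Cinema4D/Maya users.
bpy.context.view_layer.objects.active = cape
bpy.ops.object.modifier_apply(modifier="Solidify")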