LRCHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back, then project that animation back onto the facade model, which has a texture similar to the real building, to simulate what it will look like on site.

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that’s where the name comes from: render + walk, because you can’t do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statues’ arms will land quite a distance from the actual statues due to the facade’s depth. This isn’t much of an issue when looking at the building from front-center, especially near the projector, but off-axis I felt like it may suck.

So I rendered a view off-axis to check.

I didn’t like it for two reasons. One, my original hypothesis was correct and the arms end up pretty far away. This is only an issue for about a third of the crowd, thanks to the trees that force the audience towards the center of the viewing area, but I still don’t like it. The other reason is that any illumination on the actual statues makes them stand out as statues, so I feel like we won’t be able to really remove them like I hoped. The side view does look cool even without illumination on the sides of the pillars and arches. It’s possible to project onto those too, but that’s beyond this project’s budget.

So I created a new animation. This one is better at making the statues visible only when I want them to be seen. However, there is a moment when I have the statue “niches” rise up behind them, and by then it’s too late: the statues can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette – subtlety will be lost as soon as there is any light on them.

For now I’ve left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model. Our goal is to cover those with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.

[image: blueprint]

Geek stuff warning

When I was preparing the model to simulate the projection on the building, I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get it right. I spent some time moving parts of the statues around until they properly aligned with the real statues, and I also tweaked the building, windows, and doors a little. It was a one-step-forward, two-steps-back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation came to 4500 frames, plus some re-rendering of sections after deciding to make some tweaks. I use two computers to render. One is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders on its CPU (4 cores, 8 hyperthreaded) and the PC renders on two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac renders because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration – including, BTW, developers at AMD working to make Cycles no longer limited to Nvidia GPUs. I say unfortunately because on one of the animations the PC’s Cycles render was crashing every 40 frames or so. It’s a sad morning when you see that the faster computer hasn’t been rendering for the last 6+ hours…

I don’t have time to troubleshoot the issue. It’s a mix of Blender/Cycles and Nvidia software, and it’s not that bad in the grand scheme of things. To deal with it I decided to dust off a Python script I wrote several years ago for a compute cluster we had at UCA, where it created a job script for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file, for you Windows weirdos) that I could run so that Blender would render each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells Blender to start without a UI to save resources and then render the animation based on the project’s settings)

To this, listed in a shell script that I start by typing ./render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2
(then render frame 2)
blender -b blendfile.blend -f 3
(then render frame 3)
…and so on, one line per frame.

Works like a charm. I could make the Python script do a lot more tricks, but for now this is nice.
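For the curious, here’s roughly what that generator boils down to – a minimal sketch, since the original script isn’t shown here; the .blend name matches the commands above and the 750-frame count matches these animations:

#!/usr/bin/env python3
# Minimal sketch of the per-frame render script generator.
blend = "blendfile.blend"
frames = range(1, 751)  # these animations were 750 frames

with open("render.sh", "w") as sh:
    sh.write("#!/bin/sh\n")
    for f in frames:
        # One Blender job per frame, so a crash only loses the frame in flight.
        sh.write(f"blender -b {blend} -f {f}\n")

Combined with the no-overwrite setting described below, re-running render.sh should simply skip the frames that already finished.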

Last, Blender has a simple method of allowing multiple computers to render the same animation without using a render management application: set the output to not overwrite and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder), and if it sees it, it will look for frame 2, etc. When it finds a frame that hasn’t been rendered it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer claims frames as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
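Those two settings live in the Output panel; if you’d rather flip them from a script, it amounts to two properties on the scene – a minimal sketch for Blender’s Python console:

import bpy

# Let multiple machines share one output folder: skip frames that already
# exist, and write a placeholder file to claim a frame before rendering it.
scene = bpy.context.scene
scene.render.use_overwrite = False
scene.render.use_placeholder = True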

Since I’m not using a management system there is no check to make sure a frame actually gets rendered properly, so I also wrote a Python script back in the day that looks for frames with zero bytes and tells me if there were bad frames. I might fold that into my other script, but I don’t want to dedicate the time to that right now. The macOS Finder does a nice job here too: “zero bytes” stands out in a list, and you can sort by size, so I’ve also manually deleted bad frames. To re-render the bad ones after deleting them I just run the first command with “-a”, which finds the missing frames and renders them.
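The zero-byte check is small enough to sketch here; this isn’t the original script, and the output folder name is a stand-in:

#!/usr/bin/env python3
# Sketch of the zero-byte frame check; "render_output" is a stand-in path.
from pathlib import Path

for frame in sorted(Path("render_output").iterdir()):
    if frame.is_file() and frame.stat().st_size == 0:
        print(f"bad frame: {frame.name}")
        frame.unlink()  # remove it so the next "-a" pass re-renders it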


LRCHS – Projection Mapping – 1st Post

The 90th anniversary of the opening of the Little Rock Central High School building and the 60th anniversary of the Desegregation Crisis are coming September 18–25, 2017. It will be a week of activities commemorating the anniversaries, culminating in an event that features a projection-mapped animation on the façade of the high school building.

This first blog post marks a major milestone for the animation: a completed virtual 3D model of the facade, including its four statues. Now that the model is complete we can finally get to work; the majority of the animation we create will be based on the architectural structure of the facade. I can’t believe February is almost over! This phase took over a week longer than I expected thanks to distractions: an illness that caused horrible headaches, plus external issues, other projects, and some personal goals beyond the projection mapping project. Hopefully the headaches are past – I can manage the rest.

Here’s the basic model:

[image: 4statues]

We can add virtual lighting that makes it appear as if we’ve hung actual lights near the building:

[image: spotlights]

We can also play around (this is just a test and not final imagery):

[image: lightjade]

And add stuff:

[image: 1927]

Here’s what it should look like on campus. We intend to add some lighting around the central facade as well.

[image: projectiontest]

The Facade

The limestone portion of the high school’s main entry has several nice 1920s Art Deco details and is sculptural in nature, with deep-set doors and windows and jutting pedestals for the four statues. I still need to add the lettering for the statues. We will hopefully be able to temporarily cover the windows and doors so they won’t be so dark, and we will also need to cover the lanterns so they reflect the projections.

[image: dsc00432]

Ambition, Personality, Opportunity, and Preparation

When facing the building the four statues from left to right are Ambition (male), Personality (female), Opportunity (female), and Preparation (male).

I’ve been told that the four statues were “ordered from a catalog” and not unique to the building project. Their body styles are reminiscent of Michelangelo sculptures, with long muscular arms and Greek facial features. Preparation must have been the sculptor’s version of David – note the contrapposto stance, the physique, the lowered right arm (holding a scroll in this case), the raised left arm holding a book instead of a sling, and the left-facing gaze.

[images: ch-interior_110, 512px-27david27_by_michelangelo_jbu0001]

Their dress is based on the ancient Greek chiton. The sculptural style is “wet drape,” where the cloth clings to the skin to reveal the figure’s body underneath. This is most obvious in Preparation, whose torso practically looks bare, and you can see it in Opportunity as well. I modeled these statues starting with nudes so I could get the wet-drape look right.

I think later blog posts will go on another website dedicated to this project. Geeky stuff will stay on this blog though.

Geek Stuff (most of you will want to skip this)

I modeled the facade by building basic geometric shapes and aligning them to a photograph I took last summer. I actually got most of this model finished by last fall. In January I added the smaller details and lanterns.

The statues were very time consuming and I knew they would be… I downloaded a few nude “base models” from Blendswap, which are designed to be a starting place for creating a character. For the females I used the body of one and the hands and head of another. After splicing them together I pushed, pulled, and extruded faces, edges, and vertices to make them match the sculptures. I also used sculpting tools to smooth and guide areas of the model. The models are considered low-poly, which makes them easy to animate and handle in the 3D software. When they are rendered they are smoothed using Pixar’s subdivision surface technology (OpenSubdiv). It turns a blocky mess of polygons into flowing garments.
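For reference, adding that smoothing amounts to a single modifier on the low-poly mesh – a sketch with a stand-in object name, assuming a current Blender Python API:

import bpy

# Keep the editing cage low-poly; subdivide more heavily at render time.
statue = bpy.data.objects["Opportunity"]  # stand-in object name
mod = statue.modifiers.new(name="Subsurf", type='SUBSURF')
mod.levels = 1         # light smoothing in the viewport
mod.render_levels = 3  # full smoothing in the final render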

For the capes I essentially started with a line, then extruded and moved it to create the overlapping folds. For smaller details I just cut the larger polygonal faces into smaller ones that I could then push, pull, and sculpt into their final form.

Once a model seemed ready to go I aligned it with the main photo of the facade. I had closeups of the statues to do most of the work, but since those photos were taken from below, the proportions were not accurate, so aligning with the main photo was key to getting the overall size correct. Because of the proportion issues and a number of other things, I modeled them just by looking at my photos rather than trying to align them to photos in the 3D viewport, which is common in character design.

While modeling, the virtual statue stands in a T-pose. I used a T-pose because we will most likely apply some custom motion-capture animation, and our motion capture system (Perception Neuron) requires a T-pose to start. Another common starting point for a character model is an A-pose, which is more relaxed, but that’s not a good idea for our purposes.

After getting the proportions correct I added a skeleton to the model, based on the needs of the motion capture system. The model is bound to the skeleton so whenever I move a bone, the model will deform with it. I used the bones to pose the model to match the statues. I actually animated the movement so I could go back to the T-pose easily as well as test the model deformations as the bones moved. Some of the dress is not driven by the skeleton at the moment; that will come later via cloth simulations.
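The binding step itself is one operator once both objects are selected – a sketch with stand-in object names, again assuming a current Blender Python API:

import bpy

# Bind the statue mesh to its armature with automatic weights so moving a
# bone deforms the mesh (object names are stand-ins).
mesh = bpy.data.objects["Opportunity"]
rig = bpy.data.objects["OpportunityRig"]
bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig  # the armature must be active
bpy.ops.object.parent_set(type='ARMATURE_AUTO')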

[image: opportunityposing]

I modeled the statues this way because I knew we would be animating them and they needed a structure that would support animation. A more accurate alternative to modeling by eye would have been to scan the actual sculptures. Scanning could be done via LIDAR, but that would have been prohibitively expensive, or with lots of photographs from multiple angles via photogrammetry. Shooting the sculptures with a drone and extracting frames from the video would have been a way to get the images needed.

The upside to scanning would be a very accurate model, but there are downsides. One is that the scan would have to be retopologized to make it animatable, which can be time-intensive. Another is that the models would not have a backside, and the arms would be stuck to the bodies, so they would need hand modeling to create the back and free the arms. I would have been up for these things had they been scanned last fall. Unfortunately they are 22 feet above the ground, so logistically it is not trivial to get to them.

From here it is a matter of lighting, creating cool surface materials, animating the statues, opening the doors, or whatever else we come up with. Even things that don’t directly change the facade, such as showing a photo, will be rendered against the virtual facade so the photo will appear to interact with the building.

Blender

[image: screenshot]

I used Blender to do all of this work. It is just a joy to use. Some things that came in handy (these aren’t necessarily unique to Blender BTW):

  • Camera background images – Use photos as a background in the camera viewport to help create a 3D environment that matches the size of the actual building.
  • Image viewer panel – I changed one of my 3D panels into an image viewer so I could have a photo of a statue up at all times.
  • The Shift key – I use a Wacom Intuos 4 Medium when working with graphics software, and it has a bad habit of moving during a click, or of not making the small mark I was trying to make. When changing practically any parameter in Blender, you can hold down Shift and it will increase the precision of the adjustment, so the value can’t change drastically no matter how much you move the stylus: big stylus movements make small changes. BTW, some graphics programs do have a similar function, just not all…
  • Matcaps – I hadn’t really used them before, but they make modeling organic forms much easier. They let you customize how the model is shaded in the viewport so you can see the curved surfaces more easily.
  • Proportional Editing – Used when moving a vertex or small group of vertices and wanting surrounding vertices to move with them, just not as much. Super helpful when making proportion changes or moving parts of the model around to accommodate the posed body. Especially useful is the “Connected” mode, where it only moves vertices connected to the one you are moving rather than ones that are merely nearby. You can also change the falloff to control how the non-selected vertices move. BTW, this works on more than just vertices; I’m just using them as an example.
  • Subdivision Surfaces – Blender can show the subd effect while editing the model, either by showing the base model and the smoothing separately or by bending the base model’s edges along the surface of the smoothed model. This really helps you know how changes to the low-resolution model will change the smoothed one.
  • Solidify modifier – I made the capes a single polygon thick and used this modifier to give them dimensional thickness. When sending the models out to Jim and Jonathan, who use Cinema4D and Maya, I will “Apply” this effect to make the geometry permanent (see the sketch after this list).
  • Cycles with two GPUs – it’s so fast! For test renderings and the images in this blog post it is amazing how fast Cycles can be; the images here took about a minute and a half each to render. It’s also crazy easy to make objects into light sources. I do most of the work on my iMac and then switch over to my Linux computer for rendering.
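And the Solidify apply step mentioned above looks roughly like this from a script (object name and thickness are stand-ins, assuming a current Blender Python API):

import bpy

# Give the single-polygon-thick cape real thickness, then bake the modifier
# into the mesh so Cinema4D/Maya receive plain geometry.
cape = bpy.data.objects["Cape"]  # stand-in object name
mod = cape.modifiers.new(name="Solidify", type='SOLIDIFY')
mod.thickness = 0.05  # stand-in thickness, in scene units

bpy.context.view_layer.objects.active = cape
bpy.ops.object.modifier_apply(modifier=mod.name)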