The show went very well overall with lots of positive feedback. Full wrap-up post forthcoming, but for now, here is the event.
Needed to get a video to local media to tease the projection mapping project and show something at a press conference Monday, August 28th. I don’t really want to show any of it until it premieres on September 23rd, for a couple of reasons. First, the piece is a little over eight minutes long, so showing practically anything gives away a lot IMO. Second, part of the power of projection mapping is the illusion, often the surprise, which can only be fully experienced at the event – on the building – live.
I know we need to build an audience for this, but I wish there were a better way than previewing the work. So, this video shows some production process and two short moments that are in the final piece. The rest are tests or shots that might make it into the final, though not necessarily looking the way they do in this video.
It’s been a while since I’ve updated. I’ve spent most of my time working on the Central High project, of course, but in late May and early June I took some time to do some outside projects and personal projects. One of the main reasons for a lack of updates is that I just don’t want to show anyone the work – it’s a surprise!
I had to do a progress report for the NEA so some of the following is a little stiffer than usual, but it was easier to copy it and tweak than re-write.
The animation has four parts: an opening “construction” section, a “school life” section, the desegregation crisis, and a “future”-themed close. They are not equal in length since they follow sections of the music and carry different weight in the piece. The “school life” section is 98% complete, as are some transition pieces, which together make up about a third of the eight minutes of music. The opening section is well underway and will be the next to finish. Desegregation is also well underway, but it is the most reliant on outside resources, such as photos and films, which slows the process. The close is still in pre-production since it gets its identity from the other sections and as the artists get more input from the school, park, and community.
To create the school life and close sections, the artists interviewed LRCH student council representatives and researched historic school newspapers and yearbooks found at the Arkansas State Archives as well as the LRCH library. The librarian and Gayle Seymour were also able to find resources from alumni. The librarian produced a bound version of the book released at the opening of the school in 1927, which has been helpful for the opening section. The National Park also had several resources for the opening.
The desegregation section is the most important and will carry the most audience expectations, so it is getting the most attention. The artists have run into several copyright issues in using iconic photographs from the crisis, so they are compiling as much original, legally usable media as possible and will design the rest of the piece around a more artistic approach rather than a documentary one, which fits with the overall mission of the piece.
The team worked with the LRCH Principal, Nancy Rousseau, to determine what could be covered for projection. The dark woodwork and clear glass in the doors and arched windows create virtual holes in the projection, so they must be covered with a lighter material that matches the surrounding stone. The doors and windows cannot be covered during daylight hours because visitors to the site take pictures of the building. It was agreed that the arched windows can be covered on the interior, which makes it possible to project onto the glass panels. The wood in the windows will still be visible, so the projection designs were changed to accommodate it. The doors will be covered by an exterior flat drape that can be hung minutes before the event. That drape is currently in the design phase to determine the best way to attach it to the building quickly. The exterior wall sconces will also be covered at the same time as the doors; those covers are also in the design phase.
Logistics (re-written from the NEA report)
We decided to handle the generator and scaffold for the projectors locally and that’s turned into quite a drawn-out process of qualifying the equipment. Also, the funding source for these and the sound system wanted multiple quotes so we are drawing it out with three vendors… Hopefully this part will come to a close in the next day or so.
Thanks to this blog, I was contacted by a member of the Morehead Planetarium at UNC Chapel Hill. He was interested in using my LRCH model for their upcoming production that focuses on the American South. Pretty cool.
Software and hardware being used to do the project:
Blender – 3D animation. Easily my favorite computer program, and I wish I could use it for practically everything I do on a computer. I just like working in it because it is so customizable and responsive.
Affinity Designer – vector-based texture maps and working with vector art, such as logos. I hate Adobe Illustrator with a passion – just never liked it. I originally learned vector graphics on FreeHand, but it was bought and killed by Adobe years ago. I’m not much of a vector graphics person anyway, so if a program doesn’t behave like I expect or is slow, then make it go away. Designer is fast and easy to use and is especially friendly for cleaning up vector graphics.
Affinity Photo – raster-based texture maps and photo manipulation. I haven’t opened Photoshop in over 6 months. Photoshop is probably my favorite Adobe app, but Photo just feels nice. A few things that I think work well compared to Photoshop: great vector integration – it’s like a healthy marriage between a raster and a vector program; all adjustments, such as levels, are treated like adjustment layers that can affect everything below it in the layer stack or a single layer; it’s fast.
After Effects – compositing 3D-rendered and 2D images. Using this more out of habit than anything else, and because I’ll be sharing projects with my fellow artists.
Apple Motion – I used this on a couple animated textures for the school life section. It’s super fast and I have always liked it. My only gripe is the timeline can get crowded.
Premiere Pro – using to compile all of the rendered sequences and audio. I’m sharing this with my fellow artists; otherwise I would not use it.
DaVinci Resolve (14, latest beta) – used to create a photo sequence for the school life section. I really like 14; it finally has a fast and responsive UI. The sequence had three areas on the screen for photos, so I used three tracks and transformed the photos on each (position, scale): track 1 for left, track 2 for center, track 3 for right. I started doing this in Premiere, but it was sluggish with the high-res photos. Resolve handled them easily. I assumed FCPX wouldn’t be good for what I was doing, but later I found it would have done a great job and was faster than Resolve. Here’s why I know that.
I built the photo sequence in Resolve expecting to composite it on the 3D imagery in After Effects. I thought I would use the alpha channel to handle times when I needed pure transparency. Turns out Resolve can’t export a full timeline with transparency, but it can with individual clips. I needed the full timeline!!! I tried sending the sequence to Premiere via XML and AAF. The AAF failed to import, but XML did an okay job, except that the far left and right images were not in the correct place. Same issue with FCPX, left and right were moved outward. I started moving them back in FCPX just to see what would happen and I found it to be super fast and though there aren’t tracks technically, they sure looked like it in the timeline. Since I was multiplying one of the sequences in AE, I decided to just change the background to white in Resolve – done. The other was a hard light blend mode so I had to make its background medium gray – done.
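The background swap works because of how those blend modes are defined. Here’s a quick sanity check of the pixel math (my own sketch using the standard blend-mode formulas, not anything from Resolve or After Effects):

```python
# Standard blend-mode formulas on normalized 0.0-1.0 pixel values.
def multiply(base, blend):
    return base * blend

def hard_light(base, blend):
    # Hard Light branches on the blend (top) layer's value.
    if blend <= 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

# A white background in a Multiply layer leaves the base image untouched,
# and a 50% gray background in a Hard Light layer does the same -- which is
# why swapping the backgrounds could stand in for the missing alpha channel.
for base in (0.0, 0.25, 0.6, 1.0):
    assert multiply(base, 1.0) == base
    assert abs(hard_light(base, 0.5) - base) < 1e-9
```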
Lesson learned – Resolve rocks so far, but don’t expect an alpha channel on the full sequence.
1st gen 5K iMac – Blender modeling and animation and all other apps. Best monitor ever – so crisp. Blender can handle custom DPI in its interface, so it looks smooth and is easy to read because I push the text and buttons to a little more than the 2× scaling other apps use.
Downside – pixel doubling makes lower-res images look soft. The HD resolution we are using makes the images small when viewing 1:1 in graphics programs. This monitor is made for 4K production, so HD is small…
Custom Linux PC – Blender rendering using two GPUs for acceleration. Technically a faster computer than the iMac for multi-threaded apps because it has 6 hardware CPU cores. I should try After Effects on the Windows side of it, but I despise working in Windows – I only have it there to learn VR via Unreal Editor and use my mocap system.
I started a new blog just for the projection mapping project.
Just added these to it:
I started with this tiger and then re-shaped its head, tweaked its texture map, and then highly tweaked its texture for the bronze version.
The 90th anniversary of the opening of the Little Rock Central High School building and the 60th anniversary of the Desegregation Crisis are coming September 18-25, 2017. It will be a week of activities that commemorates the anniversaries and culminates in an event that features a projection mapped animation on the façade of the high school building.
This first blog post is about a major milestone for the animation: a completed virtual 3D model of the facade, including its four statues. Now that the model is complete we can finally get to work. The majority of the animation we create will be based on the architectural structure of the facade. I can’t believe February is almost over! It took me over a week longer than I expected to finish this phase of the project due to distractions: an illness that caused horrible headaches, outside projects, and some personal goals beyond the projection mapping one. Hopefully the headaches are past – I can manage the rest.
Here’s the basic model:
We can add lighting that can make it appear as if we’ve hung actual lights near the building:
We can also play around (this is just a test and not final imagery):
And add stuff:
Here’s what it should look like at the campus. We intend to add some lighting around the central facade as well.
The limestone part of the high school’s main entry has several nice 1920s Art Deco details and is sculptural in nature, with deep-set doors and windows and jutting pedestals for the four statues. I still need to add the letters for the statues. We will hopefully be able to temporarily cover the windows and doors so they won’t be so dark. We will also need to cover the lanterns so they will reflect the projections.
Ambition, Personality, Opportunity, and Preparation
When facing the building the four statues from left to right are Ambition (male), Personality (female), Opportunity (female), and Preparation (male).
I’ve been told that the four statues were “ordered from a catalog” and not unique to the building project. Their body styles are reminiscent of Michelangelo sculptures with their long muscular arms and Greek facial features. Preparation must have been the sculptor’s version of David – see his contrapposto stance, physique, lowered right arm (holding a scroll in this case), raised left arm holding a book instead of a sling, and a left-facing gaze.
Their dress is based on the ancient Greek chiton. The sculptural style is “wet drape,” where the cloth clings to the skin to reveal the figure’s body underneath. This is most obvious in Preparation, whose torso looks practically bare, and you can see it in Opportunity as well. I modeled these statues by starting with nudes so I could get the wet drape look right.
I think later blog posts will go on another website dedicated to this project. Geeky stuff will stay on this blog though.
Geek Stuff (most of you will want to skip this)
I modeled the facade by building basic geometric shapes and aligning them to a photograph I took last summer. I actually got most of this model finished by last fall. In January I added the smaller details and lanterns.
The statues were very time-consuming, and I knew they would be… I downloaded a few nude “base models” from Blendswap, which are designed as a starting place for creating a character. For the females, I used the body of one and the hands and head of another. After splicing them together I pushed, pulled, and extruded faces, edges, and vertices to make them match the sculpture. I also used sculpting tools to smooth and guide areas of the model. The models are considered low-poly, which makes them easy to animate and handle in the 3D software. When they are rendered they are smoothed using Pixar’s subdivision surface technology, which turns a blocky mess of polygons into flowing garments.
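To give a rough sense of why keeping the base mesh low-poly matters, here’s a back-of-the-envelope sketch (the quad count is hypothetical, not the actual statue meshes):

```python
# Catmull-Clark subdivision turns each quad into four, so on an all-quad
# mesh the face count roughly quadruples per subdivision level.
def subdivided_quads(base_quads, levels):
    return base_quads * 4 ** levels

# A hypothetical 5,000-quad statue stays light to edit and animate,
# while the renderer sees a much denser smoothed surface.
render_faces = subdivided_quads(5000, 2)
assert render_faces == 80000
```

So the artist works on thousands of faces while the renderer smooths tens of thousands, which is the whole appeal of the workflow.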
For the capes I essentially started with a line and extruded it and moved it to create the overlapping folds. For smaller details I just cut the larger polygonal faces into smaller ones that I could then push, pull, and sculpt into their final form.
Once a model seemed ready to go, I aligned it with the main photo of the facade. I had closeups of the statues to do most of the work, but since those photos were taken from below, the proportions were not accurate, so aligning with the main photo was key to getting the correct overall size. Because of the proportion issues and a number of other things, I modeled the statues just by looking at my photos rather than trying to align them to photos in the 3D viewport, which is common for character design.
While modeling, the virtual statue stands in a T-pose. I used a T-pose because we will most likely apply some custom motion capture animation, and our motion capture system (Perception Neuron) requires a T-pose to start. Another common starting point for a character model is an A-pose, which is more relaxed, but that’s not a good idea for our purposes.
After getting the proportions correct I added a skeleton to the model. The skeleton is based on the needs of the motion capture system. The model is bound to the skeleton, so whenever I move a bone, the model will deform with it. I used the bones to pose the model to match the statues. I actually animated the movement so I could go back to the T-pose easily as well as test the model deformations as the bones moved. Some of the dress is not driven by the skeleton at the moment; that will come later via cloth simulations.
I modeled the statues this way because I knew we would be animating them and they needed a structure that would support animation. A more accurate alternative to modeling by eye would have been to scan the actual sculptures. Scanning could be done via LIDAR, but that would have been prohibitively expensive. Or it could be done with lots of photographs from multiple angles via photogrammetry; shooting the sculptures with a drone and extracting frames from the video would have been one way to get the images needed.
The upside to scanning would be a very accurate model, but there are downsides. One is that the scan would have to be retopologized to make it animatable, which can be time-intensive. Another is that the models would not have backsides, and the arms would be stuck to the bodies, so they would need hand modeling to create the backs and free the arms. I would have been up for these things had they been scanned last fall. Unfortunately, the statues are 22 feet above the ground, so logistically it is not trivial to get to them.
From here it is a matter of lighting, creating cool surface materials, animating the statues, opening the doors, or whatever else we come up with. Even things that don’t directly change the facade, such as showing a photo, will be rendered against the virtual facade so the photo will appear to interact with the building.
I used Blender to do all of this work. It is just a joy to use. Some things that came in handy (these aren’t necessarily unique to Blender BTW):
- Used photos as a background in the camera viewport to help create a 3D environment similar in size to the actual building.
- Changed one of my 3D panels into an image viewer so I could have a photo of a statue up at all times.
- The Shift Key – I use a Wacom Intuos 4 Medium when working with graphics software. It has a bad habit of moving during a click, or of missing a tiny mark because the movement was so small. When changing practically any parameter in Blender, you can hold down the Shift key and it will increase the precision of the adjustment by not letting the value change drastically no matter how much you move the stylus. I can make big movements to make small changes. BTW, some graphics programs do have a similar function, just not all…
- Matcaps – I haven’t really used them before, but they make modeling organic forms much easier. They let you customize how the model is shaded in the viewport so you can see the curved surfaces more easily.
- Proportional Editing – used when moving a vertex or small group of vertices and wanting surrounding vertices to move with them, but not as much. Super helpful when making proportion changes or needing to move parts of the model around to accommodate the posed body. Especially useful is the “Connected” mode, which only moves vertices connected to the one you are moving rather than ones that are merely nearby. You can also change the falloff to control how the non-selected vertices will move. BTW, this works on more than just vertices; I’m just using those as an example.
- Subdivision Surfaces – Blender can show the subd effect while editing the model, either by showing the base model and the smoothing separately or by bending the base model’s edges along the surface of the smoothed model. This really helps show how changes to the low-resolution model will change the smoothed one.
- Solidify modifier – I made the capes a single polygon thickness and used this modifier to give it dimensional thickness. When sending the models out to Jim and Jonathan, who use Cinema4D and Maya, I will “Apply” this effect to make the geometry permanent.
- Cycles with two GPUs – it’s so fast! For test renderings and the images in this blog post, it is amazing how fast Cycles can be; each image here took about a minute and a half to render. It’s also crazy easy to make objects into light sources. I do most of the work on my iMac and then switch over to my Linux computer for rendering.
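The proportional-editing falloff from the list above can be sketched as a simple weight curve. This is a generic smoothstep-style falloff for illustration only; Blender ships several named falloff shapes and this is just one plausible one:

```python
def falloff_weight(distance, radius):
    """Weight applied to a nearby vertex: 1.0 at the selection, 0.0 at the
    edge of the influence radius, eased smoothly in between."""
    if distance >= radius:
        return 0.0
    t = distance / radius
    return 1.0 - (3 * t * t - 2 * t * t * t)  # inverted smoothstep

# Vertices closer to the one being moved follow it more strongly:
weights = [round(falloff_weight(d, 10.0), 3) for d in (0, 2.5, 5, 7.5, 10)]
# weights -> [1.0, 0.844, 0.5, 0.156, 0.0]
```

Changing the falloff in the tool just swaps this curve for a sharper or flatter one.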
It’s been five years since I’ve designed a theatrical production with UCA Theatre. My last design was The Bacchae, which was both a set and a projection design project. This time around it’s Radium Girls, and again I designed the physical scenery and projected imagery. Radium Girls was directed by my colleague, Chris Fritzges.
About Radium Girls
From Wikipedia – “The Radium Girls were female factory workers who contracted radiation poisoning from painting watch dials with self-luminous paint at the United States Radium factory in Orange, New Jersey, around 1917. The women, who had been told the paint was harmless, ingested deadly amounts of radium by licking their paintbrushes to give them a fine point; some also painted their fingernails and teeth with the glowing substance.
Five of the women challenged their employer in a case that established the right of individual workers who contract occupational diseases to sue their employers.”
The play, by D.W. Gregory, tells this story through one of the girls, Grace Fryer, and the president of the U.S. Radium Corporation, Arthur Roeder.
The design team, which was made up of myself and theatre faculty and students, met several times to discuss the play including what the story means and what our production goals were. One of the big goals scenically was to include projected imagery. The main reason for projections was that the play has many scenes in different locations and it shouldn’t be staged with a lot of traditional scenery. The thought was that projections could quickly change and help inform the audience of where the different scenes were taking place. Another overall goal was to use scenery that was abstract and allowed for interesting staging, such as multiple platforms at different heights, rather than being realistic looking. Realism is best used for costumes and properties (props) – the things that are closest to the characters want some authenticity, while the playing space can be more abstract or symbolic.
Chris started the process of developing the design by discussing different themes he saw in the story. The following are a few of the larger themes:
- The Corporation vs. the Worker
- Masculine vs. Feminine
- Science vs. Business
- Fighting time
- The media
Some visual themes/motifs included clocks, gears, and flowers.
The next step in the process was to do some research. The play’s time period was the 1920s and it recounts actual events so the team, including a student dramaturg (one who is dedicated to researching the play in detail and making his research available to the rest of the team), looked for pictures and articles about the girls, Marie Curie, the U.S. Radium Corporation, radium products and research, and general 1920s trends in clothing, art, and architecture.
I was ultimately most influenced by the work of Hugh Ferriss, the U.S. Radium plant, and timepieces of the era.
Sometimes the set design will just come to me and I quickly work on about three variations of an idea. Not for this play. Instead, I drew sketches of several different ideas and shared them with the design team. The gear and clock influences are a thread throughout the ideas, as are the factory windows, which are referenced in the play. What I was unsure of was the actual projection surfaces – how integrated should they be into the playing spaces? Also, should we project flat on typical screens or consider other shapes for projection surfaces?
After looking at sketches for a couple of weeks, we decided that we liked three levels of platforms and that they should be round (more feminine shape, clocks, gears, radium symbol). We also worked out the size of each platform. The projection surface ended up taking a little longer, but we finally worked out a projection mapping-oriented wall that had an industrial skyline silhouette at the top. The projection mapping aspect of it was that the screen was not just one plane stretching across the back of the platforms. Instead, it was broken into multiple planes at different angles. Doors through the projection surfaces were the last pieces to go in.
We made some last-minute changes to the heights of the platforms for time and cost savings, which ultimately made the set work better. You’ll notice that the doors are above the platforms in the renderings because I was trying to show the change in height as fast as I could… Also, since it had been a while since I had done a theatrical set, and I was preoccupied with the projected imagery, Shannon Moore, the theatre technical director, was instrumental in dealing with finishing touches like the steps and platforms on the upstage side of the set, through the doors.
Lastly, I created a painter’s elevation for the platforms. Two platforms were clock faces and the third was a watch/industrial gear.
After the set design was done we moved onto the projection design. I primarily worked with Chris rather than working with the whole design team. The cast also had some input on projection ideas. Chris and I met three times to go through possible imagery for each scene. In the early meetings I discussed imagery ideas that were documentary-like. Imagery would be based on period photos, actual photos of the characters portrayed, newspaper clippings, etc. As we got into discussing the imagery and getting ideas from the cast I felt that the documentary idea wasn’t working with the production style and ideas. The final overall design concept was experiencing each location using either symbolic imagery and/or closeups of objects that would be in that particular location.
In the scenes set in characters’ homes I tried to focus on fireplace mantels because I wanted to feature some style of clock. I included enough clocks that Chris mapped out the time that should be on each clock face, starting at 1:00 and going to 11:45.
The doors didn’t quite work with the concept of closeups and symbolism, so I had to come up with a way to change the apparent scale of the spaces depicted in the imagery. During an early rehearsal I saw the problem and came up with a solution almost immediately. I chose to use as much of the screen as possible for the closeup objects, such as a fireplace mantel, and then change the scale around the door to make it more realistic. I used the scale of the objects and the wallpaper pattern so that if viewers really bent their heads around what I created, they could rationalize the differently sized objects. I imagined what a door across a room would look like if I were standing close to the fireplace: the fireplace objects would be large in my view and the door small due to its distance from me.
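The scale trick is really just perspective arithmetic: under a simple pinhole model, apparent size falls off with distance. The sizes and distances below are hypothetical, purely for illustration:

```python
def apparent_size(true_size_ft, distance_ft, focal=1.0):
    # Pinhole projection: on-image size is proportional to size / distance.
    return focal * true_size_ft / distance_ft

# Standing near the fireplace, a 7 ft door across the room reads smaller
# than a 5 ft mantel right in front of you (hypothetical measurements):
door = apparent_size(7, 20)    # 0.35
mantel = apparent_size(5, 4)   # 1.25
assert door < mantel
```

The projection imagery fakes exactly this: huge foreground objects, a small in-scale door.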
There were a few places where I tweaked this concept. In the exterior porch of the Roeder home I chose to keep the door in scale, while the house’s siding and eave would be large and out of scale. In the health department I created oversized filing cabinets that dwarf the door. In Grace’s home both doors are used, so I couldn’t use the same technique; instead I made the props, like the hanging lights and the mantel clock, oversized.
Figure 53’s Qlab was used to play back the imagery on an iMac. A VGA signal was sent to two 4000-lumen projectors at 1920×1080 pixel dimensions. Both projectors got the same image, overlapping each other to increase the overall brightness. Qlab was used to warp the image to counteract the warping from the angled screens (projection mapping!).
Blender was used for almost all of the imagery. I used as many pre-modeled objects as possible to save time. There are some recurring scenes with two newspaper reporters and most of those images were created in Photoshop. I used two computers concurrently to stay productive. My main computer is an iMac and I used it to do the modeling and setup lighting and materials in Blender as well as Photoshop work. I then moved over to an older Linux computer I have with two Nvidia graphics cards. Blender’s Cycles renderer can be accelerated using Nvidia cards (AMD cards are almost ready to accelerate too BTW) so I finalized the shading and lighting and did final renders with it.
Oh yeah, I also made some tables for the show.
The show’s overall production quality was amazing. The set, projections, costumes (Designed by Shauna Meador), lighting, props, sound, and performances went together so well. We often talk about a unified production, but sometimes there is one element or another that just doesn’t seem to fit. Not in this case. The show looked really good and was well directed and performed. I can be very critical especially of my own work so I am surprised at how good I feel about the work.
There were problems of course.
- I started making the images way too late. I literally did 85% of the images in the last weekend before it opened (it was UCA’s fall break so that last weekend was several days…).
- There were 50 images – the most I’ve made for a single show
- Because I was so late I didn’t give Chris very many opportunities for feedback. I think he was happy with my work overall, but we should have been able to work together more.
- I wanted some of the imagery to be animated, such as spinning newspapers, smoke or dust in the air, subtle movements of objects, etc. There were no animations.
- We either started our whole process a little late or took too long to design the set – maybe both. Construction on the set should have started at least a week earlier than it did.
- The way I set up the projectors was lame. They were sitting on an angled board in the theater’s 2nd catwalk. Because they were not locked down by any kind of rig, they had to be touched every night to make sure they were aligned to each other.
- The projectors were not perfectly aligned. Cheap projectors don’t have the tools to do the fine adjustments needed to align the images of multiple projectors, so I got it as close as I could. The image looked out of focus toward the bottom left side (as seen by the audience) and overall had a soft look due to the slight mismatch.
- A workaround would have been to send individual signals to the projectors and use Qlab to do the final alignment by giving each projector a custom warp. Instead, I sent a signal to one projector and used the loop-thru to get the signal to the other. Sending two signals would have meant using a different computer too.
- The projections needed to be brighter. Dr. Greg Blakey, the lighting designer, did a lot of last-minute changes to the lights to try to keep as much illumination off the screen as possible. The only way we could have gone brighter would have been renting a large-venue projector (10K or greater Lumens) and that would have blown the budget unfortunately.
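Brightness-wise the arithmetic was simply against us. A rough screen-illuminance estimate shows why; the screen dimensions here are hypothetical round numbers, not measurements from the theater:

```python
# Rough illuminance estimate: lux = lumens / screen area (m^2).
# This ignores stacking losses and lens efficiency -- ballpark only.
def screen_lux(total_lumens, width_m, height_m):
    return total_lumens / (width_m * height_m)

# Two overlapped 4000 lm projectors vs. one 10,000 lm large-venue unit,
# on a hypothetical 8 m x 4.5 m surface:
two_stacked = screen_lux(2 * 4000, 8.0, 4.5)
big_rental = screen_lux(10000, 8.0, 4.5)

# Even doubled-up, the small projectors trail the rental option:
assert two_stacked < big_rental
```

And any stage light spilling onto the screen eats directly into that margin, which is why the lighting design had to work so hard to keep illumination off it.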
Some of the projections:
The images below are a mix of photos and actual projection images. The photos are untouched jpegs from the camera. When I have more time I’ll work on the raw images. The screen in these photos looks a little darker than it actually was live.