Imagine if Buildings Could Talk – Wrapup

You are going to want to get a cup or glass of a favorite beverage for this one – it’s long.


Project Timeline

  • 2015: Summer – Gayle Seymour contacts me about doing a projection mapping event on the facade of LRCH
  • 2015: Contact Jim Lockhart and Jonathan Richter (J & J) about working on it with me
  • 2015: NEA Grant submission
  • 2015: Fall – Sabbatical Application for spring 2017
  • 2016: Spring – Get news of NEA grant (awarded, but not fully funded). Awarded sabbatical leave
  • 2016: Summer
    • Meet with Blake Tyson about the piece
    • Set up Google Drive (shared docs with Gayle, Jennifer, and Blake), Dropbox (shared files with Jim and Jonathan), and Basecamp (communication with J & J mostly, but with Gayle, Jennifer, and Blake as needed)
    • Contacted projection companies about possibly doing the project
    • Late summer – MooTV came on board as the projection company, with Travis Walker as project manager and Tim Monnig as systems engineer
  • 2016: Fall
    • Blake Tyson composes and records music
    • Developed themes for imagery
    • Started modeling the facade
    • Occasional meetings with National Park Rangers and LRCH Principal, Nancy Rousseau
    • Nashville meeting with J & J
    • Not much more besides answering questions and trying to make it through the semester
  • 2017: Spring
    • Started full-time production – finished facade and statues
    • January Nashville meeting with J & J
    • Demos and marketing images – several pieces to show what projection mapping is, but not imagery that would go into the final piece
    • Developed “School Life” section
    • Two visits to the Arkansas State Archives
    • Multiple meetings with Nancy Rousseau, Rangers, and working with potential vendors for Sound and Generator
    • Request for permission to use Will Counts photos (early April)
  • 2017: Summer
    • 3D production – finish School Life; develop Desegregation Crisis section with several re-starts throughout summer
    • Finalize vendors for sound and generator and work out scaffolding, ultimately finding a scaffolding vendor
    • Get permission for Will Counts photos and Raymond Preddy photos; Finish Desegregation Crisis section (August)
    • Ongoing communication with J & J regarding their sections
    • PROMISE youth camp workshop
  • 2017: September
    • All imagery assembled from the three of us artists
    • September 15th – Meet at MooTV in Nashville (J&J, Travis, and Tim) to walk through playback and design the lighting looks
    • Window, door, and sconce drapes made
    • September 21st
      • Scaffold built; projectors, sound, and lights loaded in
      • Windows covered from interior
      • Doors and sconces covered on exterior
      • Projector alignment
    • September 22nd – image mapping and alignment
    • September 23-24 – public shows and load-out
    • September 25th – 60th Anniversary event


Original Blueprints at the Visitor’s Center

The Content

The sections of the animation:

  • Opening – Construction and Statues (Jonathan)
  • School Life – academics and athletics over the history of the school (Scott)
  • Desegregation Crisis and Lost Year (Scott)
  • Close – the spirit of the students in school now (Jim)

Projector Point of View, students installing exterior drapery

Why did it take so long?

There are two main reasons it took months to finish the work. The first is that 3D animation takes a while, especially when there is a lot of modeling detail. I spent a few weeks modeling the statues and, though the rest of the facade was relatively simple, I put quite a bit of time into the model so it would match the photographs as closely as possible. Once I got into the School Life section, I found myself mired in the 3D details too.


Finished 3D Model of the LRCH Entrance

The second is that turning a general theme into imagery can take time. The School Life section was the first one I designed because I thought I had a strong sense of what I wanted it to be. As I pecked away at it by building or procuring models of books and sports equipment and then creating materials and lighting, I was not only burning time, but also slowly developing an idea of what the section should look like. On top of the 3D animation, I spent about 10 hours at the Arkansas State Archives over two trips going through old yearbooks and newspapers. To finish researching the photographs I needed, I visited the LRCH library with help from Stella Cameron and went through some materials Gayle found at an estate auction. A lot of time went into bringing the 3D and 2D elements together, plus over two weeks of rendering the final animation.

The Desegregation section also took a great deal of time to move from theme to actual imagery. I wasn’t sure I would have permission to use the photographs made by Will Counts until it finally came through in mid-July (I originally asked in early April). Through the summer I worked on several different ideas to illustrate the events of 1957-59, but was never happy with any of them. Either the ideas were too great in scope or they just didn’t look as good as I wanted when I tried them out.

When I got permission to use the photos, I decided to celebrate the photos themselves with an overall look reminiscent of a project I did several years ago, though this version ended up being much better. And because I had a strong sense of what I wanted to do with the photos, I was able to get the work done quickly. The actual production time on the Desegregation Crisis section was significantly shorter than on School Life, but it took far longer to get to the point of producing final imagery. The slowest part was creating the images of the Little Rock Nine entering the school. Those photos exist, but they are blurry and taken from too far away, so I re-created them in a slightly illustrative style.

One of the best aspects of this project was that there was no client. Jim, Jonathan, and I chose to take our time to think about the piece and let ideas evolve. There was a lot of incubation time as well: we could try an idea, let it sit for a while, and see whether it was right. The three of us normally have to think on our feet and get work done quickly for clients, but this project afforded us the time to consider what we were doing and to be satisfied and proud of our choices.

Similarly, when there is no client and there is time, I am able to let my INTP-ness express itself. I prefer to think through a problem and try different approaches, and I am willing to work through solutions in my head and let them go if they aren’t working. For personal work, this too often means I don’t necessarily finish a project, but since this project had a real deadline, I was able to mix my inclination for mental play with actually getting things done.


Saturday night audience

Show

For me, show production started on September 15th, when I went to Nashville to program the lights and check playback. Jim, Jonathan, Tim, Travis, and I met for the first time in the same space to talk through playback of imagery and sound and to see what we were capable of doing with lights. I had sketched out lighting ideas for each section and we started with those, but it turned out that Travis is a lighting designer too and was doing the programming, so we worked together to create the looks for each section. This process took about eight hours, but it paid off: on-site at the school we only had to tweak some timing, rather than do any more design work.

September 21st was load-in day for projection, scaffolding, sound, the electrical generator, and drapery. Considering all of those elements had to come together in the same late afternoon, it went remarkably smoothly. I created an itinerary so each of the vendors could make sure things got set up in the right order. Rock City Staging installed an 8’ x 16’ x 8’ platform with roof and side covers. A/V Arkansas installed a sound system and supervised placing the generator, provided by RIGGS, and running power cables. Our primary directive from the principal and the rangers was to keep all equipment out of sight, so anyone taking a picture of the school would not have AV equipment in the shot. A/V Arkansas accommodated this by placing the speakers behind trees to the sides of the main entrance. The UCA Physical Plant assisted Travis and Tim in lifting the 240 lb. projectors onto the 8’ platform, and also set up barricades around the projection platform.


Scott (in too baggy clothes), Travis, and Tim

Shauna led a team of Film students to install drapery on the interior of the arched windows, and I got some help from Jim and Matt Rogers, a Film student, carrying lighting fixtures up to the fourth-floor roof and to the sides of the facade.


Lights on the roof over the entrance

That evening we were able to power up the projectors and lighting equipment as it got dark enough to see them. The school was having a parents’ open house, so we had to wait until about 7:45 PM to install the outside drapery for the first time.


Students installing drapery, right – Jonathan and Jim

For the rest of the night, until about 3:00 AM, Tim worked on aligning the projectors. The process is slow since there are multiple projectors and the alignment software on the projectors is slow (click to move a pixel, then wait several seconds to see the change…).


Projector alignment

September 22nd was about testing the playback, lighting, and sound systems first, and then mapping/image alignment. There was a home football game that night against rival North Little Rock, so we could not turn off the building lights. Luckily, the projectors were bright enough to be seen over them. After running through the show a few times, we let the sound crew from A/V Arkansas go and turned off the lights. The rest of the night was about mapping the animation to the building.

We had some technical issues with a model I provided for mapping, but they went undiscovered until late. Tim worked through the night to get the mapping software to do the best it could with what it was given. Overall, the mapping looked good. Unfortunately, there was an offset toward the bottom of the image that couldn’t be fixed, though it wasn’t that noticeable.


Image mapping – Tim with binoculars checking his work

September 23rd and 24th were the shows, starting at 7:30 PM and running every 15 minutes until 9:30. The final animation with credits was a little over nine minutes. At Shauna’s suggestion, I created a countdown animation so the audience would know when the show would run again. The lighting console ran the show by sending a start command to the video/audio system and running the light cues.

The two nights started around 6:45 PM with installing the exterior drapery. Then we would wait for 7:30. Saturday night, a jazz concert preceding the show started late and ended late, so we didn’t start until nearly 8:00 PM. Sunday night there was an event at the Commemorative Garden across the street, which ran on time and was designed to lead the participants and audience over to the school building to see the animation. Both nights were well attended, but Sunday night, to my surprise, had the bigger audience. Each night several people stayed the whole time and watched the show repeat, which was strange and cool.

Sunday night we took down (struck) most of the equipment. The students, Chris Churchill, Steve Stanley, and Jim helped take down the exterior drapery and the lights. UCA Physical Plant came back to help take down the projectors for MooTV. A/V Arkansas also struck all of their equipment. Monday the platform and generator were removed from the grounds.

Mr. John Roberts, facilities engineer, was instrumental in getting us into the building, out on the roof, and getting the lights turned off. I can’t thank him enough for his work and the extra time he put in to help us.


Lighting check

Monday, September 25th, was the official 60th anniversary event, and I was pleased to be invited. It was amazing to see and hear the eight surviving members of the Little Rock Nine and to experience President Bill Clinton’s keynote address.


Bill Clinton with the surviving members of the Little Rock Nine

Response

I was overwhelmed by the audience response to the work. Several of my friends and colleagues saw the show and let me know how much they liked it. I was also approached by people from the community and LRCH alums who truly appreciated it and thanked us for doing it. Each person seemed to have a favorite moment, and thankfully those moments came from every section: most of the comments were about the moving statues in the opening; the tiger in the School Life section was mentioned several times; the Nine entering the building was another highlight; and the rainbow spheres were also a big hit. I was especially pleased with the emotional response. After working on it for so long it was hard to know whether it was really going to work, and I think it did very well. Nancy Rousseau had seen the School Life and Desegregation sections on a computer weeks before the show; she really liked the whole piece, and I believe she truly appreciated the effort that went into lighting up her school.


Opening – Statues introduction

What struck me the most was how well Blake’s music and the imagery came together. I’ll admit that Jim and Jonathan worked harder at synching their imagery to the music than I did, and it paid off. Where I worked the most on blending imagery and music was the end of the Desegregation section, where the music brightens while we see the capitol statues rush past. The music gave the imagery an emotional depth that was very satisfying.

The lighting surrounding the projection area was awesome. It was part of my original vision and it worked out well. The lights were bright and colorful, and they not only expanded the projection area but also dissolved the hard rectangle of the projection by fading it upwards and to the sides. Incorporating the lighting dramatically increased the scale of the piece.


School Life

I knew we had something special based on Shauna’s response. She hadn’t seen it prior to the first show Saturday night and she was blown away. She was reluctant to support the project at first (in 2015) because she knew it would be big and that I would be working on it for a long time, but after seeing it the first time she was ready for me to start doing the next one, wherever that may be – even if it was back to a stack of boxes.

Projection mapping events are special because they are unique to their locations. I’m pleased to show the video of the work, but it does not do it justice. The vibrant color, the bright light and images, and the overall scale of the piece do not translate to video. It’s similar to watching a play or concert on TV – it’s just not the same.

The project was highlighted by a press conference and a kick-off event preceding the show on the UCA campus, an interview in the Democrat-Gazette, an interview on KTHV 11, an interview for The Echo, and an interview on Spotlight. It was also covered by UCA media before and after the event and was featured in the UCA President’s Update email newsletter.


Desegregation Crisis section – Little Rock Nine entering the building

Lessons Learned

I refuse to nitpick the piece. I’m proud of our work and I know how it could have been better, but overall I am more satisfied with this work than practically anything I’ve done in the past. Having said that, there are a few things that I’d like to document as far as lessons learned.

  • We should have done more cool stuff. The moving statues were a HUGE hit, and though we originally planned to do a lot more with them, we only animated them once. It would have been nice to move them at least one more time. The rainbow spheres were also a big hit. Projection mapping events commonly incorporate playful animation of the architecture, and while we did some, we could have done more. That was mostly my fault: I was so worried about respecting the events and the school that I forgot to have fun. I also found through my mapping simulations that the building was hard to transform. The projection area is already quite three-dimensional, so I found it difficult to mess with it much. It was a good lesson to learn from the audience responses, and I will definitely incorporate more transformational animation in future projects.
  • Projection mapping alignment took quite a long time. I should have worked with Tim more in the weeks prior to the show to go through the details of the mapping process. I was providing a 3D model, but not really getting into the rest of the process. I assumed that at the very least we would do a flat projection on the building since the rendered animation was from the point-of-view of the projectors, but as I saw the mapping process I realized that I was being naive. Tim and I are currently working through the process to see where we can make it better in the future and why we had some model compatibility and scaling issues.
  • The lighting was so important to the look of the final piece. It was a very effective way of expanding the scale of the projection and softening the rectangular shape. I definitely see using that technique again.
  • Get the sprinklers turned off. Though Jim would probably not agree, I’m glad we were there when the sprinklers came on at 1:00 AM or so. I ended up covering a couple of them with weighted buckets.

Close – Lines of Light

Hours and Emails

I logged 623 hours on the project from early January 2017 until a few days after the event. I created a Google Form that I kept open in a browser window. It had a line for me to note what I did that day and a choice of a number from 1-10 for the hours spent. My most consistent hours were from late January through May, and again from mid-July through August. Though I did work in June, it was spotty due to some outside projects and taking some time off. The hours include graphics work as well as time spent on emails, documentation, meetings, and other tasks related to the project.
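
Since the log was just a form feeding a spreadsheet, totaling it up is a tiny script away. A minimal sketch, assuming the responses are exported as hours_log.csv with an "Hours" column holding the 1-10 choice (the file and column names here are made up):

import csv

total = 0
with open("hours_log.csv") as f:
    for row in csv.DictReader(f):
        total += int(row["Hours"])  # the 1-10 hours choice from the form
print("Total hours: %d" % total)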

There are 496 emails in my Central High folder.

On Basecamp we had 20 discussions with 152 total messages.

There are 39,217 files on my computer related to the project. That number does not account for duplicate files, such as photos on a local hard drive with a copy on Dropbox. Nearly two-thirds of the files are rendered frames (at 30 frames per second of animation).

Sabbatical Leave

I was awarded a sabbatical leave for the spring semester of 2017 to work on the project. The leave was needed for the project, but also something I NEEDED to do for myself.

I needed a break from teaching and service activities. I’ve been teaching each fall and spring semester at a university for the past 17 years. That may be grand for some professors, but I have such a love/hate relationship with academia and large organizations that I need breaks. I have also been going through a transition in my career, both as an educator and as a working professional, and needed some time to take stock of where I’ve been and where I would like to be in the future. A big part of applying for a sabbatical leave was to give myself time to recharge.

Similarly, I needed some time to focus and do some deep thinking about a creative problem. Teaching classes, helping colleagues on their creative projects, and doing short professional projects are fine, but they do not give me the opportunity to do my own creative work. The projection mapping project made me do the things I like to do, such as research, create 3D animation (modeling, animation, lighting, shading, rendering, compositing), and do some creative problem solving regardless of the content. In this case, that meant figuring out how to communicate the history of a school, school spirit, racism and the desegregation crisis, diversity, and education. I don’t believe I could have done the work while also carrying a 4-4 course load and the service expectations at UCA. I’ve done several other projects during a regular semester that were highly compromised by the time and mental effort taken away by the job of professor.

Geek Stuff

Just a list of software used on the project:
3D: Blender (Scott), Maya (Jonathan), Cinema4D (Jim)
2D: Affinity Photo and Designer, Photoshop
Compositing: After Effects, Fusion, Premiere Pro
Video Editing: Premiere Pro, DaVinci Resolve
Projection Mapping: Pandora’s Box

We compiled the show and synched sound in Premiere Pro. The project file lived on Dropbox, and we would text each other when we had the file open so no one else would open it and overwrite changes.
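
A poor man’s alternative to the texting protocol would be a lock file sitting next to the project. This is a hypothetical sketch, not what we actually did (the lock file name is made up):

import os
import sys
import getpass

LOCK = "CentralHigh.prproj.lock"  # hypothetical name, lives next to the project file

if os.path.exists(LOCK):
    print("Project is locked by %s" % open(LOCK).read().strip())
    sys.exit(1)

with open(LOCK, "w") as f:
    f.write(getpass.getuser())  # claim the project

# ... edit away, then release the lock when you close the project:
os.remove(LOCK)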


Close – Rainbow spheres

Thank You

Thanks so much to everyone who helped and supported the project. Blake, Jim, Jonathan, and I could not have pulled it off without these people.

First, Gayle Seymour, Associate Dean of the College of Fine Arts and Communication at the University of Central Arkansas, who was my partner on this project. This whole thing is her fault. She recruited Blake and me back in 2015. Throughout the process she sheltered me from the politics surrounding the 60th anniversary events and dealt with the things I had to worry about but couldn’t do anything about, for instance getting 24-hour security for the equipment on-site and handling contracts and funding sources. There are so many more things that she did, including working with Jennifer Deering, Grant Writer in UCA’s Sponsored Programs department, to write grants and organize many more events beyond the projection mapping event.

I also want to say that Tim Monnig and Travis Walker at MooTV were instrumental in making this happen. We were in contact for 12 months to make sure everything was going to work properly. They are great to work with, and I hope we can do it again soon. They mentioned projecting on the Capitol Building, which sounded cool. :)

The following is a list of those who contributed to the project:

  • Produced by Gayle Seymour, Jennifer Deering
  • Animations by W. Scott Meador, Jim Lockhart, Jonathan Richter
  • Projection by MooTV – Nashville, TN. Travis Walker, Tim Monnig
  • Musical score – The Surface of the Sky by Blake Tyson
    • Performed by UCA Percussion Ensemble – Carter Harlan, Victoria Kelsey, Jarrod Light, Bradlee Martin, Scott Strickland, Stephen Timperley
  • Sound by A/V Arkansas
  • Projection Platform by Rock City Staging
  • Lighting Equipment by UCA Theatre, Greg Blakey
  • UCA Film Student Crew – Rebecca Koehler, Melissa Foster, Jonhatan Nevarez Arias, Matt Rogers, Zack Stone, Takuma Suzuki, Dawn Webb
  • Projection Drapery by UCA Theatre, Shauna C. Meador, Sidney Kelly, Donna Dahlem, Hannah Pair, Autumn Toler
  • UCA Physical Plant – Dustin Strom, Jeremy Davis, Dale Gilkey, David Mathews, Tom Melrose, Skipper Pennington, Joe Richards, Joey Williams
  • National Historic Site – Robin White, Tarona Armstrong, David Kilton, Marchelle Williams, Jodi Morris, Chelsea Mott, Toni Phinisey-Webber
  • Special thanks to:
    • Nancy Rousseau, Jane Brown, Stella Cameron, Scott Hairston and the LRCH Student Council, Mr. John Roberts
    • Kristy Carter
    • Carri George and Arkansas PROMISE staff
    • University Relations and Creative Services
  • Student images used in closing section
    • Ashlyn Sorrows – Stand Together
    • Shelby Curry – Human
    • Madison Bell – I am Human
    • Erbie Jennings III – Laying the Foundation
    • Charis Lancaster – We Come In Pieces
    • Mae Roach – Released from Chains
    • Joah Gomez – the world is in your hands
  • Funding provided by
    • National Endowment for the Arts
    • National Park Service
    • Mid-America Arts Alliance
    • Arkansas Arts Council
    • Department of Arkansas Heritage
    • City of Little Rock
    • University of Central Arkansas
  • Will Counts photographs provided by Vivian Counts, Bradley Cook, and The Arkansas Arts Center
  • Raymond Preddy’s photographs provided by UA Little Rock Center for Arkansas History and Culture

LRHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back and then project that animation back onto the facade model, which has a texture similar to the real building, to simulate what it will look like on site.
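
In Blender terms, the simulation setup boils down to something like this – a simplified sketch, not my exact scene (the object, camera, and UV map names are placeholders):

import bpy

# Project the rendered animation back onto the textured facade model
# through a camera standing in for the projector.
facade = bpy.data.objects["Facade"]
projector_cam = bpy.data.objects["ProjectorCam"]

mod = facade.modifiers.new(name="ProjectionSim", type='UV_PROJECT')
mod.uv_layer = "ProjectedUV"             # UV map that receives the projection
mod.projectors[0].object = projector_cam
mod.aspect_x = 16.0                      # match the projector's aspect ratio
mod.aspect_y = 10.0

# The rendered frames are then used as an image sequence texture on the
# facade's material, mapped through that UV layer.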

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that’s what the name means: render + walk, because you can’t do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statues’ arms will land quite a distance from the actual statues because of the facade’s depth. This isn’t much of an issue when looking at the building from front-center, especially near the projector, but off-axis I felt like it might suck.

So I rendered a view off-axis to check.

I didn’t like it for two reasons. One, my original hypothesis was correct and the arms are pretty far away. This is an issue for maybe a third of the crowd, since the trees force the audience towards the center of the viewing area, but I still don’t like it. The other reason is that any illumination on the actual statues makes them stand out as statues, so I feel like we won’t be able to really remove them like I had hoped. The side view does look cool even without illumination on the sides of the pillars and arches. It would be possible to project onto those too, but that’s beyond this project’s budget.

So I created a new animation. This one is better at making the statues visible only when I want them to be seen. However, there is a moment when the statue “niches” rise up behind them, and by then it’s too late – they can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette; subtlety is lost as soon as there is any light on them.

I’ve left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model for now. It is our goal to cover those with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.


Geek stuff warning

When I was preparing the model to simulate the projection on the building, I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get it right. I spent some time moving parts of the statues around until they properly aligned with the real statues, and I also tweaked the building, windows, and doors a little. It was a one-step-forward, two-steps-back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation came to 4500 frames, plus re-rendering some sections after deciding to make tweaks. I use two computers to render. One is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders using its CPU (4 CPU cores/8 hyperthreaded cores) and the PC renders using two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac renders because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration – including, BTW, developers at AMD working to make Cycles no longer limited to Nvidia GPUs. I say unfortunately because on one of the animations the PC’s Cycles render was crashing every 40 frames or so. It’s a sad morning when you see that the faster computer hasn’t been rendering for the last 6+ hours…

I don’t have time to troubleshoot the issue. It’s a mix of Blender/Cycles and Nvidia software, and it’s not that bad in the grand scheme of things. To deal with it I decided to dust off a Python script I wrote several years ago for a compute cluster we had at UCA. It created a job script for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file, for you Windows weirdos) so that Blender renders each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells blender to start without a UI to save resources and then render the animation based on the project’s settings)

To these lines in a shell script that I start by running render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2 (then render frame 2)
blender -b blendfile.blend -f 3 (then render frame 3)

Works like a charm. I could make the python script do a lot more tricks, but for now this is nice.
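
The gist of the generator, if you’re curious (a from-memory sketch, not the actual script):

import os

def write_render_script(blendfile, start, end, out="render.sh"):
    # One Blender job per frame, so a crash only costs a single frame.
    with open(out, "w") as f:
        f.write("#!/bin/sh\n")
        for frame in range(start, end + 1):
            f.write("blender -b %s -f %d\n" % (blendfile, frame))
    os.chmod(out, 0o755)  # make it executable

write_render_script("blendfile.blend", 1, 750)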

Last, Blender has a simple method of letting multiple computers render the same animation without a render management application. Set the output to not overwrite and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder) and, if it sees it, will look for frame 2, etc. When it finds a frame that hasn’t been rendered it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer can claim a frame as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
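
Those two options live in the render settings, so if you’re already scripting a job you can flip them from Python as well (these are the property names in Blender’s Python API):

import bpy

# The two output options that make multi-machine rendering work:
bpy.context.scene.render.use_overwrite = False   # skip frames that already exist
bpy.context.scene.render.use_placeholder = True  # claim a frame before rendering it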

Since I’m not using a management system there is no check to make sure a frame actually gets rendered properly, so I also wrote a Python script back in the day that looks for frames with zero bytes to tell me if there are bad frames. I might fold that into my other script, but I don’t want to dedicate the time to that right now. The macOS Finder does a nice job of listing “zero bytes,” which stands out in a list (or you can sort by size), so I’ve manually deleted bad frames too. To re-render the bad ones after deleting, I just run the first command with “-a” so Blender finds the missing frames and renders them.

LRCHS – Projection Mapping – 1st Post

The 90th anniversary of the opening of the Little Rock Central High School building and the 60th anniversary of the Desegregation Crisis are coming September 18-25, 2017. It will be a week of activities commemorating the anniversaries, culminating in an event that features a projection-mapped animation on the façade of the high school building.

This first blog post is about a major milestone for the animation: a completed virtual 3D model of the facade, including its four statues. Now that the model is complete we can finally get to work, since the majority of the animation we create will be based on the architectural structure of the facade. I can’t believe February is almost over! It took me over a week longer than I expected to finish this phase of the project due to distractions, including an illness that caused horrible headaches, as well as outside issues and projects and some personal goals beyond the projection mapping project. Hopefully the headaches are past – I can manage the rest.

Here’s the basic model:


We can add lighting that can make it appear as if we’ve hung actual lights near the building:


We can also play around (this is just a test and not final imagery):


And add stuff:


Here’s what it should look like at the campus. We intend to add some lighting around the central facade as well.


The Facade

The limestone part of the high school’s main entry has several nice 1920s Art Deco details and is sculptural in nature, with deep-set doors and windows and jutting pedestals for the four statues. I still need to add the letters for the statues. We will hopefully be able to temporarily cover the windows and doors so they won’t be so dark, and we will also need to cover the lanterns so they reflect the projections.


Ambition, Personality, Opportunity, and Preparation

When facing the building the four statues from left to right are Ambition (male), Personality (female), Opportunity (female), and Preparation (male).

I’ve been told that the four statues were “ordered from a catalog” and not unique to the building project. Their body styles are reminiscent of Michelangelo sculptures with their long muscular arms and Greek facial features. Preparation must have been the sculptor’s version of David – see his contrapposto stance, physique, lowered right arm (holding a scroll in this case), raised left arm holding a book instead of a sling, and a left-facing gaze.

Preparation and Michelangelo’s David

Their dress is based on the ancient Greek chiton. The sculptural style is “wet drape,” where the cloth clings to the skin to reveal the figure’s body underneath. This is most obvious in Preparation, whose torso practically looks bare, and you can see it in Opportunity as well. I modeled these statues by starting with nudes so I could get the wet drape look right.

I think later blog posts will go on another website dedicated to this project. Geeky stuff will stay on this blog though.

Geek Stuff (most of you will want to skip this)

I modeled the facade by building basic geometric shapes and aligning them to a photograph I took last summer. I actually got most of this model finished by last fall. In January I added the smaller details and lanterns.

The statues were very time consuming and I knew they would be… I downloaded a few nude “base models” from Blendswap, which are designed to be a starting place for creating a character. For the females, I used the body of one and the hands and head of another. After splicing them together I pushed and pulled and extruded faces, edges, and vertices to make them match the sculpture. I also used sculpting tools to smooth and guide areas of the model. The models are considered low-poly, which makes them easy to animate and handle in the 3D software. When they are rendered they are smoothed using Pixar’s subdivision surface technology. It turns a blocky mess of polygons into flowing garments.
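
In Blender that smoothing is just a Subdivision Surface modifier on each statue – roughly this, with a placeholder object name and made-up level values:

import bpy

statue = bpy.data.objects["Preparation"]  # placeholder object name
mod = statue.modifiers.new(name="Subd", type='SUBSURF')
mod.levels = 1          # keep the viewport cage light for editing/animating
mod.render_levels = 3   # smooth the garments at render time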

For the capes I essentially started with a line and extruded it and moved it to create the overlapping folds. For smaller details I just cut the larger polygonal faces into smaller ones that I could then push, pull, and sculpt into their final form.

Once a model seemed ready to go I aligned it with the main photo of the facade. I had closeups of the statues to do most of the work, but since the photos were taken from below, the proportions were not accurate so aligning with the main photo was key to getting them the correct overall size. Because of the proportion issues and a number of other things, I modeled them just looking at my photos rather than trying to align them to photos in the 3D viewport, which is common for character design.

While modeling, the virtual statue is standing in a T-pose. I used a T-pose because we will most likely apply some custom motion capture animation, and our motion capture system (Perception Neuron) requires a T-pose to start. Another common starting point for a character model is an A-pose, which is more relaxed, but not a good fit for our purposes.

After getting the proportions correct I added a skeleton to the model. The skeleton is based on the needs of the motion capture system. The model is bound to the skeleton, so whenever I move a bone the model will deform with it. I used the bones to pose the model to match the statues. I actually animated the movement so I could go back to the T-pose easily, as well as test the model deformations as the bones moved. Some of the dress is not driven by the skeleton at the moment; that will come later via cloth simulations.


I modeled the statues this way because I knew we would be animating them and they needed a structure that would support animation. A more accurate alternative to modeling by eye would have been to scan the actual sculptures. Scanning could be done via LIDAR, but would have been prohibitively expensive. Or, it can be done with lots of photographs from multiple angles via photogrammetry. Shooting the sculptures with a drone and extracting frames from the video would have been a way to get the images needed.

The upside to scanning would be a very accurate model, but there are downsides. One is that the scan would have to be retopologized, which can be time intensive, to make it animatable. Another is that the models would not have a backside and the arms would be stuck to the bodies so they would need hand modeling to create the back and make the arms free. I would have been up for these things had they been scanned last fall. Unfortunately they are 22 feet above the ground so logistically it is not a trivial issue to get to them.

From here it is a matter of lighting, creating cool surface materials, animating the statues, opening the doors, or whatever else we come up with. Even things that don’t directly change the facade, such as showing a photo, will be rendered against the virtual facade so the photo will appear to interact with the building.

Blender


I used Blender to do all of this work. It is just a joy to use. Some things that came in handy (these aren’t necessarily unique to Blender BTW):

  • Use photos as a background in the camera viewport to help create a 3D environment that is similar to the size of the actual building
  • Changed one of my 3D panels into an image viewer so I could have a photo of a statue up at all times.
  • The Shift Key – I use a Wacom Intuos 4 Medium when working with graphics software. It has a bad habit of moving during a click or not quite making the mark you intended because it was so small. When changing a parameter in Blender (practically any parameter), you can hold down the Shift key and it will increase the accuracy of the change by not letting the value jump drastically no matter how much you move the stylus. I can make big movements to make small changes. BTW, some graphics programs do have a similar function, just not all…
  • Matcaps – I hadn’t really used them before, but they make modeling organic forms much easier. They let you customize how the model is shaded in the viewport so you can see the curved surfaces more easily.
  • Proportional Editing – Used when moving a vertex or small group of vertices and wanting surrounding vertices to move with them, but not as much. Super helpful when making proportion changes or needing to move parts of the model around to accommodate the posed body. Especially useful is the “Connected” mode where it will only move vertices connected to the one you are moving rather than ones that are just nearby. You can also change the falloff to control how the other non-selected vertices will change. BTW, this works on more than just vertices, just using that as an example.
  • Subdivision Surfaces – Blender can show the subd effect while editing the model, either by showing the base model and the smoothing separately or by bending the base model’s edges along the surface of the smoothed model. This really helps you see how changes to the low-resolution model will affect the smoothed model.
  • Solidify modifier – I made the capes a single polygon thickness and used this modifier to give them dimensional thickness. When sending the models out to Jim and Jonathan, who use Cinema4D and Maya, I will “Apply” this effect to make the geometry permanent (see the sketch after this list).
  • Cycles with two GPUs – it’s so fast! To do test renderings and make the images in this blog post it is amazing how fast Cycles can be. The images here took about a minute and a half to render each one. It’s also crazy easy to make objects into light sources. I do most of the work on my iMac and then switch over to my Linux computer for rendering.
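
Here’s the Solidify sketch mentioned above (Blender 2.7x-era Python API, with a placeholder object name and a guessed thickness):

import bpy

# Give the single-polygon cape some thickness, then bake the modifier
# down so the geometry survives the trip to Maya/Cinema4D.
cape = bpy.data.objects["Cape"]
mod = cape.modifiers.new(name="Shell", type='SOLIDIFY')
mod.thickness = 0.05  # guessed value, in scene units

bpy.context.scene.objects.active = cape
bpy.ops.object.modifier_apply(apply_as='DATA', modifier=mod.name)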

Wake Me Up When September Ends

Been quite a while since I updated the blog. Here’s what I’ve been up to:

The summer started off with a gig doing graphics for the current Brad Paisley tour. I created a few 3D models for the other artists at MooTV to work with and made an animation of a space shuttle taking off, which opened the show. This gig got the summer started right by funding the 3D printer and my home office remodel. I also helped design the stage for the concert. Here’s a pic of the stage with an image of a shuttle on the screens (not my shuttle, but you get the idea).

Testing to see if the space shuttle look will work with the stage design

Garage/Shop

Before getting the remodel going I really needed to clean up my garage/workshop, fix some work surfaces, and do something about my aging air compressor and air-line setup. I replaced the 14-year-old compressor and re-worked the air distribution. The 3D printer came in handy here: my first custom printed model was a bracket to mount a pneumatic junction on the ceiling. I also replaced a cheesy work table with one that will be more useful for projects.

Air Bracket / Printed Air Bracket

Home Office/Studio

Remodeling the home office needed to happen. Last summer we remodeled most of the downstairs except for the office. I had previously done some work on a wall (exterior and interior) that was damaged by water and time, but the rest of the office looked like it did when we moved in 10 years ago. I took my time because I could, stretching re-texturing the ceilings, painting, trimming, carpeting, and decorating over three weeks (about a week’s worth of actual work). The Brad Paisley gig paid for the materials and some furniture, but I also built some of my own furniture.

Back room before the remodel

One of 3 selfies I’ve ever done

Painting getting started

3D Printer Table

Five years ago I built a simple table/stand for my Linux rendering computers (cheap custom PCs). I decided that this table should hold the 3D printer, so I built a new top for it. I had some shelves I built many years ago out of 1×12 lumber that I decided not to use again, so I cut the wood into strips and made a table top.

Table Top Glue Up / Finished Table Top

Drawing Table

I’ve been carrying around an old drafting/drawing table for about 20 years without actually using it. My father got it about 40 years ago from the Arkansas Tech ROTC building that had partially burned (you can see some smoke staining in one of the pictures). I loved the table, but didn’t have room for its size, and I wanted to finish it somehow before putting it back into service. My father used it with another drafting surface on top to protect it, but I wanted to do something a little different.

I cut it down to a more manageable size and built a new base for it. I still had the old sawhorse-style base, but decided not to re-finish it. Once the top was cut down, I sanded and sealed it, attached it to the new base, and put a self-healing cutting mat on it for protection. The base is kinda lame, so I’m thinking I’ll build a more inspired one next summer. I’m so happy to finally be using the table top. My father died quite unexpectedly 20 years ago (in February), and this table top is one of the few things I have of his that I knew he truly enjoyed having. It’s great to put it back in service.

See the smoke under the supports

Cut to more usable length

Completed Drawing Table next to store-bought table

Artwork

I’ve been doing some drawing over the last year. I posted about how iPad drawing wasn’t doing it for me, so I’ve been doing as much traditional drawing as I can (when I feel like it, too…). Also, as part of my office remodeling project, I photographed all of my artwork. I expect to post it soon now that I have it photographed; I just need to do some color correction on the photos. Here are some recent drawings from my sketchbook (just a tease – there are lots more).

Demon of Dutch Hill. Inspired by Stephen King’s Dark Tower: Wasteland

Some rocks in my head

Sasha WIP

Beer Brewing

During the semesters I don’t brew very much compared to the summer break. This summer I brewed a lot. I was able to try out some different recipes and refine my all-grain process a little more. I also added a crazy-cheap chest freezer to handle my need for consistent fermentation temperatures. In the fall, winter, and spring I can keep the fermentation temperature pretty close to optimal in my house, but during the summer my temps go up, since my house is somewhat old and cannot maintain the right temperature for brewing. The chest freezer takes care of the problem nicely, and I can tell that my summer brews are much better. I just set the temperature I want on the controller and it tells the freezer when to turn on and off to maintain it.

Mr. Olympia

Towards the end of summer/beginning of the semester I got another gig, this time creating a virtual 3D version of a sculpture of Joe Weider, the creator of the Mr. Olympia competition, for the 50th Mr. Olympia. It was supposed to be just a weekend job, but it took a few extra days of iterating on the likeness until the client was happy. This gig will support a computer upgrade at some point (my computers are 5 years old). Apple just announced the latest iMac, which is cool, but it features AMD graphics hardware, which is bad news for the software I use most. Now I’m trying to decide what I’m going to do (a previous Nvidia-based iMac, a hackintosh, a Thunderbolt PCIe box, etc.…).

Mr. Olympia animation last frame

ArtsFest

The end of the summer for me was ArtsFest on October 3rd. I created some animations that I projected onto a building in downtown Conway. It ended up pretty much sucking compared to the work I did for ArtsFest last year. Last year I did a projection mapping project that drew a nice crowd, but this time the projection was literally over everyone’s head. Most of the audience did not even know it was happening, and the crowd was much smaller than last year. Also, a street light washed out half of the projection. The organizers had never seen that particular light on, but on 10/03 it came on… I think the thing that really sucked was that I essentially phoned the project in. I had plenty of time to make it really cool, but I did not put in the time or effort. I probably only had 30 hours (maybe) in the whole project, so I created things that were easy and uninspired. Next year I plan to do something interactive rather than rendered footage. Interactivity is inherently more interesting at an event like ArtsFest, and it is something that I have not done in many years, so I think the challenge will kick me into gear to make something cool.

ArtsFest Light Issue

ArtsFest Fabric Fall

ArtsFest Cubes

ArtsFest Spheres

ArtsFest Lock

ArtsFest Tunnel Ride

New semester

This fall semester started out as one of the busiest of my teaching career. Between a new class, a server meltdown, and having a major impact on the careers of four of my colleagues (via the Tenure and Promotion committee), it has been tough to get anything interesting done. It’s fall break now and I’m feeling a bit of relief.

Future projects

I’ve been working on a few other things that I have not mentioned in this post (both professional and personal projects). As those projects mature I’ll post about them. They include brewing projects, 3D printing, artwork, and film work.

Pipeline Tools

The impetus for this post comes from two sources. One was an interview I watched on fxguide.com with the head of the Discreet Logic products at Autodesk (Smoke, Flame, and Lustre – kind of), who talked about Flame and Smoke being finishing tools rather than pipeline tools. He was talking about Nuke and how in many ways it is not comparable to Flame, because Flame/Smoke are much more than shot-by-shot applications. Instead they are about seeing your whole project and bringing it to a close with titles, complex effects, color correction, etc. at realtime speeds so clients can be in on the process. It got me thinking about which apps are used as pipeline tools and which can be used to complete a whole project (animation or film).

The other source was the recent news about Luxology and The Foundry merging. These companies create very different products, and there are some pretty cool opportunities in the merger. One thing that came up in interviews with users of their products, however, is that The Foundry’s products are very expensive and out of reach for small studios, while Luxology’s products are very inexpensive and easy to use. I was really surprised by the candor of John Knoll (ILM vfx supervisor) and Tim Crowson (Magnetic Dreams, Nashville) about how The Foundry’s tools are expensive pipeline tools that are hard to use, while Luxology’s Modo is low-cost, easy to use, and creates really high-quality output. John Knoll also seemed to say that The Foundry could learn a thing or two about software design from Luxology.

Stu Maschwitz has also remarked that there is a move towards easier-to-use software over never-ending software options that are hard to understand and rarely used. Even the new version of Maya, which is famous for its ability to make the easiest thing take as many mouse clicks as possible in multiple editors, has simplified its interface to make everyday tasks easier.

For years I have argued that it should not be so hard to get work done in software applications. I always thought that was funny coming from me, because I like a lot of complexity and I am very technically savvy, but I kept finding complexity for complexity’s sake, or leftovers from an app starting out as a pipeline or in-house tool that grew into a more generalized application. Cinema4D is really popular with motion graphics artists because it is relatively easy to use and can produce high-quality imagery with a complete toolset. Blender is free, but it is often shrugged off because it’s free – how can it be good if you aren’t paying thousands of dollars for it?

There are a lot of software applications out there for creating digital content. Depending on your projects, production team size, level of quality, technical expertise, need for sharing data with other artists, GUI preferences, budget, and probably several other factors, you will find yourself looking at all the products out there and trying to decide how to invest your overhead funds.

Two general options are out there for developing a pipeline. One is to keep everything under one application and the other is to break the process out to separate software tools. The primary reason to use one tool is simplicity – keeping all of your assets and workflow together in one environment. The main reason to break up the pipeline with separate tools is to use the best features of different applications to raise your production value as much as possible (assuming that each tool helps complete a task better than anything else).

An example process that can be done by one or many applications:

  • Live action post
    • Edit
    • Audio mix and sweetening
    • Color Correction
    • Visual Effects
    • Final compilation for delivery

Live action post, from the edit to final delivery, can be done in one application, such as Final Cut Pro, Avid MC/Symphony, Premiere Pro, or Smoke 2013 (among others, but these tend to be the ones everyone likes to talk about). It is possible for one editor/artist or a small team to produce content at a high level of quality in just one of these applications. However, for projects that require complicated sound mixing, extensive color correction, and/or complicated visual effects, a single application may not have all of the tools to do the job at the expected quality. Here’s how this would look broken out into individual pipeline tools:

  • Live action post
    • Edit – Final Cut Pro, Premiere Pro, Avid MC or Symphony, Smoke 2013, etc.
    • Audio mix and sweetening – Pro Tools, Logic, Ardour, Audition, Nuendo, etc.
    • Conform – Hiero, Resolve, Scratch, Baselight, etc.
    • Color Correction – Resolve, Speedgrade, Scratch, Lustre, Apple Color (discontinued), Baselight, Magic Bullet plugins, etc.
    • Visual Effects – After Effects, Nuke, Digital Fusion, Flame, Smoke 2013, etc.
    • Final compilation for delivery – Avid DS and Symphony, Smoke

Using several applications creates some issues that the post-production team will have to deal with. The first is the ability to easily move data between these applications via XML/EDL/OMF/etc., or by exporting from one and importing into another. This can be a headache if the applications fail to move data properly, and there are also possibilities for data or image quality loss. The other big issue is cost. If the artist/freelancer, small shop, or institution has to maintain licenses for all of these tools, it can get expensive quickly. Bundles, suites, or free software often look better than purchasing from a different vendor for each application.

3D animation and visual effects are even worse. There are several general animation and compositing applications that can do very high-quality work, but there are also pipeline tools that offer better solutions to specific problems that can be tough for the general apps. Here are some steps in the workflow for doing 3D and/or VFX work:

  • 3D Animation/Visual Effects
    • Model
    • Rigging
    • Layout
    • Animation
    • Simulation
    • Tracking
    • UV Layout
    • Texture Painting
    • Shading
    • Lighting
    • Rendering
    • Compositing

Lots to do and lots of applications to do each part. First, the general animation applications: Autodesk (Maya, 3dsmax, Softimage), Cinema4D, Lightwave, Modo, Blender, Houdini. There are others, but these get most of the press. Each can do everything listed below except, in some cases, compositing. For compositing, look at the Visual Effects step in the live-action list above and add Blender, Softimage, and Houdini.

  • 3D Animation/Visual Effects
    • Model – General 3D app, ZBrush, Mudbox, Modo, FormZ, Rhino, Inventor, Vue (for landscapes), Silo
    • Rigging – General 3D app + scripting, Motion Builder
    • Layout – General 3D app, Motion Builder, Katana
    • Animation – General 3D app, Motion Builder, in-house tools
    • Simulation – General 3D app, Naiad (recently assimilated into the Autodesk collective), RealFlow, Houdini, FumeFX, in-house tools
    • Tracking – PFTrack, Syntheyes, Blender, Mocha Pro, boujou
    • UV Layout – General 3D app, Headus
    • Texture Painting – Mari, Bodypaint, Photoshop, Krita, GIMP, etc.
    • Shading – Renderman, OSL, other renderer shading languages
    • Lighting – General 3D app, Katana, Keyshot
    • Rendering – Mental Ray, Vray, Renderman, Arnold, Maxwell, Luxrender, Appleseed (in early stages), 3Delight, Krakatoa, etc.
    • Compositing – After Effects, Digital Fusion, Nuke, Blender, Houdini, Softimage

A boatload of apps here… Moving data between these applications can be very difficult, and some pipeline apps like RealFlow and renderers require extra costs to render on more than one computer at a time. At facilities that use multiple pipeline tools and/or multiple general 3D apps, there are usually Technical Directors (TDs) who create custom tools in Python and/or Perl or a native scripting language like MEL to efficiently move data between apps. This extra technical support is usually beyond the means of small shops, individual artists, and institutions, so they tend to feel the pain when going between applications. How are things changing to help with these problems?

  • Since Autodesk owns half of these apps, they have worked on making it easier to get data between each app – you just have to upgrade each year and wait for better workflows
  • Large studios like ILM and Sony Imageworks have released software as open source to make data interchange easier. Projects include OpenEXR, Alembic, and OSL. Other open source projects like the Bullet physics simulator have been integrated into open source and commercial applications.
  • General 3D apps are getting better at what they do, so extra funding for pipeline apps is not as necessary except in extreme situations. See the addition of sculpting to Cinema4D, physics simulators in Maya and Houdini, and renderers like Cycles and smoke simulators in Blender.
  • Prices are falling and competition is good. The Pixel Farm recently lowered their prices, and Luxology’s Modo continues to get better while staying cheaper than all other commercial general 3D programs. New lower pricing structures for the higher-priced commercial applications seem to come out each year – or bundling, like Autodesk does.
  • Open source solutions exist, such as Blender, Inkscape, Krita, Luxrender, Ardour, etc.

Do you really need the pipeline apps? Ask yourself a few questions: Can I afford it beyond my regular tools? Will I use it regularly (ROI)? Do I have a problem that can only be solved with one of these apps? Does it play well with my existing apps? Do I need to purchase a specific app for a specific position on my team, such as texture painter or colorist? Hopefully you will never find yourself unable to afford an app that solves a problem nothing else can, but don’t be surprised when it happens. Also look for hidden costs, such as render licenses, limited OS support, and annual license fees. Consider leasing specialty apps when that option is available. More than anything, consider the need at hand and choose the tools that can get the work done at the level of quality you expect. Just because there is a specialty tool out there does not mean it is the only thing that can do the work. BTW, I’ll try to add some links and costs in a followup post.

A little Python project

I don’t program enough. Simple Python programming opens up so many opportunities for pipeline tools and for extending graphics applications like Blender, Maya, Cinema4D (latest versions), Lightwave (latest versions), Nuke, Hiero, and several asset and job management systems. Python is also being pushed as the main programming language on the Raspberry Pi, which is sitting next to me wanting some attention…

After Europa wrapped I started a programming to-do list that included a tool to check for missing or bad rendered frames. I was rendering on a lot of different machines for that project and had thousands of frames to manage. I wasn’t using any render management system, so nothing was checking that a frame got rendered properly except me scanning lists of files. The software that runs the cluster at school has no ability to verify a frame, and at home I use Blender’s simplistic network rendering method of creating an empty frame (to show that the file exists so a free computer can work on the next frame) and then replacing the empty file when the rendered file is finished (BTW, there is a network rendering manager in Blender, but I don’t use it – yet). If a computer fails to complete a render, a blank frame is left behind. The same thing happens on the school’s cluster (Callisto), because I usually set up the files at home (turning on the “placeholder” option and turning off “overwrite”). While I didn’t have a lot of problems with broken files and unfinished sequences, I felt vulnerable to issues that would be difficult to fix at the last minute – some frames took about 40 minutes to complete on an 8-core/16GB RAM computer.

Over the last couple of days I carved out some time and wrote a Python script that looks for both missing files in a sequence and files with zero (0) bytes, and then writes out a report of what’s missing or empty. It works by typing something like this on the command line:

python badFrames.py /path/to/rendered/frames 4 0 240 /path/to/logs

  • /path/to/rendered/frames = unix-style path to the directory that has the frames. In OSX, /Volumes/Media/Europa/06 (or something like that)
  • 4 = frame padding. The number 4 means frame names end with four digits: 0000, 0001, 0002, etc. This assumes number + extension, such as file0001.exr. The filename can be anything as long as the last 4 (or however many) characters before the extension are numbers.
  • 0 = start frame
  • 240 = end frame (correct/expected end frame, not what’s actually in the directory)
  • /path/to/logs = (optional) the script creates a text file that is saved in this path. If a path is not here then it will save the log file in the same directory as the frames

There is only a little error checking going on, so it could break if there are extra files in the directory (such as text files listed before the frames – sub-directories/folders are fine), or if you give it a path that does not exist. If people ask, I will add more error checking so it gives actual error messages. Right now it will give you a help message if something obvious is wrong. I might make a GUI for it too – more on that later.

BTW, this will not work on Windows without a few tweaks, but runs fine on OSX and Linux. I am hard-coding some path slashes that work on unix-based systems only. It does require Python 2.7x or 3.x due to my print statements; I can change them to work with 2.6 or earlier if you want.
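
If you’d rather not wait for the link, the core of it looks something like this – a stripped-down sketch of the same idea, not the actual script:

import os
import sys

def find_bad_frames(frame_dir, padding, start, end):
    # Map trailing frame numbers (before the extension) to full paths.
    by_number = {}
    for name in os.listdir(frame_dir):
        base, ext = os.path.splitext(name)
        tail = base[-padding:]
        if tail.isdigit():
            by_number[int(tail)] = os.path.join(frame_dir, name)
    missing, empty = [], []
    for f in range(start, end + 1):
        if f not in by_number:
            missing.append(f)
        elif os.path.getsize(by_number[f]) == 0:
            empty.append(f)
    return missing, empty

if __name__ == "__main__":
    frame_dir, padding = sys.argv[1], int(sys.argv[2])
    start, end = int(sys.argv[3]), int(sys.argv[4])
    log_dir = sys.argv[5] if len(sys.argv) > 5 else frame_dir
    missing, empty = find_bad_frames(frame_dir, padding, start, end)
    with open(os.path.join(log_dir, "badFrames.log"), "w") as log:
        log.write("missing frames: %s\n" % missing)
        log.write("zero-byte frames: %s\n" % empty)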

Grab it here (temporary link)