LRHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back, and then project that animation back onto the facade model, which has a texture similar to the real building, to simulate what it will look like on site.

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that’s what the name means: render + walk, because you can’t do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statues’ arms will land quite a distance from the actual statues due to the facade’s depth. This isn’t much of an issue when looking at the building from front-center, especially near the projector, but off-axis I felt like it may suck.

So I rendered a view off-axis to check.

I didn’t like it for two reasons. One, my original hypothesis was correct and the arms land pretty far away. This is only an issue for about a third of the crowd, thanks to the trees that force the audience towards the center of the viewing area, but I still don’t like it. The other reason is that any illumination on the actual statues makes them stand out as statues, so I feel like we won’t be able to really remove them like I hoped. The side view does look cool even without illumination on the sides of the pillars and arches. It’s possible to project onto those too, but that’s beyond this project’s budget.

So I created a new animation. This one does a better job of making sure the statues are seen only when I want them to be seen. However, there is a moment when I have the statue “niches” rise up behind them, and by then it’s too late: they can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette – subtlety will be lost as soon as there is any light on them.

For now I’ve left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model. Our goal is to cover those with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.

Geek stuff warning

When I was preparing the model to simulate the projection on the building, I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get them right. I spent some time moving parts of the statues around until they properly aligned with the real statues, and I also tweaked the building, windows, and doors a little. It was a one step forward, two steps back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation came to 4500 frames, plus re-rendering some sections after deciding to make tweaks. I use two computers to render: one is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders using its CPU (4 cores/8 hyperthreaded cores) and the PC renders using two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac renders because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration – including, BTW, developers at AMD working to make Cycles no longer limited to Nvidia GPUs. I say unfortunately because on one of the animations I found the PC’s Cycles render was crashing every 40 frames or so. It’s a sad morning when you see that the faster computer hasn’t been rendering for the last 6+ hours…

I don’t have time to troubleshoot the issue. It’s a mix of Blender/Cycles and Nvidia software, and it’s not that bad in the grand scheme of things. To deal with it I decided to dust off a Python script I wrote several years ago for a compute cluster we had at UCA, where it created job scripts for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file, for you Windows weirdos) so that Blender renders each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells Blender to start without a UI, to save resources, and then render the animation based on the project’s settings)

Into this, listed in a shell script that I start by typing render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2
(then render frame 2)
blender -b blendfile.blend -f 3
(then render frame 3)
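
The simplified script is basically a loop that writes one blender line per frame. Here’s a minimal sketch of the idea – the file names and frame range are placeholders, not what’s in my actual script:

# makeRenderScript.py sketch: write a shell script that renders each
# frame as its own Blender job (file names and range are placeholders)
start, end = 1, 750

with open("render.sh", "w") as script:
    script.write("#!/bin/sh\n")
    for frame in range(start, end + 1):
        script.write("blender -b blendfile.blend -f %d\n" % frame)

After a chmod +x render.sh it’s ready to run, and if Cycles crashes on a frame, only that one job dies – the next frame starts with a fresh copy of Blender.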

Works like a charm. I could make the Python script do a lot more tricks, but for now this is nice.

Last, Blender has a simple method of allowing multiple computers to render the same animation without a render management application: set the output to not overwrite and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder), and if it sees it, it will look for frame 2, etc. When it finds a frame that hasn’t been rendered, it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer can claim a frame as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
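
If you’d rather flip those two settings from a script than in the output panel, Blender’s Python API exposes them – a quick sketch to run inside Blender, assuming I have the property names right:

import bpy

# the same "no overwrite" + "placeholders" combo, set via the API
render = bpy.context.scene.render
render.use_overwrite = False    # skip frames that already exist on disk
render.use_placeholder = True   # claim a frame by writing an empty placeholder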

Since I’m not using a management system there is no check to make sure a frame actually gets rendered properly, so back in the day I also wrote a Python script that looks for frames with zero bytes to tell me if there were bad frames. I might fold that into my other script, but I don’t want to dedicate the time to that right now. The macOS Finder does a nice job of listing “zero bytes,” which stands out in a list, or sorting by size, so I’ve manually deleted bad frames too. To re-render the bad ones after deleting them, I just run the first command with the “-a” – with overwrite off, Blender skips the finished frames and renders only the missing ones.

Making

This summer I changed my blog subtitle from “Artist | Technologist | Educator” to “Artist | Maker | Educator.” For a while I was hesitant about the Maker moniker, but there now seems to be some legitimacy to the name. A maker is one who makes or produces something. In pop culture a maker is one who makes something using DIY electronics, but makers obviously go way beyond that. For instance, there’s been an indie-DIY maker movement in film for some time, where filmmakers build production gear at a significantly lower cost than commercial products. This Newsweek article describes the maker movement as:

…a global community of inventors, designers, engineers, artists, programmers, hackers, tinkerers, craftsmen and DIY’ers—the kind of people who share a quality that Rosenstock says “leads to learning [and]…to innovation,” a perennial curiosity “about how they could do it better the next time.”

June 18, 2014 was an official National Day of Making in the U.S. The maker movement is seen not only as a form of personal expression and flexing curiosity muscles, but also as a potential economic engine. Makers could bring invention and manufacturing back to the U.S. The only issue I have with the proclamation is the emphasis on STEM (Science, Technology, Engineering, and Mathematics). While the STEM movement in education is important, I prefer the STEAM movement, which adds Art to the mix.

Being a maker requires a balance of creativity and logic. The creative side poses a problem or challenge and guides it with an aesthetic. The logic side provides the steps needed to implement a solution to the problem, and then both sides evaluate the solution to see if it succeeds. Making a film, a piece of interactive art, a painting, a kitchen gadget, or a prosthetic hand is as much creative as it is technical, and thus requires active thinking and experience from both creativity and logic.

I think it is very interesting that the western world has put people like Leonardo da Vinci, Michelangelo, and Benjamin Franklin on pedestals as “Renaissance Men” or polymaths, yet we compartmentalize our education system in a way that allows us to choose between the sciences or the humanities/arts with a total lack of balance between them. General education (GE) requirements in college attempt a balance, but students are trained early (even in high school) to see the GE as a chore to get through rather than something that can actually help them understand the world from multiple points of view and gain thinking and manual skills that will serve them throughout their lives.

Who’s not a maker? Most white-collar workers and low-skill laborers.

I’m one of those cliché makers. As a kid I disassembled my toys, figured out how they worked, and mixed them to make something new. I had an Erector Set, Lincoln Logs, and some Legos. My father was an engineer, and he grew up having to fix his cars, home, and appliances and be self-sufficient. My mother learned to paint and make crafts completely on her own. I learned early on to fix cars and learned to drive a manual transmission car at the age of 10. I was taught to build and repair things, use a myriad of tools, and not be afraid to experiment (or of change). I also learned to draw and paint at home before finally taking art classes in school.

In Matthew Crawford’s Shop Class as Soulcraft, he discusses the demise of shop class in American high school and college curricula during the 1990s. Luckily, I grew up on military bases in the ’70s and ’80s with amazing shop facilities, both in the schools and for the base residents (the original maker spaces). I helped my father work on cars and made ceramics with my mother in those shops, took industrial arts courses from the 6th through 9th grades, and then got into making theatre sets and props from the 10th grade through graduate school.

Since the late 1990s, I’ve been building my own maker space (aka “shop”) that’s moved a few times and unfortunately resides in my garage and home office. Someday I’ll consolidate my spaces into a single building, but I think that’s going to happen at a different residence than where I am now.

I make a lot of different things with different materials and using a lot of different tools. A few recent examples are in a previous post. My work is often traditional, using wood, metal, glass, paint, and drawing, while other projects live in the ether, like projections or computer graphics/film projects. I also make beer, which belongs to its own sub-set of the maker movement: the craft beer and home brewing movements.

To get into making, I suggest looking at the activities you hand over to someone or something else and seeing whether you could do them on your own. For instance:

  • Cook your meals. If you haven’t done it before, try starting with something like chili. Find a recipe (the logic), and then spice it and add other ingredients to your liking (the creative).
  • Filmmaking students – make your props and design your costumes beyond what’s in the actor’s closet.
  • Change your car’s oil. BTW, another idea from Crawford is that one should stay away from the “time is money” point of view when considering tasks that could be done by someone else for a few bucks. I heard this years ago and it made me go back to changing my own oil and doing basic maintenance like I did before getting a job back in 2000.
  • Get a Dremel and do some projects.
  • Make some Christmas or birthday presents.
  • Learn to program using Python, Apple’s Swift, Java, or C. And/or learn HTML, CSS, and JavaScript.
  • Follow stories in Make and Instructables. Try something out that intrigues you.
  • Customize your bicycle (replace the seat, bars, pedals, or whatever makes you uncomfortable when riding).
  • Learn to make 3D models using SketchUp, 123D Design, or Blender so you can 3D print them.
  • Fix a leaky faucet or a constantly running toilet.
  • Change the oil in your mower and sharpen the blades.
  • Make a lamp, including doing the wiring from scratch.
  • Fix a lamp.
  • Mend a piece of clothing. Sew a button back on.
  • Buy a Raspberry Pi or Arduino and teach it to do tricks.
  • Carve a pumpkin for Halloween and/or Thanksgiving.

A little automation

While working on a film for one of my colleagues, I found myself staring at a folder of QuickTime movies that needed visual effects work. I thought: wouldn’t it be cool if I could do something simple to make a composite for each one in After Effects (AE) without having to import the footage and then make the composites manually – essentially automating those first few steps.

I looked into it and came up with a solution:

  1. Use Automator to make a Droplet (an app that takes input by dragging and dropping files onto it)
  2. Have Automator run a Python script that:
    1. takes the input files
    2. builds an After Effects jsx script that imports the footage and makes compositions based on them
    3. runs an AppleScript script that starts AE and tells it to run the jsx script
    4. deletes the jsx script
  3. Be robust enough to re-use without Automator later if needed
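
To give a sense of how those pieces fit together, here’s a stripped-down sketch of the Python part. It leans on AE’s AppleScript DoScript command, and the AE application name, the save path, and passing the script text inline (the real version writes the jsx to disk and deletes it afterward, per the list above) are all simplifications and assumptions on the sketch’s part:

# aepCreator sketch (not the shipping code): build ExtendScript that
# imports each dropped clip and makes a comp from it, then have AE run
# it via AppleScript. The AE app name and save path are assumptions.
import subprocess
import sys

jsx = []
for clip in sys.argv[1:]:  # Automator passes the dropped files as arguments
    jsx.append('var f = app.project.importFile(new ImportOptions(File("%s")));'
               % clip.replace('"', '\\"'))
    # addComp(name, width, height, pixelAspect, duration, frameRate)
    jsx.append('app.project.items.addComp(f.name, f.width, f.height,'
               ' f.pixelAspect, f.duration, f.frameRate);')
jsx.append('app.project.save(new File("~/Desktop/aepCreator.aep"));')
jsx.append('/* app.quit(); */')  # the exit line, left commented out (see below)
jsx_text = " ".join(jsx)         # keep it on one line for the AppleScript string

# AE can't be started from the command line on OS X, so hand the script
# to AE through its AppleScript DoScript command instead
escaped = jsx_text.replace("\\", "\\\\").replace('"', '\\"')
subprocess.call(["osascript", "-e",
                 'tell application "Adobe After Effects CC 2014" to DoScript "%s"'
                 % escaped])

The Automator Droplet just runs a script like this with the dropped file paths as its arguments.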

Here it is for OS X 10.8 and OS X 10.9 (Apple made some slight adjustments to Automator between the two OSes, so I compiled two versions).

To use: drag and drop a file that AE can import onto the aepCreator icon. Multiple files will make one AE project that contains each file and a composition based on each one. One file will simply create an aep with one file and one composition. It should break if you drop a folder on it. It will probably break if you drop something that AE cannot import, such as a zip file. It will also probably break on PSD files and image sequences. If that is a problem, then let me know and I will make the program a little smarter. I needed it to handle video files, so that’s all I worried about.

The Python script can be run from the command line without Automator, so a shell script or another Python script could automate building multiple AE projects – handy when working with students or a team of artists, or if your pipeline is one project file per shot. I commented out a line in the jsx script that tells AE to exit after it has built the compositions and saved a new aep file; it is trivial to make the Droplets behave that way if you’re interested.

The project uses three scripting languages and a compiled Automator app. Python does most of the work: it stores the incoming file paths, creates a jsx file and populates it with ExtendScript (Adobe’s extended JavaScript) code that AE can run, and then runs an AppleScript script that tells AE to start. The Python script creates the other scripts needed by AE and macOS. I could have gone without the AppleScript, but Adobe decided that AE cannot be run from the command line on OS X, which is stupid.

Automation like this is pretty easy with 3D animation and compositing applications. Video editors, on the other hand, do not make it easy at all. It seems Premiere Pro supports Adobe’s version of JavaScript, but there is no documentation. When handling multiple visual effects shots, it would be nice if editors could do some automating. Instead, there are expensive products like Hiero, or you end up buying a ton of helper apps designed just for workflows (the App Store is full of very cool helper apps for FCPX – they go well beyond filling in so-called “missing features”). I’m looking forward to the upcoming version of OpenShot. It is built using Python and can be controlled interactively from a Python shell, which will hopefully make automating timeline manipulation easy, assuming it supports XML in and out.

A little Python project

I don’t program enough. Simple Python programming opens up so many opportunities for pipeline tools and for extending graphics applications like Blender, Maya, Cinema 4D (latest versions), Lightwave (latest versions), Nuke, Hiero, and several asset and job management systems. Python is also being pushed as the main programming language on the Raspberry Pi, which is sitting next to me wanting some attention…

After Europa wrapped, I started a programming to-do list that included a tool that checks for missing or bad rendered frames. I was rendering on a lot of different machines for that project and had thousands of frames to manage. I wasn’t using any render management system, so nothing was checking to make sure a frame got rendered properly except me scanning lists of files.

The software that runs the cluster at school has no ability to verify that a frame rendered properly, and at home I use Blender’s simplistic network rendering ability of creating an empty placeholder frame (to show that the file exists so a free computer can work on the next frame) and then replacing the empty file when the rendered file is finished (BTW, there is a network rendering manager in Blender, but I don’t use it – yet). If a computer fails to complete a render, a blank frame is left behind. The same goes for the school’s cluster (Callisto), because I usually set up the files at home (turning on the “placeholder” option and turning off “overwrite”). While I didn’t have a lot of problems with broken files and unfinished sequences, I felt vulnerable to issues that would be difficult to fix at the last minute – some frames took about 40 minutes to complete on an 8-core/16GB RAM computer.

Over the last couple of days I carved out some time and wrote a Python script that looks for both missing files in a sequence and files with zero (0) bytes, then writes out a report telling you what’s missing or empty (a stripped-down sketch of the core check follows the argument list below). It works by typing something like this on the command line:

python badFrames.py /path/to/rendered/frames 4 0 240 /path/to/logs

  • /path/to/rendered/frames = unix-style path to the directory that has the frames. In OS X, /Volumes/Media/Europa/06 (or something like that)
  • 4 = frame padding. The number 4 means a frame name will end with 0000, 0001, 0002, etc. This assumes number + extension, such as file0001.exr. The filename can be anything as long as the last 4 (or however many) characters before the extension are numbers.
  • 0 = start frame
  • 240 = end frame (the correct/expected end frame, not what’s actually in the directory)
  • /path/to/logs = (optional) the script saves a text-file report to this path. If no path is given, it saves the log file in the same directory as the frames.
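
Here’s that stripped-down sketch of the core check – minimal argument handling and no log file, so treat it as an illustration rather than the actual script:

# badFrames sketch: report missing and zero-byte frames in a sequence
import os
import sys

frames_dir = sys.argv[1]
padding, start, end = int(sys.argv[2]), int(sys.argv[3]), int(sys.argv[4])

# index the existing files by their trailing frame number
found = {}
for name in os.listdir(frames_dir):
    path = os.path.join(frames_dir, name)
    stem = os.path.splitext(name)[0]
    if os.path.isfile(path) and stem[-padding:].isdigit():
        found[int(stem[-padding:])] = path

# walk the expected range and report anything missing or empty
for frame in range(start, end + 1):
    if frame not in found:
        print("missing: frame %d" % frame)
    elif os.path.getsize(found[frame]) == 0:
        print("zero bytes: %s" % found[frame])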

There is only a little error checking going on, so it could break if there are extra files in the directory, such as text files that sort before the frames (sub-directories/folders are fine, though), or if you give it a path that does not exist. If people ask, I will add more error checking so it gives actual error messages. Right now it will give you a help message if something obvious is wrong. I might make a GUI for it too – more on that later.

BTW, this will not work on Windows without a few tweaks, but it runs fine on OS X and Linux. I am hard-coding some slashes in paths that work only on unix-based systems. It does require Python 2.7.x or 3.x due to my print statements; I can change them so it works with 2.6 or earlier if you want.
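
For what it’s worth, the Windows fix would mostly be swapping those hard-coded slashes for os.path.join, along these lines (the log file name here is just an example):

import os

log_dir = "/path/to/logs"
# build the path portably instead of log_dir + "/" + filename
log_path = os.path.join(log_dir, "badFrames_log.txt")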

Grab it here (temporary link)