LRCH Projection Mapping Show

LRCH_blueprints

Photo by Waid Raney

The show went very well overall with lots of positive feedback. Full wrap-up post forthcoming, but for now, here is the event.


LRHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back and then project that animation onto the facade model, which has a texture similar to the real building, to simulate what it will look like on site.

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that’s what the name means: render + walk, because you can’t do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statues’ arms will be quite a distance from the actual statues due to the facade’s depth. This isn’t much of an issue when looking at the building from front-center, especially near the projector, but off-axis I felt like it might suck.

So I rendered a view off-axis to check.

I didn’t like it for two reasons. One, my original hypothesis was correct and the arms are pretty far away. This is an issue for about a third of the crowd, thanks to the trees that force the audience towards the center of the viewing area, but I still don’t like it. The other reason is that any illumination on the actual statues makes them stand out as statues, so I feel like we won’t be able to really remove them like I hoped. The side view does look cool even without illumination on the sides of the pillars and arches. It’s possible to project onto them too, but that’s beyond this project’s budget.

So I created a new animation. This is better in terms of making the statues seen only when I want them to be seen. However, there is a moment when I have the statue “niches” rise up behind them, and by then it’s too late; they can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette – subtlety will be lost as soon as there is any light on them.

For now I’ve left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model. Our goal is to cover those with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.

blueprint

Geek stuff warning

When I was preparing the model to simulate the projection on the building, I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get it right. I had to spend some time moving parts of the statues around until they properly aligned with the real statues, and I also tweaked the building, windows, and doors a little. It was a one step forward, two steps back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation was 4500 frames. Plus some re-rendering sections after deciding to make some tweaks. I use two computers to render. One is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders using its CPU (4 CPU cores/8 hyperthreaded cores) and the PC renders using two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac can render because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration, including, BTW, developers at AMD working to make it so Cycles is not limited to Nvidia GPUs. I say unfortunately because on one of the animations I found the PC’s Cycles render was crashing every 40 frames or so. It’s a sad morning when you see that the faster computer hasn’t been rendering for the last 6+ hours…

I don’t have time to troubleshoot the issue. It’s a mix of Blender/Cycles and Nvidia software, and it’s not that bad in the grand scheme of things. To deal with it I decided to dust off a Python script I wrote several years ago for a compute cluster we had at UCA. It created a job script for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file for you Windows weirdos) that I could run so that Blender would render each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells blender to start without a UI to save resources and then render the animation based on the project’s settings)

To this, listed in a shell script that I start by typing render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2
(then render frame 2)
blender -b blendfile.blend -f 3
(then render frame 3)

Works like a charm. I could make the python script do a lot more tricks, but for now this is nice.
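For anyone curious, the guts of the simplified script amount to something like this (a rough sketch rather than the actual script; the script name, output name render.sh, and frame range arguments are just placeholders):

# makeRenderScript.py (sketch): write a shell script with one Blender job per frame
# usage: python makeRenderScript.py blendfile.blend 1 750
import os
import stat
import sys

blendfile = sys.argv[1]
start = int(sys.argv[2])
end = int(sys.argv[3])

lines = ["#!/bin/sh"]
for frame in range(start, end + 1):
    # each frame is its own Blender job, so a crash only loses that one frame
    lines.append("blender -b %s -f %d" % (blendfile, frame))

with open("render.sh", "w") as out:
    out.write("\n".join(lines) + "\n")

# make render.sh executable so it can be started from the terminal
os.chmod("render.sh", os.stat("render.sh").st_mode | stat.S_IEXEC)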

Last, Blender has a simple method of allowing multiple computers to render the same animation without using a render management application. Set the output to not overwrite existing files and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder), and if it sees it, it will look for frame 2, etc. When it finds a frame that hasn’t been rendered it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer can claim a frame as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
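If you’d rather flip those two options from a script instead of the UI, it’s just a couple of properties on the scene’s render settings in Blender’s Python API (a tiny sketch):

import bpy

render = bpy.context.scene.render
render.use_overwrite = False    # don't re-render frames that already have a file
render.use_placeholder = True   # claim a frame by writing an empty file before rendering it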

Since I’m not using a management system there is no check to make sure a frame actually gets rendered properly, so I also wrote a Python script back in the day that looks for frames with zero bytes to tell me if there were any bad frames. I might fold that into my other script, but I don’t want to dedicate the time to that right now. The macOS Finder does a nice job of listing “zero bytes,” which stands out in a list, or of sorting by size, so I’ve manually deleted bad frames too. To render the bad ones again after deleting them, I just run the first command with the “-a” to find the missing frames and render them.

LRCHS – Projection Mapping – 1st Post

The 90th anniversary of the opening of the Little Rock Central High School building and the 60th anniversary of the Desegregation Crisis are coming September 18-25, 2017. It will be a week of activities that commemorates the anniversaries and culminates in an event that features a projection mapped animation on the façade of the high school building.

This first blog post is about a major milestone for the animation: a completed virtual 3D model of the facade, including its four statues. Now that the model is complete we can finally get to work. The majority of the animation we create will be based on the architectural structure of the facade. I can’t believe February is almost over! It took me over a week longer than I expected to finish this phase of the project due to distractions, including an illness that caused horrible headaches, some external issues and projects, and some personal goals beyond the projection mapping project. Hopefully the headaches are past – I can manage the rest.

Here’s the basic model:

4statues

We can add lighting that can make it appear as if we’ve hung actual lights near the building:

spotlights

We can also play around (this is just a test and not final imagery):

lightjade

And add stuff:

1927

Here’s what it should look like at the campus. We intend to add some lighting around the central facade as well.

projectiontest

The Facade

The limestone part of the high school’s main entry has several nice 1920s Art Deco details and is sculptural in nature, with deep-set doors and windows and jutting pedestals for the four statues. I still need to add the letters for the statues. We will hopefully be able to temporarily cover the windows and doors so they won’t be so dark. We will also need to cover the lanterns so they will reflect the projections.

dsc00432

Ambition, Personality, Opportunity, and Preparation

When facing the building the four statues from left to right are Ambition (male), Personality (female), Opportunity (female), and Preparation (male).

I’ve been told that the four statues were “ordered from a catalog” and not unique to the building project. Their body styles are reminiscent of Michelangelo sculptures with their long muscular arms and Greek facial features. Preparation must have been the sculptor’s version of David – see his contrapposto stance, physique, lowered right arm (holding a scroll in this case), raised left arm holding a book instead of a sling, and a left-facing gaze.

ch-interior_110 512px-27david27_by_michelangelo_jbu0001

Their dress is based on the ancient Greek chiton. The sculptural style is “wet drape,” where the cloth clings to the skin to reveal the figure’s body underneath. This is most obvious in Preparation, whose torso practically looks bare, and you can see it in Opportunity as well. I modeled these statues by starting with nudes so I could get the wet drape look right.

I think later blog posts will go on another website dedicated to this project. Geeky stuff will stay on this blog though.

Geek Stuff (most of you will want to skip this)

I modeled the facade by building basic geometric shapes and aligning them to a photograph I took last summer. I actually got most of this model finished by last fall. In January I added the smaller details and lanterns.

The statues were very time consuming and I knew they would be… I downloaded a few nude “base models” from Blendswap, which are designed to be a starting place for creating a character. For the females, I used the body of one and the hands and head of another. After splicing them together I pushed and pulled and extruded faces, edges, and vertices to make them match the sculpture. I also used sculpting tools to smooth and guide areas of the model. The models are considered low-poly, which makes them easy to animate and handle in the 3D software. When they are rendered they are smoothed using Pixar’s subdivision surface technology. It turns a blocky mess of polygons into flowing garments.

For the capes I essentially started with a line and extruded it and moved it to create the overlapping folds. For smaller details I just cut the larger polygonal faces into smaller ones that I could then push, pull, and sculpt into their final form.

Once a model seemed ready to go I aligned it with the main photo of the facade. I had closeups of the statues to do most of the work, but since those photos were taken from below, the proportions were not accurate, so aligning with the main photo was key to getting the overall sizes correct. Because of the proportion issues and a number of other things, I modeled the statues just by looking at my photos rather than trying to align them to photos in the 3D viewport, which is common for character design.

While modeling, the virtual statue is standing in a T-pose. I used a T-pose because we will most likely apply some custom motion capture animation, and our motion capture system (Perception Neuron) requires a T-pose to start. Another common starting point for a character model is an A-pose, which is more relaxed, but that’s not a good idea for our purposes.

After getting the proportions correct I added a skeleton to the model. The skeleton is based on the needs of the motion capture system. The model is bound to the skeleton, so whenever I move a bone the model will deform with it. I used the bones to pose the model to match the statues. I actually animated the movement so I could go back to the T-pose easily as well as test the model deformations as the bones moved. Some of the dress is not driven by the skeleton at the moment; that will come later via cloth simulations.

opportunityposing

I modeled the statues this way because I knew we would be animating them and they needed a structure that would support animation. A more accurate alternative to modeling by eye would have been to scan the actual sculptures. Scanning could be done via LIDAR, but would have been prohibitively expensive. Or, it can be done with lots of photographs from multiple angles via photogrammetry. Shooting the sculptures with a drone and extracting frames from the video would have been a way to get the images needed.

The upside to scanning would be a very accurate model, but there are downsides. One is that the scan would have to be retopologized to make it animatable, which can be time intensive. Another is that the models would not have a backside, and the arms would be stuck to the bodies, so they would need hand modeling to create the back and free the arms. I would have been up for these things had they been scanned last fall. Unfortunately they are 22 feet above the ground, so logistically it is not a trivial issue to get to them.

From here it is a matter of lighting, creating cool surface materials, animating the statues, opening the doors, or whatever else we come up with. Even things that don’t directly change the facade, such as showing a photo, will be rendered against the virtual facade so the photo will appear to interact with the building.

Blender

screenshot

I used Blender to do all of this work. It is just a joy to use. Some things that came in handy (these aren’t necessarily unique to Blender BTW):

  • Used photos as a background in the camera viewport to help create a 3D environment that matches the scale of the actual building.
  • Changed one of my 3D panels into an image viewer so I could have a photo of a statue up at all times.
  • The Shift key – I use a Wacom Intuos 4 Medium when working with graphics software. It has a bad habit of moving during a click, or of not really making the mark you tried to make because it was so small. When changing a parameter in Blender (practically no matter what it is), you can hold down the Shift key and it will increase the precision of the change by not allowing the value to jump drastically no matter how much you move the stylus. I can make big movements to make small changes. BTW, some graphics programs do have a similar function, just not all…
  • Matcaps – I hadn’t really used them before, but they make modeling organic forms much easier. They let you customize how the model is shaded in the viewport so you can see the curved surfaces more easily.
  • Proportional Editing – used when moving a vertex or small group of vertices and wanting surrounding vertices to move with them, but not as much. Super helpful when making proportion changes or needing to move parts of the model around to accommodate the posed body. Especially useful is the “Connected” mode, where it will only move vertices connected to the one you are moving rather than ones that are just nearby. You can also change the falloff to control how the non-selected vertices will change. BTW, this works on more than just vertices; I’m just using that as an example.
  • Subdivision Surfaces – Blender can show the subd effect while editing the model, either by showing the base model and the smoothing separately or by bending the base model’s edges along the surface of the smoothed model. This really helps me see how changes to the low-resolution model will change the smoothed model.
  • Solidify modifier – I made the capes a single polygon thick and used this modifier to give them dimensional thickness. When sending the models out to Jim and Jonathan, who use Cinema4D and Maya, I will “Apply” this effect to make the geometry permanent.
  • Cycles with two GPUs – it’s so fast! For doing test renderings and making the images in this blog post, it is amazing how fast Cycles can be. The images here took about a minute and a half each to render. It’s also crazy easy to make objects into light sources. I do most of the work on my iMac and then switch over to my Linux computer for rendering.

Wake Me Up When September Ends

Been quite a while since I updated the blog. Here’s what I’ve been up to:

The summer started off with a gig doing graphics for the current Brad Paisley tour. I created a few 3D models for the other artists at MooTV to work with and created an animation of a space shuttle taking off. The space shuttle animation opened the show. This gig got the summer started right by funding the 3D printer and my home office remodel. I also helped design the stage for this concert. Here’s a pic of the stage with an image of a shuttle on the screens (not my shuttle, but you get the idea).

Testing to see if the space shuttle look will work with the stage design

Garage/Shop

Before getting the remodel going I really needed to clean up my garage/workshop, fix some work surfaces, and do something about my aging air compressor and air-line setup. I replaced the 14-year-old compressor and re-worked the air distribution. The 3D printer came in handy here. My first custom printed model was a bracket to mount a pneumatic junction on my ceiling. I also replaced a cheesy work table with one that will be more useful for projects.

Air Bracket / Printed Air Bracket

Home Office/Studio

Remodeling the home office needed to happen. Last summer we remodeled most of the downstairs except for the office. I had done some work previously to a wall (exterior and interior) that was damaged by water and time, but the rest of the office looked like it did after we moved in 10 years ago. I took my time because I could and stretched out re-texturing the ceilings, painting, trimming, carpeting, and decorating over three weeks (about a week or so worth of actual work). The Brad Paisley gig paid for the materials and some furniture, but I also built some of my own furniture.

Back room before the remodel

One of 3 selfies I’ve ever done

Painting getting started

3D Printer Table

Five years ago I built a simple table/stand for my Linux rendering computers (cheap custom PCs). I decided that this table should be used for the 3D printer so I built a new top for it. I had some shelves I built many years ago out of 1×12 that I decided not to use again so I cut the wood into strips and made a table top.

Table Top Glue Up / Finished Table Top

Drawing Table

I’ve been carrying around an old drafting/drawing table for about 20 years without actually using it. My father got it about 40 years ago from the Arkansas Tech ROTC building that had partially burned (you can see some smoke staining in one of the pictures). I loved the table, but didn’t have room for its size, and I wanted to finish it somehow before I put it back into service. My father used it with another drafting surface on top of it to protect it, but I wanted to do something a little different.

I cut it down to a more manageable size and built a new base for it. I still had the old saw-horse style base, but decided not to re-finish it. Once the top was cut down, I sanded and sealed it. Then I attached it to the new base and put a self-healing cutting mat on it to protect it. The base is kinda lame, so I’m thinking I’ll build a more inspired base next summer. I’m so happy to finally be using the table top. My father died quite unexpectedly 20 years ago (in February), and this table top is one of the few things I have of his that I knew he truly enjoyed having. It’s great to put it back in service.

See the smoke under the supports

Drafting Table Cut Down

Cut to more usable length

Completed Drawing Table next to store-bought table

Artwork

I’ve been doing some drawing over the last year. I posted about how iPad drawing wasn’t doing it for me, so I’ve been doing as much traditional drawing as I can (when I feel like it, too…). Also, as part of my office remodeling project, I photographed all of my artwork. I expect to post it soon now that I have it photographed; I just need to do some color correction on the photos. Here are some recent drawings from my sketchbook (just a tease – there are lots more).

Demon of Dutch Hill. Inspired by Stephen King’s Dark Tower: Wasteland

Some rocks in my head

Sasha WIP

Beer Brewing

During the semesters I don’t brew very much compared to the summer break. This summer I brewed a lot. I was able to try out some different recipes and refine my all-grain process a little more. I also added a crazy-cheap chest freezer to handle my need for consistent fermentation temperatures. In the fall, winter, and spring I can keep the fermentation temperature pretty close to optimal in my house, but during the summer my temps go up, since my house is somewhat old and cannot maintain the right temp for brewing. The deep freeze takes care of the problem nicely, and I can tell that my summer brews are much better. I just set the temp I want on the controller and it tells the freezer when to turn on and off to maintain that temp.

Mr. Olympia

Towards the end of summer/beginning of the semester I got another gig. This time I needed to create a virtual 3D version of a sculpture of Joe Weider, the creator of the Mr. Olympia competition, for the 50th Mr. Olympia. It was supposed to be just a weekend job, but it took an extra few days of iterations on the likeness until the client was happy. This gig will support a computer upgrade at some point (my computers are 5 years old). Apple just announced the latest iMac, which is cool, but it features AMD graphics hardware, which is bad news for the software I use most. Now I’m trying to decide what I’m going to do (a previous Nvidia-based iMac, a hackintosh, a Thunderbolt PCIe box, etc…).

Mr. Olympia animation last frame

ArtsFest

The end of the summer for me was ArtsFest on October 3rd. I created some animations that I projected onto a building in downtown Conway. It ended up pretty much sucking compared to the work I did for ArtsFest last year. Last year I did a projection mapping project that drew a nice crowd, but this time the projection was literally over everyone’s head. Most of the audience did not even know it was happening, and the crowd was much smaller than last year. Also, a street light washed out half of the projection. The organizers had never seen that particular light on before, but on 10/03 it came on… I think the thing that really sucked was that I essentially phoned the project in. I had plenty of time to make it really cool, but I did not put in the time or effort. I probably only had 30 hours (maybe) in the whole project, so I created things that were easy and uninspired. Next year I plan to do something interactive rather than rendered footage. Interactivity is inherently more interesting at an event like ArtsFest, and it is something that I have not done in many years, so I think the challenge will kick me into gear to make something cool.

ArtsFest Light Issue

ArtsFest Fabric Fall

ArtsFest Cubes

ArtsFest Spheres

ArtsFest Lock

ArtsFest Tunnel Ride

New semester

This fall semester started out as one of the busiest of my teaching career. Between a new class, a server meltdown, and having a major impact on the careers of four of my colleagues (via the Tenure and Promotion committee), it has been tough to get anything interesting done. It’s fall break now and I’m feeling a bit of relief.

Future projects

I’ve been working on a few other things that I have not mentioned in this post (both professional and personal projects). As those projects mature I’ll post about them. They include brewing projects, 3D printing, artwork, and film work.

Pipeline Tools

The impetus for this post comes from two sources. One was an interview I watched on fxguide.com with the head of the Discreet Logic products at Autodesk (Smoke, Flame, and Lustre – kind of), who talked about Flame and Smoke being finishing tools rather than pipeline tools. He was talking about Nuke and how in many ways it is not comparable to Flame, because Flame/Smoke are much more than shot-by-shot applications. Instead they are about seeing your whole project and bringing it to a close with titles, complex effects, color correction, etc. at realtime speeds so clients can be in on the process. It got me thinking about which apps are used as pipeline tools and which apps can be used to complete a whole project (animation or film).

The other source was the recent news about Luxology and The Foundry merging. These companies create very different products, and there are some pretty cool opportunities with the merger. One thing that came up in interviews with users of their products, however, is that The Foundry’s products are very expensive and out of reach for small studios, while Luxology’s products are very inexpensive and easy to use. I was really surprised by the candor of John Knoll (ILM vfx supervisor) and Tim Crowson (Magnetic Dreams, Nashville) about how The Foundry’s tools are expensive pipeline tools that are hard to use, and how Luxology’s Modo is low-cost, easy to use, and creates really high-quality output. John Knoll also seemed to say that The Foundry could learn a thing or two about software design from Luxology. Stu Maschwitz has also remarked that there is a move towards easier-to-use software over never-ending software options that are hard to understand and rarely used. Even the new version of Maya, which is famous for its ability to make the easiest thing take as many mouse clicks as possible in multiple editors, has simplified its interface to make everyday tasks easier.

For years I have argued that it should not be so hard to get work done in software applications. I always thought it was funny coming from me, because I like a lot of complexity and I am very technically savvy, but I was finding that it was complexity for complexity’s sake, or a leftover from the app starting out as a pipeline or in-house tool that grew into a more generalized application. Cinema4D is really popular with motion graphics artists because it is relatively easy to use and can produce high-quality imagery with a complete toolset. Blender is free, but it is often shrugged off because it’s free – how can it be good if you aren’t paying thousands of dollars for it?

There are a lot of software applications out there for creating digital content. Depending on your projects, production team size, level of quality, technical expertise, need for sharing data with other artists, GUI preferences, budget, and probably several other reasons you will find yourself looking at all the products out there and trying to decide how you should invest your overhead funds.

Two general options are out there for developing a pipeline. One is to keep everything under one application and the other is to break the process out to separate software tools. The primary reason to use one tool is simplicity – keeping all of your assets and workflow together in one environment. The main reason to break up the pipeline with separate tools is to use the best features of different applications to raise your production value as much as possible (assuming that each tool helps complete a task better than anything else).

An example process that can be done by one or many applications:

  • Live action post
    • Edit
    • Audio mix and sweetening
    • Color Correction
    • Visual Effects
    • Final compilation for delivery

Live action post from the edit to final delivery can be done in one application, such as Final Cut Pro, Avid MC/Symphony, Premiere Pro, or Smoke 2013 (among others, but these tend to be the ones everyone likes to talk about). It is possible for one editor/artist or a small team to produce content at a high level of quality in just one of these applications. However, for projects that require complicated sound mixing, extensive color correction, and/or complicated visual effects, a single application may not have all of the tools to do the job at the expected quality. Here’s how this would look if it were broken out into individual pipeline tools:

  • Live action post
    • Edit – Final Cut Pro, Premiere Pro, Avid MC or Symphony, Smoke 2013, etc.
    • Audio mix and sweetening – Pro Tools, Logic, Ardour, Audition, Nuendo, etc.
    • Conform – Hiero, Resolve, Scratch, Baselight, etc.
    • Color Correction – Resolve, Speedgrade, Scratch, Lustre, Apple Color (discontinued), Baselight, Magic Bullet plugins, etc.
    • Visual Effects – After Effects, Nuke, Digital Fusion, Flame, Smoke 2013, etc.
    • Final compilation for delivery – Avid DS and Symphony, Smoke

Using several applications creates some issues that the post-production team will have to deal with. The first is the ability to easily move data between these applications via XML/EDL/OMF/etc., or by exporting from one and importing into another. This can be a headache if the applications fail to move data properly, and there are also possibilities for data or image quality loss. The other big issue is cost. If the artist/freelancer, small shop, or institution has to maintain licenses for all of these tools, it can get expensive quickly. Bundles, suites, or free software often look better than purchasing each application from a different vendor.

3D animation and visual effects are even worse. There are several general animation and compositing applications that can do very high-quality work, but there are also pipeline tools that offer better solutions to specific problems that can be tough for the general apps to handle. Here are some steps in the workflow for doing 3D and/or VFX work:

  • 3D Animation/Visual Effects
    • Model
    • Rigging
    • Layout
    • Animation
    • Simulation
    • Tracking
    • UV Layout
    • Texture Painting
    • Shading
    • Lighting
    • Rendering
    • Compositing

Lots to do, and lots of applications to do each part. First, the general animation applications: Autodesk (Maya, 3dsmax, Softimage), Cinema4D, Lightwave, Modo, Blender, and Houdini. There are others, but these get most of the press. Each can do everything listed below except compositing, which only some of them can do. For compositing, just look at the Visual Effects step from the live-action list above and add Blender, Softimage, and Houdini.

  • 3D Animation/Visual Effects
    • Model – General 3D app, ZBrush, Mudbox, Modo, FormZ, Rhino, Inventor, Vue (for landscapes), Silo
    • Rigging – General 3D app + scripting, Motion Builder
    • Layout – General 3D app, Motion Builder, Katana
    • Animation – General 3D app, Motion Builder, in-house tools
    • Simulation – General 3D app, Naiad (recently assimilated into the Autodesk collective), realflow, Houdini, FumeFX, in-house tools
    • Tracking – PFTrack, Syntheyes, Blender, Mocha Pro, boujou
    • UV Layout – General 3D app, Headus
    • Texture Painting – Mari, Bodypaint, Photoshop, Krita, GIMP, etc.
    • Shading – Renderman, OSL, other renderer shading languages
    • Lighting – General 3D app, Katana, Keyshot
    • Rendering – Mental Ray, Vray, Renderman, Arnold, Maxwell, Luxrender, Appleseed (in early stages), 3Delight, Krakatoa, etc.
    • Compositing – After Effects, Digital Fusion, Nuke, Blender, Houdini, Softimage

A boatload of apps here… Moving data between these applications can be very difficult, and some pipeline apps like RealFlow and the renderers require extra costs to render on more than one computer at a time. At facilities that use multiple pipeline tools and/or multiple general 3D apps, there are usually Technical Directors (TDs) who create custom tools in Python and/or Perl or a native scripting language like MEL to efficiently move data between apps. This extra technical support is usually beyond the means of small shops, individual artists, and institutions, so they tend to feel the pain when going between applications. How are things changing to help with these problems?

  • Since Autodesk owns half of these apps, they have worked on making it easier to move data between them – you just have to upgrade each year and wait for better workflows.
  • Large studios like ILM and Sony Imageworks have released software as open source to make data interchange easier. Projects include OpenEXR, Alembic, and OSL. Other open source projects, like the Bullet physics simulator, have been integrated into open source and commercial applications.
  • General 3D apps are getting better at what they do, so extra funding for pipeline apps is not as necessary except in extreme situations. See the addition of sculpting to Cinema4D, the physics simulators in Maya and Houdini, and renderers like Cycles and the smoke simulator in Blender.
  • Prices are falling and competition is good. The Pixel Farm recently lowered their prices, and Luxology’s Modo continues to get better while keeping its price lower than all other commercial general 3D programs. New lower pricing structures for the higher-priced commercial applications seem to come out each year – or bundling, like Autodesk does.
  • Open source solutions exist, such as Blender, Inkscape, Krita, Luxrender, Ardour, etc.

Do you really need the pipeline apps? Ask yourself a few questions: Can I afford it beyond my regular tools? Will I use it regularly (ROI)? Do I have a problem that can only be solved with one of these apps? Does it play well with my existing apps? Do I need to purchase a specific app for a specific position on my team, such as a texture painter or colorist? Hopefully you will not find yourself saying no to being able to afford them but yes to having a problem that can only be solved with one of them, but don’t be surprised when it happens. Also look for hidden costs, such as render licenses, limited OS support, and annual license fees. Consider leasing specialty apps when that option is available. More than anything, consider the need at hand and choose the tools that can get the work done at the level of quality you expect. Just because there is a specialty tool out there does not mean that it is the only thing that can do the work. BTW, I’ll try to add some links and costs in a followup post.

A little Python project

I don’t program enough. Simple Python programming opens up so many opportunities for pipeline tools and for extending graphics applications like Blender, Maya, Cinema4D (latest versions), Lightwave (latest versions), Nuke, Hiero, and several asset and job management systems. Python is also being pushed as the main programming language on the Raspberry Pi, which is sitting next to me wanting some attention…

After Europa wrapped I started a programming to-do list that included a tool that checks for missing or bad rendered frames. I was rendering on a lot of different machines for that project and had thousands of frames to manage. I wasn’t using any render management system, so there was nothing checking to make sure a frame got rendered properly except me scanning lists of files.

The software that runs the cluster at school has no ability to make sure a frame got rendered properly, and at home I use Blender’s simplistic network rendering ability of creating an empty frame (to show that the file exists so a free computer can work on the next frame) and then replacing the empty file when the rendered file is finished (BTW, there is a network rendering manager in Blender, but I don’t use it – yet). If a computer fails to complete a rendering, there will be a blank frame left behind. A blank file will be left over if a computer fails on the school’s cluster (Callisto) too, because I usually set up the files at home (turning on the “placeholder” option and turning off “overwrite”). While I didn’t have a lot of problems with broken files and unfinished sequences, I felt like I was vulnerable to issues that would be difficult to fix at the last minute – some frames took about 40 minutes to complete on an 8-core/16GB RAM computer.

Over the last couple of days I carved out some time and wrote a Python script that looks for both missing files in a sequence and files with zero (0) bytes, and then writes out a report telling you what’s missing or has zero bytes. It works by typing something like this on the command line:

python badFrames.py /path/to/rendered/frames 4 0 240 /path/to/logs

  • /path/to/rendered/frames = unix-style path to the directory that has the frames. In OSX, /Volumes/Media/Europa/06 (or something like that)
  • 4 = frame padding. The number 4 shows that a frame name will end with 0000, 0001, 0002, etc. This assumes number + extension, such as file0001.exr. The filename can be anything as long as the last 4 (or whatever) characters are numbers.
  • 0 = start frame
  • 240 = end frame (correct/expected end frame, not what’s actually in the directory)
  • /path/to/logs = (optional) unix-style path where the script saves a text log file. If a path is not given, it will save the log file in the same directory as the frames

There is only a little error checking going on, so it could break if there are extra files in the directory, such as text files that are listed before the frames (sub-directories/folders are fine though), or if you give it a path that does not exist. If people ask, I will add more error checking so it gives actual error messages. Right now it will give you a help message if something obvious is wrong. I might make a GUI for it too – more on that later.

BTW, this will not work on Windows without a few tweaks, but it will run fine on OSX and Linux. I am hard-coding some slashes in paths that only work on unix-based systems. It does require Python 2.7.x or 3.x due to my print statements. I can change them so it works with 2.6 or earlier if you want.
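The actual script is linked below, but the core of it is roughly this (a simplified sketch, assuming frame names that end in the padded number as described above; the log file name here is just a placeholder):

# badFrames.py (sketch): report missing and zero-byte frames in a sequence
import os
import sys

frames_dir = sys.argv[1]            # /path/to/rendered/frames
padding = int(sys.argv[2])          # e.g. 4 means file0000.exr, file0001.exr, ...
start = int(sys.argv[3])
end = int(sys.argv[4])
log_dir = sys.argv[5] if len(sys.argv) > 5 else frames_dir

# index the files that exist by the frame number at the end of their name
found = {}
for name in os.listdir(frames_dir):
    base, ext = os.path.splitext(name)
    if len(base) >= padding and base[-padding:].isdigit():
        found[int(base[-padding:])] = os.path.join(frames_dir, name)

# compare what exists against the expected frame range
report = []
for frame in range(start, end + 1):
    if frame not in found:
        report.append("missing: frame %d" % frame)
    elif os.path.getsize(found[frame]) == 0:
        report.append("zero bytes: %s" % found[frame])

with open(os.path.join(log_dir, "badFrames_log.txt"), "w") as log:
    log.write("\n".join(report) + "\n")

print("%d problem frame(s) found" % len(report))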

Grab it here (temporary link)

Tools – software Ep. 1

Episode 1: The Beginning

A little history. This is episode 1 after all…
I got a Windows 3.1 PC in the winter of 1992. Previously I had owned several Commodore computers and my father had an XT, but I did not get serious about computing until I had that 486DX33. I got some games from a friend, but other than Risk and Wolfenstein, I didn’t really care about them. What I really wanted to do was make images and use the computer to help document my theatrical scenery and lighting designs.

I farted around with whatever software I could afford from our local bookstore until 1994, when I was able to go to USITT for the first time and picked up a coupon for a price break on Strata3D for DOS. After discussing it with my mother, she agreed to pay for the software and I got to work with my first professional program (or so I thought). It turned out sucking pretty bad and I barely learned a thing. I also tried POV-Ray and was able to render one frame at 800×600 after 13 hours of compute time.

A little less than a year later I got an undergraduate grant and purchased Caligari Truespace right as v2 came out. It changed my life. I was finally able to get work done and it was great. I also got XCAD, which was a short-lived but excellent 3D CAD program. I built a portfolio and senior thesis with these tools. In the last year of college I was also working in Premiere and Photoshop on the Mac at school, as well as getting introduced to warez, which would get me through grad school and the early days of my freelance professional career (and build my expertise with 3D Studio, 3ds max, After Effects, Combustion, Maya, and Photoshop).

I learned a lot over those years as far as tools, pipelines, and the industry were concerned. I was actually living the evolution of high-end graphics on the desktop, thanks to the warez and to being a student, and later a professor, at Purdue University. The persistent problem, though, was the price of those tools. In the early days the software could cost into the tens of thousands of dollars. Today the top programs still cost several thousand dollars, even though they are practically all owned by the same company (not all, but the top 3 in 3D). The cost of software and the alienation of the user have led me to pay much more attention to open source projects, such as Blender, MyPaint, GIMP, and Inkscape, and to open programming languages, in particular Python.

Blender is a great model for how to make an open source project work. There are several talented developers, a core governing body that is well organized and funded, and many devoted users. In 2007 I was approached to work on a very large film project, and I was using Maya at the time. I had to decide whether to invest in Maya professionally or go another way. Blender had recently implemented a node system for compositing, and other major developments were on the way. I had peeked in on Blender a few times in the past but had not taken it seriously. After a month of creating one thing in Maya and then recreating it in Blender, I decided to commit to Blender for 3D animation. I have not looked back since.

Though it is not perfect, nor are its features as complete or as broad in many cases as commercial products like 3ds max, Maya, Softimage, or Cinema4D, I find it to be professional quality, fast, and reliable, and I am addicted to its fast and active development community.

I am also constantly opening my eyes to open communities and projects: open standards, open courseware, open source shared between visual effects studios, and the exchange of ideas.

Episode 1 was a little history and point of view. The next episodes will include topics such as “professional” tools, interoperability, and looking at the whole filmmaking pipeline. With each topic I will support my POV on commercial/proprietary software, the cost of tools, and how it all fits into the business of the indie artist/freelancer.

— Scott