LRHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back, and then project that animation onto the facade model, which has a texture similar to the real building's, to simulate what it will look like on site.

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that's where the name comes from: render + walk, because you can't do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statue arms will land quite a distance from the actual statues due to the facade's depth. This isn't much of an issue when looking at the building from front-center, especially near the projector, but off-axis I felt like it might suck.
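The displacement is simple parallax: the projector paints the arms onto the front plane of the facade, but the statues sit some distance behind that plane, so a viewer off to the side sees the arms shifted sideways relative to the statues. A quick back-of-the-envelope check (the depth and angle here are made-up numbers, just for illustration):

import math

depth = 1.0   # meters the statue sits behind the projection surface (assumed)
angle = 30.0  # viewer's angle off-axis, in degrees (assumed)

# A feature projected on the facade plane appears displaced sideways,
# relative to the recessed statue, by roughly depth * tan(angle).
offset = depth * math.tan(math.radians(angle))
print("apparent offset: %.2f m" % offset)  # ~0.58 m

Over half a meter of slip for a viewer 30 degrees off to the side is enough to notice.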

So I rendered a view off-axis to check.

I didn't like it for two reasons. One, my original hypothesis was correct and the arms end up pretty far away. This is only an issue for about a third of the crowd, thanks to the trees that force the audience towards the center of the viewing area, but I still don't like it. The other reason is that any illumination on the actual statues makes them stand out as statues, so I feel like we won't be able to really remove them like I had hoped. The side view does look cool, even without illumination on the sides of the pillars and arches. It's possible to project onto those too, but that's beyond this project's budget.

So I created a new animation. This one does a better job of making the statues visible only when I want them to be seen. However, there is a moment when I have the statue “niches” rise up behind them, and by then it's too late – the statues can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette; subtlety is lost as soon as there is any light on them.

For now I've left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model. Our goal is to cover those with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.

blueprint

Geek stuff warning

When I was preparing the model to simulate the projection on the building, I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get it right. I spent some time moving parts of the statues around until they properly aligned with the real statues, and I also tweaked the building, windows, and doors a little. It was a one-step-forward, two-steps-back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation came to 4500 frames, plus some re-rendered sections after I decided to make tweaks. I use two computers to render. One is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders using its CPU (4 CPU cores/8 hyperthreaded cores) and the PC renders using two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac finishes because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration (including, BTW, developers at AMD working to make Cycles no longer limited to Nvidia GPUs). I say unfortunately because on one of the animations the PC's Cycles render was crashing every 40 frames or so. It's a sad morning when you discover the faster computer hasn't been rendering for the last 6+ hours…

I don't have time to troubleshoot the issue. It's a mix of Blender/Cycles and Nvidia software, and it's not that bad in the grand scheme of things. To deal with it I decided to dust off a Python script I wrote several years ago for a compute cluster we had at UCA, where it created a job script for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file, for you Windows weirdos) so that Blender renders each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells Blender to start without a UI, to save resources, and render the animation based on the project's settings)

To this, listed in a shell script that I start by running render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2 (then render frame 2)
blender -b blendfile.blend -f 3 (then render frame 3)
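
The Python script essentially just writes that list out. A minimal sketch of the idea – not my actual script, and with the blend file name and frame range as placeholders – could look like this:

# makejobs.py – write a shell script with one Blender job per frame
# (a sketch; swap in your own blend file and frame range)
start, end = 1, 750

with open("render.sh", "w") as f:
    f.write("#!/bin/sh\n")
    for frame in range(start, end + 1):
        # each line launches Blender, renders one frame, and exits
        f.write("blender -b blendfile.blend -f %d\n" % frame)

If Blender crashes on a frame, only that one job dies; the next line of the script starts a fresh Blender process and rendering carries on.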

Works like a charm. I could make the Python script do a lot more tricks, but for now this is nice.

Last, Blender has a simple method of allowing multiple computers to render the same animation without a render management application. Set the output to not overwrite and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder), and if it sees it, it will look for frame 2, etc. When it finds a frame that hasn't been rendered, it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer claims a frame as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
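Those two options are per-scene render settings; if you prefer, they can also be flipped from Blender's Python console instead of the UI. A small sketch using the bpy API:

import bpy

# skip frames that already exist in the output folder
bpy.context.scene.render.use_overwrite = False
# drop a placeholder file the moment a frame is claimed
bpy.context.scene.render.use_placeholder = True

Save the .blend after setting these and every computer that opens the file will honor them.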

Since I'm not using a management system there is no check to make sure a frame actually gets rendered properly, so back in the day I also wrote a Python script that looks for frames with zero bytes to tell me if there were bad frames. I might merge that with my other script, but I don't want to dedicate the time to that right now. The macOS Finder does a nice job of listing “zero bytes,” which stands out in a list (or you can sort by size), so I've manually deleted bad frames too. To render the bad ones after deleting them, I just run the first command with “-a” again; with overwrite off, Blender skips the finished frames and renders the missing ones.
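The zero-byte check is only a few lines of Python. This isn't that old script, just a minimal sketch of the same idea (pass it the frames directory):

import os
import sys

frames_dir = sys.argv[1]  # e.g. /path/to/rendered/frames
for name in sorted(os.listdir(frames_dir)):
    path = os.path.join(frames_dir, name)
    # a zero-byte file is a leftover placeholder or a failed render
    if os.path.isfile(path) and os.path.getsize(path) == 0:
        print("bad frame: " + name)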

3D Printing – my journey begins

I've been interested in 3D printing for quite a while. I was introduced to it back in the early 2000s when there were only two printers at Purdue. The stereolithography printer cost over $100K and made some incredible parts (I only saw the machine and its output – I never did anything with it). The desktop printer was $30K and not nearly as cool, but I got a chance to work with it a little.

Fast forward 11 years or so and 3D printing is possible at much lower prices. I started paying attention to the products coming out or getting started on Kickstarter. The prices were a bit high, but there were some lower-cost printers on the horizon. I followed Makibox for over a year, waiting for the A6 to see the light of day. It was only $350 shipped and the output looked good. Once it started to ship, I checked out the forums to see what owners were saying. I was not impressed.

I decided to keep researching and came up with some criteria for choosing a printer:

  1. Price – Printers are expensive – I have to be able to afford it, but you tend to get what you pay for.
  2. Printing materials – The two most popular materials are ABS and PLA plastics. They both have strengths and weaknesses I won’t get into in this post. PLA seems to be the most popular now. It does not require as many features on the printer as ABS. It’s also possible to print nylon, PET, wood-based plastics, and other thermoplastics.
  3. Open source vs. proprietary software and control – Most desktop printers are derived from RepRap open source printers, but some are completely turn-key from a vendor. The RepRap-based printers can use several different software packages available online and can be tweaked by the user, but they require more technical savvy. Turn-key printers tend to be all proprietary, which may restrict them to certain materials and capabilities, but they may be considered easier to use.
  4. Print size – How big of an object can the printer make?
  5. User community and user comments – Read what users say about the product. How easy is it to use? How does it hold up? etc…
  6. Vendor/Manufacturer – What do users think of them? How long have they been around? Where are they in the world? What kind of support do they offer?
  7. Number of extruders – Most printers have one extruder, which means it can print with one material and color at a time. Some have two extruders, which means that they can print with either two different colors or material types (or both of course).

After researching for months I came up with what I believe is the perfect combination of the above – the Makergear M2.

  1. Price is on the higher side of the average for desktop printers. I purchased the “kit” version, which lowered the price considerably.
  2. Prints ABS and PLA. ABS requires a heated bed, which is not available on many printers (including Makerbot’s newest printers).
  3. Uses open source electronics and software. They have an option for commercial software if you want an easier start up experience. This means that using other materials is a possibility. Each material has different extruding needs (temperature, speed…), which can be tweaked with software choices.
  4. Large print area – 8″x10″x8″
  5. I was really impressed by what I was reading on the forums and reviews.
  6. Makergear is in Ohio and most of the hardware is made in that region!
  7. Single extruder, which is fine for now. Makergear has said that a dual extruder is in the works and will be field upgradable.

Once I opened up the box I realized that the “kit” should really be called “partially assembled.” I had watched videos online and read stories by kit buyers, so I was expecting a lot of work. Makergear had assembled the hard parts, and it only took me ~5 hours to completely assemble it – taking my time and double-checking every step. The next day I calibrated it (leveled the bed and set the z-stop) and sent a test print. It worked great!

Unpacking the M2

M2 Printing Test Bracelet
Next, I thought I would try something a little harder – Yoda. I didn't really know what I was doing, but I jumped in. I decided to go with Repetier Host for software and used some M2 presets I found online. Unfortunately, those presets were not good for Yoda. It was a disaster – I knew it would happen because I am learning, but ouch! It was printing a nearly solid object, and halfway through it slid off the center of the bed but kept trying to print.

M2 Bracelet and Bad Yoda Print

The next day I did some more research and figured out how to print Yoda hollow and make sure the PLA fan turned on at the right times (to handle the overhangs better – like his chin). I also decided to go with painter's tape on the bed rather than heating it. The print was fast and amazing! I've printed a few other things since Yoda and everything has been great.

Failed Yoda Begins on M2

Good Yoda in Process on M2

Finished M2 Printed Yoda


I’ve made two simple objects of my own design so far and printed them with no issues. In the next posts on 3D printing I’ll mention modeling software and the process involved in printing one’s designs.

Adobe Alternatives

Oliver Peters recently posted an article about building a non-Adobe suite of tools. It showed up in the regular email newsletters I get, and my colleague, Joe Dull, had some thoughts about it as well. I get where Oliver is coming from in keeping the list of Adobe alternatives to a single suggested suite, but there are a lot more options out there. My students asked for a list a while back, so I came up with this (Mac-centric with a sprinkle of Linux). BTW, I do not list products from The Foundry or any of the great high-end color grading apps because they are generally very expensive – Smoke just barely made the list at a $195/month subscription:

Picture Editing

Motion Graphics – Visual Effects

  • Apple Motion – perpetual with free upgrades. Better at motion graphics than visual effects
  • Autodesk Smoke – quick effects and node-based effects editor
  • Hit Film – similar to After Effects in UI layout
  • Blender – FOSS. node-based effects editor

Color Correction and Grading

Sound Editing

2D Graphics

  • Pixelmator – perpetual. Raster and vector graphics and photo editing
  • GIMP – FOSS. Most Photoshop-like in terms of features
  • Inkscape – FOSS. Vector graphics. Most like Illustrator
  • Corel Painter – perpetual. Raster painting with some effect filters
  • Krita – FOSS. Strong painting tools and photo editing filters
  • MyPaint – FOSS. Raster painting only. Don’t be fooled by the cheesy name – great painting tools
  • Sketch – perpetual. Vector and raster. Often used for UI design for apps and web
  • iDraw – perpetual. Vector illustration
  • Lots of others, and some even run in a browser. Blender fits here too, but it's limited in painting tools ATM and effects are done through the node compositor

Utilities

  • Apple Compressor
  • Red Giant BulletProof and PluralEyes
  • Apple Automator
  • Roxio Toast for DVD authoring
  • Various plugins and helper apps as needed

There are lots of options to build a “suite.” Consider some of the reasons to pick your applications here. Oliver Peters's suggested suite is good overall and could be enhanced by BulletProof if you shoot with a DSLR and/or GoPro. If you need to truly author DVDs then there is Toast, DVD Styler, or an old copy of iDVD or DVD Studio Pro (assuming you already had a license). So much depends on your budget, what you expect from a UX, interoperability, and which features in applications you really use.

An all-FOSS suite is a little tough on the picture-editing side, and the FOSS community knows it. I think KDEnlive is the most like the commercial editing apps. If you want to build a Linux ecosystem but are OK with paying for some software, then Lightworks is your editor.

Personally, I long for an uber-app rather than a suite. See Smoke, Hit Film, and the upcoming Nuke Studio for examples. Blender is one, but like most of these integrated apps, it does not feature strong audio editing tools.

Pipeline Tools

The impetus for this post comes from two sources. One was an interview I watched on fxguide.com with the head of the Discreet Logic products at Autodesk (Smoke, Flame, and Lustre – kind of), who talked about Flame and Smoke being finishing tools rather than pipeline tools. He was talking about Nuke and how in many ways it is not comparable to Flame, because Flame/Smoke are much more than shot-by-shot applications. Instead they are about seeing your whole project and bringing it to a close with titles, complex effects, color correction, etc. at realtime speeds so clients can be in on the process. It got me thinking about which apps are used in the pipeline and which apps can be used to complete a whole project (animation or film).

The other source was the recent news about Luxology and The Foundry merging. These companies create very different products and there are some pretty cool opportunities with the merger. One thing that came up in interviews with users of their products, however, is that The Foundry's products are very expensive and out of reach for small studios, while Luxology's products are very inexpensive and easy to use. I was really surprised by the candor of John Knoll (ILM vfx supervisor) and Tim Crowson (Magnetic Dreams, Nashville) about how The Foundry's tools are expensive pipeline tools that are hard to use, while Luxology's Modo is low-cost, easy to use, and creates really high-quality output. John Knoll also seemed to say that The Foundry could learn a thing or two about software design from Luxology.

Stu Maschwitz has also remarked that there is a move towards easier-to-use software over never-ending software options that are hard to understand and rarely used. Even the new version of Maya, which is famous for its ability to make the easiest thing take as many mouse clicks as possible in multiple editors, has simplified its interface to make everyday tasks easier. For years I have argued that it should not be so hard to get work done in software applications. I always thought it was funny coming from me, because I like a lot of complexity and I am very technically savvy, but I was finding that it was complexity for complexity's sake, or a leftover from the app starting out as a pipeline or in-house tool that grew into a more generalized application. Cinema4D is really popular with motion graphics artists because it is relatively easy to use and can produce high-quality imagery with a complete toolset. Blender is free, but it is often shrugged off because it's free – how can it be good if you aren't paying thousands of dollars for it?

There are a lot of software applications out there for creating digital content. Depending on your projects, production team size, level of quality, technical expertise, need for sharing data with other artists, GUI preferences, budget, and probably several other reasons you will find yourself looking at all the products out there and trying to decide how you should invest your overhead funds.

Two general options are out there for developing a pipeline. One is to keep everything under one application and the other is to break the process out to separate software tools. The primary reason to use one tool is simplicity – keeping all of your assets and workflow together in one environment. The main reason to break up the pipeline with separate tools is to use the best features of different applications to raise your production value as much as possible (assuming that each tool helps complete a task better than anything else).

An example process that can be done by one or many applications:

  • Live action post
    • Edit
    • Audio mix and sweetening
    • Color Correction
    • Visual Effects
    • Final compilation for delivery

Live action post from the edit to final delivery can be done in one application, such as Final Cut Pro, Avid MC/Symphony, Premiere Pro, or Smoke 2013 (among others, but these tend to be the ones everyone likes to talk about). It is possible for one editor/artist or a small team to produce content at a high level of quality in just one of these applications. However, for projects that require complicated sound mixing, extensive color correction, and/or complicated visual effects, a single application may not have all of the tools to do the job at the expected quality. Here's how this would look broken out into individual pipeline tools:

  • Live action post
    • Edit – Final Cut Pro, Premiere Pro, Avid MC or Symphony, Smoke 2013, etc.
    • Audio mix and sweetening – Pro Tools, Logic, Ardour, Audition, Nuendo, etc.
    • Conform – Hiero, Resolve, Scratch, Baselight, etc.
    • Color Correction – Resolve, Speedgrade, Scratch, Lustre, Apple Color (discontinued), Baselight, Magic Bullet plugins, etc.
    • Visual Effects – After Effects, Nuke, Digital Fusion, Flame, Smoke 2013, etc.
    • Final compilation for delivery – Avid DS and Symphony, Smoke

Using several applications creates some issues that the post-production team will have to deal with. The first is the ability to easily move data between the applications via XML/EDL/OMF/etc., or by exporting from one and importing into another. This can be a headache if the applications fail to move data properly, and there are also possibilities for data or image-quality loss. The other big issue is cost. If the artist/freelancer, small shop, or institution has to maintain licenses for all of these tools, it can get expensive quickly. Bundles, suites, or free software often look better than purchasing each application from a different vendor.

3D animation and visual effects are even worse. There are several general animation and compositing applications that can do very high-quality work, but there are also pipeline tools that offer better solutions to specific problems that are tough for the general apps. Here are some steps in the workflow for doing 3D and/or VFX work:

  • 3D Animation/Visual Effects
    • Model
    • Rigging
    • Layout
    • Animation
    • Simulation
    • Tracking
    • UV Layout
    • Texture Painting
    • Shading
    • Lighting
    • Rendering
    • Compositing

Lots to do, and lots of applications to do each part. First, the general animation applications: Autodesk (Maya, 3dsmax, Softimage), Cinema4D, Lightwave, Modo, Blender, and Houdini. There are others, but these get most of the press. Each can do everything listed below, with the exception that some can't composite. For compositing, just look at the Visual Effects step from the live-action list above and add Blender, Softimage, and Houdini.

  • 3D Animation/Visual Effects
    • Model – General 3D app, ZBrush, Mudbox, Modo, FormZ, Rhino, Inventor, Vue (for landscapes), Silo
    • Rigging – General 3D app + scripting, Motion Builder
    • Layout – General 3D app, Motion Builder, Katana
    • Animation – General 3D app, Motion Builder, in-house tools
    • Simulation – General 3D app, Naiad (recently assimilated into the Autodesk collective), RealFlow, Houdini, FumeFX, in-house tools
    • Tracking – PFTrack, Syntheyes, Blender, Mocha Pro, boujou
    • UV Layout – General 3D app, Headus
    • Texture Painting – Mari, Bodypaint, Photoshop, Krita, GIMP, etc.
    • Shading – Renderman, OSL, other renderer shading languages
    • Lighting – General 3D app, Katana, Keyshot
    • Rendering – Mental Ray, Vray, Renderman, Arnold, Maxwell, Luxrender, Appleseed (in early stages), 3Delight, Krakatoa, etc.
    • Compositing – After Effects, Digital Fusion, Nuke, Blender, Houdini, Softimage

A boatload of apps here… Moving data between these applications can be very difficult, and some pipeline apps like RealFlow and the renderers require extra costs to render on more than one computer at a time. At facilities that use multiple pipeline tools and/or multiple general 3D apps, there are usually Technical Directors (TDs) who create custom tools in Python and/or Perl, or a native scripting language like MEL, to efficiently move data between apps (a tiny sketch of that kind of glue code follows the list below). This extra technical support is usually beyond the means of small shops, individual artists, and institutions, so they tend to feel the pain when going between applications. How are things changing to help with these problems?

  • Since Autodesk owns half of these apps, they have worked on making it easier to get data between each app – you just have to upgrade each year and wait for better workflows
  • Large studios like ILM and Sony Imageworks have released software as open source to make data interchange easier. Projects include OpenEXR, Alembic, and OSL. Other open source projects like the Bullet physics simulator have been integrated into open source and commercial applications.
  • General 3D apps are getting better at what they do, so extra funding for pipeline apps is not as necessary except in extreme situations. See the addition of sculpting to Cinema4D, the physics simulators in Maya and Houdini, and renderers like Cycles and the smoke simulator in Blender.
  • Prices are falling and competition is good. The Pixel Farm recently lowered their costs, Luxology’s Modo continues to get better while keeping its price lower than all other commercial general 3D programs. New lower pricing structures for the higher-priced commercial applications seem to come out each year – or bundling like Autodesk does.
  • Open source solutions exist, such as Blender, Inkscape, Krita, Luxrender, Ardour, etc.
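
As promised above, here is a tiny, hypothetical example of TD-style glue code – a script that dumps mesh data to a Wavefront OBJ file so another application can pick it up. The mesh here is a hard-coded stand-in; a real tool would pull the data from the host app's scripting API:

# write_obj.py – minimal interchange: dump a mesh to a Wavefront OBJ file
# (hypothetical example; a real tool would read the mesh from the host app)

def write_obj(path, vertices, faces):
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write("v %f %f %f\n" % (x, y, z))
        for face in faces:
            # OBJ face indices are 1-based
            f.write("f %s\n" % " ".join(str(i + 1) for i in face))

# a single triangle as stand-in data
write_obj("handoff.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])

Nothing fancy, but chains of little scripts like this are what keep multi-app pipelines moving.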

Do you really need the pipeline apps? Ask yourself a few questions: Can I afford it beyond my regular tools? Will I use it regularly (ROI)? Do I have a problem that can only be solved with one of these apps? Does it play well with my existing apps? Do I need to purchase a specific app for a specific position on my team, such as texture painter or colorist? Hopefully you will not find yourself saying no to being able to afford one but yes to having a problem only it can solve – but don't be surprised when it happens. Also look for hidden costs, such as render licenses, limited OS support, and annual license fees. Consider leasing specialty apps when that option is available. More than anything, consider the need at hand and choose the tools that can get the work done at the level of quality you expect. Just because there is a specialty tool out there does not mean it is the only thing that can do the work. BTW, I'll try to add some links and costs in a followup post.

A little Python project

I don't program enough. Simple Python programming opens up so many opportunities for pipeline tools and for extending graphics applications like Blender, Maya, Cinema4D (latest versions), Lightwave (latest versions), Nuke, Hiero, and several asset and job management systems. Python is also being pushed as the main programming language on the Raspberry Pi, which is sitting next to me wanting some attention…

After Europa wrapped, I started a programming to-do list that included a tool to check for missing or bad rendered frames. I was rendering on a lot of different machines for that project and had thousands of frames to manage. I wasn't using any render management system, so nothing was checking that a frame actually got rendered properly except me scanning lists of files. The software that runs the cluster at school has no ability to verify frames, and at home I use Blender's simplistic network rendering ability: it creates an empty frame (to show that the file exists so a free computer can work on the next frame) and then replaces the empty file when the rendered file is finished (BTW, there is a network rendering manager in Blender, but I don't use it – yet). If a computer fails to complete a render, a blank frame is left behind. A blank file will also be left over if a computer fails on the school's cluster (Callisto), because I usually set up the files at home (turning on the “placeholder” option and turning off “overwrite”). While I didn't have a lot of problems with broken files and unfinished sequences, I felt vulnerable to issues that would be difficult to fix at the last minute – some frames took about 40 minutes to complete on an 8-core/16GB RAM computer.

Over the last couple of days I carved out some time and wrote a Python script that looks for both missing files in a sequence and files with zero (0) bytes, then writes out a report telling you what's missing or empty. It works by typing something like this on the command line:

python badFrames.py /path/to/rendered/frames 4 0 240 /path/to/logs

  • /path/to/rendered/frames = unix-style path to the directory that has the frames. In OSX, /Volumes/Media/Europa/06 (or something like that)
  • 4 = frame padding. The number 4 shows that a frame name will end with 0000, 0001, 0002, etc. This assumes number + extension, such as file0001.exr. The filename can be anything as long as the last 4 (or whatever) characters are numbers.
  • 0 = start frame
  • 240 = end frame (correct/expected end frame, not what’s actually in the directory)
  • /path/to/logs = (optional) the script creates a text file that is saved in this path. If a path is not given, it saves the log file in the same directory as the frames

There is only a little error checking going on, so it could break if there are extra files in the directory, such as text files that sort before the frames (sub-directories/folders are fine though), or if you give it a path that does not exist. If people ask, I will add more error checking so it gives actual error messages. Right now it will give you a help message if something obvious is wrong. I might make a GUI for it too – more on that later.

BTW, this will not work on Windows without a few tweaks, but it will run fine on OSX and Linux. I am hard-coding some slashes in paths that only work on unix-based systems. It does require Python 2.7x or 3.x due to my print statements. I can change them so it works with 2.6 or earlier if you want.
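For a sense of how it works, here is a stripped-down sketch of the same idea – the real script is linked below; this version skips the log-file reporting and the help messages:

import os
import sys

frames_dir = sys.argv[1]    # directory holding the rendered frames
padding = int(sys.argv[2])  # e.g. 4 for file0001.exr
start = int(sys.argv[3])    # expected first frame
end = int(sys.argv[4])      # expected last frame

# index the files by the frame number at the end of the base name
found = {}
for name in os.listdir(frames_dir):
    path = os.path.join(frames_dir, name)
    if not os.path.isfile(path):
        continue  # sub-directories are fine; skip them
    base = os.path.splitext(name)[0]
    if len(base) >= padding and base[-padding:].isdigit():
        found[int(base[-padding:])] = path

# report anything missing or empty in the expected range
for frame in range(start, end + 1):
    if frame not in found:
        print("missing frame: %d" % frame)
    elif os.path.getsize(found[frame]) == 0:
        print("zero-byte frame: %d" % frame)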

Grab it here (temporary link)

Open Source Software (OSS) and Linux

I really like open source software and the Linux-based operating system (enough said, right?). To cut to the chase, I would go all Linux and OSS except for two issues:

1. The Apple ecosystem. I love it (I have a Mac Pro, a MacBook Pro, an iPhone, and an iPad, and I am the system admin of my program's OSX server and workstations) and don't like the idea of giving it up. Although now that iOS is not intimately linked to iTunes for installations and syncing, it is a little less of an issue, since my gadgets can live without a desktop computer.

2. My work. At the university and with my colleagues in the professional world, we use Apple and Adobe products. More and more I find myself sharing project files instead of just rendered footage, so using the same tools is very important. In my classes I teach with both commercial software and OSS, so it is key that I support both. This particular issue is essentially impossible to get around and will keep me a non-Linux user for my work for a long time to come.

Having said that, I plan for my next personal projects to be open and use OSS completely. The next two projects are a projection mapping project and a 3D project – hopefully they will start in January.

I like Linux-based OSs because they are free, easy to install, easy to maintain, and secure, and there are lots of applications that are good enough to get work done at a high quality. I have played around with several distributions and have recently settled back on Ubuntu. Gnome 2.x was great and there was a great distro called Linux Mint that supported it, but Gnome 3 brought some major changes that are not quite mature enough for me yet. Ubuntu made some equally radical changes with its Unity system, but I have found that I prefer it so far. There are KDE-based distros out there that are really good too, but KDE is so much like Windows that I find it overwhelming.

Back in the day I loved Windows (95, NT 4, 2K, XP) and hated Mac OS <=9. I liked the complexity and didn't mind re-installing the OS every 6 months or so. Nowadays I prefer the elegance and simplicity of OS X and Unity, and I know there is a terminal I can go to whenever I need some complexity. There is also no registry to deal with – the registry alone is a good reason to never use Windows again.

Open source software has been great to me for the last few years. I can do graphics, office, Internet, and programming work, be happy with the results, and only pay for hardware. In the case of Blender, I even keep up with the changes to the code on a nearly daily basis. Going OSS is a lot less expensive, and the tools are really catching up to their commercial counterparts. I would prefer to discuss using OSS professionally in a later post, though.

Enough for now…