LRHS Projection Mapping – Animation Experiments

I animate and render what the projector will play back and then project that animation back onto the facade model, which has a similar texture to the real building, to simulate what it will look like on site.

The first animation has three statues moving their arms. After starting the rendering process I went for a walk (for those new to the blog, that's where the name comes from: render + walk, because you can't do much else on the computer while rendering). It occurred to me that when this is projected onto the building, the statues' arms will land quite a distance from the actual statues due to the facade's depth. This isn't much of an issue when looking at the building from front-center, especially near the projector, but off-axis I suspected it would look bad.

So I rendered a view off-axis to check.

I didn't like it for two reasons. One, my original hypothesis was correct: the arms are pretty far away. This is only an issue for about a third of the crowd, thanks to the trees that force the audience towards the center of the viewing area, but I still don't like it. Two, any illumination on the actual statues makes them stand out as statues, so I feel like we won't be able to really remove them as I had hoped. The side view does look cool even without illumination on the sides of the pillars and arches. It's possible to project onto those too, but that's beyond this project's budget.

So I created a new animation. This one is better at making sure the statues are seen only when I want them to be seen. However, there is a moment when I have the statue "niches" rise up behind them, and by then it's too late: they can already be seen. The lesson is that as parts of the building are highlighted or animated they need a strong silhouette; subtlety will be lost as soon as there is any light on them.

I've left the exterior lanterns, doors, and windows their natural color, which is dark, on the projection model for now. Our goal is to cover those surfaces with a material that reflects light better.

Here’s a fun experiment… A little bit of depth shading on a blueprint.

blueprint

Geek stuff warning

When I was preparing the model to simulate the projection on the building I found that some of the proportions of the statues were off by too much to let go. Thanks to some new photos I took of the building, I had more modeling work to do to get it right. I spent some time moving parts of the statues around until they properly aligned with the real statues, and I also tweaked the building, windows, and doors a little. It was a one step forward, two steps back moment, but it looks a lot better now and I have a lot more confidence in the projection.

The animations above were 750 frames each. Rendering them and then rendering the projection simulation came to 4500 frames, plus re-rendering some sections after deciding to make tweaks. I use two computers to render. One is a Retina iMac and the other is a custom-built Linux/Windows PC. The iMac renders using its CPU (4 CPU cores/8 hyperthreaded cores) and the PC renders using two Nvidia GPUs. In some cases the PC can render four or more frames for every one the iMac renders because the GPU acceleration is so great.

Unfortunately/fortunately, the Blender Cycles developers have been working hard on GPU acceleration (including, BTW, developers at AMD working to make Cycles no longer limited to Nvidia GPUs). I say unfortunately because on one of the animations the PC's Cycles render was crashing every 40 frames or so. It's a sad morning when you see that the faster computer hasn't been rendering for the last 6+ hours…

I don't have time to troubleshoot the issue. It's a mix of Blender/Cycles and Nvidia software, and it's not that bad in the grand scheme of things. To deal with it I decided to dust off a Python script I wrote several years ago for a compute cluster we had at UCA. It created a job script for the distributed computing software. I was able to simplify it quite a bit and have it spit out a shell script (like a batch file for you Windows weirdos) that I could run so that Blender renders each frame as a new job rather than one job rendering all of the frames. Essentially it changes this one line that I manually type in a terminal:

blender -b blendfile.blend -a
(this tells blender to start without a UI to save resources and then render the animation based on the project’s settings)

To this, listed in a shell script that I start by running render.sh:

blender -b blendfile.blend -f 1
(render frame 1 based on the project’s settings and then close Blender)
blender -b blendfile.blend -f 2 (then render frame 2)
blender -b blendfile.blend -f 3 (then render frame 3)

Works like a charm. I could make the Python script do a lot more tricks, but for now this is nice.
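
For the curious, the guts of that script boil down to just a few lines. Here's a minimal sketch (the blend file name, frame range, and script name are stand-ins, not my actual project settings):

#!/usr/bin/env python
# make_render_sh.py - writes render.sh with one Blender job per frame
# (the blend file name and frame range below are stand-ins)
blend_file = "blendfile.blend"
start_frame, end_frame = 1, 750
with open("render.sh", "w") as sh:
    sh.write("#!/bin/sh\n")
    for frame in range(start_frame, end_frame + 1):
        sh.write("blender -b %s -f %d\n" % (blend_file, frame))

The nice side effect is that if Blender crashes on one frame, the shell just moves on to the next line instead of taking the rest of the animation down with it.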

Last, Blender has a simple method of allowing multiple computers to render the same animation without using a render management application. Set the output to not overwrite and to make placeholders. A computer will look for frame 1 in the folder where the rendered images are saved (the output folder), and if it sees it then it will look for frame 2, etc. When it finds a frame that hasn't been rendered it will create a placeholder image, render, and replace the placeholder with the finished image. Each computer claims frames as it goes, which is nice since one computer renders so much faster than the other. After Effects works this way too if you use multiple computers to render.
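
For reference, those two checkboxes live in the Output settings (use_overwrite and use_placeholder), and if you'd rather not re-save every .blend you can flip them at render time from the command line with something like this (needs a reasonably recent Blender for the --python-expr option):

blender -b blendfile.blend --python-expr "import bpy; bpy.context.scene.render.use_overwrite = False; bpy.context.scene.render.use_placeholder = True" -a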

Since I'm not using a management system there is no check to make sure a frame actually gets rendered properly, so I also wrote a Python script back in the day that looks for frames with zero bytes to tell me if there were any bad frames. I might fold that into my other script, but I don't want to dedicate the time to that right now. The macOS Finder does a nice job of listing "zero bytes," which stands out in a list, or of sorting by size, so I've manually deleted bad frames too. To re-render the bad ones after deleting them I just run the first command with "-a" again so Blender finds the missing frames and renders them.
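
The zero-byte check itself is only a handful of lines. Something along these lines (the output folder path is a placeholder):

#!/usr/bin/env python
# find_bad_frames.py - list rendered frames that came out as zero bytes
import os
output_dir = "/path/to/rendered/frames"  # placeholder path
for name in sorted(os.listdir(output_dir)):
    path = os.path.join(output_dir, name)
    if os.path.isfile(path) and os.path.getsize(path) == 0:
        print("zero-byte frame:", name)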


LRCHS – Projection Mapping – 1st Post

The 90th anniversary of the opening of the Little Rock Central High School building and the 60th anniversary of the Desegregation Crisis are coming September 18-25, 2017. It will be a week of activities commemorating the anniversaries, culminating in an event that features a projection-mapped animation on the façade of the high school building.

This first blog post is about a major milestone for the animation: a completed virtual 3D model of the facade, including its four statues. Now that the model is complete we can finally get to work. The majority of the animation we create will be based on the architectural structure of the facade. I can't believe February is almost over! It took me over a week longer than I expected to finish this phase of the project due to distractions, including an illness that caused horrible headaches, as well as external issues, projects, and some personal goals beyond the projection mapping project. Hopefully the headaches are past – I can manage the rest.

Here’s the basic model:

4statues

We can add lighting that can make it appear as if we’ve hung actual lights near the building:

spotlights

We can also play around (this is just a test and not final imagery):

lightjade

And add stuff:

1927

Here's what it should look like on campus. We intend to add some lighting around the central facade as well.

projectiontest

The Facade

The limestone part of the high school's main entry has several nice 1920s Art Deco details and is sculptural in nature, with deep-set doors and windows and jutting pedestals for the four statues. I still need to add the lettering for the statues. We will hopefully be able to temporarily cover the windows and doors so they won't be so dark. We will also need to cover the lanterns so they will reflect the projections.


Ambition, Personality, Opportunity, and Preparation

When facing the building the four statues from left to right are Ambition (male), Personality (female), Opportunity (female), and Preparation (male).

I’ve been told that the four statues were “ordered from a catalog” and not unique to the building project. Their body styles are reminiscent of Michelangelo sculptures with their long muscular arms and Greek facial features. Preparation must have been the sculptor’s version of David – see his contrapposto stance, physique, lowered right arm (holding a scroll in this case), raised left arm holding a book instead of a sling, and a left-facing gaze.

Preparation and Michelangelo's David

Their dress is based on the ancient Greek chiton. The sculptural style is "wet drape," where the cloth clings to the skin to reveal the figure's body underneath. This is most obvious in Preparation, whose torso looks practically bare, and you can see it in Opportunity as well. I modeled these statues by starting with nudes so I could get the wet drape look right.

I think later blog posts will go on another website dedicated to this project. Geeky stuff will stay on this blog though.

Geek Stuff (most of you will want to skip this)

I modeled the facade by building basic geometric shapes and aligning them to a photograph I took last summer. I actually got most of this model finished by last fall. In January I added the smaller details and lanterns.

The statues were very time-consuming, and I knew they would be… I downloaded a few nude "base models" from Blendswap, which are designed to be a starting place for creating a character. For the females, I used the body of one and the hands and head of another. After splicing them together I pushed, pulled, and extruded faces, edges, and vertices to make them match the sculptures. I also used sculpting tools to smooth and guide areas of the model. The models are considered low-poly, which makes them easy to animate and handle in the 3D software. When they are rendered they are smoothed using Pixar's subdivision surface technology, which turns a blocky mess of polygons into flowing garments.
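
For anyone curious what that looks like from the scripting side, adding the smoothing is just a couple of lines in Blender's Python console. A sketch, assuming the statue mesh is the active object (the level numbers are only examples):

import bpy
# add a Subdivision Surface modifier to the active (statue) object
statue = bpy.context.active_object
subsurf = statue.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 1         # smoothing shown in the viewport
subsurf.render_levels = 2  # smoothing used at render time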

For the capes I essentially started with a line and extruded it and moved it to create the overlapping folds. For smaller details I just cut the larger polygonal faces into smaller ones that I could then push, pull, and sculpt into their final form.

Once a model seemed ready to go I aligned it with the main photo of the facade. I had closeups of the statues to do most of the work, but since the photos were taken from below, the proportions were not accurate so aligning with the main photo was key to getting them the correct overall size. Because of the proportion issues and a number of other things, I modeled them just looking at my photos rather than trying to align them to photos in the 3D viewport, which is common for character design.

While modeling, the virtual statue stands in a T-pose. I used a T-pose because we will most likely apply some custom motion capture animation and our motion capture system (Perception Neuron) requires a T-pose to start. Another common starting point for a character model is an A-pose, which is more relaxed, but that is not a good idea for our purposes.

After getting the proportions correct I added a skeleton to the model. The skeleton is based on the needs of the motion capture system. The model is bound to the skeleton so whenever I move a bone, the model will deform with it. I used the bones to pose the model to match the statues. I actually animated the movement so I could go back to the T-pose easily as well as test the model deformations as the bones moved. Some of the dress is not driven by the skeleton at the moment; that will come later via cloth simulations.
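
For the record, the binding itself is a single operator call in Blender's Python console. A sketch, assuming the statue mesh and its armature are both selected with the armature active:

import bpy
# parent the selected mesh to the active armature with automatic weights,
# so moving a bone deforms the mesh
bpy.ops.object.parent_set(type='ARMATURE_AUTO')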

opportunityposing

I modeled the statues this way because I knew we would be animating them and they needed a structure that would support animation. A more accurate alternative to modeling by eye would have been to scan the actual sculptures. Scanning could be done via LIDAR, but that would have been prohibitively expensive. Or it could be done with lots of photographs from multiple angles via photogrammetry; shooting the sculptures with a drone and extracting frames from the video would have been a way to get the images needed.

The upside to scanning would be a very accurate model, but there are downsides. One is that the scan would have to be retopologized, which can be time intensive, to make it animatable. Another is that the models would not have a backside and the arms would be stuck to the bodies so they would need hand modeling to create the back and make the arms free. I would have been up for these things had they been scanned last fall. Unfortunately they are 22 feet above the ground so logistically it is not a trivial issue to get to them.

From here it is a matter of lighting, creating cool surface materials, animating the statues, opening the doors, or whatever else we come up with. Even things that don’t directly change the facade, such as showing a photo, will be rendered against the virtual facade so the photo will appear to interact with the building.

Blender

screenshot

I used Blender to do all of this work. It is just a joy to use. Some things that came in handy (these aren’t necessarily unique to Blender BTW):

  • Used photos as a background in the camera viewport to help create a 3D environment similar in size to the actual building.
  • Changed one of my 3D panels into an image viewer so I could have a photo of a statue up at all times.
  • The Shift Key – I use a Wacom Intuos 4 Medium when working with graphics software. The stylus has a bad habit of moving during a click, or of not registering the mark you tried to make because it was so small. When changing a parameter in Blender (practically no matter what it is), you can hold down the Shift key while doing it and it will increase the precision of the change by not allowing the parameter to change drastically no matter how far you move the stylus. I can make big movements to make small changes. BTW, some graphics programs do have a similar function, just not all…
  • Matcaps – I haven't really used them before, but they make modeling organic forms much easier. They let you customize how the model is shaded in the viewport so you can see the curved surfaces more easily.
  • Proportional Editing – Used when moving a vertex or small group of vertices and wanting surrounding vertices to move with them, but not as much. Super helpful when making proportion changes or needing to move parts of the model around to accommodate the posed body. Especially useful is the “Connected” mode where it will only move vertices connected to the one you are moving rather than ones that are just nearby. You can also change the falloff to control how the other non-selected vertices will change. BTW, this works on more than just vertices, just using that as an example.
  • Subdivision Surfaces – Blender can show the subd effect while editing the model, either by showing the base model and the smoothing separately or by bending the base model's edges along the surface of the smoothed model. This really helps you see how changes to the low-resolution model will change the smoothed model.
  • Solidify modifier – I made the capes a single polygon thick and used this modifier to give them dimensional thickness. When sending the models out to Jim and Jonathan, who use Cinema 4D and Maya, I will "Apply" this effect to make the geometry permanent (there's a small scripted version of that step after this list).
  • Cycles with two GPUs – it's so fast! For test renderings and for making the images in this blog post, it is amazing how fast Cycles can be. The images here took about a minute and a half each to render. It's also crazy easy to make objects into light sources. I do most of the work on my iMac and then switch over to my Linux computer for rendering.
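
Here's roughly what that "Apply" step looks like from the Python console, in case I end up batching it before sending files out. A sketch, assuming the cape is the active object and its modifier kept the default "Solidify" name:

import bpy
# make the Solidify modifier permanent on the active (cape) object
bpy.ops.object.modifier_apply(modifier="Solidify")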

Radium Girls – Set and Projection Design

It's been five years since I've designed a theatrical production with UCA Theatre. My last design was The Bacchae, which was both a set and a projection design project. This time around it's Radium Girls, and again I designed the physical scenery and the projected imagery. Radium Girls was directed by my colleague, Chris Fritzges.

About Radium Girls

From Wikipedia – "The Radium Girls were female factory workers who contracted radiation poisoning from painting watch dials with self-luminous paint at the United States Radium factory in Orange, New Jersey, around 1917. The women, who had been told the paint was harmless, ingested deadly amounts of radium by licking their paintbrushes to give them a fine point; some also painted their fingernails and teeth with the glowing substance.

Five of the women challenged their employer in a case that established the right of individual workers who contract occupational diseases to sue their employers.”

The play, by D.W. Gregory, tells this story through one of the girls, Grace Fryer, and the president of the U.S. Radium Corporation, Arthur Roeder.

Design Process

The design team, which was made up of myself and theatre faculty and students, met several times to discuss the play including what the story means and what our production goals were. One of the big goals scenically was to include projected imagery. The main reason for projections was that the play has many scenes in different locations and it shouldn’t be staged with a lot of traditional scenery. The thought was that projections could quickly change and help inform the audience of where the different scenes were taking place. Another overall goal was to use scenery that was abstract and allowed for interesting staging, such as multiple platforms at different heights, rather than being realistic looking. Realism is best used for costumes and properties (props) – the things that are closest to the characters want some authenticity, while the playing space can be more abstract or symbolic.

Chris started the process of developing the design by discussing different themes he saw in the story. The following are a few of the larger themes:

  • The Corporation vs. the Worker
  • Masculine vs. Feminine
  • Science vs. Business
  • Fighting time
  • The media

Some visual themes/motifs included clocks, gears, and flowers.

Design Influences

The next step in the process was to do some research. The play’s time period was the 1920s and it recounts actual events so the team, including a student dramaturg (one who is dedicated to researching the play in detail and making his research available to the rest of the team), looked for pictures and articles about the girls, Marie Curie, the U.S. Radium Corporation, radium products and research, and general 1920s trends in clothing, art, and architecture.

I was ultimately most influenced by the work of Hugh Ferriss, the U.S. Radium plant, and timepieces of the era.

U.S. Radium Corporation plant and dial painters

Set Design

Sometimes the set design will just come to me and I quickly work on about three variations of an idea. Not for this play. Instead, I drew sketches of several different ideas and shared them with the design team. The gear and clock influences are a thread throughout the ideas, as are the factory windows, which are referenced in the play. What I was unsure of was the actual projection surfaces – how integrated should they be into the playing spaces? Also, should we project flat on typical screens or consider other shapes for projection surfaces?

The sketches for the Radium Girls set design

After looking at sketches for a couple of weeks, we decided that we liked three levels of platforms and that they should be round (more feminine shape, clocks, gears, radium symbol). We also worked out the size of each platform. The projection surface ended up taking a little longer, but we finally worked out a projection mapping-oriented wall that had an industrial skyline silhouette at the top. The projection mapping aspect of it was that the screen was not just one plane stretching across the back of the platforms. Instead, it was broken into multiple planes at different angles. Doors through the projection surfaces were the last pieces to go in.

Radium Girls set design front view

Radium Girls set design side view

We made some last-minute changes to the heights of the platforms for time and cost savings, which ultimately made the set work better. You'll notice that the doors are above the platforms in the renderings because I was trying to show the change in height as fast as I could… Also, since it had been a while since I had done a theatrical set, and I was preoccupied with the projected imagery, Shannon Moore, the theatre Technical Director, was instrumental in dealing with some finishing touches like the steps and platforms on the upstage side of the set through the doors.

Lastly, I created a painter’s elevation for the platforms. Two platforms were clock faces and the third was a watch/industrial gear.

Painter's elevation

The Set

Pre-show and intermission look

Projections

After the set design was done we moved on to the projection design. I primarily worked with Chris rather than with the whole design team. The cast also had some input on projection ideas. Chris and I met three times to go through possible imagery for each scene. In the early meetings I discussed imagery ideas that were documentary-like: imagery based on period photos, actual photos of the characters portrayed, newspaper clippings, etc. As we got into discussing the imagery and getting ideas from the cast, I felt that the documentary idea wasn't working with the production style and ideas. The final overall design concept was to evoke each location using symbolic imagery and/or closeups of objects that would be found in that particular location.

In the scenes set in characters' homes I tried to focus on fireplace mantels because I wanted to feature some style of clock. I included enough clocks that Chris mapped out the time that should be on each clock face, starting at 1:00 and going to 11:45.

Doors

The doors didn't quite work with the concept of closeups and symbolism, so I had to come up with a way to change the apparent scale of the spaces depicted in the imagery. During an early rehearsal I attended I saw the problem and came up with a solution almost immediately. I chose to use as much of the screen as possible for the closeup objects, such as a fireplace mantel, and then change the scale around the door to make it more realistic. I used the scale of the objects and the wallpaper pattern so that, if viewers really bent their heads around what I created, they could rationalize the differently sized objects. I imagined what a door across a room would look like if I were standing close to the fireplace: the fireplace objects would be large in my view and the door small due to its distance away from me.

There were a few places where I tweaked this concept. On the exterior porch of the Roeder home I chose to keep the door in scale, but the house's siding and eave would be large and out of scale. In the health department I created oversized filing cabinets that dwarf the door. In Grace's home both doors are used so I couldn't use the same technique, so I made the props, like the hanging lights and the mantel clock, oversized.

Technical Stuff

Figure 53's QLab was used to play back the imagery on an iMac. A VGA signal was sent to two 4000-lumen projectors at 1920×1080 pixel dimensions. Both projectors got the same image and were overlapped with each other to increase the overall brightness. QLab was used to warp the image to counteract the distortion from the angled screens (projection mapping!).

Blender was used for almost all of the imagery. I used as many pre-modeled objects as possible to save time. There are some recurring scenes with two newspaper reporters, and most of those images were created in Photoshop. I used two computers concurrently to stay productive. My main computer is an iMac and I used it to do the modeling and to set up lighting and materials in Blender, as well as the Photoshop work. I then moved over to an older Linux computer I have with two Nvidia graphics cards. Blender's Cycles renderer can be accelerated using Nvidia cards (AMD cards are almost ready to accelerate too, BTW), so I finalized the shading and lighting and did the final renders with it.

Oh yeah, I also made some tables for the show.

Radium Girls Tables

Final Thoughts

The show's overall production quality was amazing. The set, projections, costumes (designed by Shauna Meador), lighting, props, sound, and performances went together so well. We often talk about a unified production, but sometimes there is one element or another that just doesn't seem to fit. Not in this case. The show looked really good and was well directed and performed. I can be very critical, especially of my own work, so I am surprised at how good I feel about the work.

There were problems of course.

  1. I started making the images way too late. I literally did 85% of the images in the last weekend before it opened (it was UCA’s fall break so that last weekend was several days…).
    1. There were 50 images – the most I’ve made for a single show
  2. Because I was so late I didn’t give Chris very many opportunities for feedback. I think he was happy with my work overall, but we should have been able to work together more.
  3. I wanted some of the imagery to be animated, such as spinning newspapers, smoke or dust in the air, subtle movements of objects, etc. There were no animations.
  4. We either started our whole process a little late or took too long to design the set – maybe both. Construction on the set should have started at least a week earlier than it did.
  5. The way I set up the projectors was lame. They were sitting on an angled board on the theater's 2nd catwalk. Because they were not locked down by any kind of rig they had to be touched every night to make sure they were aligned to each other.
  6. The projectors were not perfectly aligned. Cheap projectors don't have the tools to do fine adjustments for aligning the images of multiple projectors, so I got it as close as I could. The image looked out of focus toward the bottom left side (as seen by the audience) and overall had a soft look due to the slight mismatch.
    1. A workaround would have been to send individual signals to the projectors and use QLab to do the final alignment by giving each projector a custom warp. Instead, I sent a signal to one projector and used the loop-thru to get the signal to the other projector. Sending two signals would have meant using a different computer too.
  7. The projections needed to be brighter. Dr. Greg Blakey, the lighting designer, did a lot of last-minute changes to the lights to try to keep as much illumination off the screen as possible. The only way we could have gone brighter would have been renting a large-venue projector (10,000 lumens or greater), and that would have blown the budget, unfortunately.

Some of the projections:

The images below are a mix of photos and actual projection images. The photos are untouched jpegs from the camera. When I have more time I’ll work on the raw images. The screen in these photos looks a little darker than it actually was live.


Stereo3D – Depth Grading

In the spring of 2011 I co-taught a stereo3D filmmaking class. During those few months I was really on top of what it took to create convincing S3D images. Since then I had not made another S3D image until last month. Though for over a year I have been slaving away on an S3D film, I have only looked at it in terms of a left image and a right image – making sure that the green screen removal, roto, color, and rendered virtual set were correct for both.

It seems I lost some of the knowledge I had attained in the spring of 2011, and even before that in 2009 when I created an S3D animated music video. I had decided that it was correct not to converge parallel-rendered stereo pairs. I had also forgotten how to view anaglyphs in terms of knowing what parts of the image were in front of the screen ("window") and what was behind it. Luckily, Chris Churchill noticed that the stereo grading on Europa was not going well and took it upon himself to fix it, because what I was saying was wrong. My head was in animating, roto, patch work, and other compositing issues rather than in actually making the 3D part of the S3D film. To get my head back in it I decided to make the following images to help me get it right and make a record of it. ;)

Most S3D CGI creators I have read, watched, or listened to said that they prefer creating stereo pairs in parallel, which means that the left and right cameras are looking straight ahead. After the imagery is rendered it is then converged, which generally means that the images are shifted in the x direction until an object or area in depth has no parallax, or left/right offset. The converged object/area is essentially at the plane of the movie screen (the “window”). If a parallel pair is not converged then all of the depth is in front of the screen (window), which can cause eye-strain over time.

  

Click on the images to see them full sized. Parallel Cameras (first) and Converged Cameras (second). Notice that red is on the left in front of the convergence plane (screen) and cyan is on the right in front of the convergence plane.

 

The parallel image (first) shows that all of the depth is in front of the window/screen and the blue ring is on the screen (essentially converged). As you view the image you will probably feel some discomfort because the ring is covering parts of the image that are supposed to be in front of it. As you view the converged image (second) it should feel more comfortable because the foremost teapot is converged so the ring and the teapot are at the screen and everything else is behind it. Also notice that the red and cyan offset images change sides horizontally – red is on the right for objects past the screen and on the left for objects in front of the screen. These are fundamentals of making stereo3D images and should not be forgotten!

Another issue with S3D images is keeping control of how much x-shift (parallax) is needed to produce the final image. If the left and right cameras are too close together there is little resulting parallax and little stereo effect. Cameras too far apart create a problem called hyper-stereo, where the stereo effect is not realistic because the cameras are proportionally farther apart than our own eyes. Hyper-stereo can be seen in the large offsets of the red and cyan images, and with glasses on, the effect can easily be seen by moving your head side to side (the image seems to jiggle with your head movement). If the cameras are too far apart it may be impossible to get the parallax close enough for the viewer to fuse the image at all depths. It is best not to photograph a very deep scene with objects close to the cameras. For CGI, the fix is simply to put the cameras closer together. For live-action, the fix may be to move the convergence point further into the scene and bring some of it in front of the window (screen). For some live-action shots the fix may be to do a 2D to 3D conversion. Live-action scenes with virtual sets, or all-animated scenes, get around this problem by having different camera inter-axial distances for different depths of the shot. Backgrounds can have a closer inter-axial distance than the foregrounds to avoid excessive parallax.

  

The first image shows the result of parallel cameras with no convergence – you may have a hard time getting it to fuse due to the wide parallax. Notice how the red/cyan offsets are smaller in the background than the foreground – as humans look into the distance we eventually stop seeing two points of view of an object and our vision becomes monocular (it takes about 50′ of depth). Since we focus on one object at a time with our eyes, we don't notice the double vision (the inability to fuse that part of our view) in the foreground when looking at another object very far away. Unfortunately, this does not work for S3D imagery – the entire image must fuse properly, and, back to an earlier point, the image must have some convergence, otherwise everything is in front of the screen. The second image converges on the foreground teapot. Notice the red band on the left – I shifted the right image 144 pixels of a 1280-pixel-wide image to get the convergence. Overall it works, but it takes a little time to fuse the background because of the wide parallax (x-shift). In the third image, to make the stereo effect a little easier to fuse I moved the convergence to between the first and second teapots, which brings the first teapot in front of the screen (window). This is a decent compromise when dealing with hyper-stereo pairs. BTW, to get rid of the red band you either have to scale the imagery up, which is common for live-action footage, or you have to render wider images than your final output width.
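
So I don't lose this again, here is the x-shift in code form: a minimal sketch (not part of my actual pipeline) using NumPy and imageio on a hypothetical parallel-rendered pair, with the 144-pixel shift from above as the example value. Which eye ends up in the red channel should match whichever eye your glasses put the red lens over.

import numpy as np
import imageio
# parallel-rendered pair (stand-in file names)
left = imageio.imread("left.png").astype(np.float32)
right = imageio.imread("right.png").astype(np.float32)
# x-shift, in pixels, that zeroes the parallax on whatever should sit at the screen plane;
# flip the sign if your pair is set up the other way around
shift = 144
right_conv = np.roll(right, shift, axis=1)  # np.roll wraps around (crop or render wider in real use)
# quick anaglyph check: one eye in the red channel, the other in green/blue
anaglyph = left.copy()
anaglyph[..., 1:3] = right_conv[..., 1:3]
imageio.imwrite("anaglyph.png", anaglyph.astype(np.uint8))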

An alternative to all this shifting around is to shoot/render with converged cameras. The fundamental problem with converged cameras is that once shot or rendered, the imagery cannot be re-converged, and due to some mild keystoning, the stereo pair can be harder to work with when doing post visual effects with serious tools, such as The Foundry's Ocula.

WordPress did not do me any favors with crunching these images. Unfortunately, you may see some ghosting caused by heavy compression.

Last point: never get your stereo pairs swapped. Left for right and right for left can leave you scratching your head because in some cases, especially shallow scenes, the stereo effect will seem to work, just not as cool as you expected. If your supervisor/TD accidentally swaps them a year before you actually do any stereo depth grading and you don't notice, then dock his pay at least. ;)