In Night at the Museum: Secret of the Tomb, Ben Stiller reprises his role as night guard Larry Daley, and sets out on an international odyssey to restore the powers of a magical Egyptian tablet that brings the museum exhibits to life. Without the tablet’s sorcery to sustain them, the various characters who have become Larry’s friends will return to their former lifeless states.
Shawn Levy returns as director for this third film in the popular series, with Erik Nash taking on the role of production visual effects supervisor. Under Nash’s guidance, the task of breathing life into the film’s many and varied characters fell to a collective of visual effects facilities scattered as far across the globe as Larry and his companions.
VFX supervisors from four of these facilities – MPC, Digital Domain, Cinesite and Method Studios – are here to share their experiences on the film, in this extended VFX Q&A.
MPC – VFX Supervisor, Seth Maury
How many shots did MPC deliver for the show, and how was the work divided between the different offices?
MPC delivered 250 shots in total. I was personally working on the show for about ten months.
The way MPC works is that all asset work – models, texture, rigging and so on – is centralised in London and Bangalore. From look-dev onwards, most of the work then happens in Vancouver. On Night at the Museum: Secret of the Tomb, Bangalore also animated some sequences and lit some shots; MPC are bringing up the Bangalore division to be able to do those things.
How did you get involved with the show?
I got the call back in February 2014, when I was finishing up working on Maleficent. I was really interested in the show, because I like doing character work. Also, my eight-year-old son loves the movies. So I said, “I’ll do it!”
So you were primarily involved with character work?
Yes. The first time I met the production VFX supervisor, Erik Nash, he asked, “How’s your fur pipeline?” It kind of cascaded from there.
From the previous movie, we recreated animals including an antelope, a boar, a moose, a mammoth, an ostrich, a rhino and an oryx. For the British Museum scenes, we had a whole host of statues that come to life. Before I came on the movie, the production art department had worked with the British Museum to see what pieces they had that we could make digital versions of. They had some cool little turtles, and some small ceramic elephants that we made life-sized. We also made these pretty metal peacocks – for those we started with the shape of one of the museum pieces, then brought in our own colour and texture.
In revisiting the original animals, did you repurpose existing assets or create new ones?
We were given the final look-dev turntable models of some of the original characters. We also took a bunch of stills from the previous two movies for reference. A lot of times, that was a good starting point, but essentially we started from scratch, because of the way our character pipeline works and the way we do fur. Technologically speaking, a couple of years between movies is quite a long time. The modelling’s done in Maya and ZBrush. Then we use a bunch of in-house plugins that run off Maya.
Did the sculptural nature of the British Museum creatures bring a different set of challenges?
Yes. The first thing we did was faithfully re-create the actual museum pieces, because we didn’t know how much latitude we had per asset. As we went along, we established which ones we could change.
One challenge was that a lot of the pieces – the ceramic elephants, for instance – were just a couple of inches long. When we scaled them up to life-size, we realised the textures didn’t look quite right. So there was a lot of translation from small to large. Erik was great about that. We would show him work in progress – an early model, then stuff at 50% and 75%. He might look at the feet on the turtles, for instance, and say, “Look at the way those feet are carved – if they’d been carved by hand at this size, the edge would be sharper.”
What about the animation? Presumably a ceramic elephant moves in a different way to a flesh-and-blood elephant?
A lot of the style came from the first two movies. If something was made of a material like ceramic or metal or jade, the director, Shawn Levy, definitely wanted it to move as if it were made out of that material. It’s a fine line between making that work and not having it look like bad animation!
So, for example, while the ceramic elephant moves relatively stiffly, he can swing his trunk and swish his tail, just not with the amplitude or speed that a real elephant would do it. When he takes a step, his foot has a little squash to it, but not as much as a real elephant. So you get a subtle impression of all those things you’re used to seeing in a real animal.
So do your CG models have the same skeleton and musculature as their flesh-and-blood counterparts?
By default, our rigging team put ribcages inside these creatures, so that when muscles fired or when they took a big breath you would see the impression of ribs. I would have to say, “No, actually we don’t want that, because they don’t have that inside.” It was a delicate balance: putting in a certain amount of realism, but taking out what wouldn’t actually be there.
Which sequence gave you the most challenges?
At one point in the show, there are five or six shots of the monkey, Dexter, doing an acrobatic aerial silks routine, hanging from two strips of fabric suspended from the ceiling. We created a digital monkey for that. We worked on the sequence pretty much from the beginning of the project to the end.
On-set, they had the cutest little monkey called Crystal. In the first two movies, they’d also used a digital monkey for some medium and wide shots. When we started out, we had the same agenda. We were told: “We’re going to have a digi-double, but you won’t see it in close-up.” But we decided to future-proof the asset, just in case we needed a tight shot, because it’s easier to build it high-res in the beginning than to go back and upgrade everything later.
And did those tight shots actually materialise?
Yes. Sure enough, a couple of those shots ended up being a medium to tight shot on our digital face.
We’d done a photo acquisition of Crystal, and our CG supervisor, Mo Sobhy, worked with look-dev to make the digital double work, refining the textures and so on. It’s that fine line again – mostly in the animation: “How would a real monkey do it versus a human? How can we bring monkey traits to it, but still have it look graceful?” For example, a monkey doesn’t point its toes when it’s hanging upside-down.
Can you talk about the planetarium sequence, where all the constellations come to life?
They built a beautiful planetarium set at Burnaby, near Vancouver – I haven’t seen a set that big in quite a while. They built the first floor of the planetarium up to about sixteen feet, and put a huge translight around the outside to re-create the New York environment. Our first job was to top up the set to the roof.
Hanging from the roof is a huge sphere – it looks kind of like Saturn – and then there’s a circular walkway that goes up around the outside, with all the constellations displayed. We re-created that, minus the support structures that hold the sphere up. I feel like we did a really good job with that – it’s pretty seamless.
The next thing we had to do was to bring these five constellations to life: Leo Major, Leo Minor, Orion, Scorpio, and Cancer. As the show opens, these characters made of stars kind of float up in the higher regions of the planetarium and do a little performance.
How did you go about breathing life into what’s essentially just a pattern of stars?
The first reference we got from Erik was a picture of the Orion nebula. He said, “I like the essence of this. See what you can make of it.” So we started playing with modelling some characters, and got the effects team working on what this thing might look like. Is it transparent? Semi-transparent? Do I see nebulae swirling inside? Does it have a hard shell? How well can I read each form, or is it just made up of stars? That went round and round for a while, because it really could be anything.
We put a few different ideas forward, and what we got back from Shawn Levy was: “I really want it to be made of stars.” So Erik and I worked back and forth refining that idea. We ended up with a character that’s pretty transparent, but has a subtle surface impression of nebulae and galaxies. It’s pretty cool, actually.
Given that Night at the Museum: Secret of the Tomb is the third film in a series, did it feel as if you were covering old ground?
It’s funny, it didn’t feel like ground we’d covered before. That’s the great thing about Erik and Shawn – those guys were up for fresh ideas and bringing either a new look to something, or having characters act in a different way. I think that permeated the whole project. We didn’t have to just dogmatically match what had been done before.
For me personally, I had a lot of fun with the Dexter acrobat sequence. I’d just come off Maleficent, which has a very atmospheric look, so for the Dexter sequence I enjoyed playing with light beams and spotlights to bring atmosphere to the shots, which I don’t think we’d seen in other parts of the movie. Plus, at the end of the day it’s just silly, and I know my son’s going to laugh at it!
Cinesite – VFX Supervisor, Zave Jackson
What was Cinesite’s involvement in the show?
We were tasked with the enhancement of the golden tablet – both look-dev and execution – showing the progression of its corrosion and the green glowing edge detail. We also created the golden glowing tablet shots. To do that, we referenced the first two Night at the Museum films to get an idea of the series’ vibe. We also looked at time-lapse footage of various metals corroding, and explored imagery of rust patterns and different types of tarnished metals. We delivered 87 shots over the course of about five weeks.
What happens during the corrosion sequence?
As the damaged tablet starts to corrode, the magic keeping the museum exhibits alive starts to die. There’s a close-up shot of the golden tablet on the wall of the museum, in which we see the corrosion texture crawling over its surface. The camera flies down on to the surface of the tablet to an almost macro-photography distance, then follows the decay as it advances. This establishes that the tablet is “dying” from this disease-like corrosion.
How did you go about creating the corrosion shots?
We began by building an accurate CG model of the practical tablet used on set: a golden slab with nine square sections engraved with Egyptian hieroglyphics. The squares rotate on spindles, like an ancient abacus. We knew we’d need to get extremely close to the model, seeing the fine scuffing of the polished gold surface and the more complex detail of the corroded parts, so we went through a detailed texturing process. Additional texturing work was required for the corroded version.
Next we created the intricate, organic movement of the corrosion’s green, glowing edge as it progresses over the tablet’s surface. The animated leading edge was created in 2D, using fractal noise tools. We turned the resulting animation into masks that were UV-mapped back on to the geometry. We also used the masks to displace the geometry in the 3D render, giving volume and depth to the edge of the corroded section of tablet.
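The idea of thresholding animated fractal noise to drive a creeping edge can be sketched in a few lines of NumPy. This is an illustrative stand-in under stated assumptions – a simple value-noise sum and a hypothetical `corrosion_mask` helper – not Cinesite’s actual toolset:

```python
import numpy as np

def fractal_noise(shape, octaves=4, seed=0):
    """Sum of octaves of smooth random noise (a simple value-noise stand-in)."""
    rng = np.random.default_rng(seed)
    h, w = shape
    out = np.zeros(shape)
    for o in range(octaves):
        cells = 2 ** (o + 2)                      # grid resolution per octave
        grid = rng.random((cells, cells))
        # Bilinearly upsample the coarse grid to full resolution.
        ys = np.linspace(0, cells - 1, h)
        xs = np.linspace(0, cells - 1, w)
        y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
        y1 = np.minimum(y0 + 1, cells - 1); x1 = np.minimum(x0 + 1, cells - 1)
        fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
        top = grid[np.ix_(y0, x0)] * (1 - fx) + grid[np.ix_(y0, x1)] * fx
        bot = grid[np.ix_(y1, x0)] * (1 - fx) + grid[np.ix_(y1, x1)] * fx
        out += (top * (1 - fy) + bot * fy) / 2 ** o   # halve each octave's weight
    return out / out.max()

def corrosion_mask(noise, t):
    """Animated mask in UV space: corrosion has claimed pixels where noise < t.
    Advancing t over successive frames makes the irregular edge crawl outward."""
    return (noise < t).astype(float)

noise = fractal_noise((256, 256))
early, late = corrosion_mask(noise, 0.2), corrosion_mask(noise, 0.6)
```

In a production setting the resulting mask sequence would, as described above, be UV-mapped back onto the tablet geometry and reused as a displacement input, so the 2D animation and the 3D render stay in lockstep.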
For shots which didn’t need to be animated, our solution was to object-track the tablet used in the shot. We created a predetermined set of directional lighting passes, texture passes and utility passes based on a generic lighting set up and rendered using V-Ray. It was then up to the compositors to balance the lights to make the new corroded tablet fit into each shot.
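Recombining pre-rendered directional lighting passes in comp amounts to a weighted sum per pass. A minimal sketch, assuming hypothetical pass names (`key`, `fill`, `rim`) rather than Cinesite’s real AOV naming:

```python
import numpy as np

def relight(passes, weights):
    """Weighted sum of pre-rendered lighting passes.
    passes: dict of name -> float image (H, W, 3); weights: name -> gain
    chosen by the compositor to fit the plate."""
    out = np.zeros_like(next(iter(passes.values())))
    for name, img in passes.items():
        out += weights.get(name, 0.0) * img
    return out

# Hypothetical flat-colour passes standing in for rendered frames:
h = w = 4
passes = {"key": np.full((h, w, 3), 0.8),
          "fill": np.full((h, w, 3), 0.3),
          "rim": np.full((h, w, 3), 0.5)}
beauty = relight(passes, {"key": 1.2, "fill": 0.5, "rim": 0.25})
```

Because light transport is additive, rebalancing the gains in 2D gives a result equivalent to re-rendering with re-weighted lights, which is what makes the one-lighter workflow described above viable.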
Did working on Night at the Museum: Secret of the Tomb teach you anything new?
We learned that, given a body of shots that were quite similar and by using the right methodology, it’s possible to deliver a fairly large number of shots with just one person handling the lighting, and a small compositing team.
Method Studios – VFX Supervisor, Chad Wiebe
How did you get involved with the show, and how many shots did Method Studios deliver?
I’d worked with Method’s VFX producer, Genevieve West, on several Fox films prior to this, so we were confident we’d be the right fit for the Fox team, including the production VFX supervisor, Erik Nash. We delivered 209 shots, over an eight-month post schedule, with a crew of around 70 people.
Watch Method Studios’ VFX breakdown reel from Night at the Museum: Secret of the Tomb:
What was your main contribution to the film?
Our animation supervisor, Erik de Boer, has extensive experience with creature animation – he worked on Life of Pi and The Maze Runner – so we were interested in continuing to build our animation and creature repertoire. For that reason, we focused on the sequence where the lion statues in London’s Trafalgar Square come to life, and the scene in which Larry Daley (Ben Stiller) and his gang battle Xiangliu, a nine-headed snake creature. We also contributed shots involving Jed and Octavius (Owen Wilson and Steve Coogan), including the scene in the ducts, and where they’re watching kittens wrestling on YouTube.
How did you go about bringing the lions of Trafalgar Square to life?
During the London shoot, we were able to photograph the actual lion statues in many different lighting and weather conditions. This gave us texture and modelling reference, which proved immensely helpful. Together with what was shot on the day using the production’s Red cameras, that got us off to a fast start in building the lion assets.
What was your approach to animating these huge statues?
It was an interesting challenge. We wanted to respect their original scale and weight, but for shots where they chase a flashlight and wrestle, they also needed to have a kitten-like appeal and playfulness. So we keyed off the YouTube kitten clip, as well as a ton of real lion footage. To preserve the sculpted look of the animals, we skipped dynamic and harmonic skin sims for a more controlled blendshape approach in the face and body. By doing this, we matched the look established in the previous movies.
The lions’ manes needed a custom deformation toolset, created by James Jacobs and his creature team. By simulating underlying strips of geometry representing the mane, we were able to have sections sliding over top of each other and driving the deformations of the single mesh, creating more complex deformations than traditional weighting would have allowed.
Did you take a similar approach with the Xiangliu?
There was a physical build of the nine-headed snake coiled in a sleeping pose. This was scanned and used as the basis for our CG build. It was also used in-shot, which proved to be extremely useful lighting reference.
From an animation standpoint – as well as visual – we wanted a creature that not only stayed true to its statuesque origin, but also fully reflected the real-world attributes that make snakes such amazing creatures.
How did you choreograph the action?
The sequence was extremely well thought out – we were provided post-vis which was very close to what the director wanted from a blocking standpoint. Using this as reference, our animators were able to move full steam ahead with a clear target, pushing out some amazing animation in a very short amount of time.
From there, our lighting team got a very clear indication of where our creatures would be in the environment. This was a big plus, due to the challenges associated with trying to light nine reflective, tubular shapes all competing for screen space. For example, we noticed very early on that the most minor position change of one snake would have a dramatic impact on the lighting and reflections of all the others.
Animating nine heads simultaneously must have been quite a challenge.
The key for the snake animation was finding the balance between managing the shapes with animation controls, and maximising the usability of the rig for the animators – making sure it didn’t become too heavy or counter-intuitive. We wanted strong, broad shapes with solid, serpentine motion, and a real sense that the heads were driven and lifted all the way back from the tail, which was connected to the sculpture’s base. Our creature TD, Paul Jordan, created a very successful rig for this that allowed us to design the overall motion on one level, and to twist and sculpt on several lower layers.
Tell us more about the rig.
The big challenge was creating a rig which would allow extreme poses and actions without compromising the look of the snake or overly distorting its geometry. We also had to come up with a way to have the snakes’ scales behave in a realistic manner. Paul developed a follicle system that would allow each scale to slide over the next. That way, they stayed as true as possible to their original size and shape.
Did you bring anything new to the miniature characters of Jed and Octavius?
One interesting new bit of technology we employed was “focus stacking”, which created a much more realistic marriage between Jed and Octavius and their full-size environment.
When shooting macro-photography with traditional cameras, you typically have a very narrow depth of field. When compositing actors into such plates, this can result in a visual mismatch. For Night at the Museum: Secret of the Tomb, we created our background plates by shooting footage in which we racked focus from a locked-off camera. We then compiled all the in-focus areas into one image. This allowed us to do something that’s traditionally not possible in miniature photography: create an infinite depth of field, and then adjust it to taste.
By this process of focus stacking, we were able to control and match the two opposing elements, allowing us to make the backgrounds look as though they’d been shot through tiny cameras in scale with our miniature characters. It also allowed us to preserve all the great detail you get from shooting real-life objects instead of building giant props, which can create its own visual challenges.
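The core of focus stacking – keeping, per pixel, the value from whichever frame of the focus rack is sharpest there – can be sketched with a Laplacian focus measure. This is a minimal grayscale illustration, not Method’s production implementation:

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel focus measure: magnitude of a 4-neighbour Laplacian.
    In-focus regions have strong local contrast, so the Laplacian is large."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focus_stack(frames):
    """For each pixel, keep the value from the frame where it is sharpest."""
    sharp = np.stack([laplacian_sharpness(f) for f in frames])
    best = np.argmax(sharp, axis=0)               # index of sharpest frame
    stack = np.stack(frames)
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two hypothetical grayscale frames: one sharp at a single spot, one uniformly soft.
a = np.zeros((5, 5)); a[2, 2] = 1.0
b = np.full((5, 5), 0.5)
merged = focus_stack([a, b])
```

The merged plate has every region in focus at once, which is what lets the depth of field then be re-introduced synthetically, matched to the miniature characters’ scale.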
Digital Domain – VFX Supervisor, Lou Pecora
How many shots did Digital Domain deliver, and which sequences do they feature in?
Digital Domain executed around 340 shots. I started on the show after principal photography had wrapped in late summer, so we did this volume of work between August and November.
We were lucky enough to get the Escher Tablet Pursuit sequence, the Pompeii diorama eruption, the Greek statue encounter, and a smattering of complex 2D split screens with double the Stiller!
Let’s begin with the volcanic eruption. How close to reality did you stick?
For Pompeii, we studied lots of real volcano eruption footage to figure out which parts we could keep and which we had to lose. For instance, most of the real oozing lava we studied didn’t flow nearly fast enough to be as threatening as the sequence required, but the eruptive elements spewing out of the caldera were more true to life.
Many films have used the splitscreen trick to duplicate actors. How did you go about twinning Ben Stiller?
For the “Ben and Ben” splitscreen work, we went back and watched Multiplicity. There’s some phenomenal work in there – work that definitely still holds up today.
The scenes with multiple non-interacting Michael Keatons in frame at the same time were really good; however, two shots in particular stood out to me. The first is where Michael Keaton’s character takes a cigarette out of a clone’s mouth, who then blows smoke back at him. The second is the scene where the original Doug keeps refilling clone #4’s glass with Coke. These shots where they interact with each other, and interact with the same objects, were the most impressive, so that’s what we aimed for.
We have a few places where Ben’s Larry character and his Laaa character touch each other – patting each other on the back, touching faces, and so on. That really sells the effect, and it definitely was as difficult to execute as you would imagine.
Can you describe the Escher Tablet Pursuit sequence?
The whole process of bringing this piece to life was one of the more interesting challenges I’ve faced in my career. In the sequence, three live-action characters end up in the M.C. Escher lithograph called “Relativity”. In this world of multiple gravities, a chase ensues with the characters all trying desperately to lay their hands on the tablet.
How did you set about transferring M.C. Escher’s mind-bending artwork on to the screen?
We studied the original artwork and other Escher pieces to make sure that the etch treatment we put on the photographic elements was authentic to his style. Taking a static piece of lithography and turning it into a cinematic experience brought all kinds of things into consideration – things one would never think about. For instance, how would we deal with motion blur, depth of field, and moving shadows? These were all design details that required lots of trial and error.
Then, based on solid previs from Proof, our VFX supervisor, Swen Gillberg, shot all of the action on a small greenscreen stage in Burnaby, Vancouver.
How did you make your CG set look like the original lithography artwork?
We came up with complex shaders that dynamically resized the pattern density based on factors such as distance to camera and size in frame. Our CG supervisor, Tim Nassauer, devised a very clever way to achieve this by using multiple interlocking patterns that would increase the density and fidelity of the etch lines as the surface moved closer to camera.
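The level-selection logic behind such interlocking patterns can be sketched as a screen-space LOD choice: estimate how large the coarsest pattern’s line spacing projects on screen, and switch to a denser level as the surface approaches camera. This is a simplified pinhole-camera sketch with hypothetical parameters, not Digital Domain’s actual shader:

```python
import math

def etch_level(distance, base_spacing=1.0, focal=50.0, max_level=4):
    """Pick which interlocking etch pattern to use so projected line spacing
    stays roughly constant on screen. Each level doubles the line density."""
    # Approximate projected spacing of the coarsest pattern (pinhole model):
    # nearer surface -> larger projected spacing -> switch to a denser level.
    projected = base_spacing * focal / max(distance, 1e-6)
    return max(0, min(max_level, int(math.log2(max(projected, 1.0)))))
```

Because each level is exactly twice as dense as the last, adjacent levels interlock: existing lines persist while new lines fade in between them, avoiding a visible pop as the camera moves.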
Studying the Escher artwork taught us that shadows aren’t just darker areas due to light being occluded, but rather a more dense arrangement of the lithographic pattern that’s present in surrounding areas. This led us to render a full set of the denser pattern to be revealed through shadow mattes; these were either generated in the scene, rotoscoped or keyed from the plate photography. Our compositing department, supervised by Francis Puthanangadi, was ultimately tasked with blending these various density passes to attain the appropriate line density and fidelity.
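Revealing the denser pattern through a shadow matte is, in compositing terms, a per-pixel lerp between the two density renders. A minimal sketch, with hypothetical pass names standing in for the rendered passes described above:

```python
import numpy as np

def etch_shadow_comp(base_pass, dense_pass, shadow_matte):
    """Blend the lit-density and shadow-density etch renders per pixel,
    so shadows read as a denser line arrangement rather than a darker tone.
    base_pass, dense_pass: float images (H, W, 3); shadow_matte: (H, W)."""
    m = np.clip(shadow_matte, 0.0, 1.0)[..., None]
    return base_pass * (1.0 - m) + dense_pass * m

# Hypothetical flat passes: a soft-edged matte partially reveals the dense pattern.
base = np.zeros((2, 2, 3))
dense = np.ones((2, 2, 3))
matte = np.array([[0.0, 1.0], [0.5, 1.0]])
comped = etch_shadow_comp(base, dense, matte)
```

Because the matte can come from the 3D scene, roto or a key off the plate interchangeably, the same blend works across CG and photographic elements.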
It sounds like a highly technical challenge. Did you also have to make a lot of artistic judgement calls?
VFX always involves a balance of technical acumen and artistry. This sequence definitely provided its share of technical challenges, but most of the journey was spent making artistic decisions and creative calls.
The etch treatment applied to the photography, for example, was very finicky and sensitive, depending on how large or small something was in frame. There was no real formula for it, only artistic instincts on what looked right: pattern angles; how many different sections we would need; how fine or coarse the pattern should be; how heavily the effect should be printed in. These would all have to be dialed in on a case-by-case basis, sometimes animating them throughout a shot if we were going from wide to close or vice versa.
Did the same apply to more conventional effects such as motion blur?
Dealing with motion blur, depth of field, and so on is usually very straightforward. In this case, however, we had to make a lot of artistic calls on how they should be handled. Do we put the etch on before the defocus and let the lines get blurry? Do we let the etch treatment get motion blurred, or do we put the motion blur on before the treatment? There were many variables and decisions to make along the way and our comp lead, Brian Rust, and look-dev artist, Joe Spano, ran through the gamut of these variables – and many more – to achieve what ultimately became the look for the sequence.
For the most part, there’s some semblance of reality that one can use as a metric to see if an effect looks “right”. The design-oriented nature of this work made that more difficult, as we had to rely so much more on our far less concrete artistic intuitions. In the end, that was what made it so satisfying as it all came together.
What new personal challenges did this show set for you?
The bulk of my experience is in compositing – I was a compositing supervisor for years before becoming a VFX supervisor. So I never had much exposure to animation – particularly character animation. This show gave me a lot more experience in that arena.
Working closely with our animation director, Phil Cramer, and animation lead, Frankie Stellato, I was exposed to both traditional keyframe hand-animated work, as well as Digital Domain’s new cutting edge Direct Drive facial animation system. This was a fantastic learning opportunity for me that I took full advantage of.
I was extraordinarily lucky to be a part of the fantastic team that put this film together. Across the entire spectrum – the executives and internal team at Fox, director Shawn Levy, production VFX supervisor Erik Nash, the entire team at Digital Domain, and our counterparts at other VFX facilities – I was lucky enough to get to work with old friends … and make some new ones as well. For me, the most important part of a show is the team of people I’m working with on a daily basis. We spend most of our waking lives at work in this business, and long weeks away from home, and to be surrounded by people who are supportive, collaborative, creative and downright fun to be around makes going to work every day a thoroughly enjoyable and satisfying experience.
- Night at the Museum: Secret of the Tomb official website
- Digital Domain
- Method Studios
- Autodesk Maya
- ZBrush by Pixologic
Special thanks to Jonny Vale, Karl Williams, Adam Brown, Melissa Knight, Ellen Pasternack and Rita Cahill. Night at the Museum: Secret of the Tomb photographs TM and © 2014 Twentieth Century Fox Film Corporation. All rights reserved. Not for sale or duplication.