“Pixels” – VFX Q&A

It’s game over for humanity. Well, it is by the end of Patrick Jean’s 2010 short film Pixels, in which a swarm of 8-bit videogame characters escapes from a trashed TV and proceeds to turn first New York City, then the entire planet, into multi-coloured cubes.

Now, Jean’s original concept has been expanded into a full-length feature directed by Chris Columbus. Sharing its title with its progenitor, Pixels stars Adam Sandler and Michelle Monaghan … not to mention a host of re-spawned retrogamer favourites like PAC-MAN, Donkey Kong and Q*bert.

Ironically, in order to create the desired old-school look for the movie’s digital characters, the filmmakers needed to deploy state-of-the-art visual effects. Heading the effects team were production VFX supervisor Matthew Butler and VFX producer Denise Davis. The majority of the effects shots were assigned to Digital Domain and Sony Pictures Imageworks, with nine other VFX companies playing supporting roles.

One of the biggest challenges faced by this extended visual effects team was how to level-up the 1980s game characters to become fully three-dimensional entities. The solution involved discarding traditional flat pixels, and instead constructing the characters using 3D cubes known as “volume pixels” – or “voxels” for short.

So how exactly were the voxelised characters of Pixels brought to life? To find out, we spoke to key artists at Digital Domain, Sony Pictures Imageworks, and a number of the other vendors who joined forces to craft the pixels of Pixels.

"Pixels" including visual effects by Trixter

Sony Pictures Imageworks – VFX supervisor, Daniel Kramer

How did Sony Pictures Imageworks get involved with Pixels?

Lori Furie from Sony Pictures invited us to bid on the work. I met with Matthew Butler and Denise Davis to talk about the challenges, and Matthew and I hit it off pretty quickly – we had similar ideas about how to approach the look of the characters. I was on the show for about a year, which included some on-set supervision for Imageworks’ portion of the work. In December 2014/January 2015 we started getting our first plate turnovers, so actual shot production lasted about 5-6 months.

What was the scope of your work?

We delivered about 246 shots in all, though some of those didn’t make it into the final cut. The bulk of our work was during the chaotic sequences towards the end of the film, where the aliens unleash all the videogame characters on to the streets of Washington D.C. We had to develop a large number of characters for that – 27 in all – as well as the alien mothership.

We also handled the shots in Guam, when the Galaga characters first arrive, as well as the digital White House and White House lawn extensions. And we were responsible for all the Q*bert shots, some of which we shared with Digital Domain.

For "Pixels", Sony Pictures Imageworks digitally re-created a number of hard-to-access Washington D.C. locations, including the White House and its surroundings.

For “Pixels”, Sony Pictures Imageworks digitally re-created a number of hard-to-access Washington D.C. locations, including the White House and its surroundings.

Describe your relationship with the director and production-side VFX team.

I worked very closely with Matthew, both on-set and during shot production, meeting several days a week. Fortunately, he was close by at Digital Domain, which is only about a 15-minute drive from Imageworks. We generally reviewed work in person, with only the occasional cineSync session.

Chris Columbus worked from his offices in San Francisco, and had daily reviews with Matthew and team over a high-speed connection to Digital Domain. It was a very slick system – the VFX production team in Playa del Rey could stream full 2K content to Chris’s projector, and Chris could stream Avid media back. When we had shots to review, I would head to Digital Domain with Christian Hejnal, our Imageworks VFX producer, and review our shots directly with Chris and Matthew.

Matthew and Denise were really great about including Imageworks as a peer in the process, so I was able to present work directly to Chris and hear his notes first-hand. That really tightened up the feedback loop.

Did you take visual cues from the original 2010 short film by Patrick Jean?

We studied Patrick’s short quite closely for inspiration. His short is really charming, and a lot of that charm comes from the very simple shapes and silhouettes of his characters. We quickly learned that over-detailing the characters destroyed what made the original game concepts so engaging, and so we always worked toward keeping the characters as low-res as possible, with just enough voxel resolution to read the animation clearly.

For each game, John Haley, our digital effects supervisor, was generally able to find original sprite sheets and YouTube videos of gameplay for the team to reference. We’d use the sprite sheets for modelling inspiration, and then Steve Nichols, our animation supervisor, would study the gameplay, working as many elements as possible into our characters’ motion.

Watch Patrick Jean’s original short film Pixels:

What challenges did you face when translating the 2D game characters into 3D?

The 3D “voxel” look was already established in Patrick Jean’s short, but there are many ways to go about voxelising a character, and many ways to determine how those voxels track the animation.

For example, should we model the characters with voxels directly, or build them procedurally? Should voxels be bound to the character like skin, or should characters move through an invisible voxel field, only revealing the voxels they intersect? This latter solution – “re-voxelisation” – is akin to rasterising a 2D game character on a CRT: as the sprite moves through screen space, the static pixels on the screen fire on and off.
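
To make the CRT analogy concrete, here’s a minimal sketch of the re-voxelisation idea in Python – not Imageworks’ actual Houdini setup, just the principle of quantising a character’s surface samples to a fixed world-space grid. All function names and values are invented for illustration:

    import numpy as np

    def revoxelise(points, cell_size=1.0):
        # Quantise each surface sample to the static world-space
        # grid cell containing it, then collapse duplicates: each
        # occupied cell becomes one voxel.
        cells = np.unique(np.floor(points / cell_size).astype(int), axis=0)
        return (cells + 0.5) * cell_size  # voxel centres

    # As the character moves through the fixed grid, different
    # cells switch on and off, like a sprite rasterised on a CRT.
    frame_a = np.random.rand(1000, 3) * 10.0
    frame_b = frame_a + 0.3  # even a small translation re-fires the shell
    print(len(revoxelise(frame_a)), len(revoxelise(frame_b)))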

Which solution did you favour?

Chris and Matthew liked the notion that the characters would re-voxelise as they moved – it felt more digital. But our first attempts at a pure, static voxel field posed a few problems.

First, it proved impossible to control the orientation of a voxel relative to the character, as the two were independent. On one frame, the voxel faces might line up with a feature on the character’s body; but after the character turned, those same voxels would sit at a different angle to that feature. This made it difficult to keep the characters on-model.

Another issue was that even very small motions caused the whole character to re-voxelise as it intersected different parts of the static field, which was distracting.

The last big issue revealed itself in lighting. If the voxels were static, and simply turned on and off as the character moved, they never changed their relationship to the set lighting. This made it difficult to shape our characters and make them feel believably integrated. So, while we really liked the idea of a static field, in practice there were too many issues.

Since the static field option wasn’t working out, what did you opt for instead?

We ended up using a hybrid approach, parenting smaller voxel fields to different parts of a character’s body. So, one field might be tracked to the face, another to the chest, another to the upper arms, and so on. These fields moved independently with the rotations and translations of the skeleton. Any deformation – like squash and stretch – would cause re-voxelisation in that region. This calmed down the re-voxelisation to a pleasing level, gave us more control over how voxels were oriented to the characters’ features, and fixed our lighting issues by allowing voxels to rotate through space.
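
As a rough illustration of that hybrid scheme – again a hypothetical sketch rather than the studio’s code – each body region can be voxelised in its own local space and then carried through the part’s world transform, so rigid motion rotates the voxels while only genuine deformation re-voxelises them:

    import numpy as np

    def voxelise_local(points_local, cell_size):
        # Quantise in the part's LOCAL space, so the grid travels
        # with the limb instead of being fixed in the world.
        cells = np.unique(np.floor(points_local / cell_size).astype(int), axis=0)
        return (cells + 0.5) * cell_size

    def hybrid_voxels(parts, cell_size=0.5):
        # parts: list of (points_local, 4x4 world_matrix) per region.
        # Rigid motion only changes world_matrix, so voxels rotate
        # with the skeleton; squash-and-stretch changes points_local
        # and re-voxelises just that region.
        out = []
        for points_local, world in parts:
            centres = voxelise_local(points_local, cell_size)
            hom = np.hstack([centres, np.ones((len(centres), 1))])
            out.append((hom @ world.T)[:, :3])
        return np.vstack(out)

    # One "face" field with an identity transform, as a smoke test.
    print(hybrid_voxels([(np.random.rand(200, 3), np.eye(4))]).shape)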

With that decided, how did you then go about building and rigging the character models?

For most characters, we would build a smooth-skinned model with a simple rig. Our FX department, headed up by Charles-Felix Chabert, would build a procedural Houdini network to break up the character into sub-voxel fields.

Even though the characters looked quite simple, they were actually really heavy, with a solid volume of cubes, each cube with bevelled edges. The polygons added up fast! For large scenes with hundreds of characters, we quickly learned that the voxelisation process could take days to complete. Much of our further development was therefore about optimising the workflow. Our final pipeline ended up passing a single point per cube to the renderer, and instancing the cubes at render time.
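
That optimisation is classic render-time instancing. Here’s a hedged sketch of the idea – hypothetical names, not the actual pipeline code – in which the heavy bevelled-cube geometry is reduced to one point per cube, for the renderer to instance later:

    import numpy as np

    def cubes_to_points(voxel_centres, cell_size):
        # One row per cube: position plus a uniform-scale attribute.
        # The renderer instances a single bevelled-cube archive onto
        # each point at render time, so the heavy geometry is never
        # duplicated in the scene file.
        scale = np.full((len(voxel_centres), 1), cell_size)
        return np.hstack([voxel_centres, scale])

    # A million cubes as raw bevelled geometry could run to some
    # 100 million polygons; as instancing points it is just a
    # million rows of position-plus-scale.
    points = cubes_to_points(np.random.rand(1_000_000, 3), 0.25)
    print(points.shape)  # (1000000, 4)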

What approach did you take with the lighting?

Chris Columbus didn’t want the characters to feel plastic. That said, there’s a lot of charm in keeping the characters simplistic and blocky, as seen in the original Pixels short. Chris, Matthew, and Peter Wenham, the production designer, came up with the idea of “light energy”, whereby the cubes emit light from an internal source. This allowed the cubes to retain a simple geometric shape, while still showing hints of complexity burning through the surface.

How did that work for scenes in bright sunlight?

Consider a light bulb outside in the sun – the internal light needs to be incredibly bright to be visible, and once you get there you’ve lost all shape on the object. That makes it really difficult to integrate it into the scene. After much trial and error, we settled on having only a subset of the cubes emit light at any one time. We also animated that attribute over time. This allowed the environment light to fall more naturally on the dormant voxels, thus anchoring the objects into the scene and giving good contrast against the lit voxels.
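
One simple way to get that behaviour – offered here as an illustrative guess at the mechanics, not Imageworks’ actual setup – is to select a random subset of voxels per time window, so the lit pattern animates while dormant voxels keep catching set light:

    import numpy as np

    def emissive_mask(n_voxels, frame, fraction=0.2, hold_frames=24, seed=0):
        # Re-seed once per hold window, so the lit subset changes
        # over time but not on every single frame. Dormant voxels
        # keep receiving environment light, anchoring the object
        # in the plate and giving contrast against the lit ones.
        rng = np.random.default_rng(seed + frame // hold_frames)
        return rng.random(n_voxels) < fraction

    mask = emissive_mask(n_voxels=5000, frame=37)
    print(int(mask.sum()), "of 5000 voxels currently emitting")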

Which was the most difficult character to develop?

Q*bert took the most effort. He’s really the only game character who needed to act and emote with a broad range. We started by pulling as much of the original game art as possible. The in-game sprites are incredibly low-res, but there’s a lot of detail in the original cabinet artwork – that was our main source of reference for features and proportions.

With basic approval on the smooth model, we moved on to voxelisation in Houdini. The first versions used a single voxel size for the whole body, but we quickly found that we needed more detail in areas like the eyes and brows, and less in areas like the skull. Each feature of Q*bert was dialled to get just the right voxel size and placement. Most of our trial and error in learning how to voxelise our characters happened during Q*bert’s development.

A number of techniques were used to soften the angular appearance of Q*bert’s voxel building blocks, including multiple voxel sizes, and transparent outer layers revealing smoother shapes beneath.

Q*bert is very round and cute. Did the blockiness of the voxels fight against that?

When we presented our first Q*bert lighting tests to Matthew and Chris, we had applied a simple waxy plastic shader to the model. Chris felt our shading treatment was too close to Lego. With all the hard cube edges he said, “It looks like it hurts to be Q*bert!” This comment sent us on a long journey to figure out how to make a character built from hard, angular cubes look cute and soft.

We ended up doing literally hundreds of tests, adjusting all aspects of the model and shading to find a combination that would work. We introduced light energy to the interiors and edges of the cubes, dialling a pattern to control where and when cubes would emit light. We layered voxels at different scales into the interior of Q*bert, and adjusted the transparency of the top layer to reveal the depth.

We also introduced some of the underlying round shape of the smooth model into the cube shading – this allowed us to rim and shape Q*bert with softer graduations of light. The combination of all of these tweaks – and a lot of elbow grease by our look-dev team – finally found a look Chris and Matthew liked.

What approach did you take with Q*bert’s animation?

Animation for Q*bert was a lot of fun, with cartoon physics and lots of opportunities for gags. In one early test, we only allowed Q*bert to move through a scene by jumping in alternating 45° rotations, just like the videogame. We really liked this idea in theory, but in practice it wasn’t that interesting. Instead, Q*bert transitions from hopping to walking and running, varying his gait in a more natural way.

See Q*bert and a host of other videogame characters in this Pixels featurette:

How did you tackle the big action scenes towards the end of the movie, when the videogame characters are trashing Washington D.C.?

One of our more difficult shots was the attack on the Washington Monument, which opens our “D.C. Chaos” sequence. The camera tracks a group of Joust characters to the monument, and we circle the action as they begin to destroy it. The difficult part was the location – we’re flying right over the National Mall, next to the White House and Capitol Building. This is a strict no-fly zone. So, with no way to get the background plate in-camera, we knew we would need to create a photoreal 2½D matte painting of the whole area.

What reference were you able to get of the area around the Washington Monument?

We started with previs from the team headed up by Scott Meadows. This gave us the exact angles of D.C. we needed to plan for. We were also able to get a permit to fly a helicopter around the outside perimeter of the National Mall to acquire reference. We earmarked about four key locations where we could hover and acquire tile sets to use in our reconstruction.

In practice, none of these locations was really ideal – Homeland Security just wouldn’t allow us to get close enough to the monument. So, in addition to the helicopter footage, we acquired panoramas and hundreds of stills of the monument and surrounding buildings by walking up and down the Mall. We were also able to go inside the monument and capture stills through the top windows.

In order to show the destruction of the Washington Monument, Sony Pictures Imageworks created a 360° panoramic matte painting of the surrounding environment, to act as a backdrop for a digital model of the monument itself.

How did you then create the digital environment?

Once we had all the reference back at Imageworks, we started building a simple model of the area. We 3D-tracked stills from our helicopter shots, adding them to the model as needed. Jeremy Hoey, our matte painter, had the difficult task of bringing all these sources together to create one seamless, 360°, 2½D matte painting of Washington D.C. as seen from the monument location.

What about the Washington Monument itself?

We built a 3D, photoreal version of the monument, which needed to be destroyed using a mixture of voxel shapes and natural destruction. As each Joust character strikes the monument, the area local to the hit is converted to large voxel chunks, with light energy spreading from the impact point. Then, as the destruction continues, large cube chunks begin to loosen and fall away from the monument.

We found that keeping the scale of the chunks really large looked a lot more interesting and stylised – smaller voxels started to look too much like normal destruction damage. FX artist Ruben Mayor designed all the sims in Houdini for the destruction shot, and Christian Schermerhorn did an excellent job compositing a completely synthetic shot to create a very photographic feel.

How do you feel about your work on Pixels, looking back?

At first blush, Pixels seems like a simple job, because the characters look so simple. Nothing could be further from the truth! Each process had to be invented, and every shot required a complicated Houdini process to deliver the data to lighting. I underestimated the number of challenges we would encounter, many of which we just couldn’t predict until we dived in. I’m really proud of what the team was able to accomplish.

What’s your favourite retro videogame?

That’s a tough question! I played most of these games as a kid, plus numerous computer games on my Apple. For arcade machines, I really liked Pole Position and Zaxxon. On my Apple, I loved the original Castle Wolfenstein and Karateka.

"Pixels" including visual effects by Digital Domain

Digital Domain – VFX supervisor, Mårten Larsson

How did Digital Domain get involved with Pixels?

Patrick Jean, the creator of the original short, was actually represented by Digital Domain’s production company, Mothership, for a short time. After Chris Columbus became attached to direct the feature, his team reached out to Matthew Butler, senior VFX supervisor at Digital Domain, to work on the film with him.

I started doing tests for the show in October 2013, and we delivered our last shots in June 2015. Most of our shot production happened between September 2014 and June 2015.

Which sequences did you work on?

The bulk of our work was three sequences: PAC-MAN, Centipede and Donkey Kong. We created the characters, along with anything they interacted with and the environments they were in. We also did a few shots of people dissolving into voxels and reassembling from voxels, as well as some characters from other sequences.

Many of the Donkey Kong shots in “Pixels” deliberately use angles and compositions inspired by the original gameplay, as in this shot by Digital Domain.

How much contact did you have with the director?

We worked very closely with both Chris Columbus and Matthew Butler. Chris was based in San Francisco, so after the shoot we mainly interacted with him over video-conference. The production-side VFX supervisor and producer had space here at Digital Domain for the duration of the show, so we worked very closely with them.

What aspect of the show’s visual effects did you find most challenging?

I’d say the trickiest part was how to translate 2D characters into fully 3D characters that move in a physically plausible way, while still trying to retain the spirit of the simple video games that we have all come to know and love.

Take Donkey Kong, for example. He has a very iconic look that was fairly easy to match when seen from the front, in a pose that matches the videogame. But when you start looking at that same model from a three-quarters angle, it looks less like the game. Add motion to that, and you’ll end up in poses that he never strikes in the game.

One of the big challenges was to keep Donkey Kong looking consistently like the original game sprite, even when seen from multiple angles in three dimensions.

How did you solve this problem?

There was no real silver bullet to solve it. We basically tried to get Donkey Kong looking as close to the game as possible, using iconic poses, and trying really hard to not show him from angles that were too different from the game.

The characters in Pixels are quite visually complex, compared to the 8-bit originals. Was that deliberate?

The characters needed to be made out of boxes – or voxels – to resemble the pixels from the original low-res games. In order to make the characters look complex and more real, we added a lot of detail and pattern to both the individual boxes and the overall character. The idea is that, even though they’re made out of simple voxels, they are actually aliens with very complex technology. This approach also gave us an excuse to add some light-emitting energy, and make things look cooler and more interesting.

Early on, we ran into the issue of reflections in flat surfaces. If you look at PAC-MAN, he is a sphere. If you build that sphere out of boxes, your brain tells you that you are looking at a sphere, but you are actually seeing flat, mirror-like reflections on the surface. It looks really strange. We got around this on all the characters by blending some of the pre-voxelised model normal into the normal on the voxels.
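
That normal-blending trick is easy to sketch. In this hypothetical snippet, the flat voxel-face normal is interpolated toward the pre-voxelised smooth model’s normal and renormalised, softening those mirror-like reflections:

    import numpy as np

    def blend_normals(voxel_n, smooth_n, weight=0.6):
        # Lerp the flat per-face voxel normal toward the normal of
        # the pre-voxelised smooth model, then renormalise, so a
        # ball of cubes still reflects like a ball rather than a
        # bank of little mirrors. The weight is purely illustrative.
        n = (1.0 - weight) * voxel_n + weight * smooth_n
        return n / np.linalg.norm(n, axis=-1, keepdims=True)

    # A cube face on PAC-MAN's shell: flat face normal versus the
    # underlying sphere normal at the same point.
    print(blend_normals(np.array([[0.0, 0.0, 1.0]]),
                        np.array([[0.3, 0.2, 0.93]])))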

Were the animators working with the basic, smooth-skinned characters, or with the final voxelised versions?

The animators worked with the pre-voxelised character, but had the ability to turn on the voxelisation process to check poses and facial expressions when needed. A lot of attributes were also transferred from the skinned character across to the voxels, tracking with its movement and sticking on the per-frame-generated voxelised version.

So the voxels were only generated after the animation was complete?

Yes – all things voxelising went through our FX department, and were passed on to lighting for rendering. We also had setups for the characters to go straight from animation to lighting via an automated voxelisation system. But, anytime we needed to do anything special for a character in terms of how it voxelised, the FX department picked up the animation publish of the regular skinned character and generated a new voxelised version for lighting.

Digital Domain’s Centipede character went through a number of design iterations, and includes many features inspired by the 1980s arcade game artwork.

How did you develop the look of the Centipede character?

Centipede started as a design that resembled an actual centipede. From there, it was tweaked to look much more like the art on the side of the arcade game, with claws for feet, a lizard-looking face and eyes, and snake-like teeth. We went with this look for a while, using different sizes of voxel to capture the smaller features.

How did it progress from there to the final design?

After a few rounds of testing looks and poses, we got a note from Chris – he thought the character had lost some of the simple charm of the game. At that point, we went back to a design much closer to the previs model. We still incorporated some features from the arcade game art look – for example, we made the eyes similar to the artwork, but didn’t put the pupils in. We also used the sharp teeth and the claws. We ended up with a character that looks mean, but is still similar to the game. I think where we landed in the end is a very successful mix.

"Pixels" including visual effects by Digital Domain

What happens in the PAC-MAN sequence?

PAC-MAN is terrorising the streets of New York, and being chased by our heroes in “ghosts” – Mini Coopers with special equipment that can kill PAC-MAN. The approach for this sequence was to use CG for PAC-MAN (obviously), and also for most of the things he interacted with. As much as possible, the rest would be shot practically.

For the PAC-MAN sequence, Digital Domain combined their CG character with practical effects and vehicle stunts shot on location in Toronto.

Tell us about the PAC-MAN location shoot.

The Mini Coopers were practical cars driven by the stunt team in downtown Toronto, which was made to look like New York. Since PAC-MAN is essentially a giant yellow light bulb in the middle of a street at night, we knew that he would throw out a lot of interactive light. To help with this, a Mini Cooper was rigged with big yellow light panels and generators on the roof, and used as a stand-in for PAC-MAN. The car also had a paint pole on the roof, with an LED light up top, to show the height of PAC-MAN and help with the framing of shots.

The decision was taken to sink PAC-MAN a little into the ground, bringing his mouth closer to street level and thus making it easier for him to bite down on his prey.

For some shots, the interactive light car was used only for lighting reference; in other cases it was present in the filmed plate. Where the timing of the car didn’t work with what Chris wanted PAC-MAN to do in the animation, we ended up painting out both the car and the light it threw. Overall, though, it was very helpful to have as a reference, and in some shots it provided all the interactive light we used.

Practical light rigs were used on location to simulate the yellow glow cast by PAC-MAN. In many shots this was enhanced, or replaced entirely, with digital interactive lighting.

Did you do 3D scans of the street environment?

Yes – we did lidar scans of most environments to help our modelling and tracking departments. We knew we’d be modelling a lot of geometry that would need to line up very closely to buildings and parked cars in the plates, in order to get the reflections and interactive light from PAC-MAN to look believable.

In the end, we modelled a bit more than we’d originally planned, but the interactive light helps so much to sell that PAC-MAN is really in the scene, since it’s pretty obvious that PAC-MAN is fake, no matter how good we make him look!

The lidar was also very helpful in creating geometry for collisions with all the voxels we threw around when PAC-MAN bites cars and objects. Our general rule was that anything he bites turns into voxels, while anything he bumps into is destroyed as if he was a normal earthbound object. Of course, we broke that rule a lot, and did whatever we thought looked more interesting.

How did you create the glowing trail left by PAC-MAN?

PAC-MAN had both a heat trail and a misty-looking energy trail. These were generated and rendered by the FX department.

The reason for the trail was to tie PAC-MAN into the ground a bit better. Because he had to be able to bite objects at ground level, and because we had restrictions on how much he could open his mouth, we ended up having to sink him into the ground. If we hadn’t, his lower lip wouldn’t have reached the ground, and he wouldn’t have been able to bite anything that touched the ground.

It looked a bit odd to just have him truncated at ground level and not have him influence the ground at all, so the trail element was added. I think it adds to PAC-MAN’s look. With all the white and warm lighting in the city, and with PAC-MAN being yellow, it was nice to get some other colours in there – that’s why we went with a slightly blue colour on the trail.

Some of Digital Domain’s PAC-MAN shots required fully digital vehicles and destruction effects, as seen in this partially rendered animation breakdown.

How do you feel about your work on Pixels?

I’m very proud of the work our team created. It’s a cool mix of really odd characters and, in my opinion, cool effects. PAC-MAN looks like something alien that we haven’t seen before. The end result is quite unique, and different from most movies out there. I like that.

What’s your favourite retro videogame?

We had a whole bunch of arcade games on set for the sequence when our young heroes are in an arcade hall in the ’80s. They all worked, so I played a lot of them. I think the highlights for me were Missile Command and Q*bert. Another highlight was meeting Professor Iwatani, the creator of PAC-MAN, and getting a photo standing next to him in front of a PAC-MAN arcade game!

Watch Eddie Plant (Peter Dinklage) battle PAC-MAN in this Pixels featurette:

Trixter – VFX supervisor, Alessandro Cioffi

When was Trixter invited to work on Pixels?

Simone Kraus, CEO and co-founder of Trixter, had been following the project from L.A. since the early days of pre-production, and we’d been looking forward to being a part of the show since then. Matthew Butler led us through the look and the vision on the VFX, with in-depth explanations on the intentions and the dynamics of every shot and sequence. We ended up doing some concept work, and what I call a “VFX cameo”.

How many shots were in your VFX cameo?

We worked on about 25 shots, in six different sequences. Along with some blaster effects and pixelated invaders, we also created the concept for Michael the Robot. We then built, rendered and integrated him into the lab sequence. Also worth a mention is the brief appearance of Nintendo’s Mario, for which we designed a three-dimensional version of the character, extrapolated out of the original game design. We rigged, animated and integrated him in one shot during the Washington D.C. attack sequence.

Was there much scope for creativity, or were you conforming to a look and feel that had already been established?

Mainly, our references were shots done previously by other vendors. However, for Michael the Robot, we referenced real mechanics, and for the Missile Command effect, the videogame itself.

Even though we had to blend into an already strongly-established aesthetic, Matthew always encouraged us to come up with alternatives for the look of this or that effect. In fact, although he and the director had extremely clear ideas on what they were after, Matthew left the door open to improvements or variations. I love these opportunities for such creative work.

Tell us more about Michael the Robot.

We created a CG extension of Michael’s glassy head – which contains complex, hi-tech internal mechanisms and electronics. From an initial 2D concept, we developed a 3D digital concept model which served as our base for approval.

To integrate Michael into the live-action plates, we had to do intricate match-moves to ensure that his movement and action would fit seamlessly. We added procedural animation to the mechanism inside the glass dome to achieve fluid motion, with several lights constrained to that animation. Lighting and rendering were done in The Foundry’s Katana and Solid Angle’s Arnold.

What’s your favourite retro videogame?

Definitely Missile Command. I have spent innumerable coins on it!

"Pixels" including visual effects by Atomic Fiction

Atomic Fiction – VFX supervisor, Ryan Tudhope

Tell us how Atomic Fiction came to be part of the Pixels team.

We had been following Pixels since it was announced, and really wanted to find a way to help out on such a ground-breaking project. Having known Denise Davis and Matthew Butler for some time, we knew the visual effects team was top-notch, and wanted to collaborate with them in any way we could. Fortunately, they had a small sequence that was a great fit for us.

What did the sequence involve?

We came on board fairly late in the project’s schedule, and our involvement was on only a few shots. The work we did was really fun and complex, however, centering on several sequences requiring dialogue-capable CG face replacements of Ronald Reagan, Tammy Faye and Daryl Hall – in the story, they are all avatars of the alien invaders. All three characters leveraged Atomic Fiction’s digital human pipeline, which we’ve utilised on several projects, including Robert Zemeckis’ upcoming The Walk.

How long did you spend working on the shots?

Because of how involved the shots were from an asset and animation standpoint, our schedule spanned approximately four months. We interacted mainly with Matthew Butler. He is an amazing supervisor to work with, always on point with his feedback, and great at inspiring the team!

Were you familiar with Patrick Jean’s 2010 short film?

We’re huge fans of the original short, and love the fact that its popularity led to the development of this feature film. It’s a great Hollywood success story. While our digital human shots didn’t directly relate to the work in Patrick’s original, it was nonetheless a great inspiration for our team.

The cast of "Pixels"

Describe how you created these celebrity avatars.

Our mission was to add our visual effects work to VHS-quality captures from the 1980s. So, in contrast to other projects, our Pixels artists were starting from scratch, with no scan data, textures or on-set HDRIs of these celebrities. This required us to find a wide variety of photographic reference to sculpt, texture and light each celebrity by eye.

It was a really fun challenge, to set aside technology for a moment and just discuss what looks right (or not) about a digital model. It’s fun to find solutions in spite of limitations like these – it gets back to the art of it all.

Atomic Fiction’s CG Supervisor, Rudy Grossman, led a lean team that included Mike Palleschi, senior modeller, and Julie Jaros, lead animator. Because we all knew exactly what our shots would need and what dialogue was required, we were extremely efficient during the face-shape modelling and rigging phase. Our asset work and shot work were happening concurrently, as we were effectively evaluating both modelling and lighting in the same takes. Tom Hutchinson led the charge on look development and lighting, and Jason Arrieta on compositing.

What’s your favourite retro videogame?

That’s easy: Moon Patrol!

Adam Sandler in "Pixels"

Before flashing up the GAME OVER screen on this Q&A, there’s just time to check in with the other vendors who helped bring Pixels to the screen.

Storm Studios delivered a 20-shot sequence showing India’s Taj Mahal being attacked by characters from Atari’s Breakout. Storm’s VFX supervisor was Espen Nordahl (who named Super Mario Bros as his favourite retro videogame). Having determined that the graphics from the original Breakout were a little too rudimentary, Nordahl’s team incorporated design elements from later iterations of the game. This allowed them to give the characters more shape, yet retain the old-school pixelated look. A digital Taj Mahal asset was built, after which the Storm crew ran rigid body destruction simulations in Houdini. Additional effects were layered in, showing sections of the building turning into voxels, adding “light energy” to the voxels at the impact points, and creating holes in the structure where the balls bounced off.

“I’m very proud of the work we did,” Nordahl commented. “This was our first big international show, and I would like to thank Sony and Matthew Butler for trusting us with such a complex sequence.”

At Shade VFX, a team of CG and compositing artists was tasked with creating the illusion that the stars of Fantasy Island – Ricardo Montalban and Herve Villechaize – were delivering a congratulatory message from space. Led by VFX supervisor Bryan Godwin, Shade’s team reconstructed full-CG versions of both actors from archival photographic reference and clips from Fantasy Island. Even though the task only required new lip-sync to match the newly-recorded dialogue, it was necessary to animate the entire cheek structure, eyelids and even the nose to react correctly to the new phonemes.

One More VFX, with Emilien Dessons supervising, worked on the arcade sequence in which young Eddie Plant duels young Sam Brenner at Donkey Kong. Their goal was to re-create accurate Donkey Kong game graphics using MAME (Multiple Arcade Machine Emulator) software, as well as to re-create the high-score motion graphics.

As a point of interest, Johnny Alves, Matias Boucard and Benjamin Darras of One More VFX were also executive producers on Pixels (Patrick Jean was a 3D supervisor at One More VFX when he made the original short).

At Pixel Playground, VFX supervisor Don Lee oversaw the production of a number of shots involving 2D compositing and 3D tracking, greenscreen, the integration of game footage into arcade games and a variety of cosmetic fixes.

Further VFX support was provided by Lola VFX, Cantina Creative and The Bond VFX.


Watch the trailer for Pixels:

Special thanks to Steven Argula, Rick Rhoades, Tiffany Tetrault, Franzisca Puppe, Geraldine Morales, Lisa Maher, Benjamin Darras and Kim Lee. “Pixels” photographs copyright © 2015 by Sony Pictures Entertainment and courtesy of Sony Pictures Imageworks and Digital Domain.

Ant-Man and Movie Miniaturisation

Marvel's "Ant-Man"

Small is big. If you’re in any doubt of that, check out Marvel’s Ant-Man, the latest in a long line of movies in which ordinary human beings are reduced to the size of bugs.

In Ant-Man, con-artist Scott Lang (Paul Rudd) dons a special suit charged with sub-atomic particles, which causes him to shrink to near-microscopic proportions. Drastically diminished, Lang faces the considerable challenges posed by an ordinary world magnified to extraordinary proportions. Good job the side-effects of the miniaturisation process imbued him with super-strength.

You’ll be able to read the complete story of the visual effects of Ant-Man in the next issue of Cinefex magazine, available to preorder now. To whet your appetite, this article includes exclusive insights from Jake Morrison, Ant-Man’s visual effects supervisor, about the challenges involved in creating Marvel’s latest – and littlest – screen hero.

Watch Scott Lang try out his miniaturisation suit for the first time in this clip from Marvel’s Ant-Man:

Before hearing from Jake, however, we’re going to take out our magnifying glasses and examine some of the other movies which have delighted in pitting pocket-sized heroes against teeny-tiny villains.

Starting Small

In the 1936 film The Devil-Doll, escaped convict Paul Lavond (Lionel Barrymore) creates a pair of tiny assassins to take vengeance on his former business associates. The film uses an impressive box of photographic tricks to plant its miniature killers into their oversized world, including the traveling matte process – state of the art at the time – used to optically “cut-out” actors and insert them at reduced size into regular live-action scenes.

Four years later, the still more ambitious Dr. Cyclops was released. In the film, a group of scientists are shrunk to a height of just fourteen inches by the nefarious Dr. Alexander Thorkel (Albert Dekker). Directed by Ernest B. Schoedsack, Dr. Cyclops was photographed in Technicolor and boasts oversized sets and giant props – including a huge, mechanically-operated human hand – all backed up with a dizzying array of split-screen and glass shots designed to combine regular actors with optically enlarged backgrounds.

"Dr. Cyclops" advertisement from the May 1940 issue of "Screenland"

“Dr. Cyclops” advertisement from the May 1940 issue of “Screenland”

Meticulous planning was vital in the production of the miniature scenes for Dr. Cyclops, which were described in the April 1940 edition of American Cinematographer as being made “with a slide rule and a blueprint”:

“Each scene was carefully mapped out. Scale drawings showed where every prop, every piece of furniture, every object in the scene was situated. The special effects experts, the cameramen and Schoedsack worked out formulas to determine the exact spot the camera would be placed, and the exact position each actor would take. They even figured on how high the lens had to be from the floor, how far the camera was from the player, how far the player from a table, for example, and whether the player had to be standing on a level floor or on a sloping one. All this was necessary to create the ‘little people.’”

Grant Williams stars as Scott Carey in the 1957 film “The Incredible Shrinking Man” – photograph copyright © 1957 by Universal Pictures.

What Makes Teeny Look Tiny?

Many of the tropes associated with subsequent miniaturisation movies are already evident in Dr. Cyclops: the use of high camera angles to increase the sense of diminished size; scaling up ordinary furniture to create mountains out of the mundane; the repurposing of sewing needles and scissors into weapons. Not to mention the ever-present threat of huge and terrifying predators, namely household spiders and domestic cats!

All these dangers and more are faced by Scott Carey (Grant Williams) in the 1957 film The Incredible Shrinking Man, just as they are by the accidentally abbreviated youngsters in Honey, I Shrunk the Kids, made over thirty years later in 1989. Both of these movies boast beautifully executed special and visual effects, yet both battle constantly with one of the major challenges facing all makers of miniaturisation movies: how do you make a small world look really small?

One of the keys is the effective use of depth of field. It’s also one of the hardest things to get right. So what is depth of field?

Put simply, depth of field is the degree to which a camera lens can hold both the foreground and background of an image in focus at the same time. For example, if you point a camera at a landscape, pretty much everything you see is likely to be pin-sharp, from the wildflowers at your feet to the mountains on the distant horizon. Such an image therefore has a big depth of field.

Use that same camera to photograph an ant, however, and the instant you get the insect in focus, everything behind and in front of it will immediately go blurry. In other words, the depth of field just turned shallow.

Now, replace the ant with a tiny human being, and you’ll see that in order to make a miniaturisation movie look truly realistic, your cast should be surrounded not by a crisply rendered set, but a sea of blurs.

A stop-motion ant, animated by David Allen, gives a ride to the miniaturised youngsters in this ILM shot from “Honey, I Shrunk the Kids”.

The actors were pulled along on track-mounted mechanical saddles. A split-screen was used to combine the live-action with the stop-animated ant.

Unfortunately, re-creating shallow depth of field during a regular shoot on an ordinary soundstage is next to impossible – pick any shot from one of those early movies and you’ll see that, for the most part, everything is in focus. Yet without that shallowness, the nagging sense remains that what you’re seeing is nothing more than full-sized performers prancing through Brobdingnagian sets.

Luckily for filmmakers, there’s a “Get Out of Jail Free” card. The laws of physics decree that the smaller a camera, the greater its depth of field (another simplification, but the principle holds). In other words, if you were somehow able to discard that bulky RED Epic or IMAX camera and use a teeny-tiny camera instead (operated by an equally teeny-tiny crew, of course) you’d find your depth of field extending to the horizon once more.

In other words, the pin-sharp look of films like Dr. Cyclops and The Incredible Shrinking Man can be thoroughly justified by allowing the filmmakers this single, simple conceit:

“Honey, we shrunk the camera, too!”
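
That conceit even pencils out. Using the standard thin-lens approximation for total depth of field when the subject is well inside the hyperfocal distance, shrinking every linear dimension of the camera tenfold widens the in-focus zone tenfold. The numbers below are purely illustrative:

    def total_dof_mm(f_mm, N, c_mm, s_mm):
        # Total depth of field for a subject distance s well inside
        # the hyperfocal distance: DoF ~ 2 * N * c * s^2 / f^2
        # (f: focal length, N: f-number, c: circle of confusion).
        return 2.0 * N * c_mm * s_mm ** 2 / f_mm ** 2

    # Full-size camera: 50mm lens at f/2.8, 0.03mm CoC, subject 0.5m away.
    full = total_dof_mm(50.0, 2.8, 0.03, 500.0)

    # The same camera "shrunk" tenfold: 5mm lens, 0.003mm CoC,
    # same f-number, same subject distance.
    tiny = total_dof_mm(5.0, 2.8, 0.003, 500.0)

    print(f"full-size: {full:.1f} mm in focus, tiny: {tiny:.1f} mm")
    # ~17mm versus ~168mm: the tenth-scale camera holds ten times
    # the depth in focus, just as the "tiny camera" conceit requires.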

In “Innerspace”, Tuck Pendleton’s miniaturised submersible pod approaches the optic nerve of his hapless host, Jack Putter. In this visual effects shot by ILM, the eyeball is a six-foot plexiglass dome and the pod is a motion-controlled bluescreen miniature.

Little Wonders

Miniaturisation movies aren’t just about little people using the contents of a sewing box to fend off marauding arachnids. Released in 1966 and 1987 respectively, Fantastic Voyage and Innerspace see their protagonists first miniaturised, then injected into an environment far stranger – yet in reality no less mundane – than an everyday bedroom or backyard.

Yes, we’re talking about the interior of the human body.

Whether it’s showing Cora (Raquel Welch) fighting off hostile antibodies, or Lt. Tuck Pendleton (Dennis Quaid) struggling to navigate his micro-submersible through the raging rapids of a turbulent bloodstream, any film set in the squishy interior of a living human being is duty-bound to serve up some startling imagery. No surprise, then, that both of these films rely heavily on special and visual effects (indeed, both won Academy Awards for their intracorporeal visuals).

For a shot of simulated fat cells in “Innerspace”, ILM technicians filled balloons with a concentrated Jello solution and photographed them upside-down in a water-filled tank.

While Fantastic Voyage uses spectacular sets and gigantic props to conjure up its interior bodyscapes, Innerspace showcases the ingenuity of the visual effects team at Industrial Light & Magic. Relying heavily on practical models, VFX supervisor Dennis Muren had clear views on the way the miniature scenes would be shot. Interviewed in issue 32 of Cinefex, he described his championing of “a hand-held look, as though they were being photographed by tiny little cameramen with tiny little cameras.”

The Innerspace team also had to deal with the daunting challenge facing all those who try to visualise a miniature world for the cinema: the need to balance realism with dramatic effect. As ILM effects cameraman John Fante remarked in the same Cinefex article:

“We were always fighting the battle between the reality of the story – which is that the object is tiny – and the reality of our situation, which was that in order for it to be believable it had to be on a much larger scale. After looking at some footage of the pod, [Dennis Muren] said, ‘It looks small.’ And I said, ‘It’s supposed to look small.’ Then Dennis said, ‘Yeah, but it doesn’t look like it was ever big.’ So it became a matter of perspective. The ‘microscopic’ submersible pod had to look more majestic than, say, a tiny little blood cell bouncing around.”

In other words, it isn’t just about showing an audience a miniature world. It’s about making them believe it.

Jed and Octavius (Owen Wilson and Steve Coogan) create a miniature apparatus capable of working a computer keyboard in “Night at the Museum: Secret of the Tomb”

Night at the Museum and Ant-Man

As long as little people continue to fascinate filmmakers, visual effects artists will continue to develop new ways to put them on the screen. Among the most popular characters in the Night at the Museum movies are the unlikely double act of compact cowboy and Lilliputian legionnaire, Jed and Octavius.

Instead of falling back on the “teeny-tiny film crew” conceit, the Night at the Museum films present their undersized characters as having been photographed with conventional cameras, thus raising the perennial problem of how to replicate the shallow depth of field characteristic of macro-photography.

For the third film in the series, the problem was tackled afresh using an innovative technique called “focus stacking”. Focus stacking allows for infinite adjustment of a shot’s depth of field … after the scene has been shot. In an interview for this blog, Chad Wiebe, visual effects supervisor at Method Studios, explained:

“When shooting macro-photography with traditional cameras, you typically have a very narrow depth of field. When compositing actors, this can typically result in a visual mismatch. For Night at the Museum: Secret of the Tomb, we created our background plates by shooting footage in which we racked focus from a locked-off camera. We then compiled all the in-focus areas into one image. This allowed us to do something that’s traditionally not possible in miniature photography: create an infinite depth of field, and then adjust it to taste.”
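
In principle, the “compile” step can be as simple as keeping, for every pixel, whichever frame of the focus rack is locally sharpest. Here’s a minimal single-channel sketch of that idea – a toy illustration, not Method’s actual pipeline:

    import numpy as np

    def sharpness(img):
        # 4-neighbour Laplacian magnitude as a local focus measure.
        lap = np.zeros_like(img)
        lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                           img[1:-1, :-2] + img[1:-1, 2:] -
                           4.0 * img[1:-1, 1:-1])
        return np.abs(lap)

    def focus_stack(frames):
        # frames: greyscale float images from one locked-off rack
        # focus. For every pixel, keep the frame in which that pixel
        # is sharpest, yielding a single all-in-focus plate.
        stack = np.stack(frames)
        best = np.argmax(np.stack([sharpness(f) for f in frames]), axis=0)
        h, w = frames[0].shape
        return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

    # Smoke test on random stand-in "frames".
    plate = focus_stack([np.random.rand(64, 64) for _ in range(5)])
    print(plate.shape)  # (64, 64)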

Corey Stoll plays the nefarious Yellowjacket in Marvel’s “Ant-Man”

For Marvel’s Ant-Man, the latest big movie to explore a miniature world, an enormous amount of research was undertaken in order to develop the right methodology for putting the film’s bug-sized hero on screen. Central to this were decisions on how to achieve a balance between obeying the laws of camera physics and exercising artistic licence.

Speaking to Joe Fordham of Cinefex, visual effects supervisor Jake Morrison revealed how some of these decisions were reached:

“We experimented with a lot of different cameras, and a lot of different optics. From the beginning, we were trying to see how practically we could get small cameras into awkward places and shoot. We did a lot of research into macro-photography, and then ended up realising that what we needed to do was decide what was the right amount of depth of field to complement the action, on a shot-by-shot basis.

“While there are formulae to define the characteristics of macro-photography – and we adopted that physical process for a lot of the movie – there were definitely moments where if we took a 35mm camera down to Ant-Man’s level, the background would be so out of focus that you actually couldn’t see anything in there at all, and the audience wouldn’t know what they were looking at. We’d end up in a world where we were constantly throwing focus from foreground to background, and in our early tests that was actually quite jarring. We decided, because we spent so much time in the macro-world with everything behind Ant-Man out of focus, we wanted to make our out-of-focus backgrounds look beautiful.”

Watch Scott Lang dodging bullets in this clip from Marvel’s Ant-Man:

Don’t forget – the full story behind the visual effects of Ant-Man will be published in the fall edition of Cinefex magazine, available to preorder now.

Small is Bigger Than Ever

Stories about miniaturisation have been around for a long, long time. In 1726, Jonathan Swift chronicled the extraordinary voyages of Lemuel Gulliver, in which his satirical novel’s titular hero first meets the diminutive denizens of the land of Lilliput, then endures his own comparative reduction when he enters the giants’ realm of Brobdingnag.

Gulliver’s Travels contains a good deal of philosophising on the importance of size, exploring the many metaphors inherent in the theme of miniaturisation. Years later, in his 1905 work Three Thousand Years among the Microbes, Mark Twain planted his miniaturised protagonist Huck inside the body of a tramp specifically to have him spout forth on the human condition.

Based on the novel by Richard Matheson, The Incredible Shrinking Man is a compelling commentary on humankind’s attempts to maintain both its identity and independence in an increasingly overwhelming world. At its climax, the film delivers the most powerful miniaturisation moment in the history of cinema. As Scott Carey finally dwindles to the point of vanishment, he declares in a moving soliloquy:

“I looked up, as if somehow I would grasp the heavens: the universe, worlds beyond number, God’s silver tapestry spread across the night. And in that moment, I knew the answer to the riddle of the infinite … that existence begins and ends in man’s conception, not nature’s. And I felt my body dwindling, melting, becoming nothing. My fears melted away, and in their place came acceptance. All this vast majesty of creation, it had to mean something. And then I meant something, too. Yes, smaller than the smallest, I meant something, too. To God, there is no zero. I still exist!”

But fear not – miniaturisation doesn’t have to be heavy-going. Take the shrunken heroes of Edwin Pallander’s 1902 novel The Adventures of a Micro-Man – they spend most of their time stranded in a domestic garden dodging bugs.

Does that plotline sound familiar? If so, let’s not worry overmuch. Let us simply enjoy the time we spend in these diminutive realms, and accept the universal truth which permeates all those tales in which pint-sized protagonists square up against gigantic odds:

Small worlds make for some pretty big adventures.


I’ve mentioned just a handful of miniaturisation movies in this article. Is one of them your favourite? Which teeny-tiny film is the biggest in your eyes?

“The Incredible Shrinking Man” photograph copyright © 1957 by Universal Pictures. “Honey, I Shrunk the Kids” photographs © 1989 by Buena Vista Pictures Distribution, Inc. “Innerspace” photographs copyright © 1987 by Warner Bros, Inc. “Ant-Man” photographs copyright © 2015 by Marvel Entertainment.

“Ted 2” – VFX Q&A

"Ted 2" animation and visual effects by Tippett Studio and Iloura

In 2012, unsuspecting movie audiences were introduced to Ted, the animated teddy bear star of Seth MacFarlane’s irreverent comedy of the same name. Ted was a hit at the box office, rapidly gaining status as one of the biggest-grossing R-rated comedies of all time.

Now, the potty-mouthed plush toy is back in the sequel, Ted 2. The new film sees Mark Wahlberg reprising his role as Ted’s buddy and original owner, John Bennett, and chronicles the bawdy bear’s attempts to prove that he has the same legal rights as a regular human being … while amusing and offending pretty much everyone he encounters along the way.

Key to the success of both films was the convincing creation of Ted himself. For the sequel, as for the original movie, animation and visual effects duties were divided between Tippett Studio and Iloura, with additional support from Weta Digital, and with Tippett Studio’s Blair Clark fulfilling the role of production VFX supervisor.

Tippett Studio delivered around 600 shots of Ted, including a break-in sequence, a scuba dive, and a parody of John Candy’s in-car singalong to Ray Charles in Planes, Trains and Automobiles. Iloura delivered approximately 1,000 animation shots during their nine-month assignment.

For this exclusive roundtable Q&A session, Cinefex brings together insights from the following key artists at Iloura and Tippett Studio:

  • Eric Leven, Visual Effects Supervisor, Tippett Studio
  • Glenn Melenhorst, Visual Effects Supervisor, Iloura
  • Colin Epstein, Senior Compositor, Tippett Studio
  • Brian Mendenhall, Animation Supervisor, Tippett Studio
  • Jeff Price, VFX Editor, Tippett Studio
  • Howard Campbell, Lead Technical Director, Tippett Studio
  • Niketa Roman, PR Specialist, Tippett Studio

Now, without further ado, let’s talk teddy bears!

"Ted 2" animation and visual effects by Tippett Studio and Iloura

On the original Ted, the visual effects load was shared between Iloura and Tippett Studio. Is that how it worked on the sequel, and was the original team an automatic choice for the production?

GLENN MELENHORST: The old team were definitely brought back together for Ted 2. On the first film, we shared Ted around fifty-fifty with Tippett. This time around, Iloura completed twice as many shots as we did on the first film, sharing the rest with Tippett and a small portion with Weta.

ERIC LEVEN: I think there’s always a hope that you’ll be asked back, but you can never assume it’ll be so – ask Boss Films about Ghostbusters 2! There’s always love and friendship in Hollywood, but there’s always money, too. So we had to bid for the work and meet their price. Certainly Tippett and Iloura had a leg-up, but it was never a given.

GLENN MELENHORST: We were approached from the start to be part of Ted 2, and it was down to our availability and the bid. Seth was so happy with the work on the first film it’s no surprise he wanted us and Tippett back on this next adventure. As with the first film, Blair Clark was show VFX supervisor. He is an awesome supervisor from Tippett and he really understands both Seth’s aesthetic and how we work here at Iloura. As a company with a strong creature animation focus, we have always felt culturally aligned to Tippett and really enjoy working with them.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

How closely involved was Seth MacFarlane with the visual effects process?

ERIC LEVEN: I think it’s fair to say that Seth IS Ted in many respects, so he was instrumental in Ted’s performance. He not only provided motion capture, voice acting, and video reference, but also detailed animation performance notes. In addition, he always made sure that Ted stayed on-model.

GLENN MELENHORST: Seth was closely involved in reviewing our dailies from start to finish. Given his background in 2D animation, his focus was more on the subtleties of the animation than the nuts-and-bolts FX side of things, while Blair made sure the work between the studios matched and all the technical aspects of the job were on track.

Jumping back briefly to the first film, how was the original Ted character designed and rigged to give him that authentic teddy bear look?

GLENN MELENHORST: The original Ted was based on some rough designs from Seth. When it came to making the actual model, we collected several teddy bears and studied them to make sure our asset had all the right seams and panels. We had to make sure the whole way from model to rigging to grooming that we were creating a plush toy and not an animal or cartoon character.

ERIC LEVEN: The model and rig are generally pretty simple. The face rig is a bit more complex to make sure that we can get the correct range of expressions from Ted. What makes his face challenging is that he can’t move his eyes, his nose doesn’t change shape, and he only uses his eyelids for a very occasional blink. We rely much more on the eyebrows and mouth but we – and Seth – are constantly watching to make sure his face isn’t too misshapen or otherwise off-model.

GLENN MELENHORST: We also put a lot of irregularity and asymmetry into him so he would feel like a toy who’d been around for thirty-odd years and been beaten up a bit.


Did you develop a new Ted for the sequel, or did you just bring the old bear out of storage?

GLENN MELENHORST: I guess from a production perspective, and an audience expectation standpoint, Ted needed to be his old lovable self. But, as you can imagine, every film teaches you things about animation and pipeline, and after the first film we were keen to reinvent some of our workflows and rendering tech to make shots easier to turn out, as well as to step up the look a notch.

ERIC LEVEN: We were re-treading old ground in the sense that this is a different story about the same characters. Mark Wahlberg is back, and Ted is back. We don’t want a different Mark Wahlberg, and we don’t want a different Ted.

GLENN MELENHORST: Another thing to keep in mind is that software and hardware continue to march forward, speeding things up and allowing us to discard some of the cheats and workarounds and use a more unified raytrace lighting pipeline. In terms of animation, the process was much the same, although this time we went into the show already understanding much of the nuance that Seth was after for the character of Ted.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

Specifically, what changes did you make to the digital teddy bear for Ted 2?

GLENN MELENHORST: We made a few nips and tucks, mainly just tidying up a couple of things that had bugged us on the first film, such as messy hair around the lips which interfered with lip-sync. We reworked his facial rig slightly to make him a little easier to animate, and adjusted his hands so he could articulate a bit more easily.

ERIC LEVEN: The real changes we made at Tippett Studio involved upgrading our shading model to allow for more realistic fur lighting. Because of this, we didn’t need to futz with Ted too much in the comp, and he looked basically correct right out of the render. On Ted, there was a lot more playing with the look in the comp, which was obviously not ideal. Tippett Studio also transitioned to Katana on Ted 2 – that provided a great deal more flexibility for the TDs, and much faster turnaround of lighting tests and changes.

HOWARD CAMPBELL: We used environmental images captured on-set to mimic subtle variations in light and colour. This really improved our ability to integrate Ted into the scene and tie him in with the actors. We saw real leaps in details such as fur quality and reflections, but were still able to maintain the look and feel of the bear from the previous film. It’s the same bear, only better.

GLENN MELENHORST: The biggest changes we made at Iloura were improvements in our rendering technology. On Ted, we used a hybrid pipeline of raytracing in V-Ray and REYES in 3Delight. This time around, we still opted for a hybrid approach but put more emphasis on 3Delight, which we now also used for raytracing. We also completely reworked our cloth pipeline, using Marvelous Designer to build as well as simulate our clothing. This gave a very robust and realistic result, even when Ted was performing outrageous moves, such as in the main-title dance routine.

Watch the featurette Ted 2: A Look Inside.

Animated characters have benefited hugely from improved flesh and muscle simulations. Is that of any use when you’re animating a stuffed toy?

ERIC LEVEN: For a teddy bear, we didn’t need any muscle sims. What brought the model to life was the use of cloth simulations to get the right shape and weight of his body beneath the fur.

GLENN MELENHORST: We did do a full cloth solve to simulate the effect of his body being made of fabric. This meant if Ted bent over or twisted his body, you would get creases and folds appearing.

Were there any special requirements for Ted’s fur?

GLENN MELENHORST: On the original Ted, we had a scene with Ted in the bath. That required custom groom and shaders. This time, there were no real special requirements for his fur … other than to look real!

ERIC LEVEN: The effects model was basically the same on this film. There were new costumes to deal with – a raincoat, a scuba suit, a hooker outfit – that provided their own aesthetic challenges, but nothing technically new.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

Stuffed toys often pop up in horror films – there’s something inherently creepy about seeing them come to life! How did you get around this problem?

GLENN MELENHORST: I think dolls in horror films are either motionless or spin their heads slowly with creepy music, which makes them seem more threatening. Ted moves more or less like a human, and he talks and cracks jokes, which is enough to make audiences empathise with him.

ERIC LEVEN: Our very first test of Ted for the first movie DID look creepy! We wanted Ted to appear worn and ragged – the way you might imagine a thirty-year-old stuffed animal would become. So we had a few rips that had been sewn up and a big ratty cloth patch that covered a hole. His fur was matted and his expression was just generally pissed off, even when he was joking. Looking back, all these things made Ted look pretty creepy. The key was to make him look like a more ordinary stuffed animal that wasn’t quite so raggedy. Worn and old, yes, but not dirty. We also played his expression more sardonic and less angry. That helped us relate to him rather than being repulsed.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

What about Ted’s eyes? They’re really just blanks. How do you stop them looking dead?

COLIN EPSTEIN: It would be fair to say that Ted’s eyes got more attention per shot than just about any other element – except perhaps his fur. Since we’re so trained to focus on someone’s eyes to glean emotion, we did everything we could to make his eyes not just look physically real, but to really add to the intent of the scene. Every choice was based on that goal, whether he needed an angry glare, a stoned vacancy, or a mischievous twinkle.

GLENN MELENHORST: Naturally, having no whites in the eyes means Ted has to turn his whole head to make eye-lines work. With no moving parts to his eyes, we had to achieve sidling glances, stares, astonishment, and so on, with body language and subtle eyebrow animation. It’s very much like Snoopy or other characters who have dots for eyes: the tilt of a brow can really communicate a lot. With a character like Ted, you can have him just stare, completely motionless, and if you add welling music and a slow push in, the audience will conclude that he’s sad. You animate your intention, and the audience fills in the blanks.

How important is the way the eyes reflect their surroundings?

COLIN EPSTEIN: We had reference from a “stuffy” for almost every shot, but we only used that as a starting point. Each sequence environment got its own reflection set-up for Ted’s eyes, using set data and images, and then we’d manipulate that based on the feel his eyes needed. We’d often remove reflected details that were really there but proved visually distracting. For instance, the Comic-Con sequence was predominantly lit with a grid of lights on the stage ceiling. But accurately reflecting those rows of bright pinpoints in Ted’s eyes fought with his eyeline and cluttered things up visually, so we played those down. A key look note throughout the show was to make sure Ted’s eyes looked like old plastic, instead of brand new and shiny, or biologically alive. The practical bear had very clean and shiny plastic eyes, so we used them mostly for reflection and kick position reference.
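To make the idea of “playing down” a reflection concrete: one simple approach is to key the bright pinpoints in the captured environment image and grade them down before the image drives the eye reflections. Here is a minimal sketch in Nuke’s Python API; the node graph and file path are invented for illustration and are not taken from Tippett Studio’s actual pipeline.

```python
import nuke

# Environment image captured on set (path is hypothetical)
env = nuke.nodes.Read(file='/setdata/comic_con_stage_latlong.exr')

# Key the bright stage lights so only the distracting pinpoints are selected
key = nuke.nodes.Keyer(operation='luminance key')
key.setInput(0, env)

# Dim the keyed lights rather than removing them outright,
# so the eyes still read as reflective old plastic
dim = nuke.nodes.Grade(white=0.3)
dim.setInput(0, env)
dim.setInput(1, key)   # mask input: restrict the grade to the keyed lights
```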

"Ted 2" animation and visual effects by Tippett Studio and Iloura

How did you achieve the “old plastic” look in the renders?

COLIN EPSTEIN: Ted’s eyes were heavily treated in the comp passes. Since we knew going in that adjusting Ted’s eyes was always going to need specific attention, the comp scripts had a standard set-up that let the compositors control just about every aspect of each eye separately. A stock row of blurs, colour grades and transforms was there in Nuke when a shot was started. If the kicks landed smack in the centre of Ted’s eyes because of the angle of his head, it often gave him a creepy look we called “devil eyes”. In those cases, we would push the kicks off-centre in the comp to keep him appealing. Another issue was that Ted is modelled wall-eyed, so kicks and reflections in one eye would look drastically different in another. This sometimes made it hard for him to look like he was focused on something specific. So elements in his eyes would be pushed around a bit to counter that. Little adjustments like these happened throughout the show to strengthen Ted’s performance.
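For readers who script their own comps, a per-eye template along those lines might look like the following minimal sketch in Nuke’s Python API. The node graph, names and values are invented for illustration; Tippett Studio’s actual scripts are not public.

```python
import nuke

def eye_controls(src, name):
    """A small blur/grade/transform chain, masked to one eye."""
    matte = nuke.nodes.Roto(name=name + '_matte')          # hand-drawn eye matte
    blur = nuke.nodes.Blur(name=name + '_blur', size=1.0)  # soften a harsh kick
    blur.setInput(0, src)
    blur.setInput(1, matte)                                # mask to this eye only
    grade = nuke.nodes.Grade(name=name + '_grade')         # dull 'old plastic' look
    grade.setInput(0, blur)
    grade.setInput(1, matte)
    nudge = nuke.nodes.Transform(name=name + '_nudge')     # reposition a kick
    nudge.setInput(0, grade)                               # (in practice applied to
    return nudge                                           # the isolated kick element)

render = nuke.nodes.Read(file='/renders/ted_beauty.####.exr')  # hypothetical path
left = eye_controls(render, 'eye_L')
right = eye_controls(left, 'eye_R')   # chains stack; each is masked to its own eye
```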

For the original Ted, Seth MacFarlane performed scenes off-camera in a motion capture suit, with a stuffy used as a placeholder in shot. Was the same methodology used for Ted 2?

GLENN MELENHORST: Yes, it was very much the same this time around. Seth’s mocap setup – called Moven – provided us with reference for Seth’s upper body, principally his arms and the angle of his head. The system captured no facial performance, and no leg or body motion. We used the mocap as a basis for keyframing, adjusting the data to fit Ted’s body and modifying it where needed.

JEFF PRICE: For each camera angle, we would receive both a stuffy pass – with the VFX supervisor acting out the scene with the stuffed bear – as well as a greyball pass. We could then compare the stuffy size and placement in each shot with our animated Ted, as well as the light and shadows with the greyball in the scene.

GLENN MELENHORST: Whereas Seth was able to shoot a sideways glance, Ted needed to turn his whole head to look sideways. Often, therefore, the mocap was used only as reference. Combined with captured video, it gave the animators all they needed to understand Seth’s intention for the shot.

JEFF PRICE: As was the case with the first Ted movie, Seth MacFarlane’s mocap performance generally served as a rough first animation pass, with our animators altering and fine-tuning Ted’s movements into a believable performance.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

Animators are also actors. Is it important to “cast” the right animator for a particular shot?

GLENN MELENHORST: Animators are definitely actors. That’s something I believe quite strongly. We find a lot of animators who have skilled-up through online training courses, or through some of the bigger animation studios, tend to have learned what I can only describe as “Chaplin-esque 1920s silent-era acting” – overly expressive, hyper-extended, vaudevillian extremes. This has a place in many children’s films, but is not so useful when animating Ted. One of the best animation reels I ever saw was a study of a guy reading a newspaper. The subtlety was so well-observed – I loved it. Ted required that sort of observation, because his performance was so stripped back that every tiny key mattered.

BRIAN MENDENHALL: We spend a lot of time trying to keep consistency between performances. It is very important to the believability of the character as a whole. The worst note you could ever get from the director is to say that he wants to see the animator from one shot handle the animation on another (which did happen once). We don’t want the client to ever think there are any individuals behind the scenes. I’d like to think you can’t spot the difference when you watch the movie. Maybe I can – but not you!

GLENN MELENHORST: More importantly than spotting differences between animators, we worked to not let the audience detect differences in Ted between studios. Matching renders and the quality of light and integration is one thing, but continuity in animation across vendors is another beast altogether. Naturally, at Iloura we can tell a Tippett shot because we didn’t do it – but I hope it’s a harder task for the audience.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

What was the most difficult scene you had to work on involving Ted?

GLENN MELENHORST: One of the greatest challenges was a shot where Ted fights a goose. The shot was turned over in the last two weeks of delivery. We had a simple goose asset provided, but we had to rig, animate and feather the goose in short order – no small feat. Also, it was a lengthy shot. We accomplished it by dividing the animation between animators and splicing the acting together. As well as it just being a difficult shot to turn around, Seth was still actively blocking out ideas on the shot during the last week. That meant we needed to run everything in parallel as much as we could, building our lighting pipeline on blocking animation, re-rendering as new animation was fed through the pipe, and compositing with whatever was at hand, just to get the thing done.

And the most complex?

GLENN MELENHORST: The title sequence of the film centres on a Busby Berkeley-style dance number with Ted out front of a hundred or so dancers. The main set piece – an oversized plywood wedding cake – needed clean-up and roto to remove screws and scuff marks, so we spent time making it look pristine. The same was true of the dance floor, which was mirrored but scuffed, and while we didn’t clean it out entirely, we did remove a lot of the visual clutter while retaining the complex reflections of the dancers. This extended to painting out camera rigs and light rigs, and digitally extending the curtains and floor.

As well as cleaning up the environment, did you also have to edit any of the dancers’ performances?

GLENN MELENHORST: In the last shot of the sequence, the camera tracks back and up as the dancers form a perfect triangle. The girl on the furthest right didn’t hit her mark, and so the back right point of the triangle was a mess. One of our 2D guys, Johnathan Sumner, spent about a month reconstructing the shot by moving the girl, restoring all the arms, legs and flowing gowns that were originally across her, filling in the gaps left by her being slid sideways, and fixing the reflections.

Describe how you got Ted dancing in the opening number.

GLENN MELENHORST: Ted’s animation for the sequence had three real influences. Firstly, we matched mocap provided by the choreographer for the sections where they knew what they wanted. Secondly, there were times when we needed Ted to copy the dancers behind him and dance in sync. Thirdly, totally custom animation was needed when Ted was not the focus of the shot, when he needed to backflip or tumble or act in a way not possible to capture, or simply when the choreographer left it up to us to come up with the performance. We blocked the dance, refining and revising as the choreographer and Seth edited and revised the sequence.

One member of our animation team, David Ward, has a strong background in dance and really took to the sequence. It was a perfect fit, and he and the others on the sequence turned out some really great animation. It was not enough simply to match the dancers or use the choreographer’s mocap, because Ted is so small. We needed to exaggerate his silhouette, poses and timing, snapping up large gestures to help him really pop.

Our lighting team had the task of not only matching the studio lighting, but also creating believable reflections in the scuffed floor. They had to make Ted glow as a bright dress passed him, or darken him when he was shadowed by the passing dancers – we rotoscoped many of the dancers to isolate the jungle of legs. The studio lighting also changed colour and intensity through the shots, all of which needed to be tracked and accounted for. The compositing team had the arduous task of interleaving Ted behind all the dancers, blending shadows and putting the final spit and polish on the shots. Overall it was a unique sequence and a great team effort.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

We’ve heard a lot about Ted himself. Are there any memorable VFX shots that don’t feature the bear?

GLENN MELENHORST: The opening shot of the film is a cosmic zoom from the Universal logo orbiting the earth down into a stained glass window of a church in Boston. The shot was one of the first to be turned over and one of the last to be finished. As in the first film, we wanted the camera move to be a single unbroken shot, with no wipes or cheats between takes as is common in these things.

How was the cosmic zoom shot put together?

GLENN MELENHORST: Satellite and aerial photography was sourced and purchased, and our FX lead, Paul Buckley, began to block the camera move along with our match-move department. As the camera approached the city, we decided on what to build in 3D and what to leave as 2D matte-painted elements. Eventually, we settled on building ten city blocks of Boston, along with some outlying skyscrapers. This build included all the vegetation, cars, pedestrians, kids on swings, debris in gutters, and so on. Many months of modelling, texturing, animation and matte painting went into the shot, and we are pretty proud of how it turned out.


Thanks to everyone for contributing to this Q&A. Any closing thoughts on Ted 2?

GLENN MELENHORST: Well, I do want to say what a pleasure it was for us at Iloura to work with Seth and Blair again. Iloura and Tippett are like sister companies – we really enjoy working together. Seth is always so complimentary about our work, and it is amazing how much that enthuses everyone in every department to put in a huge effort. Likewise, Blair was always so generous with his praise and support, and we all count him as an honorary Ilourian!

NIKETA ROMAN: I know it’s been said, but we at Tippett Studio really want to emphasise what a strong director Seth MacFarlane is and how great it is to work with him. He’s got a background in animation and a very strong vision of who Ted is as a character. He speaks the language of an animator and communicates what he wants very clearly. That clarity and ease of collaboration is really what makes Ted so successful … it’s also just plain fun to work on a movie where you spend so much time laughing. We had a great time!

Special thanks to Niketa Roman, Fiona Chilton, Simon Rosenthal, Ineke Majoor and Anna Hildebrandt. “Ted 2” photographs copyright © 2015 by and courtesy of Universal Pictures, Media Rights Capital, Tippett Studio and Iloura.

“Maggie” – VFX Q&A

Maggie - VFX Q&A with Aymeric Perceval of Cinesite

Buy a ticket for a movie about zombies starring Arnold Schwarzenegger, and you could be forgiven for thinking you’re in for two hours of high-octane, undead action.

Buy a ticket for Maggie, however, and what you’ll get instead is a contemplative independent feature in which Arnie exchanges muscles for melancholy in a dramatic role as devoted father Wade Vogel.

As the world is gripped by a viral epidemic which gradually turns its victims into flesh-eating monsters, Vogel rescues his infected daughter, Maggie (Abigail Breslin), from a clinical “execution” at the hands of the authorities. Devoting himself to Maggie’s care, Vogel is forced to witness her steady decline … and accept the dreadful truth of her eventual fate.

Maggie is directed by Henry Hobson, with Ed Chapman in the role of production VFX supervisor. A number of key visual effects scenes were handled by Cinesite, under the supervision of Aymeric Perceval.

Cinefex spoke to Perceval about the super-subtle digital makeup techniques used to enhance the look of Maggie, and the challenges of creating the broken-down city environments seen at the start of the film.

Arnold Schwarzenegger and Abigail Breslin star in "Maggie"

How did you come to be involved with Maggie?

I had been involved with the bidding team as compositing supervisor when I was asked to have a look at Maggie and come up with a methodology. Henry and I quickly found ourselves on the same frequency and, because it was mostly 2D-orientated, I was appointed as Cinesite’s VFX supervisor.

This was my first VFX supervisor credit, so they paired me with senior VFX producer Diane Kingston to make sure the house would not burn down! At times I also had the input of VFX supervisors Andy Morley (who also jumped in as CG Supervisor) and Simon Stanley-Clamp.

How closely did you work with the director, Henry Hobson?

We worked very closely. We had bi-weekly cineSync sessions with Henry and Ed to catch up on the work, and emails were flying every day for all the extra questions. It was a very constructive and positive atmosphere. Henry’s extra requests made so much sense within the grand scheme of things that we went with the flow and tried to accommodate as much as we could. Even though there was a lot to do in so little time, everybody managed to keep it very smooth, and I’m massively thankful for that.

What was the scope of Cinesite’s work on the show?

The shots can be split into two bodies of work. The first was applying a “Dead World” look to 49 shots. The second was 81 shots that involved zombifying Maggie and other characters.

In total, 60 artists helped to deliver these 130 shots over a period of two months. The work mostly happened in London, but our Montreal office gave us a very helpful hand with prep and tracking, as the first few weeks were quite a rush. We had been asked to deliver final versions of the 70 most complicated shots in only one month for the Toronto Film Festival. However, Lionsgate picked up the distribution of the film and this deadline was cancelled.

Cinesite enhanced location footage with visual effects to create 49 “dead world” shots for “Maggie” – original plate photography.

The final composite “dead world” shot incorporates digital fire and atmospheric effects.

What part do the Dead World shots play in the film?

The movie starts with establishing shots of Maggie wandering at night in the streets of an abandoned Kansas City. When she gets arrested, her father Wade travels in from the countryside to bring her back home. Because the rest of the film is focused on the characters, it was very important to make sure the universe of Maggie was clearly defined by the end of this establishing sequence. The worst of the disease has passed and the world has been left in a state of decay and abandonment.

How did you go about creating the Dead World shots?

Because every shot is happening in a different place, we decided to use a 2½D approach. Photoshop matte paintings were projected in Nuke on to cards, sky domes and basic geometry which we modeled with Maya. We created the cameras with 3DEqualizer, using Google Maps as much as we could as we had very little information from the set.
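As a concrete illustration of that 2½D approach, here is a minimal sketch in Nuke’s Python API: a painting is projected through the solved camera onto stand-in geometry, then re-rendered through the shot camera for parallax. The paths and node names are invented, and the production scenes were of course far more elaborate.

```python
import nuke

painting = nuke.nodes.Read(file='/jobs/maggie/dmp/street_v01.exr')  # hypothetical path

proj_cam = nuke.nodes.Camera2(name='projectionCam')  # stand-in for the 3DEqualizer solve
shot_cam = nuke.nodes.Camera2(name='shotCam')

# Project the painting through the projection camera
project = nuke.nodes.Project3D(name='dmp_project')
project.setInput(0, painting)
project.setInput(1, proj_cam)

# A card stands in for the sky domes and simple Maya geometry
card = nuke.nodes.Card2(name='facade_card')
card.setInput(0, project)

# Re-render the projected geometry through the moving shot camera
render = nuke.nodes.ScanlineRender()
render.setInput(1, card)      # scene/geometry input
render.setInput(2, shot_cam)  # camera input
```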

Environment lead Roger Gibbon and his team destroyed houses, burned cars, tagged walls, killed every hint of life and re-invented locations. They also added abandoned football fields, rusty water towers, empty flyovers, dusty debris on the road and a Kansas City skyline. This allowed us to build a Dead World library, which we then re-used in other shots to make maximum use of the small budget.


How were the digital matte paintings composited into the production plates?

The matte paintings were passed to comp, who integrated them into the cleaned-up scans using roto and additional elements from our SFX library. One of the most complex shots required us to burn an entire field. Senior compositor Dan Harrod did a brilliant job using fire and smoke on cards, and Nuke particles for the embers.
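As a rough idea of the ember work, NukeX’s stock particle toolset can emit an image sprite and push it around with forces. The sketch below is illustrative only – the node wiring is simplified, the element path is invented, and Cinesite’s actual setup was certainly more involved.

```python
import nuke

ember = nuke.nodes.Read(file='/elements/ember_sprite.exr')  # hypothetical sprite

emitter = nuke.nodes.ParticleEmitter()   # emit copies of the sprite
emitter.setInput(0, ember)

lift = nuke.nodes.ParticleGravity()      # aimed upward so embers rise with the heat
lift.setInput(0, emitter)

drift = nuke.nodes.ParticleTurbulence()  # irregular drift and flicker
drift.setInput(0, lift)

render = nuke.nodes.ScanlineRender()
render.setInput(1, drift)                 # particle system as the scene input
render.setInput(2, nuke.nodes.Camera2())  # shot camera
```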

The team also worked on establishing a palette and defining a look for the digital intermediate, which turned the beautiful green landscapes of the original photography into darker, more ominous tones.

Cinesite’s fully CG freeway interchange shot from “Maggie”.

There’s a fully CG shot looking down on a freeway interchange. Did you do many shots like it?

No, this is the only one.

Can you describe what part it plays in the sequence?

As Wade drives closer to Kansas City, Henry needed a final shot to mark the transition between the empty highway and the decayed urban area. So he had the idea of using some altitude and showing the desolation from above.

We first approached this shot thinking of degrading an actual area of Kansas City – we would have rebuilt it from satellite views. But we quickly moved towards mixing different areas so as to be sure the transition idea was conveyed.

How did you build up the shot?

Our DMP artist Marie Tricart blocked out the view and the core additional elements. We then projected these on to basic geometry in Nuke to work out the camera and the length of the shot. Once it was blocked, we started layering the interchange, introducing multiple levels and heights to make the parallax interesting.

While Marie was adding more elements, changing, refining and destroying areas in Photoshop, modeller Michael Lorenzo added geometry to project on to. Finally, senior compositor Ruggero Tomasino gave the shot some extra 2D love by adding atmospheric elements – like flying newspapers (Nuke particles) and an animated 2D truck which the production team had shot.


Okay, let’s talk zombies. How does the disease affect people in Maggie’s world?

The zombification is a very slow process. Weeks pass by between the moment Maggie gets bitten on the arm and when she finally loses control. Henry’s idea was to show the disease spreading from the wound by a network of dark veins gradually covering Maggie’s whole arm, her shoulder, her neck and finally her face. At the same time, her eyes would start to cloud, the skin around them would begin to rot, and dry scabs would appear around the veins.

The whole idea is more about disease than gore. With the visual aesthetic of the movie, it had to be nearly poetic. Not a word you often use on zombie movies!

Abigail Breslin as Maggie

How much of the zombification had been done using makeup?

Because we arrived on the project after filming, we had to start from what had been done on set. The makeup team had painted veins on Abigail and used liquid latex for the scabs and the rot. Henry was happy with how the first stages were showing in camera but he felt that the last ones needed more work.

What did you add to the makeup?

The budget and timing did not allow for CG skin, so we worked around the existing makeup. We used the painted veins as a base, adding more and covering them with multiple layers of bruises and scabs. About twenty displacement layers were sculpted in Mudbox, together with additional displacement using textures created by senior texture artist Laurent Cordier. This gave us a flexible and non-destructive approach. Throughout our work, we kept track of the spread of the disease so as to keep the chronology accurate.
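The non-destructive part is the key: each sculpted layer stays separate and can be dialled up or down per shot. Conceptually it reduces to a weighted sum of zero-centred displacement maps, as in this toy Python/numpy sketch (layer names and weights are invented for illustration):

```python
import numpy as np

def combine_displacement(layers, weights):
    """Blend sculpted displacement layers without baking them together.

    layers  -- list of float32 arrays, zero-centred displacement per layer
    weights -- per-layer multipliers, e.g. scabs dialled down for an early
               stage of the disease without re-sculpting anything
    """
    out = np.zeros_like(layers[0])
    for disp, weight in zip(layers, weights):
        out += weight * disp
    return out

# e.g. veins at full strength, scabs at 60%, bruises at 30%:
# final = combine_displacement([veins, scabs, bruises], [1.0, 0.6, 0.3])
```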

Cinesite delivered 81 shots in which subtle visual effects were used to enhance the effects of the zombifying disease on Maggie and other characters.

There are lots of extreme close-ups of Maggie’s body, often with low depth of field. Was this particularly challenging for you when applying digital makeup?

That visual style was one of the reasons why we enjoyed the project so much. It is very intimate – it felt like fresh air to our greenscreen eyes! But yes, it was definitely challenging. Abigail’s skin was completely baby smooth, so the tracking team did an incredible job, frame by frame. Imagine all these close-ups of body parts subtly twitching, with a focus point constantly traveling … and no tracking markers!  As the disease progressed and more painted veins appeared on Maggie’s skin, this task became a little easier.

Close-ups of Maggie’s feet going in and out of focus, with their subtle twitching, were massively tricky to track, even with the existing makeup. Senior matchmover Matt Boyer came up with sets of distorted planes which proved very helpful, and matchmove lead Arron Turnbull and his team did a magnificent job.

For some parts, we created geometry from the on-set pictures. Our head of assets, James Stone, modeled Abigail’s head based on photos provided to us. Incidentally, when we did receive measurements, our digital model was only 2mm out!

Abigail Breslin in "Maggie"

How many stages of decay were there altogether?

In total, we ended up with five Maggies. “Maggie 0” is the Maggie we meet at the beginning of the movie. She has just a tiny bit of red, flaky skin around the eyes. We did no work on her. “Maggie 1” sees her iris starting to cloud, some veins appearing on her forehead and in her neck, and the redness around her eye getting stronger.

“Maggie 1” is the stage that was lacking the most consistency on set, and so required the most invisible work from us. The effects had to be soft and discreet – very difficult. Because we had all the reels in-house, we managed to create templates to grade the scan with textured irises, veins and marks, based on the shots we were working on as well as the ones we would not touch.

How did the decay progress beyond “Maggie 1”?

“Maggie 2” takes over after she starts eating meat. At that point, we had to start the transition to our final stage Maggie. We pushed the eye cataracts further and used the projected veins pattern of a “Maggie 2.5” stage as mattes to grade, distort and fake a gentle 3D effect on her skin.

“Maggie 2.5” was at first supposed to be a more advanced version of “Maggie 2” – just before she turns into a zombie – but it soon became obvious that at this stage we would have to start using CG, so it earned its own codename. To create it, Laurent simplified the texture and displacement from “Maggie 3”.

So “Maggie 3” was the final stage?

Yes. “Maggie 3” was the properly CG-enhanced version which we designed as a last stage. Because CG skin wasn’t an option, we blended our V-Ray-rendered CG veins with the live action plate. The lighting had to be spot-on for the transition to work.

Did you pay particular attention to the eyes – those windows of the soul?

Yes, Maggie’s eyes go through a whole evolution over the course of the movie. We would often need to apply some effect to them at the same time as we would apply effects to her skin. We rigged the eyes of our digital model to help the match-move of the skin, so we ended up with usable animated geometry and UV-ed eyeballs to play with.

The eyes of “Maggie 0” have a soft white veil – this was achieved during the shoot with lenses, or in post with grading. We pushed the lens-work a bit further by making a cloudy layer appear within the iris of “Maggie 1” and adding more pronounced little red veins on the edges.

“Maggie 2” goes full-on cataract. These are the most visually surprising. The effect was achieved by mixing the eyeballs with a distorted version of themselves using a network of organic shapes. This allowed us to keep a good part of the performance. We then added some tiny localised grading to give it a warm and humid “yolk” effect.
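In Nuke terms, that mix might be built something like the sketch below: a noise pattern drives a distortion of the eye, and the distorted copy is merged back through organic roto shapes so much of the real performance survives underneath. The graph is invented for illustration and is not Cinesite’s actual script.

```python
import nuke

eye = nuke.nodes.Read(file='/plates/maggie_eye_cu.####.exr')  # hypothetical plate

# Copy an organic noise pattern into the forward.u/v channels
# so IDistort can read it as a UV offset map
noise = nuke.nodes.Noise()
copy = nuke.nodes.Copy(from0='rgba.red', to0='forward.u',
                       from1='rgba.green', to1='forward.v')
copy.setInput(0, eye)    # B: the eye plate
copy.setInput(1, noise)  # A: source of the copied channels

warp = nuke.nodes.IDistort()
warp['uv'].setValue('forward')  # distort the eye by the noise offsets
warp.setInput(0, copy)

# Merge the distorted copy back over the original through roto shapes
mix = nuke.nodes.Merge2(operation='over')
mix.setInput(0, eye)                # B: original eye performance
mix.setInput(1, warp)               # A: distorted 'cataract' copy
mix.setInput(2, nuke.nodes.Roto())  # mask: organic shapes

warmth = nuke.nodes.Grade(name='yolk_warmth')  # tiny localised warm grade
warmth.setInput(0, mix)
```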

“Maggie 2.5” and “Maggie 3” see the white cataract disappear. Hundreds of little black veins invade her eyeballs to make it look like the disease is now taking control over her body.

Cinesite tracked production footage of Abigail Breslin as Maggie, then used digital techniques to enhance the practical makeup effects used on-set. Left: original plate. Right: final composite.

Tell us about the scene in which Maggie loses her finger.

After Maggie falls and cuts her finger, she starts bleeding a lot; because she doesn’t feel anything any more, she simply cuts it off. On set, they had used a finger prosthetic for the first shots. We started by removing the rig, cleaning up the junction with the hand and removing the wobble. Then we match-moved a cut, sliced a hole at the bottom and regraded the whole finger to make it look like the inside was getting soaked with blood. When she cuts it off, we went for a CG stump.

They had done a good job of hiding most of Abigail’s finger on set, but the amount and the dryness of the blood on her hand was inconsistent. So James and Laurent modelled the finger, and lighting and texture artist Peter Aversten created multiple layers of blood textures to aid continuity.

Did you also apply digital makeup to the other zombies seen in the film?

There are not that many of them, but Maggie is certainly not the only zombie in the movie, and they all required enhancement. The best ones to talk about are maybe Nathan and Julia – a father and daughter we meet in the forest just after Maggie cuts her finger. The makeup was good, but unfortunately it was not showing enough on camera. We painted veins over their faces and their hands and match-moved them. And just because we could, we also warped and regraded them to make them look much skinnier and way less healthy.

Maggie was your first project as VFX supervisor. Was it a big learning curve?

Well, because it was only two months, there was little room for error. Luckily, the Cinesite team has been very supportive. Thanks to their talent and positive approach, the ride has been a pretty good one. Hopefully, they’ve enjoyed it too!

Maggie is released on Blu-ray and DVD today, 7 July 2015.

Special thanks to Helen Moody. “Maggie” photographs copyright © 2015 Lionsgate and Lotus Entertainment.