“Pixels” – VFX Q&A

by Graham Edwards

It’s game over for humanity. Well, it is by the end of Patrick Jean’s 2010 short film Pixels, in which a swarm of 8-bit videogame characters escapes from a trashed TV and proceeds to turn first New York City, then the entire planet, into multi-coloured cubes.

Now, Jean’s original concept has been expanded into a full-length feature directed by Chris Columbus. Sharing its title with its progenitor, Pixels stars Adam Sandler and Michelle Monaghan … not to mention a host of re-spawned retrogamer favourites like PAC-MAN, Donkey Kong and Q*bert.

Ironically, in order to create the desired old-school look for the movie’s digital characters, the filmmakers needed to deploy state-of-the-art visual effects. Heading the effects team were production VFX supervisor Matthew Butler and VFX producer Denise Davis. The majority of the effects shots were assigned to Digital Domain and Sony Pictures Imageworks, with nine other VFX companies playing supporting roles.

One of the biggest challenges faced by this extended visual effects team was how to level-up the 1980s game characters to become fully three-dimensional entities. The solution involved discarding traditional flat pixels, and instead constructing the characters using 3D cubes known as “volume pixels” – or “voxels” for short.

So how exactly were the voxelised characters of Pixels brought to life? To find out, we spoke to key artists at Digital Domain, Sony Pictures Imageworks, and a number of the other vendors who joined forces to craft the pixels of Pixels.

"Pixels" including visual effects by Trixter

Sony Pictures Imageworks – VFX supervisor, Daniel Kramer

How did Sony Pictures Imageworks get involved with Pixels?

Lori Furie from Sony Pictures invited us to bid on the work. I met with Matthew Butler and Denise Davis to talk about the challenges, and Matthew and I hit it off pretty quickly – we had similar ideas about how to approach the look of the characters. I was on the show for about a year, which included some on-set supervision for Imageworks’ portion of the work. In December 2014/January 2015 we started getting our first plate turnovers, so actual shot production lasted about 5-6 months.

What was the scope of your work?

We delivered about 246 shots in all, though some of those didn’t make it into the final cut. The bulk of our work was during the chaotic sequences towards the end of the film, where the aliens unleash all the videogame characters on to the streets of Washington D.C. We had to develop a large number of characters for that – 27 in all – as well as the alien mothership.

We also handled the shots in Guam, when the Galaga characters first arrive, as well as the digital White House and White House lawn extensions. And we were responsible for all the Q*bert shots, some of which we shared with Digital Domain.

For "Pixels", Sony Pictures Imageworks digitally re-created a number of hard-to-access Washington D.C. locations, including the White House and its surroundings.

For “Pixels”, Sony Pictures Imageworks digitally re-created a number of hard-to-access Washington D.C. locations, including the White House and its surroundings.

Describe your relationship with the director and production-side VFX team.

I worked very closely with Matthew, both on-set and during shot production, meeting several days a week. Fortunately, he was close by at Digital Domain, which is only about a 15-minute drive from Imageworks. We generally reviewed work in person, with only the occasional cineSync session.

Chris Columbus worked from his offices in San Francisco, and had daily reviews with Matthew and team over a high-speed connection to Digital Domain. It was a very slick system – the VFX production team in Playa del Rey could stream full 2k content to Chris’s projector, and Chris could stream Avid media back. When we had shots to review, I would head to Digital Domain with Christian Hejnal, our Imageworks VFX producer, and review our shots directly with Chris and Matthew.

Matthew and Denise were really great about including Imageworks as a peer in the process, so I was able to present work directly to Chris and hear his notes first-hand. That really tightened up the feedback loop.

Did you take visual cues from the original 2010 short film by Patrick Jean?

We studied Patrick’s short quite closely for inspiration. His short is really charming, and a lot of that charm comes from the very simple shapes and silhouettes of his characters. We quickly learned that over-detailing the characters destroyed what made the original game concepts so engaging, and so we always worked toward keeping the characters as low-res as possible, with just enough voxel resolution to read the animation clearly.

For each game, John Haley, our digital effects supervisor, was generally able to find original sprite sheets and YouTube videos of gameplay for the team to reference. We’d use the sprite sheets for modelling inspiration, and then Steve Nichols, our animation supervisor, would study the gameplay, working as many of its elements as possible into our characters’ motion.

Watch Patrick Jean’s original short film Pixels:

What challenges did you face when translating the 2D game characters into 3D?

The 3D “voxel” look was already established in Patrick Jean’s short, but there are many ways to go about voxelising a character, and to determine how those voxels track to the animation.

For example, should we model the characters with voxels directly, or build them procedurally? Should voxels be bound to the character like skin, or should characters move through an invisible voxel field, only revealing the voxels they intersect? This latter solution – “re-voxelisation” – is akin to rasterising a 2D game character on a CRT: as the sprite moves through screen space, the static pixels on the screen fire on and off.
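Conceptually, voxelising against a static field is just quantising the character’s surface points to a fixed world-space grid. Here is a minimal Python/NumPy sketch of the idea – the function and the point sampling are illustrative, not Imageworks’ actual pipeline:

```python
import numpy as np

def revoxelise(points: np.ndarray, voxel_size: float) -> set:
    """Snap surface sample points to a fixed world-space grid, returning the
    set of cells the character currently occupies. The grid itself never
    moves; as the character translates, different cells switch on and off,
    like a sprite rasterised onto the static pixels of a CRT."""
    cells = np.floor(points / voxel_size).astype(int)
    return {tuple(c) for c in cells}

# Even a tiny translation changes which cells are occupied -- one of the
# drawbacks of the pure static-field approach discussed below.
pose = np.random.default_rng(0).random((500, 3))
changed = revoxelise(pose, 0.25) ^ revoxelise(pose + 0.05, 0.25)
```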

Which solution did you favour?

Chris and Matthew liked the notion that the characters would re-voxelise as they moved – it felt more digital. But our first attempts at a pure, static voxel field posed a few problems.

First, it proved impossible to control the orientation of a voxel relative to the character’s orientation, as the two were independent. On one frame, the voxel faces might be perpendicular to a feature on the character’s body; but after the character turned, those same voxels would sit at a different angle to that feature. This made it difficult to keep the characters on-model.

Another issue was that even very small motions caused the whole character to re-voxelise as it intersected different parts of the static field, which was distracting.

The last big issue revealed itself in lighting. If the voxels were static, and simply turned on and off as the character moved, they never changed their relationship to the set lighting. This made it difficult to shape our characters and make them feel believably integrated. So, while we really liked the idea of a static field, in practice there were too many issues.

Since the static field option wasn’t working out, what did you opt for instead?

We ended up using a hybrid approach, parenting smaller voxel fields to different parts of a character’s body. So, one field might be tracked to the face, another to the chest, another to the upper arms, and so on. These fields moved independently with the rotations and translations of the skeleton. Any deformation – like squash and stretch – would cause re-voxelisation in that region. This calmed the re-voxelisation down to a pleasing level, gave us more control over how voxels were orientated to the characters’ features, and fixed our lighting issues by allowing voxels to rotate through space.
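As a rough sketch of that hybrid scheme, assuming per-part point samples and a 4×4 bone transform (the names and structure here are hypothetical, not Imageworks’ Houdini network):

```python
import numpy as np

def voxelise_part(points_local: np.ndarray, bone_xform: np.ndarray,
                  voxel_size: float) -> np.ndarray:
    """Voxelise one body part in its bone's local space, then carry the cube
    centres through the bone's 4x4 world transform. Rigid bone motion rotates
    and translates the voxels with the part (stable orientation, lighting that
    changes naturally), while only local deformation -- squash and stretch --
    changes which cells are occupied, i.e. triggers re-voxelisation."""
    cells = np.unique(np.floor(points_local / voxel_size).astype(int), axis=0)
    centres = (cells + 0.5) * voxel_size             # cube centres, local space
    homog = np.hstack([centres, np.ones((len(centres), 1))])
    return (homog @ bone_xform.T)[:, :3]             # cube centres, world space
```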

With that decided, how did you then go about building and rigging the character models?

For most characters, we would build a smooth-skinned model with a simple rig. Our FX department, headed up by Charles-Felix Chabert, would build a procedural Houdini network to break up the character into sub-voxel fields.

Even though the characters looked quite simple, they were actually really heavy, with a solid volume of cubes, each cube with bevelled edges. The polygons added up fast! For large scenes with hundreds of characters, we quickly learned that the voxelisation process could take days to complete. Much of our further development was therefore about optimising the workflow. Our final pipeline ended up passing a single point per cube to the renderer, and instancing the cubes at render time.
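The point-per-cube optimisation can be sketched as follows – the attribute names echo Houdini’s point-instancing conventions (P, pscale, orient), though the function itself is an illustration rather than the studio’s code:

```python
import numpy as np

def cubes_to_points(centres: np.ndarray, voxel_size: float) -> dict:
    """Emit one lightweight point per occupied cell, carrying only the
    attributes a renderer needs to instance a single bevelled-cube archetype
    at render time. The heavy cube geometry never exists in the scene file,
    which keeps crowds of voxelised characters tractable."""
    n = len(centres)
    return {
        "P": np.asarray(centres, dtype=np.float32),           # instance positions
        "pscale": np.full(n, voxel_size, dtype=np.float32),   # uniform cube scale
        "orient": np.tile(np.float32([0, 0, 0, 1]), (n, 1)),  # identity quaternion
    }
```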

What approach did you take with the lighting?

Chris Columbus didn’t want the characters to feel plastic. That said, there’s a lot of charm in keeping the characters simplistic and blocky, as seen in the original Pixels short. Chris, Matthew, and Peter Wenham, the production designer, came up with the idea of “light energy”, whereby the cubes emit light from an internal source. This allowed the cubes to retain a simple geometric shape, while still showing hints of complexity burning through the surface.

How did that work for scenes in bright sunlight?

Consider a light bulb outside in the sun – the internal light needs to be incredibly bright to be visible, and once you get there you’ve lost all shape on the object. That makes it really difficult to integrate it into the scene. After much trial and error, we settled on having only a subset of the cubes emit light at any one time. We also animated that attribute over time. This allowed the environment light to fall more naturally on the dormant voxels, thus anchoring the objects into the scene and giving good contrast against the lit voxels.
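One plausible way to drive that animated subset is a hashed per-cube mask whose phase advances over time – the constants below are invented for illustration, not taken from the production setup:

```python
import numpy as np

def emission_mask(cell_ids: np.ndarray, frame: int,
                  fraction: float = 0.3, hold: int = 12) -> np.ndarray:
    """Select a time-varying subset of cubes to emit 'light energy'. A cheap
    integer hash of the cell id, plus a phase that advances every `hold`
    frames, decides which cubes are lit; the dormant majority keeps catching
    environment light, anchoring the character into the scene."""
    phase = frame // hold
    h = (cell_ids.astype(np.int64) * 2654435761 + phase * 40503) % 1000
    return (h / 1000.0) < fraction
```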

Which was the most difficult character to develop?

Q*bert took the most effort. He’s really the only game character who needed to act and emote with a broad range. We started by pulling as much of the original game art as possible. The in-game sprites are incredibly low-res, but there’s a lot of detail in the original cabinet artwork – that was our main source of reference for features and proportions.

With basic approval on the smooth model, we moved on to voxelisation in Houdini. The first versions used a single voxel size for the whole body, but we quickly found that we needed more detail in areas like the eyes and brows, and less in areas like the skull. Each feature of Q*bert was dialled to get just the right voxel size and placement. Most of our trial and error in learning how to voxelise our characters happened during Q*bert’s development.

A number of techniques were used to soften the angular appearance of Q*bert’s voxel building blocks, including multiple voxel sizes, and transparent outer layers revealing smoother shapes beneath.

Q*bert is very round and cute. Did the blockiness of the voxels fight against that?

When we presented our first Q*bert lighting tests to Matthew and Chris, we had applied a simple waxy plastic shader to the model. Chris felt our shading treatment was too close to Lego. With all the hard cube edges, he said, “It looks like it hurts to be Q*bert!” This comment sent us on a long journey to figure out how to make a character built from hard, angular cubes look cute and soft.

We ended up doing literally hundreds of tests, adjusting all aspects of the model and shading to find a combination that would work. We introduced light energy to the interiors and edges of the cubes, dialling a pattern to control where and when cubes would emit light. We layered voxels at different scales into the interior of Q*bert, and adjusted the transparency of the top layer to reveal the depth.

We also introduced some of the underlying round shape of the smooth model into the cube shading – this allowed us to rim and shape Q*bert with softer graduations of light. The combination of all of these tweaks – and a lot of elbow grease by our look-dev team – finally found a look Chris and Matthew liked.

What approach did you take with Q*bert’s animation?

Animation for Q*bert was a lot of fun, with cartoon physics and lots of opportunities for gags. In one early test, we only allowed Q*bert to move through a scene by jumping in alternating 45° rotations, just like the videogame. We really liked this idea in theory, but in practice it wasn’t that interesting. Instead, Q*bert transitions from hopping to walking and running, varying his gait in a more natural way.

See Q*bert and a host of other videogame characters in this Pixels featurette:

How did you tackle the big action scenes towards the end of the movie, when the videogame characters are trashing Washington D.C.?

One of our more difficult shots was the attack on the Washington Monument, which opens our “D.C. Chaos” sequence. The camera tracks a group of Joust characters to the monument, and we circle the action as they begin to destroy it. The difficult part was the location – we’re flying right over the National Mall, next to the White House and Capitol Building. This is a strict no-fly zone. So, with no way to get the background plate in-camera, we knew we would need to create a photoreal 2½D matte painting of the whole area.

What reference were you able to get of the area around the Washington Monument?

We started with previs from the team headed up by Scott Meadows. This gave us the exact angles of D.C. we needed to plan for. We were also able to get a permit to fly a helicopter around the outside perimeter of the National Mall to acquire reference. We earmarked about four key locations where we could hover and acquire tile sets to use in our reconstruction.

In practice, none of these locations was really ideal – Homeland Security just wouldn’t allow us to get close enough to the monument. So, in addition to the helicopter footage, we acquired panoramas and hundreds of stills of the monument and surrounding buildings by walking up and down the Mall. We were also able to go inside the monument and capture stills through the top windows.

In order to show the destruction of the Washington Monument, Sony Pictures Imageworks created a 360° panoramic matte painting of the surrounding environment, to act as a backdrop for a digital model of the monument itself.

How did you then create the digital environment?

Once we had all the reference back at Imageworks, we started building a simple model of the area. We 3D-tracked stills from our helicopter shots, adding them to the model as needed. Jeremy Hoey, our matte painter, had the difficult task of bringing all these sources together to create one seamless, 360°, 2½D matte painting of Washington D.C. as seen from the monument location.

What about the Washington Monument itself?

We built a 3D, photoreal version of the monument, which needed to be destroyed using a mixture of voxel shapes and natural destruction. As each Joust character strikes the monument, the area local to the hit is converted to large voxel chunks, with light energy spreading from the impact point. Then, as the destruction continues, large cube chunks begin to loosen and fall away from the monument.
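The light energy spreading from each impact point could be driven by something as simple as an expanding radial shell – a guess at the flavour of the effect, not Imageworks’ actual Houdini setup:

```python
import numpy as np

def impact_glow(cell_centres: np.ndarray, hit: np.ndarray, t: float,
                speed: float = 4.0, width: float = 1.5) -> np.ndarray:
    """Per-cube 'light energy' as an expanding shell centred on the impact:
    cubes sitting on the wavefront (distance == speed * t) glow brightest,
    and the glow fades once the front has moved past them."""
    d = np.linalg.norm(cell_centres - hit, axis=1)
    return np.exp(-((d - speed * t) / width) ** 2)
```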

We found that keeping the scale of the chunks really large looked a lot more interesting and stylised – smaller voxels started to look too much like normal destruction damage. FX artist Ruben Mayor designed all the sims in Houdini for the destruction shot, and Christian Schermerhorn did an excellent job compositing a completely synthetic shot to create a very photographic feel.

How do you feel about your work on Pixels, looking back?

At first blush, Pixels seems like a simple job, because the characters look so simple. Nothing could be further from the truth! Each process had to be invented, and every shot required a complicated Houdini process to deliver the data to lighting. I underestimated the number of challenges we would encounter, many of which we just couldn’t predict until we dived in. I’m really proud of what the team was able to accomplish.

What’s your favourite retro videogame?

That’s a tough question! I played most of these games as a kid, plus numerous computer games on my Apple. For arcade machines, I really liked Pole Position and Zaxxon. On my Apple, I loved the original Castle Wolfenstein and Karateka.

"Pixels" including visual effects by Digital Domain

Digital Domain – VFX supervisor, Mårten Larsson

How did Digital Domain get involved with Pixels?

Patrick Jean, the creator of the original short, was actually represented by Digital Domain’s production company, Mothership, for a short time. After the feature film had been attached to Chris Columbus, his team reached out to Matthew Butler, senior VFX supervisor at Digital Domain, to work on the film with him.

I started doing tests for the show in October 2013, and we delivered our last shots in June 2015. Most of our shot production happened between September 2014 and June 2015.

Which sequences did you work on?

The bulk of our work was three sequences: PAC-MAN, Centipede and Donkey Kong. We created the characters, along with anything they interacted with and the environments they were in. We also did a few shots of people dissolving into voxels and reassembling from voxels, as well as some characters from other sequences.

Many of the Donkey Kong shots in “Pixels” deliberately use angles and compositions inspired by the original gameplay, as in this shot by Digital Domain.

How much contact did you have with the director?

We worked very closely with both Chris Columbus and Matthew Butler. Chris was based in San Francisco, so after the shoot we mainly interacted with him over video-conference. The production-side VFX supervisor and producer had space here at Digital Domain for the duration of the show, so we worked very closely with them.

What aspect of the show’s visual effects did you find most challenging?

I’d say the trickiest part was how to translate 2D characters into fully 3D characters that move in a physically plausible way, while still trying to retain the spirit of the simple video games that we have all come to know and love.

Take Donkey Kong, for example. He has a very iconic look that was fairly easy to match when seen from the front, in a pose that matches the videogame. But, when you start looking at that same model from a three-quarters angle, it looks less like the game. Add motion to that, and you’ll end up with poses he never strikes in the game.

One of the big challenges was to keep Donkey Kong looking consistently like the original game sprite, even when seen from multiple angles in three dimensions.

How did you solve this problem?

There was no real silver bullet to solve it. We basically tried to get Donkey Kong looking as close to the game as possible, using iconic poses, and trying really hard to not show him from angles that were too different from the game.

The characters in Pixels are quite visually complex, compared to the 8-bit originals. Was that deliberate?

The characters needed to be made out of boxes – or voxels – to resemble the pixels from the original low-res games. In order to make the characters look complex and more real, we added a lot of detail and pattern to both the individual boxes and the overall character. The idea is that, even though they’re made out of simple voxels, they are actually aliens with very complex technology. This approach also gave us an excuse to add some light-emitting energy, and make things look cooler and more interesting.

Early on, we ran into the issue of reflections in flat surfaces. If you look at PAC-MAN, he is a sphere. If you build that sphere out of boxes, your brain tells you that you are looking at a sphere, but you are actually seeing flat, mirror-like reflections on the surface. It looks really strange. We got around this on all the characters by blending some of the pre-voxelised model normal into the normal on the voxels.
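That normal-blending trick amounts to a lerp between the two normal sets followed by renormalisation – a generic illustration, not Digital Domain’s shader code:

```python
import numpy as np

def blend_normals(voxel_n: np.ndarray, smooth_n: np.ndarray,
                  blend: float = 0.5) -> np.ndarray:
    """Lerp each flat cube-face normal toward the pre-voxelised smooth
    model's normal sampled at the same surface point, then renormalise.
    Reflections then curve the way the eye expects on a sphere like PAC-MAN,
    instead of reading as flat mirrors on every cube face."""
    n = (1.0 - blend) * voxel_n + blend * smooth_n
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```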

Were the animators working with the basic, smooth-skinned characters, or with the final voxelised versions?

The animators worked with the pre-voxelised character, but had the ability to turn on the voxelisation process to check poses and facial expressions when needed. A lot of attributes were also transferred from the skinned character across to the voxels, tracking with its movement and sticking on the per-frame-generated voxelised version.

So the voxels were only generated after the animation was complete?

Yes – all things voxelising went through our FX department, and were passed on to lighting for rendering. We also had setups for the characters to go straight from animation to lighting via an automated voxelisation system. But, anytime we needed to do anything special for a character in terms of how it voxelised, the FX department picked up the animation publish of the regular skinned character and generated a new voxelised version for lighting.

Digital Domain’s Centipede character went through a number of design iterations, and includes many features inspired by the 1980s arcade game artwork.

How did you develop the look of the Centipede character?

Centipede started as a design that resembled an actual centipede. From there, it was tweaked to look much more like the art on the side of the arcade game, with claws for feet, a lizard-looking face and eyes, and snake-like teeth. We went with this look for a while, using different sizes of voxel to capture the smaller features.

How did it progress from there to the final design?

After a few rounds of testing looks and poses, we got a note from Chris – he thought the character had lost some of the simple charm of the game. At that point, we went back to a design much closer to the previs model. We still incorporated some features from the arcade game art look – for example, we made the eyes similar to the artwork, but didn’t put the pupils in. We also used the sharp teeth and the claws. We ended up with a character that looks mean, but is still similar to the game. I think where we landed in the end is a very successful mix.

"Pixels" including visual effects by Digital Domain

What happens in the PAC-MAN sequence?

PAC-MAN is terrorising the streets of New York, and being chased by our heroes in “ghosts” – Mini Coopers with special equipment that can kill PAC-MAN. The approach for this sequence was to use CG for PAC-MAN (obviously), and also for most of the things he interacted with. As much as possible, the rest would be shot practically.

For the PAC-MAN sequence, Digital Domain combined their CG character with practical effects and vehicle stunts shot on location in Toronto.

Tell us about the PAC-MAN location shoot.

The Mini Coopers were practical cars driven by the stunt team in downtown Toronto, which was made to look like New York. Since PAC-MAN is essentially a giant yellow light bulb in the middle of a street at night, we knew that he would throw out a lot of interactive light. To help with this, a Mini Cooper was rigged with big yellow light panels and generators on the roof, and used as a stand-in for PAC-MAN. The car also had a paint pole on the roof, with an LED light up top, to show the height of PAC-MAN and help with the framing of shots.

The decision was taken to sink PAC-MAN a little into the ground, bringing his mouth closer to street level and thus making it easier for him to bite down on his prey.

For some shots, the interactive light car was only used for lighting reference; in other cases it was present in the filmed plate. Where the timing of the car didn’t work with what Chris wanted PAC-MAN to do in the animation, we ended up painting out both the car and the light it threw. Overall, though, it was very helpful to have as a reference, and in some shots it provided all the interactive light we used.

Practical light rigs were used on location to simulate the yellow glow cast by PAC-MAN. In many shots this was enhanced, or replaced entirely, with digital interactive lighting.

Did you do 3D scans of the street environment?

Yes – we did lidar scans of most environments to help our modelling and tracking departments. We knew we’d be modelling a lot of geometry that would need to line up very closely to buildings and parked cars in the plates, in order to get the reflections and interactive light from PAC-MAN to look believable.

In the end, we modelled a bit more than we’d originally planned, but the interactive light helps so much to sell that PAC-MAN is really in the scene – because it’s pretty obvious that PAC-MAN is fake, no matter how good we make him look!

The lidar was also very helpful in creating geometry for collisions with all the voxels we threw around when PAC-MAN bites cars and objects. Our general rule was that anything he bites turns into voxels, while anything he bumps into is destroyed as if he were a normal earthbound object. Of course, we broke that rule a lot, and did whatever we thought looked more interesting.

This slideshow requires JavaScript.

How did you create the glowing trail left by PAC-MAN?

PAC-MAN had both a heat trail and a misty-looking energy trail. These were generated and rendered by the FX department.

The reason for the trail was to tie PAC-MAN into the ground a bit better. Because he had to be able to bite objects at ground level, and because we had restrictions on how much he could open his mouth, we ended up having to sink him into the ground. If we hadn’t, his lower lip wouldn’t have reached the ground, and he wouldn’t have been able to bite anything that touched the ground.

It looked a bit odd to just have him truncated at ground level and not have him influence the ground at all, so the trail element was added. I think it adds to PAC-MAN’s look. With all the white and warm lighting in the city, and with PAC-MAN being yellow, it was nice to get some other colours in there – that’s why we went with a slightly blue colour on the trail.

Some of Digital Domain’s PAC-MAN shots required fully digital vehicles and destruction effects, as seen in this partially rendered animation breakdown.

How do you feel about your work on Pixels?

I’m very proud of the work our team created. It’s a cool mix of really odd characters and, in my opinion, cool effects. PAC-MAN looks like something alien that we haven’t seen before. The end result is quite unique, and different from most movies out there. I like that.

What’s your favourite retro videogame?

We had a whole bunch of arcade games on set for the sequence when our young heroes are in an arcade hall in the ’80s. They all worked, so I played a lot of them. I think the highlights for me were Missile Command and Q*bert. Another highlight was meeting Professor Iwatani, the creator of PAC-MAN, and getting a photo standing next to him in front of a PAC-MAN arcade game!

Watch Eddie Plant (Peter Dinklage) battle PAC-MAN in this Pixels featurette:

Trixter – VFX supervisor, Alessandro Cioffi

When was Trixter invited to work on Pixels?

Simone Kraus, CEO and co-founder of Trixter, had been following the project from L.A. since the early days of pre-production, and we’d been looking forward to being a part of the show since then. Matthew Butler led us through the look and the vision on the VFX, with in-depth explanations on the intentions and the dynamics of every shot and sequence. We ended up doing some concept work, and what I call a “VFX cameo”.

How many shots were in your VFX cameo?

We worked on about 25 shots, in six different sequences. Along with some blaster effects and pixelated invaders, we also created the concept for Michael the Robot. We then built, rendered and integrated him into the lab sequence. Also worth a mention is the brief appearance of Nintendo’s Mario, for which we designed a three-dimensional version of the character, extrapolated out of the original game design. We rigged, animated and integrated him in one shot during the Washington D.C. attack sequence.

Was there much scope for creativity, or were you conforming to a look and feel that had already been established?

Mainly, our references were shots done previously by other vendors. However, for Michael the Robot, we referenced real mechanics, and for the Missile Command effect, the videogame itself.

Even though we had to blend into an already strongly established aesthetic, Matthew always encouraged us to come up with alternatives for the look of this or that effect. In fact, although he and the director had extremely clear ideas on what they were after, Matthew left the door open to improvements or variations. I love these opportunities for creative work.

Tell us more about Michael the Robot.

We created a CG extension of Michael’s glassy head, which contains complex, hi-tech internal mechanisms and electronics. From an initial 2D concept, we developed a 3D digital concept model which served as our base for approval.

To integrate Michael into the live-action plates, we had to do intricate match-moves to ensure that his movement and action would fit seamlessly. We added procedural animation to the mechanism inside the glass dome, with several lights constrained to that motion, to achieve a fluid result. Lighting and rendering were done in The Foundry’s Katana and Solid Angle’s Arnold.

What’s your favourite retro videogame?

Definitely Missile Command. I have spent innumerable coins on it!

"Pixels" including visual effects by Atomic Fiction

Atomic Fiction – VFX supervisor, Ryan Tudhope

Tell us how Atomic Fiction came to be part of the Pixels team.

We had been following Pixels since it was announced, and really wanted to find a way to help out on such a ground-breaking project. Having known Denise Davis and Matthew Butler for some time, we knew the visual effects team was top-notch, and wanted to collaborate with them in any way we could. Fortunately, they had a small sequence that was a great fit for us.

What did the sequence involve?

We came on board fairly late in the project’s schedule, and our involvement covered only a few shots. The work we did was really fun and complex, however, centring on several sequences requiring dialogue-capable CG face replacements of Ronald Reagan, Tammy Faye and Daryl Hall – in the story, they are all avatars of the alien invaders. All three characters leveraged Atomic Fiction’s digital human pipeline, which we’ve utilised on several projects, including Robert Zemeckis’ upcoming The Walk.

How long did you spend working on the shots?

Because of how involved the shots were from an asset and animation standpoint, our schedule spanned approximately four months. We interacted mainly with Matthew Butler. He is an amazing supervisor to work with, always on point with his feedback, and great at inspiring the team!

Were you familiar with Patrick Jean’s 2010 short film?

We’re huge fans of the original short, and love the fact that its popularity led to the development of this feature film. It’s a great Hollywood success story. While our digital human shots didn’t directly relate to the work in Patrick’s original, it was nonetheless a great inspiration for our team.

The cast of “Pixels”

Describe how you created these celebrity avatars.

Our mission was to add our visual effects work to VHS-quality captures from the 1980s. So, in contrast to other projects, our Pixels artists were starting from scratch, with no scan data, textures or on-set HDRIs of these celebrities. This required us to find a wide variety of photographic reference to sculpt, texture and light each celebrity by eye.

It was a really fun challenge, to set aside technology for a moment and just discuss what looks right (or not) about a digital model. It’s fun to find solutions in spite of limitations like these – it gets back to the art of it all.

Atomic Fiction’s CG Supervisor, Rudy Grossman, led a lean team that included Mike Palleschi, senior modeller, and Julie Jaros, lead animator. Because we all knew exactly what our shots would need and what dialogue was required, we were extremely efficient during the face-shape modelling and rigging phase. Our asset work and shot work were happening concurrently, as we were effectively evaluating both modelling and lighting in the same takes. Tom Hutchinson led the charge on look development and lighting, and Jason Arrieta on compositing.

What’s your favourite retro videogame?

That’s easy: Moon Patrol!

Adam Sandler in “Pixels”

Before flashing up the GAME OVER screen on this Q&A, there’s just time to check in with the other vendors who helped bring Pixels to the screen.

Storm Studios delivered a 20-shot sequence showing India’s Taj Mahal being attacked by characters from Atari’s Breakout. Storm’s VFX supervisor was Espen Nordahl (who named Super Mario Bros as his favourite retro videogame). Having determined that the graphics from the original Breakout were a little too rudimentary, Nordahl’s team incorporated design elements from later iterations of the game. This allowed them to give the characters more shape, yet retain the old-school pixelated look. A digital Taj Mahal asset was built, after which the Storm crew ran rigid body destruction simulations in Houdini. Additional effects were layered in, showing sections of the building turning into voxels, adding “light energy” to the voxels at the impact points, and creating holes in the structure where the balls bounced off.

“I’m very proud of the work we did,” Nordahl commented. “This was our first big international show, and I would like to thank Sony and Matthew Butler for trusting us with such a complex sequence.”

At shade vfx, a team of CG and compositing artists was tasked with creating the illusion that the stars of Fantasy Island – Ricardo Montalban and Hervé Villechaize – were delivering a congratulatory message from space. Led by VFX supervisor Bryan Godwin, shade’s team reconstructed full-CG versions of both actors from archival photographic reference and clips from Fantasy Island. Even though the task only required new lip-sync to match the newly recorded dialogue, it was necessary to animate the entire cheek structure, eyelids and even the nose to react correctly to the new phonemes.

One More VFX, with Emilien Dessons supervising, worked on the arcade sequence in which young Eddie Plant duels young Sam Brenner at Donkey Kong. Their goal was to re-create accurate Donkey Kong game graphics using MAME (Multiple Arcade Machine Emulator) software, as well as the high-score motion graphics.

As a point of interest, Johnny Alves, Matias Boucard and Benjamin Darras of One More VFX were also executive producers on Pixels (Patrick Jean was a 3D supervisor at One More VFX when he made the original short).

At Pixel Playground, VFX supervisor Don Lee oversaw the production of a number of shots involving 2D compositing and 3D tracking, greenscreen, the integration of game footage into arcade games and a variety of cosmetic fixes.

Further VFX support was provided by Lola VFX, Cantina Creative and The Bond VFX.


Watch the trailer for Pixels:

Special thanks to Steven Argula, Rick Rhoades, Tiffany Tetrault, Franzisca Puppe, Geraldine Morales, Lisa Maher, Benjamin Darras and Kim Lee. “Pixels” photographs copyright © 2015 by Sony Pictures Entertainment and courtesy of Sony Pictures Imageworks and Digital Domain.