About Graham Edwards

I’m a senior staff writer at Cinefex magazine. I also write novels. In a former life, I produced animated films for theme park rides and science centres. If you offer me a cold beer, I won’t say no.

Ant-Man – Cinefex 143 Extract

To celebrate the launch of Cinefex 143, we’re treating you to a sneak preview of all the articles inside. First up is Microcosmos, Joe Fordham’s extensive look at the effects of Marvel’s big (or is that little?) summer hit, Ant-Man.

Under the expert guidance of visual effects supervisor Jake Morrison, a VFX team including Double Negative, Method Studios, Luma Pictures, Industrial Light & Magic, Cinesite and Lola Visual Effects – with Prime Focus World and Legend3D handling stereoscopic conversions – meticulously crafted the film’s miniature magic.

Our exclusive extract outlines the work of the “macro unit”, which operated concurrently with the main unit shoot, photographing specially constructed sets:

“We approached macro-world scenes like ‘pack shot’ product photography in television commercials,” explained Jake Morrison. “The art department took small sections of the main unit sets and rebuilt them as six- to eight-foot extractions. Macro sets were built 1:1 scale – we never tried to make anything oversized – but detail was much higher. Macro unit art director Jann Engel had bags of dead bugs, gummy bears and all sorts of nasty debris that she dressed in.”

To line up shots, the macro unit used a tiny rapid-prototyped model of Ant-Man, smaller than an HO-scale model railroad figure, and a P+S Technik Skater Skope periscope lens with a rotating nodal head that placed Frazier lenses as close to floor level as possible.

After motion control photography of macro unit sets, production plate photographer Teddy Phu Thanh Danh and Alex Wuttke gathered still imagery using three Canon 5D cameras on Dr. Clauss Rodeon panoramic heads. “The joy of the process was discovering the detail in what we’d shot,” said Wuttke. “To the naked eye, the macro sets looked quite mundane – a rusty pipe, or the corner of an old floorboard – but when we photographed those with a 100mm lens centimeters from the subject and did our preview stitch, the information it revealed was mind-blowing. Jake’s strategy of using real objects with natural wear and tear really paid dividends when we got into macro scale, because there was no artifice. An astonishing level of detail appeared before our eyes.”

Read the complete article in Cinefex 143, which also features The Walk, Terminator Genisys and Mission: Impossible – Rogue Nation.

All content copyright © 2015 Cinefex LLC. All rights reserved.

Now Showing – Cinefex 143

The kids are back in school and the leaves are turning orange on the trees. It can mean only one thing. It’s time to release the fall edition of Cinefex, the premier magazine for visual effects professionals and enthusiasts alike.

Cinefex issue 143 is bulging with big stories. In fact, even the smallest story is big. Yes, I’m talking about Ant-Man, in which cat burglar Scott Lang (Paul Rudd) is hired by biochemist Dr. Hank Pym (Michael Douglas) to battle a rival weapons manufacturer in the development of a serum that can shrink a person to ant size, imbuing microscopic combatants with super powers.

Starting with The Terminator in issue 21, we can proudly say that Cinefex has delivered in-depth effects coverage of every Terminator film to date. Terminator Genisys is no exception, so if you want to know exactly how those duelling T-800s were created, this issue is for you. Another old franchise friend comes in the form of Mission: Impossible – Rogue Nation. Tom Cruise returns as IMF agent Ethan Hunt, as the IMF team undertakes its most audacious feats of espionage and daring to date.

Also featured in Cinefex 143 is The Walk, Robert Zemeckis’s gripping dramatisation of the true story of Philippe Petit (Joseph Gordon-Levitt), a French high-wire artist who in 1974 walked a steel cable strung between the twin towers of the World Trade Center.

Here’s Cinefex editor-in-chief Jody Duncan to talk about our behind-the-scenes analysis of the latest films by leading moviemakers …

Jody Duncan – From the Editor’s Desk

You know the kind of day that, by the time it’s all over, makes you wish you’d stayed in bed? You’re rushing out the door to work, and can’t find your keys, and then you get stuck in traffic because the main road to your office has been shut down for construction, and when you stop at the grocery store on the way home, the cash register at the checkout line you’re in breaks down. Yeah, that kind of day.

Issue 143 was that kind of issue. Obstacle begat obstacle, until I found myself paraphrasing Joe Gideon in All That Jazz, looking up to the heavens, and asking, “What? You don’t like visual effects journalism?”

We persevered, and the issue turned out spectacularly well – with only one battle scar visible. (You’ll know it when you see it.)

What I like about this issue is that it offers some “off the beaten path” stories. Joe Fordham explores micro-photography effects in his Ant-Man article, while my Mission: Impossible – Rogue Nation piece is as practical effects- and stunt-heavy as any I’ve written for the magazine.

Joe had the privilege of interviewing Robert Zemeckis for his story on The Walk, making for some fascinating commentary on the creative choices involved in dramatising Philippe Petit’s 1974 wire-walk between the World Trade Center towers. We round out the issue with Terminator Genisys, whose cinematic grandpappy, the original Terminator, graced the cover of Cinefex no fewer than 122 issues ago. That’s a lot of printer’s ink under the bridge.

Even now, we’re hard at work on Cinefex 144. On our travel log this time: Mount Everest and Mars!

Thanks, Jody!

Oh, a word of warning before you pick up your copy of the new Cinefex. Don’t forget to switch off the shrink-ray machine first. It may be cool to look like Ant-Man, but you’ll have one heck of a job turning those pages.

Rewind to “Back to the Future”

"Back to the Future" bluescreen stage photography and final composite by Industrial Light & Magic

Great Scott! It’s nearly October! That means our lives are about to intersect with a monumental moment on one of cinema’s trickiest timelines – the day when Marty McFly arrives in the Hill Valley of the future, in Robert Zemeckis’s 1989 sequel Back to the Future Part II.

Yes folks, 21 October 2015 is “Back to the Future Day”.

Across the world, fans of the classic time-travel trilogy are looking forward to a whole slew of events dedicated to celebrating this fantasy watershed moment. Your local theatre may be running a double or triple feature of the films. Some of the big venues are showing Back to the Future with a live orchestra playing Alan Silvestri’s memorable score. There are panels and charity galas galore. I’ll bet there are even a few high schools putting on their very own “Enchantment Under the Sea” dance.

ILM modelshop supervisor Steve Gawley checks the internal lighting systems of the one-fifth scale replica DeLorean constructed for “Back to the Future”.

But I’m not here to look forward. I’m here to look back – to 1985, when Back to the Future was first released.

In those halcyon days, I was an impoverished art student living in London. Like my fellow Brits, I frequently had to wait for what felt like a lifetime to see films that had been released in the States months earlier – films like Back to the Future, which hit North American screens in July but didn’t reach good old Blighty until just before Christmas.

The incredible thing is that, despite my film-geek credentials and voracious appetite for movie news, when the film finally landed I knew almost nothing about it. I’d seen a trailer, but it hadn’t stuck in my mind. Spielberg’s producer credit was a good sign, but who was this Zemeckis fellow? Oh yeah, the Romancing the Stone guy. Well, that was an okay film, I suppose …

It’s hard to imagine such ignorance now. Everywhere we look we’re bombarded by film publicity. To go into a film cold, you have to consciously engage in a total media blackout because, let’s face it, your average 21st century movie trailer does more than just tease – it lays out all three acts, and spoils at least six big action scenes and a dozen key reveals.

Sometimes I really do wish I had a time machine.

Michael Lantieri and his physical effects team created specialised rigging for the hoverboard chase in “Back to the Future Part II”, including this crane-suspended camera platform and wire-rigged hoverboard.

Another benefit of watching Back to the Future in 1985 was the venue. Those of you who remember the Empire, Leicester Square, the way it used to be will know exactly what I mean. No heartless multiplex this. The main auditorium was roughly the size of Texas. The screen, curved to geometric perfection, was concealed behind acres of ruched and ruffled curtains. Just walking in was like ascending to heaven.

When you’d taken you seat, the pre-show began.

If memory serves, the pre-show for Back to the Future involved a scanning laser being fired at those gorgeously draped curtains, creating a kaleidoscope of fire that gyrated in perfect synchrony with a presentation of Jean-Michel Jarre’s Oxygène loud enough to make your whole body break out in gooseflesh. Once that was over, the curtains unpeeled, and a few short reels of adverts and previews granted the dazzled audience a brief recovery period in which to regain its proper senses.

Then the movie began.

I’ll be honest – at first, I wasn’t sure what to make of Back to the Future. I didn’t recognise any of the actors, and I wasn’t at all sure where the story was going. The scenes in Marty’s house seemed oddly paced, dwelling curiously on the offbeat reminiscences of his dorky parents. Maybe they’d become relevant later in the movie … Still, the whole thing had a fun feel, just right for the holiday season. Maybe it would warm up.

Of course, it did. The instant Marty arrived at Twin Pines Mall and met Doc Brown, it started to win me over. The chemistry between Michael J. Fox and Christopher Lloyd made their every exchange a delight. I felt the smile growing on my face with each new turn of that pivotal scene – the DeLorean reveal, Einstein’s first trip through time (those iconic trails of fire!), the appearance of the terrorists, their shocking revenge on Doc Brown and the breakneck pursuit of Marty around the shopping mall car park leading ultimately to his escape into the past, all to the accompaniment of the most exuberant musical score I’d heard for years.

By the time Marty had been catapulted back to the year 1955, I realised that the Empire, Leicester Square, had indeed become its own little pocket of heaven. And heaven was where I remained for the rest of the film. It became an instant favourite, and is a favourite still – one of those rare films that captured lightning in a bottle, a film made with such confidence, such chutzpah, that you just knew its director must have been driven by a kind of visionary white heat. Zemeckis stated recently that he’ll resist any attempt to remake it. I hope he sticks to his guns. How can you improve on perfection?

ILM camera assistant Kate O’Neill programmes one of the computer-controlled functions on the miniature locomotive created for “Back to the Future Part III”. Seven feet long, the model boasted twenty-four mechanical gags and was controlled by two motion-control systems simultaneously.

Some time after seeing the movie for the first time (and second, and third …) I picked up a copy of Cinefex issue 24. To my delight, it included a feature on Back to the Future – quite an extensive one, considering the film contains fewer than thirty visual effects shots. I learned that Industrial Light & Magic turned those shots around in roughly eight weeks, an incredible accomplishment. The detailed article, written by Janine Pourroy, also confirmed my suspicions about Zemeckis: it seemed he was all over the visual effects, ensuring at all times that ILM’s wizardry was as true to his vision as the rest of his film.

Small though the number of effects shots may have been, they still threw plenty of challenges at VFX supervisor Ken Ralston and his team. The “time-slice” effect was an incredibly complicated blend of practical lighting and optical trickery. The climactic lightning strike – described in Zemeckis’s and Bob Gale’s script as “the largest bolt of lightning in cinema history” – was created frame by frame using meticulously hand-drawn animation by Wes Takahashi.

And the film’s crowd-pleasing closing shot, in which the time-travelling DeLorean reveals its new flight mode shortly before bursting out of the movie screen, was a state-of-the-art optical composite boasting a manually tracked match-move of a live-action plate, a meticulously constructed miniature vehicle photographed under motion control, and some tricky hand-drawn rotoscoping to mask the airborne speedster as it swoops behind those distant trees.

Gazing back across the thirty years that lie between now and then, I’m filled with a fuzzy nostalgia. They say you can’t turn back the clock but, hey, this is Back to the Future we’re talking about.

So what should I do when “Back to the Future Day” finally comes around? Keep it simple and re-watch all three films in the comfort of my own home? Dress up in a life-preserver and gatecrash my local theatrical event? Rent a DeLorean and see if I can coax that sucker up to 88mph?

Never mind. I have a few weeks left to decide. If I run out of time, I can always fire up the flux capacitor and buy myself some extra breathing space. In the meantime, only one question remains:

What are you doing on “Back to the Future Day”?


Watch the trailer for the upcoming documentary Back in Time – due for release on 21 October 2015 – in which cast, crew, and fans explore the classic time-travel trilogy’s resonance throughout our culture:

Photographs copyright © 1985, 1989 and 1990 by Universal Studios Inc. and Industrial Light & Magic.

N is for New

In the VFX ABC, the letter “N” stands for “New”.

Writing on this blog a little over a year ago, I asked a panel of visual effects experts the following question: What cutting-edge technique or technology is getting you excited about the future of visual effects?

The question prompted some fascinating responses, which you can read in my article I is for Innovation. Collectively, they provide a snapshot of the world of VFX as seen by a range of industry professionals in July 2014.

It’s a snapshot that’s now a year out of date. That’s the thing with cutting edges – the darn things just keep on cutting.

That’s why I’ve decided to revisit the topic, one year on. Because innovation doesn’t go away. In fact, the desire to create something new appears to be hard-wired into the mind of the visual effects artist. And the industry itself, like so many others, is constantly evolving.

This time around, I wanted to hear not only about the latest techniques and technologies, but also the latest business trends. So I stripped my original question back to its simplest possible form:

What’s New in VFX?

How did our panel of experts respond? Let’s find out!


Virtual Reality

Michele Sciolette, Head of VFX Technology, Cinesite

There is a lot of expectation that 2016 will be the year when immersive technologies such as virtual and augmented reality will become mainstream. Current-generation devices have many limitations, but clearly show the potential for truly immersive experiences. This will inevitably drive demand for new types of entertainment. I expect that the ability to support and create content for immersive experiences will become a common task for visual effects houses in the relatively near future.

Aruna Inversin, CG/VR Supervisor, Digital Domain

With true virtual reality around the corner, content creators and studios are already building their teams and their pipelines to take advantage of this next wave of new immersive experiences, the likes of which people have never seen. Using positional tracking, high-fidelity screens and haptic (touch-sensitive) inputs, we’ll see a surge in consumer consumption that hasn’t been matched since the invention of the television.

The virtual reality challenges awaiting visual effects artists are numerous – from multiple camera takes and multiple cameras, to 360° video and extremely long frame ranges. As visual effects artists, we’re at the beginning of this amazing ride, with technologies finally catching up to the visions in our heads. Not just on a screen in front of you, but wherever you may look.

Hubert Krzysztofik, Director of Interactive Technology, Pixomondo

The implications of VR, AR and interactive experiences mean that the VFX industry is undergoing historic change. The demand for talented game engine developers is as high as the demand for excellent VFX artists versed in the specifics of working within game engines. Game engines already have a nascent presence in the VFX industry and are increasingly being used in pre-visualisation and look development.

Currently, there are three major players in the game engine field: Epic Games’ Unreal Engine, Crytek’s Cinebox and Unity Technologies’ Unity. From a business perspective, it’s important to be platform- and technology-agnostic. We identify the strengths of each engine and use them based on the project requirements.

An important part of VR development is the headset component. Currently, Oculus Rift, Sony Morpheus, Google Cardboard and HTC Vive are supported in all the engines on the market. Ease of use and minimal bug issues are a major consideration in a VFX studio, which lives and dies by its pipeline.

It’s an exciting time for VR and visual effects, and we seem to be building toward an eventual merging of disciplines. I’m looking forward to seeing how VR continues to develop and improve, and how it can be beneficially integrated into the VFX industry and beyond.

Hannes Ricklefs, Head of Software (London, Bangalore), MPC Film

Virtual reality is definitely one of the areas with the most interest and investment being put into it. FMX and SIGGRAPH this year were loaded with talks, panels and demos around the topic. In general, VR is seen as the enabler for pushing the convergence of film and games technology. For fully immersive experiences, it needs the quality of VFX at the speed of a game engine.

One of the major excitements surrounding VR is that there are no established processes around the creation of VR content. Existing processes such as 2D stitching, stereo virtual cameras and camera tracking now need to work within a full 360° field of view. Recognising the importance of this area, MPC has established MPC VR, a creative- and technology-led VR team focused on immersive exploration across multiple industries, and integrated with Technicolor’s R&I (Research and Innovation) group.

Karl Woolley, VR Lead, London, Framestore

Spend five seconds on the Oculus CV1 or HTC Vive, and you’ll immediately understand what a difference it brings to see your hands, grab objects and move around a space, as opposed to being sat in a locked-off, passive position.

Whether the VR experience is live-action-based, pre-rendered or generated in real time, game engines are at its heart. They allow you to leverage the input devices, and to craft worlds and environments based on your VFX assets … after a bit of work! Game engines have come on in leaps and bounds in terms of visual quality and performance in the last five years, with the folks at Epic (Unreal Engine) releasing dedicated tools to make the VFX-to-VR process even easier and more accessible for all in 2016.

With our roots in VFX, we traditionally focus on perfecting the pixel to tell a story. But VR is Virtual Reality, not just Visual Reality. 2016 will be the year we get natural, low-barrier methods of input, with Sony, HTC and Oculus all having consumer kit out, making virtual reality available to the masses. That’s when we’ll truly see what the public’s appetite is for VR.


Videogame Engines

Michele Sciolette, Head of VFX Technology, Cinesite

The quality of real-time graphics has been improving at an incredible pace, and the output of modern game engines – supporting features such as physically based shading – is fast approaching the level of quality we need in high-end visual effects. The explosion of independent game publishers in recent years has led to new licensing options, making top quality engines accessible to everyone. Thus game engines could soon become a viable option for certain kinds of work – especially if you drop the constraint of running the engine in real time.

In my opinion, the main obstacle still to overcome is pipeline integration. In order for visual effects houses to fully embrace video game engines, we need the ability to transparently move assets between a more traditional pipeline and one based on a game engine, to ensure consistency of the final look and minimise duplicated effort.

Watch the animated short A Boy and his Kite, created in Unreal Engine 4 and rendered in real-time at 30fps:


Integration into the Creative Process

Christian Manz, Creative Director, Film, Framestore

The biggest change I’ve observed in recent times is how VFX has been embraced as an important part of the filmmaking and storytelling process from start to finish, not just in post. Only the director and producers serve longer on a big movie than the VFX team, and in that time you get to collaborate with a lot of talented people across multiple departments to bring the director’s vision to the big screen. Being part of that creative process really excites me – it’s why I think there hasn’t been a better time to be involved in the world of VFX.

Chris MacLean, CG Supervisor, Mr.X

Over the last few years, there’s been a trend towards VFX houses having more creative input with respect to the storytelling process. Whether with a design element like a creature, or something as robust as editorial input, VFX artists are being given more creative responsibility. It’s exciting for us as it gives us an opportunity to be part of the action as opposed to simply facilitating it.

It used to be that you would do your budget, get your shots with your line-up sheets, and drop the shot into your pipeline. Now, we’re being asked to pitch previs or postvis ideas, help design camera moves, and collaborate with production design and art departments. In some cases, we’ve even helped redesign sequences in post. It’s nice to see growing respect for our contributions to the filmmaking process, and to be recognised as visual storytellers in our own right.


Perfect Integration of CG with Live-Action

Mark Wendell, CG Supervisor, Image Engine

One exciting industry advance that’s now becoming possible – with the proper setup – is the near-perfect integration of CG into live-action plates in a practical and efficient way.

A number of developments are contributing to this: improvements in path-tracing renderers, the adoption of physically plausible shading models, and the use of lookdev and lighting setups calibrated to reference photography. While this isn’t particularly “new”, the tools and techniques have reached the point where we’re finally seeing a huge payoff not only in terms of realism, but also in production efficiency.

Of course, getting the proper reference and building colour-calibrated setups requires a bit of up-front investment in time. But when it’s done properly, we’re seeing lighters achieve a nearly perfect plate-match on their first render, in any lighting environment. Amortised over multiple shots, that investment more than pays for itself, and that’s really exciting. Rather than spending endless iterations just getting their CG elements to match the plate, artists actually have the time to refine shots and give them that last five percent kiss of awesome.

Watch Image Engine’s VFX reel showcasing their work on Chappie:


Physically Plausible Shading

Howard Campbell, Lead Technical Director, Tippett Studio

Rendering computer-generated images involves very complex calculations of light and material interactions – so complex, in fact, that we can only ever approximate the results. Traditionally, most render approaches have involved a high level of artist manipulation to arrive at a convincing render. But in recent years, as computer power has increased significantly, there’s been a shift towards more expensive – but much more accurate – physics-based rendering.

Physically plausible shading is a strategy whereby rays of light are traced through the scene according to the physical laws of energy conservation. It produces a more believable result out of the box, with much less artist tweaking. In Teenage Mutant Ninja Turtles (2014), for example, Tippett Studio used physically plausible shading to render the young turtles and sewer environments. This greatly reduced the need for the kinds of lighting and surface adjustments common under previous strategies, allowing us to do more shots in less time.
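For the curious, the strategy Campbell describes is grounded in what the graphics literature calls the rendering equation – a standard formulation sketched here for illustration, not a description of Tippett Studio’s specific pipeline. It states that the light leaving a surface point x in direction ω_o is the light the surface emits plus every incoming contribution weighted by the surface’s reflectance:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

Energy conservation constrains the reflectance function f_r so that a surface can never return more light than it receives – which is precisely why physically based renders tend to look believable “out of the box”.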

Watch Tippett Studio’s VFX reel showcasing their work on Teenage Mutant Ninja Turtles (2014):


Remote Resources and Cloud Computing

Rob Pieke, Software Lead, MPC Film

At the infrastructure level, one of the more interesting changes is a move towards on-demand remote resources for computing and storage. Platforms such as the recently re-released Zync Render offer opportunities to maintain a smaller “always busy” render farm on-site, but still have near-instant access to extra temporary resources.

Hannes Ricklefs, Head of Software (London, Bangalore), MPC Film

There have been great advances to enable cloud computing for large scale VFX productions. Once the remaining roadblocks – such as security concerns – are resolved, having this “infinite” resource available to more dynamically schedule and scale around the peaks and troughs of any VFX production will be a major game changer.

Kai Wolter, Software Lead, MPC Film

My list of what’s new in VFX includes:

  • Cloud computing and cloud rendering
  • Smarter ways to handle large data sets
  • Finite Element Method (FEM) for muscle simulation
  • VR in VFX (previs, postvis, final images)

Software for All

Damien Fagnou, Global Head of VFX Operations, MPC Film

In recent years, the rise of open source standards like USD, Alembic and OpenSubdiv has given studios really important foundation tools for creating VFX. Alongside this, VFX-focused software platforms like Fabric Engine, authoring software like The Foundry’s Katana or Mari, and the move to fully raytraced renderers such as Pixar’s RenderMan RIS have dramatically changed the game for software development inside VFX studios, where the focus is now more on workflows and artist tools than on building an entire software stack from scratch. Furthermore, many of these software packages are now available for non-commercial use, giving students full access to the same toolset as that used by large VFX studios.

Manuel Huertas, CG Artist, Atomic Fiction

As a surfacing and lookdev artist, I’m very glad to see the most recent release of Mari (Version 3) incorporating shading models from third-party vendors such as Chaos Group and Solid Angle. This will help artists get almost real-time feedback during the texturing workflow for certain materials, using shading models similar to the ones that will actually be constructed in Katana and used in production for final rendering.


Clarisse iFX

Ben VonZastrow, CG Painter, Tippett Studio

Clarisse iFX, by Isotropix, is part of a new breed of VFX tools that allow artists to work directly on the final image, instead of on a proxy image of variable quality and low accuracy. By forgoing OpenGL and focusing instead on a unified, lightning-fast renderer, Clarisse lets users manipulate scenes containing trillions of polygons directly in the final render view.

City compositing in Clarisse iFX. Image copyright © 2015 by Isotropix SAS.


6k is the New 2k

Gresham Lochner, VFX Producer, Locktix

We’ve seen a trend recently with our clients increasing to a working format of 4k and 6k for VFX work. Traditionally, this would severely strain a small VFX facility. We’ve built out our infrastructure and pipeline with this work in mind – from the beginning, we’ve invested in a solid Open Drives infrastructure as our back end, as well as some proprietary code that sits on top of our hardware, allowing us to easily get around IO bottlenecks. Because of all of this, we don’t blink at 6k work – it’s become the norm.


Drones and Photogrammetry for Aerial Reference Gathering

Ray Sena, Environment Artist, Tippett Studio

With the variety of quadcopter drone rigs available to us, it’s now easy for artists to gather large amounts of reference cheaply and with limited time. For example, in a project we’re currently working on, we needed to obtain a huge number of reference photos over about a dozen different types of landscape in China. The VFX director and I were able to launch off-the-shelf quadcopters at each of our shoot locations and quickly capture a vast amount of material – the rigs were small enough to mount onto our hiking packs with all of our other camera gear. With the artists in control of the rigs, the director could fly his own flight path to envision the shots, and I could gather specific texture reference which I’m now using for look development.


Machine Learning and Artificial Intelligence

Michele Sciolette, Head of VFX Technology, Cinesite

Machine learning and artificial intelligence is an area that has the potential to disrupt many industries, including visual effects. It’s difficult to predict what the effect might be in the long term, but in areas such as computer vision, machine learning techniques are already performing on par with, if not better than, the best algorithms that have been developed so far.

The next generation of advanced tools for visual effects artists may well involve running what a machine has learned over very large datasets, rather than implementing a specific image-processing algorithm designed by a human.

In the future, machine intelligence may be smart enough to make VFX decisions. “Ex Machina” image copyright © 2015 by Universal Pictures and courtesy of Double Negative.


Back to Practical

Matthew Bramante, VFX Supervisor, Locktix

We use a lot of practical techniques with our clients. Although you can do everything in CG and VR these days, we like to go back to our roots, using traditional cinematography with CG augmentation.


Ambition Exceeds Budget

Dominic Parker, Director, One of Us

The biggest challenge for visual effects is that ambition is heading in one direction, and budgets are waving goodbye as they head in the other. While anyone can promise the moon on a stick, meeting creative challenges and surviving commercial realities means the work can often suffer. But for those who are able to deliver … the future is bright.


VFX Home Working

Webster Colcord, Animation Supervisor, Atomic Fiction

Echoing Uber, the taxi company that owns no vehicles, a growing list of VFX and animation boutiques outsource their work entirely to freelancers working from home. With just a few on-staff supervisors, they manage the workflow of self-employed private contractors who, like Uber’s drivers, use their own hardware and licenses and have flexibility in choosing the work they take on.


The Rise of the Boutique

Peter Rogers, Creative Producer, Bait Studio

In the UK, it feels like the playing field for VFX is levelling out a little. The huge names still dominate most of the blockbusters, but the rise of the boutique studio continues apace. Some of those companies who were seen as boutique a year or two ago have expanded so rapidly that they’ve almost created a middle tier between the big names and studios with under a hundred seats. As a result, there’s more choice than ever for producers.

As a studio in South Wales, we’ve also noticed a change in attitude towards non-London companies in the past year or so. We’ve found it easier to get producers to consider using us and are meeting more and more experienced people and recent graduates who don’t see London as the only option for working in the industry.


California Tax Incentives

Jules Roman, President, Tippett Studio

The downward pressure and stress on the industry in California have been at breaking point for years. California’s tax incentives are a great move in the right direction.

Will California’s film industry weather the storm of tax incentives? “San Andreas” photograph copyright © 2015 by Warner Bros. Entertainment.


Conclusion

So, what new VFX developments does the above snapshot reveal?

The one big thing on everybody’s mind is VR. After years of simmering, the pot of immersive technologies appears finally to be coming to the boil. As production companies old and new fall over themselves to jump on this latest media bandwagon, visual effects facilities are well-placed to stake out territory in the brave new world that is virtual reality.

Closely related to VR is the steady convergence of film and television with the gaming industry. Not only is there crossover in terms of content, but VFX studios are now seriously considering the integration of gaming engines into their pipelines.

Then there’s realism. Visual effects artists are using technologies and procedures that mimic the physics of the real world with ever-increasing verisimilitude. If you doubt this, take a moment to think about this year’s big films. Think about how stunningly real the visual effects looked. When you’ve done that, take another moment to reflect on the fact that, for every shot that impressed you, there were at least a dozen others that you didn’t even know were VFX shots.

Setting the technology aside, we see that senior visual effects professionals are becoming more closely involved with the wider creative process. And why not? These days, it’s not unusual for VFX artists to touch almost every shot in a film. For many feature directors, the methodology for visual effects is becoming as important as that for cinematography, production design or any of the other key disciplines.

There’s plenty more to see in our snapshot, from the rise of 3D scanning technology to the ongoing – and perpetually thorny – issue of tax incentives. Many artists are calling for more practical effects, smartly integrated with CG, while others are placing their faith in machine intelligence acquiring not only the skills, but also the judgement, to undertake certain kinds of visual effects duties. And the software used by visual effects artists – not to mention the on-demand and cloud-based computing platforms on which it runs – continues to develop at a breathtaking pace.

As for the future … well, some of the new arrivals outlined above will undoubtedly gain strength, and perhaps even endure over time. Others will flower briefly, then fade. However, there’s one thing we can be sure of.

This time next year, everything will be new all over again.

What new VFX trends or technologies have got you all fired up? Share your thoughts below with a comment or two – we’d love to hear what’s on your mind!

Special thanks to Niketa Roman, Stephanie Bruning, Bronwyn Handling, Helen Pooler, Sophie Hunt, Joni Jacobson, Tiffany Tetrault, Jonny Vale, Alex Coxon, Geraldine Morales and Liam Thompson. This article was updated with additional material on 11 September 2015.

Flying by Wire

Ever since the dawn of cinema, people have been flying by wire.

In Fritz Lang’s 1927 classic Metropolis, for example, shots of flying machines soaring over the film’s iconic cityscapes were achieved by mounting miniature planes on taut wires. A similar technique was used in the original King Kong in 1933, for which a tiny squadron of biplanes was inched along its guide wires one painstaking frame at a time.

Creating the miniature effects of “Metropolis”. Illustration taken from “Science and Invention” magazine, June 1927, via Smithsonian.com

Then as now, there were plenty of amateur filmmakers keen to re-create the kinds of sequences they’d ogled in the blockbusters of the day. Luckily for fans of miniature aircraft shots, cinematographer Jerome H. Ash was on hand to offer advice.

Here’s an extract from Ash’s article Substandard Miniature Shots, published in the May 1936 edition of American Cinematographer:

“I think that by far the most satisfactory way to handle miniature plane shots is to hang the plane from wires, as the professionals do. To begin with, stretch three parallel wires well above the path you want the plane to take: these are strictly for support. From these, hang a little T-shaped wooden framework, on pulleys or eyelets; this supports and guides the plane. From the framework, three wires descend to the plane – one to each wing, and one to the tail.”

Ash is at pains to point out to his enthusiastic amateur readers that the wires mustn’t show up on camera. If only a little camouflage is required, he recommends a light application of blue vitriol. A more extreme solution involves painting the wires with alternating black and white stripes, each around half an inch in length – Ash likens this bold approach to the dazzle camouflage used on First World War battleships.

Model aircraft suspended thus are hardly going to be doing aerobatics, but they should at least be capable of running through a few basic manoeuvres:

“The three-point suspension prevents the plane from turning or flying sidewise. The supporting wires may be rigidly fixed to the frame for some types of action, but you’ll have more complete control of the model if the wires extend, like puppet-strings, to where someone standing beside the camera can manipulate them, altering the level and the inclination of the plane. With a little practice, you can make the plane land, take off, climb, glide, stall or sideslip, as well as “flying” level.”

Wire rigs remained in favour with visual effects artists throughout the twentieth century. Even during the 1980s, after Star Wars had kickstarted the trend of shooting miniatures under computer control in front of bluescreens, there were still people around who preferred to string things up the old-fashioned way – notably the effects team behind the gritty flying sequences seen in the 1983 film The Right Stuff.

Setting up a shot for “The Right Stuff”, model shop supervisor Earle Murphy fits a practical rocket motor into the engine port of a miniature wire-mounted X-1.

For scenes in The Right Stuff where USAF test pilots push various jet and rocket craft to the limit, director Philip Kaufman turned to the newly formed USFX, led by Gary Gutierrez. The first footage produced by the effects team was rejected by Kaufman specifically because it had been shot using motion control and so lacked the visceral feel he was after.

With the production on temporary hold, Gutierrez started experimenting with different ways of creating the desired hand-held look. One of his offbeat test shots was achieved using a wheelchair to ride the camera past the miniature aircraft. Another saw the model jets attached to helium balloons.

The wackiest of all involved hurling the miniature planes out of a top-floor window.

In the end, a variety of tricks were used to put the magnificent flying machines of The Right Stuff on the screen. Most involved wires and mechanical rigs, enhanced with fan-blown smoke, and photographed using telephoto lenses so that everything looked shot from the hip.

Interviewed in Cinefex 14, in Adam Eisenberg’s article Low-Tech Effects, Gutierrez remarked:

“The amount of delight [Kaufman] got from the success of any shot was directly proportional to how funky the method was that accomplished it.”

ILM visual effects supervisor Bruce Nicholson directs head stage technician Joe Fulmer in his manoeuvring of the Krzanowski wire rig used to puppeteer the miniature flying saucers in “Batteries Not Included”.

Sometimes funky, wire rigs can also be finely crafted pieces of high-tolerance engineering. Take the beautiful flying rig created by model mechanical supervisor Tad Krzanowski to put the pint-sized flying saucers of Batteries Not Included through their paces.

Capable of operating both under computerised motion control and live on set, Krzanowski’s multi-wire rig proved so versatile that director Matthew Robbins was able to capture around 30% of his flying saucer scenes in-camera. And producer Steven Spielberg was impressed enough by the results to remark, “I can’t tell the wire work from the motion control work.”

There’s yet more impressive wire-work in the early films of James Cameron. The Hunter-Killers seen in the future war sequences of both The Terminator and Terminator 2: Judgment Day are strictly fly-by-wire machines. For the most part, so too is the Colonial Marines dropship from Aliens.

The crane and wire rig used for the Colonial Marines dropship in “Aliens” is just visible on the left of this wide-angle shot of the Acheron base miniature set.

As digital techniques have advanced, wires have blended more and more into the background. To paraphrase Ultron, digital characters got no strings. Nevertheless, wire-work remains an essential tool for effects artists … especially when it comes to flying a person around.

For many of the zero-g scenes in Gravity, Sandra Bullock was flown around using a custom harness created by special effects supervisor Neil Corbould and his team. Boasting no fewer than 12 wires under individual servomotor control, the sophisticated rig was capable of turning its astronaut payload on a dime.

Stuntwoman Juliette Cheveley stands in for Sandra Bullock to assist line-up of an ISS interior scene for “Gravity”, suspended from a custom wire harness created by Neil Corbould’s special effects team.

With Gravity, as with other modern movies, digital brings its own special benefit – you no longer have to worry about hiding those pesky wires with dazzle camouflage. The more prominent the cables, the easier it is for visual effects artists to paint them out.

However, the future of human flight – in the movies at least – may lie not in wires but robots. Specialist companies like Robomoco – whose latest work can be seen in the upcoming Pan and In the Heart of the Sea – now offer a range of precision robots capable of flying artists, stuntmen and props through an extraordinary range of movement.

Watch Robomoco’s robot “Leia” in action:

So are the days of flying by wire numbered?

Perhaps not. Mad Max: Fury Road built an entire publicity campaign based on its use of practical stunts and effects. Even the new Star Wars movie has jumped on the old-school bandwagon, with director J.J. Abrams asserting at Star Wars Celebration 2015 that, “Building as much as we could [for real] was a mandate.”

Who’s to say the grand old tradition of wire-work can’t be part of this resurgence?

Who’s to say movie heroes won’t once more find themselves flying by wire?

“The Right Stuff” photograph copyright © 1983 by The Ladd Company. “Batteries Not Included” photograph copyright © 1987 by Universal City Studios Inc. “Aliens” photograph copyright © 1986 by Twentieth Century Fox Film Corporation. “Gravity” photograph copyright © 2013 by Warner Bros. Entertainment.

Memory in the Movies

The memory bubble sequence from “Brainstorm”

Memories. According to the song, they light the corners of your mind. From time to time, they also light up the silver screen, in such classic memory movies as Total Recall, Eternal Sunshine of the Spotless Mind and Memento.

Many such films deal with the concept of memory manipulation, that well-worn science fiction trope in which heroes and villains alike try to record or otherwise meddle with human remembrances … often with disastrous consequences. Let’s just be grateful you can’t manipulate memories in real life.

Or can you?

Recent scientific research has started to unpick the mysteries of the human mind. Take the Lifenaut project, which is exploring the feasibility of storing and replicating human consciousness. Its researchers already claim the ability to scan your brain and record your memories – they’re even offering to beam the recordings into space for inquisitive aliens to decode.

Sounds fanciful? Not according to the website of MMT Neurotech, the company which supplies Lifenaut with those all-important brain scans:

“It is possible to record conscious thoughts as a stream of consciousness, and to store our recorded thoughts in a private manner for reconstruction and use in the future.”

There’s a catch, of course. It’s hidden in that phrase “for reconstruction and use in the future”. While the memory-scanning technology may be coming along nicely, the ability to play those memories back is still way beyond our reach. As Lifenaut points out:

“At the moment we do not have a “memory disc player” but MMT Neurotech expects that such a device will become a reality in the near future.”

If you’re impatient for that elusive “memory disc player” to appear on the market, you might be encouraged to learn that brain replication is in fact already possible. If you’re a worm.

OpenWorm is an open source project whose mission is to create an entirely digital worm. In a recent advance, researchers uploaded a simulated worm brain into, of all things, a Lego robot. Want to know what happened when they turned the worm on? Check out this video:

Of course, here at Cinefex, what we’re concerned with is movies and visual effects. Never mind whether or not memories can be manipulated. What does a memory actually look like?

Science can help us here too. Recently, researchers at Albert Einstein College of Medicine used fluorescent markers to tag memory-making beta-actin mRNA molecules inside the brain of a live mouse. The results are visible in this short video, which effectively shows memories actually being made:

Let’s face it, the science behind the mouse-brain video may be mind-blowing (almost literally), but as entertainment it leaves a lot to be desired. Luckily for us, the subject of memory manipulation has already been explored by some of cinema’s greatest visual thinkers.

Foremost among these is Christopher Nolan, whose thematically connected trio of memory films comprises Memento, Inception and Interstellar.

In Memento, the short-term memory loss suffered by Leonard Shelby (Guy Pearce) is the springboard for an on-screen narrative that plays out backwards. Light on visual effects, this early film of Nolan’s explores the workings of the human mind largely through editorial sleight of hand.

Ariadne experiments with gravity in one of the dream worlds of “Inception”. For this composite shot, Double Negative rotoscoped actors Ellen Page and Leonardo DiCaprio from the Paris background, then used photogrammetry to digitally re-create the environment.

For a more spectacular look at the workings of the human mind, we need to turn to Inception, which follows Dominic Cobb (Leonardo DiCaprio) as he plunges through nested layers of other people’s dreams … and into his own haunted past. Along the way, Nolan unleashes a series of startling visuals ranging from inverted gravity fields to folding cityscapes, all cunningly designed to demonstrate his characters’ power to manipulate the dreamscapes around them.

The Tesseract scenes in “Interstellar” were filmed on a stripped-down set with a plexiglass floor. Visual effects artists at Double Negative digitally replicated the Tesseract set up to 220,000 times to create the multi-dimensional environment seen at the climax of the film.

In Interstellar, Nolan goes a step further. When space pilot Cooper (Matthew McConaughey) enters a black hole near the climax of the epic space adventure, he finds himself inside a hyper-dimensional construction called a “tesseract”, within which a kaleidoscopic array of memories has been laid out for him to explore.

Inside the tesseract, memories exist as captured moments of time, each one contained within the four walls of the bedroom of Cooper’s daughter, Murph. Each memory is thus a kind of bubble, so it’s no surprise that Interstellar carries visual echoes of another, earlier film in which memory bubbles play a crucial part.

That film is Brainstorm, the granddaddy of all memory movies. Released in 1983, and directed by VFX pioneer Douglas Trumbull, Brainstorm chronicles the development of a memory-recording device by scientists Lillian Reynolds (Louise Fletcher) and Michael Brace (Christopher Walken).

Partway through the film, Reynolds suffers a fatal heart attack, during which she has the presence of mind to don one of the experimental headsets and record her own final moments. The resulting “death tape” becomes the plot device about which the rest of the film revolves.

When scientist Lillian Reynolds (Louise Fletcher) suffers a fatal heart attack, the experimental brain scanner to which she is attached records her experiences as she dies, as shown in this still from Douglas Trumbull’s 1983 film “Brainstorm”.

Rather than a series of rooms, Brainstorm imagines memory as an infinite array of bubbles, each containing a different scene from a person’s life. The dizzying sequences in which Trumbull’s camera plunges through three-dimensional gridworks of memory bubbles were achieved using a specialised horizontal animation stand called a Computerised Multiplane System – “Compsy”, for short.

Capable of moving multiple layers of artwork through up to twelve different axes, Compsy worked round the clock to create Brainstorm’s memory bubble scenes. Each individual bubble was built up from three separate pieces of film – one for the fisheye motion picture footage visible inside the transparent sphere, another for the reflection bubble shape, and a third for the matte, which would be used to mask the bubble from its neighbours.
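For readers who know digital compositing, Trumbull’s three-element sandwich is the photochemical ancestor of the standard matte (“over”) operation – given here as a simplified modern analogue, not the exact optical-printer arithmetic used on Brainstorm:

C = \alpha F + (1 - \alpha) B

Here F is the bubble element (the fisheye footage plus its reflection layer), B is the background of neighbouring bubbles, and \alpha is the matte that masks the bubble from everything behind it.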

Artist Carolyn Bates prepares the basic artwork used to generate distant arrays of memory bubbles for the death tape sequence in “Brainstorm”. A 4×5 still camera, mounted on a curved track, would advance on the backlit bubble transparencies, making incremental exposures on to a single piece of film.

One look at a still from the memory bubble sequence – which features thousands upon thousands of such multi-layered orbs – should prove that the task of putting human thoughts on the silver screen is enough to make any visual effects artist’s brain explode.

Few other films present the mechanics of memory as audaciously – and successfully – as Brainstorm. Yet, despite the difficulties of visualising consciousness, the human mind remains a potent playground for ambitious filmmakers, ensuring that this peculiarly cerebral branch of moviemaking will continue to, well, stick in the mind.

Whatever your taste, there are plenty of memory movies to choose from. There’s the mind-trafficking nightmare of Strange Days, penned by James Cameron and directed by Kathryn Bigelow, which posits not only a memory-recording device called a SQUID, but also the kind of playback apparatus those researchers at Lifenaut are working so hard to perfect.

Or maybe your tastes run to the Martian mysteries of the 1990 sci-fi hit Total Recall, in which Arnold Schwarzenegger plays a man who buys himself an artificial memory of a holiday on the red planet … and ends up fighting for his life?

Oh, and who could forget the innovative cranial screw-top surgery of Steve Martin, in his role as Dr. Michael Hfuhruhurr in The Man with Two Brains?

Which memory movie do you remember most fondly?

Arnold Schwarzenegger in a memorable performance as Douglas Quaid in “Total Recall” (1990)

“Brainstorm” photographs copyright © 1983 by MGM/UA Entertainment Company. “Inception” photograph copyright © 2010 by Warner Bros. Pictures. “Interstellar” photograph copyright © 2014 by Paramount Pictures. “Total Recall” photograph copyright © 1990 by Columbia/TriStar Pictures and via IMDb.

Inspiring Rodeo FX

What drives people to work in the visual effects industry? The glamour? The technology? All those ravening monsters and exploding spaceships? Or is it just another job? In an ongoing series of articles, we ask a wide range of VFX professionals the simple question: “Who or what inspired you to get into visual effects?”

Here are the responses from the staff at Rodeo FX.

Close Encounters of the Cinematic Kind

"Toy Story" posterEveryone remembers favourite films from their childhood. For Benoit Rimet, character TD, one movie in particular took his imagination to infinity, and beyond: “Like pretty much everyone from my generation, I was inspired by Toy Story.”

Inspiration also came at a young age to Thomas Montminy-Brodeur, digital compositor. “My passion for visual effects started when I watched The Santa Clause. I was at that age when kids think that everything in films is real. So, when I saw Santa’s little helpers flying with jet packs, I asked my parents for a jet pack for Christmas!”

“When I was a kid,” said Samuel Jacques, CG artist, “I was always pausing and rewinding the VHS player at home to show my parents the latest visual effect flaw I’d discovered in a movie. Watching a scene in Last Action Hero, I told them, ‘Look! You can see how the car is pushed in the air by this metal rod before the explosion!’ They always kept telling me I had an eye for ‘that kind of stuff’.”

"The Black Hole" posterWayne Brinton, VFX supervisor, was sucked into the business by the sight of a spinning singularity. “I remember my friend’s dad taking us to see The Black Hole in the theatre,” he recalled. “It scared the crap out of me. I remember being determined to figure out how they did the movie so that I wouldn’t be freaked out that an actual black hole would come and engulf our planet.”

Having experimented with filmmaking from a young age, Félix Vallières, digital compositor, got a big kick out of a classic Robert Zemeckis film from 1994. “I was watching some bonus features on a Forrest Gump DVD,” Vallières remembered. “That’s when I realised that everything was possible. The invisible effects they pulled off in that movie have always blown me away, from the beautiful feather shot, to adding Forrest Gump next to John Lennon and JFK.”

"Forrest Gump" features a number of groundbreaking invisible effects shots by ILM, including those in which the legs of actor Gary Sinise were digitally removed for scenes in which he plays double amputee Vietnam vet Lieutenant Dan.

“Forrest Gump” features a number of groundbreaking invisible effects shots by ILM, including those in which the legs of actor Gary Sinise were digitally removed for scenes in which he plays double amputee Vietnam vet Lieutenant Dan.

Martin Pelletier, VFX supervisor and studio manager, was drawn into the effects business by something a little more horrific. “Back in 1998, I got hooked on a documentary about the VFX work on Mimic,” stated Pelletier. “I was struck by the glint in these guys’ eyes as they showed the rigging and renders of a full-CG creature done in Softimage 3D on those old Silicon Graphics O2 workstations. I remember saying to myself, ‘I can’t believe that can be a job – it must be so cool!’”

As for Valérie Clément, production manager, there’s one film alone that shines brighter than any other in her childhood memory. “It all started when I was a child,” Clément reflected. “My favourite movie was the 1939 version of The Wizard of Oz. For me, it represented everything that was magical in movies.”

"Photoplay" magazine - September 1939

Timeless Classics

Still on the subject of favourite films, there are certain titles that crop up time and again. Ara Khanikian, VFX supervisor, recalled, “I grew up watching ‘awe’ movies like Close Encounters of the Third Kind, E.T.: The Extra-Terrestrial, and of course Star Wars. I was truly inspired by the storytelling and the visual quality of these timeless classics. Years later, when Terminator 2: Judgment Day and Jurassic Park came out, I was blown away and fell completely in love with the quality and realism of their visual effects. I kept asking myself, ‘How d’they do that?!’”

These same movies were also a source of inspiration for Cedric Tremblay, digital compositor. “Growing up, my father showed me Star Wars and Jurassic Park,” Tremblay remarked. “This is when I really started wondering how those mind-blowing effects were made. Then, those great trilogies of the early 2000s – The Matrix and The Lord of the Rings – helped me discover computer animation and the digital arts. At that point, I just couldn’t imagine myself doing anything else, and so I studied as hard as I could to get into this extraordinary world.”

Richard Edlund and the ILM crew prepare a motion control shot of the Millennium Falcon for “Star Wars”.

Yet another devotee of the classics is Frédéric Simard, CG artist, who commented, “I remember the first time I saw Star Wars as a kid. I was blown away. I believed it was all real. Then came Jurassic Park. After seeing that, I had all these questions boiling in my brain. How could they make these things look so real? What process was involved? I eventually got accepted into graphic design and bam, The Matrix came out. Then I knew for sure that’s what I wanted to do.”

It was the dinos that did it for Simon Mercier, matte painting TD, who reported, “One day, my granny took me to see Jurassic Park. At that time, people in Quebec kept talking about our ‘Quebecois’ talent that had contributed to that amazing movie. Right then and there, I stopped studying the dictionary to become a doctor, and tried to learn everything I could about that emerging and buzzing workstream. Today, I can assure you that it was worth all the effort!”

For Guillaume Poulin, VFX editor, it was less about individual films and more about a single filmmaker. “David Fincher is the reason I got into visual effects,” Poulin asserted. “Movies like The Social Network, The Curious Case of Benjamin Button and Panic Room use VFX as a tool to help the story move forward. Invisible effects are often the most impressive ones. When you can’t tell what was done in post-production, that’s when it’s interesting to me.”

For this scene in "The Curious Case of Benjamin Button", Matte World Digital extended the partial set with an elaborate 3D environment featuring architectural details from the 100-year-old Union Station in Washington D.C.

For this scene in “The Curious Case of Benjamin Button”, Matte World Digital extended the partial set with an elaborate 3D environment featuring architectural details from the 100-year-old Union Station in Washington D.C.

Early Adopters

For many among the current generation of VFX artists, early inspiration came not only from the movies, but also from hobbies, toys and videogames.

Cedric Tremblay was inspired by a classic children’s construction toy. “As a kid, I played a lot with LEGO blocks,” he revealed. “I was creating worlds and inventing stories in space, on earth, in the ocean.”

Ara Khanikian noted, “For a large part of my life growing up, I had a penchant for creativity. I put it to use by building scale models and radio-controlled cars, by drawing and creating classical animations, or by experimenting with stop-motion animation.”

Commenting on his early research into the specifics of visual effects, Khanikian added, “I got a lot of my answers by watching a TV show called Movie Magic … and by reading Cinefex!”

"Myst" by Cyan GamesFrédéric Simard remarked, “I didn’t really know I wanted to do CG until I played Myst – a pre-rendered adventure puzzle game where you can interact with the environment. I started searching for the software I could use to do that stuff. I got lucky enough to use 3D Studio in DOS, and was able to start modelling objects and edit small clips.”

Computer games also inspired Samuel Jacques: “When I saw the cinematics in Final Fantasy VII, I knew for sure what I was going to do later in life. I would record them on the VHS player so I could play them back again and again. At that time, they were a huge step forward in the visual quality of videogames, and the high quality of the images mesmerised me.”

Final Fantasy VII

Getting the Picture

By definition, VFX is a visual discipline, so it’s no surprise that the driving force for Wayne Brinton was “my love for making images on a computer.” Reminiscing about his first Commodore VIC-20 computer, Brinton elaborated, “I spent a weekend programming a ball to bounce and change colour, which I had to save to tape. Then I got my first IBM. Someone loaded a copy of 3D Studio release 3, and explained some of the process of making CGI for VFX. I was hooked!”

Wayne Brinton of Rodeo FX cut his digital teeth using a Commodore VIC-20 home computer. Photograph by Evan-Amos via Wikimedia Commons.

Nicolas Lemay, digital compositor, recalled his own early obsession with pictures: “I had a deep passion for photography, and I always wanted to push the limits of grading and compositing. For me, the capacity to create and modify images was astounding. You could create universes out of thin air. That is really what brought me into VFX.”

For Félix Vallières, digital compositor, it was always going to be about the moving image. “Everything started when I was about ten years old, and a friend of mine got a video camera,” he commented. “We started doing some analog effects, costumes, and stuff like that, just for fun, making really stupid movies. At that time, I didn’t even realise that digital visual effects were something you could do as a job!”

Course Changes

The author Douglas Adams once wrote, “I may not have gone where I intended to go, but I think I have ended up where I needed to be.”

Early career changes are not unusual for visual effects artists, including Ara Khanikian, who recalled, “When I was studying biochemistry, I came to the conclusion that I hated it, and could not fathom the idea of wearing a lab coat and working in an environment lit by neon lights. I decided to drop out of university and pursue my passion by enrolling in a school that taught 3D animation.”

Tara Conley, VFX producer, also followed a circuitous route into the business: “My VFX story started almost by accident. I graduated from British Columbia Institute of Technology in broadcast journalism in 2002. Over the course of my first year in the labour market, I worked in many different facets of the broadcast industry, from promotions at local radio stations, to writing material for a realtor’s website, to writing, producing and directing my own mystery shopping segment for the television show The Shopping Bags. I quickly realised that I really enjoyed the entertainment side of the industry rather than breaking news after all!”

Valérie Clément, production manager, began by running down a different kind of track altogether. “I started my career in sports, very far from the film/VFX industry,” she admitted. “Several years later, my urge to work on films became too strong, and the geeky part of me brought me to visual effects.”

For Thomas Hullin, CG supervisor, even the lure of the courtroom wasn’t enough to divert him from certain pyromaniac tendencies. “I was about to start law studies, but then I watched The Lord of the Rings. I thought, ‘OK, forget about everything, because there is actually a job where you can bring fantastic universes to life, create armies and blow stuff up!’ Now, a few years later, I am extremely fortunate because I get to work on amazing feature films … and blow stuff up!”

Stuff blows up in this scene from “The Lord of the Rings: The Two Towers”, for which physical effects supervisor Stephen Ingram and his NZFX team staged the blast on a large-scale miniature set of Helm’s Deep. Weta Digital added multiple layers of animated Uruk-hai warriors, siege ladders and pikesmen.

Pure Inspiration

Like Mary Poppins, inspiration will sometimes just blow in on its own irresistible wind, setting its own rules and whisking you away.

Julien Klein, digital compositor, described his own unique view of VFX inspiration: “I remember a documentary about Jean-Paul Gaultier’s workshop, and all the little hands that were contributing to his collections, and also the fine embroidery that only a few seamstresses could make. I wanted to be part of such an army of little elves.”

For Alexis Bélanger, digital compositor, the act of doing the work is inspiration enough: “There is a point while working on a shot when you think it’s never going to happen. But then, everything comes together, and the final result magically appears before your eyes. Whether it’s removing an ugly scar from a face, or making a planet explode, the feeling when you finish creating something that is completely different, and improved from what it was originally, is such a rush.”


Watch the Rodeo FX 2014 feature reel:

Established in 2006, Rodeo FX has offices in Montreal, Los Angeles and Quebec City. Recent credits include The Walk, Fantastic Four, Tomorrowland, Cinderella, Unbroken and Birdman, as well as extensive work on the hit HBO series Game of Thrones.

Special thanks to Anouk Deveault. “Forrest Gump” photographs copyright © 1994 by Paramount Pictures Corporation. “Star Wars” photograph copyright © 1977 by Lucasfilm, Ltd. “The Curious Case of Benjamin Button” photographs copyright © 2008 by Paramount Pictures and Warner Brothers Pictures. “The Lord of the Rings: The Two Towers” photograph copyright © 2002 by New Line Cinema.

Visual Effects in China

"Monster Hunt"

Whichever way you look at it, China is huge. Not only is it home to around 20% of the entire world’s population, but in sheer land area only Russia and Antarctica are bigger.

China is big in the movies, too. With a box office of nearly $5 billion in 2014, and its number of cinema screens growing exponentially (over 23,000 in 2014 compared to around 4,000 in 2008), it’s set to get even bigger.

All this growth means that Hollywood productions are desperate to make films that appeal to Chinese audiences – not to mention investors. Conversely, Chinese doors are gradually inching open to admit more Western films into the country’s cinemas, with the Chinese administration now permitting the distribution of 34 foreign films each year, compared to 20 just three years ago. This summer’s biggest hit, Jurassic World, took nearly 20% of its $524.4 million worldwide debut from tickets bought in China.

Meanwhile, following the success of Ang Lee’s Crouching Tiger, Hidden Dragon in 2000, homegrown Chinese films have begun to capture the imaginations of Western audiences. Recent hits include the VFX-heavy fantasy film Zhong Kui: Snow Girl and the Dark Crystal, John Woo’s The Crossing and the phenomenally successful Monster Hunt, which shortly after release became the biggest-grossing Chinese film of all time. The success of these and other movies proves the appeal of effects-driven movies in all corners of the globe.

Watch the trailer for Monster Hunt:

So, what’s it like working as a VFX artist in one of the world’s fastest-growing motion picture markets? How does the nature and quality of Chinese visual effects work compare to what’s produced in the West? How do budgets and deadlines differ?

Eager to learn more about the state of the art in Chinese visual effects, Cinefex put these questions and more to Wil Manning, VFX supervisor at the Beijing branch of international visual effects company Pixomondo.

How long have you been working in China?

I’ve been working for Pixomondo in China for five years. I spent the first half of that time doing commercial VFX supervision and art direction. The second half has been focused on feature film work. My experience is purely within the local market, although there’s the occasional co-production. I started supervising and producing on my first feature for Pixomondo in 2013, and by the end of this year I’ll have eight feature supervision credits under my belt, plus a couple of other miscellaneous credits.

What drew you to China?

I came here because I thought China was at the start of something pretty amazing. China was growing, changing. If you want to make an impact – to be a part of something and help forge it – then China is the kind of place you go. Besides, I guess I felt like doing something really different, having an adventure. And China has been, if nothing else, a very interesting adventure!

How does Pixomondo’s Beijing branch integrate with the Chinese film industry as a whole?

Our Beijing branch has always been interested primarily in the local industry – we work on a lot of Chinese features – but we’ve also done plenty of work in commercials and international features. What we don’t tend to do is typical outsourcing work. In fact, on most features we’re sending out wire removals and extractions to other vendors, and instead trying to focus on building up our creative chops. We’re super-busy, and it’s amazingly exciting to be here working on these shows.

Do you need to be based in China in order to work on Chinese films?

The broader Chinese VFX industry is very much in a constant state of flux, as a result of government mandates and market forces, both of which continue to change rapidly. If you’re not on the ground here, then I’d argue that you’re not able to keep up with those changes. That makes it very hard to engage with the local market.

One of our focuses has been on building and keeping a strong and stable team. We need people who understand directors and have experience here. The majority of our team are mainland Chinese in nationality and they’re amazing artists. They engage with Chinese as well as Western cinema, and we try to run a facility that they want to belong to.

How do VFX for Chinese films differ from what’s produced in the West? How do the budgets differ, for example?

Budgets have a huge impact on what’s possible in VFX. Currently, the budgets in China are not high.

Consider the Harry Potter, Transformers, or Marvel features – any of your average tentpoles. These can break $800m worldwide, which means they can afford very large budgets. Bigger budgets make bigger VFX possible, as well as more carefully and precisely crafted VFX.

Compare this to local Chinese mainland films, which have yet to break $250m – in fact, many films that are considered big releases don’t break $100m. But the Chinese market is growing – at an astounding rate of around 35% a year. It’s already the second largest in the world, and expectations are that it will surpass the flat North American market (US and Canada) within three years.

"The Monkey King" posterHow heavily do these lower budgets impact on the quality of the visual effects?

If you watch a lot of Chinese films, you’ll see a heap of bad VFX. Bad edges. Poor extractions. Mistimed plates. Missing grain. Painful animation. Mediocre lighting. It makes trained eyes bleed and gives me nightmares. Part of what we’re doing here is trying to put a stop to this, to make quality control part of the process.

For example, I recently watched The Monkey King with a supervisor friend in L.A., and they were amazed at the lack of finish on the show. Then I mentioned that, as far as I know, the 1,800 or so VFX shots were all completed within three months. My friend was still amazed, not about how these shots passed quality control, but about how the hell anyone could do that much work in such a short time. And why they would agree to it to begin with!

Give us an idea of what that kind of workload means in real-world terms.

If you sat with the director to start reviewing those shots in the last month of production – after two months of work, in other words – you’d need to get through 100 shots a day just for one round of directorial review. Where do revisions even fit into that kind of schedule?

Watch the international trailer for The Monkey King:

Is the situation likely to improve as the Chinese market grows?

Yes. I think that as budgets increase – and they most certainly will – these films will become more and more competitive aesthetically with their Western counterparts. And, while there might be a shortage of time and money on these films in the meantime, studios are not letting that stop them. There is no shortage of ambition in China.

How are you preparing for the changes to come?

One of the things we’re doing is to actively seek out those directors and producers who are concerned about quality in visual effects, and engage them in discussion. We want not only to provide a reliable service in a market which is constantly changing, but also to influence the direction of the industry by being involved at a local level.

We’re already seeing a lot more smaller features coming to us because they have invisible effects, or they need amazing design, and they want to be able to trust that it’s going to be done right. They want someone they can depend on because, in the Chinese industry, that’s a very difficult thing to find.

The bigger fantasy films will come in time, but there’s a lot more money at risk with high budgets at the moment because of the cap in earnings. Also, when it comes to realism, fantasy is a lot more forgiving.

"Zhong Kui: Snow Girl and the Dark Crystal" posterHow do you go about bidding for work?

Fixed bids are very common in China. I personally don’t mind them, and have had a lot of success on smaller shows working with directors to manage the money. Directors have almost full control on sets here – producers are rarely around, and usually don’t interfere.

There are a lot of facilities here that will bid a show at a flat fee off a script. They get awarded the show, and then just work until there’s no money and no time left. The result is the director is trapped into a compromise situation – they literally run out of time for revisions, and the film is not ready, but they have to take it to market. It’s underbidding in a way that beggars belief, and it hurts the industry here, as it surely does elsewhere too.

What sort of working relationships do you typically have with Chinese directors?

We spend a lot of time personally in contact with directors. We use WeChat to communicate, we’re emailing each other directly, we go out to dinner together. It’s rare for a facility supervisor to not be directly involved with the director, and it’s equally rare to be dealing with producers and the production studio, except where money is concerned.

Do you generally work through a production VFX supervisor?

No – there is rarely a production or studio-side VFX supervisor. This is changing, but for many films this role is one we naturally step into. In a way, we represent the directors, and we ensure they get what they need. This includes making sure quality standards are adhered to.

This is a difficult thing to understand from a Western point of view, but when you have production companies that don’t need to ensure VFX quality to sell their films, paired with directors who obviously do care, it’s important to be there and to be a responsible player.

How about deadlines? Are they always as tight as you’ve described?

The Chinese industry is highly driven by getting to market. This means we work to some of the most amazingly short deadlines you’ve ever seen. Right now, with all the growth, the situation is not really improving. Investors are often private, or groups of studios, and they want their returns within the timeframes specified. Release windows are highly contested, and will make you rethink everything you thought you knew about working fast.

Do you work a lot with South Korean companies?

Yes. If you’re doing VFX in mainland China, then you’re working with South Korean VFX companies a lot of the time. Their industry is highly evolved and full of skilled, talented, hard-working artists. They have their own issues too, but the Chinese and South Korean film industries share a lot of ground.

Most of the visual effects for "Zhong Kui: Snow Girl and the Dark Crystal" were created by Korean VFX company Macrograph, with the exception of this underwater action sequence, delivered by Pixomondo.

Most of the visual effects for “Zhong Kui: Snow Girl and the Dark Crystal” were created by South Korean VFX company Macrograph, with the exception of this underwater action sequence, delivered by Pixomondo.

What’s the one quality you need to survive in the Chinese VFX industry?

An ability to think outside the box.

What we at Pixomondo do exceptionally well is to solve people’s problems. If our clients don’t have enough budget for their A-plan, we try to work out a B-plan, or propose an alternative methodology. When our clients come to us with problems in post-production supervision, we go around to visit the vendors and help them get systems working. We sit with our clients and point out what is not good enough and why it should be better.

It’s all a little strange at the beginning, but if you have a passion for making films and just want to get shit done, then it all comes naturally. You just do it.

The VFX-heavy fantasy "Monster Hunt" is now the highest grossing Chinese film of all time.

The VFX-heavy fantasy “Monster Hunt” is now the highest grossing Chinese film of all time.

So, what does the future hold for visual effects in China?

Something that I’ve been thinking about a lot lately is audience expectation in China. My hope is that, over the next few years, the audiences here will become more discerning and more demanding of filmmakers. I feel like this is starting. You hear comparisons of Game of Thrones to Chinese features, and audiences are starting to ask questions.

Frank conversations with producers and investors here will tell you that the quality of a film’s VFX isn’t something they rate highly. That’s because, from a business point of view, it just doesn’t matter much yet. I believe that will change. As it does, Chinese cinema will come into a renaissance.

As the market expands and these changes take hold, will Chinese films become more Westernised?

Not really. That’s not how China works.

There’s a tendency in the West to think that China is trying to expand beyond itself. If my time here has taught me one thing it’s that this country is very internally orientated. That’s not to say it isn’t aware of the outside world, but Chinese people will always want to have Chinese stories as well – and they have a market to support it. So they’ll keep making the films they like.

Will Chinese cinema be influenced by the West? Sure! Absolutely! But it will still become its own thing, similar to how Japanese film is its own thing. The outside influences are still present, but the net result will be something unique and wonderful in its own right.


“Zhong Kui” – VFX Q&A

"Zhong Kui: Snow Girl and the Dark Crystal" - Cinefex VFX Q&A with Wil Manning of Pixomondo

One of the biggest Chinese films in recent years is Zhong Kui: Snow Girl and the Dark Crystal, an epic fantasy that reimagines China’s legendary folk hero, Zhong Kui, as a shapeshifting superhero with a Hulk-like inner self.

Co-directed by Peter Pau, Academy Award-winning cinematographer of Crouching Tiger, Hidden Dragon, and Zhao Tianyu, the film chronicles Zhong Kui’s (Chen Kun) journey of self-discovery as he battles to prevent the demons of hell from invading the world of mortals, while attempting to romance the porcelain-skinned Snow Girl (Li Bingbing).

Most of the 1,200-plus visual effects shots in Zhong Kui: Snow Girl and the Dark Crystal were created by Macrograph, a Korean VFX company, with director Pau fulfilling the role of overall production VFX supervisor. However, for a key action sequence near the movie’s climax, the filmmakers turned to Pixomondo’s Beijing facility.

In this exclusive Q&A session, Cinefex spoke to Pixomondo’s VFX supervisor, Wil Manning, who described the challenges involved with delivering a full-throttle, full-CG sequence to a tight budget and in record-breaking time.

Watch this detailed breakdown video of Pixomondo’s visual effects work for Zhong Kui: Snow Girl and the Dark Crystal:

How did you get involved with Zhong Kui?

One day, in August 2014, I was pulled into the Beijing Pixomondo conference room by our management team. They asked if I would drop what I was doing and jump on to this sequence for a Chinese fantasy film called Zhong Kui. I had no idea what they were talking about.

John Dietz, our former head of production, told me, “Zhong Kui is this gigantic possessed demon hunter, and he fights the even more gigantic Demon King, who has a flaming head and snakes sticking out of his back. They fight through a village, destroying it as they go, then Zhong Kui lures the Demon King into a lake, where they fight through ancient sunken ruins until the Demon King causes a volcanic eruption that levels the underwater environment. Zhong Kui tricks the Demon King, cuts off his hands with a sword he pulls from his spine, and throws him back into the village. The sequence is three minutes long, stereo, full CG. You’ve got four months to deliver – and that time started two weeks ago!”

How did you react?

I blinked a few times, then asked politely if this was some kind of strange hazing ritual to the rookie VFX supervisor. As it turned out, the brief pretty much described exactly what I ended up working on for the next 14 weeks.

"Zhong Kui" visual effects by Pixomondo

How did you set about tackling the enormous workload?

Our Beijing facility was very busy delivering Jiang Wen’s Gone with the Bullets at the time, so we had to reach out to what we affectionately term the “Pixoverse” for help.

My colleagues around the world were very patient in listening to my pleas for assistance, and only a few of them questioned the state of my sanity, given my requests. To be fair, this was expected – I think it’s really hard for people with a predominantly Hollywood background to understand this kind of Chinese brief. Fortunately, we’re a company full of brave and intrepid individuals, so help was forthcoming in the form of three teams spread around the globe.

How did you divide up the work?

We had a team in L.A. that handled previs, animation, look-dev for the characters, and the finishing of the village shots. Our team in Stuttgart, Germany, did all the heavy lifting for the underwater shots. And we had a small home team in Beijing that handled storyboards, design tasks and much of the TD setup work.

In total, we were 35 people spread over three time-zones. Key awesome people involved were Thilo Ewers (division VFX supervisor) and Sebastian Meszmann (VFX producer) in Stuttgart, Tim Jones (division VFX supervisor) and Julia Neighly (VFX producer) in Los Angeles, and Charlie Winter (CG supervisor) and Cinzia Wang in Beijing. I wore two hats as overall VFX supervisor and producer, which may or may not be responsible for the permanent facial twitch I carry to this day.

I think Thilo Ewers and Sebastian Meszmann are as responsible for the success of the underwater sequence as anyone, and I’m very grateful to them for their leadership and professionalism. I tried very hard to give them as much creative control over their own sequences as possible, and it was great that this was rewarded with strong results.

How closely did you liaise with the rest of the production?

On my second day on the show, I went to meet Peter Pau, who was director, producer, production VFX supervisor and cinematographer on the film. Peter is an accomplished and amazing veteran – a man who can wear as many hats as he likes without missing a beat.

Peter explained to us that the rest of the movie’s visual effects – over 1,200 shots – were being handled by Macrograph, and that our little sequence would slip in somewhere near the end of the film. Macrograph would supply us with the creature models, while Peter would supply us with lots of reference footage of the village. There were some storyboards, he told me, but we were free to throw them out.

The original character designs came from Weta. Macrograph created the character assets as Maya rigs with V-Ray for Maya shading and Yeti grooms. Everything else we developed for ourselves.

What was it like working with Peter Pau?

One of my favourite things about working in China is the direct access we tend to have to directors and key decision-makers. This show was no exception. Peter was always available to give feedback, and had an amazing ability to respond to emails with a comprehensive answer within twenty minutes of them being sent. He would do this day or night, from anywhere around the globe. I tried to keep up, but was shamed by my need for regular sleep. This kind of access to Peter, along with his rapid feedback, was one of the reasons we were able to complete the show in such a short time.

"Zhong Kui" visual effects by Pixomondo

How did you plan the sequence?

The previs took five weeks. The animators built the sets as they went; those sets were exported and refined as the work progressed, and eventually updated with pieces of our kit-bashed environments. We sort of threw it all together at once – there wasn’t much time for anything else.

Did you go to L.A. to brief the animators?

Yes. I tied everyone to their chairs, and then forced them to watch my favourite kung fu action sequences. I talked about camera moves, quick takes and punch-ins, exposition of fighting and environment – a sort of crash course in Chinese “wuxia” cinema.

Which films did you show them, specifically?

One of the films we watched a few times was Crouching Tiger, Hidden Dragon. Peter won an Academy Award for Best Cinematography for that. There’s an action sequence early in the film where the two female leads battle it out over rooftops during the night. That was great, because it was similar to what we were going to do … just with fewer giant demons bursting through the walls and no volcanic eruptions. Still, I think any excuse to watch Ziyi Zhang fight Michelle Yeoh is entirely valid, and frankly action doesn’t get much better.

What lessons did you draw from that?

The shot count in that rooftop scene is really low. Instead of quick cuts, you’ve got long, following shots with a lot of great expository camera movement. One of my missions was to reduce our shot count, and watching Crouching Tiger, Hidden Dragon gave me a lot of ideas of how to do that. Eventually we got down to 34 shots.

"Zhong Kui" visual effects by Pixomondo

Apart from choreography, what else did the previs help you with?

We made a conscious effort during previs to simplify the FX problems. When you have an average shot length of six seconds, and only five weeks for a massive amount of FX tasks, there’s a certain amount of danger involved: longer sims, added complexity, more difficulty with art direction and so on. I suffer from sentimentality and find the tears of FX artists unsettling, so I wanted to reduce the pain.

Can you give us an example?

We have shots where the camera is pulling back ahead of Zhong Kui while the Demon King, in hot pursuit, trips and crashes through a house, destroying it. But you never really see that house being completely destroyed, or see into the interior, because we had the camera whip around to follow Zhong Kui. But the impact is still there and the story is still told. When you don’t have a lot of time, you have to get creative about how you serve the story. In a nutshell, that was the whole challenge of this show.

"Zhong Kui" visual effects by Pixomondo

The characters have a distinctive look, with inner fire glowing through cracks in their skins. How was that developed?

The fiery glow was pretty much all done in comp, using ID and UV passes to augment CG passes of procedurally generated lava textures. I think it was Falk Hofmann, our comp lead in Stuttgart, who worked up the look-dev of the characters underwater. We had a lot of problems with the textures we received from Macrograph when we tried to use them in our scenes, but Falk did a fantastic job of turning a rough output into a wonderful polished look.

Tim Jones, our division VFX supervisor in L.A., was responsible for Zhong Kui’s shaders and he also did a great job – so much so that Peter Pau asked us to pass our adjusted shaders back to Macrograph to be incorporated into other shots.

Beyond that, we didn’t have a lot of control over the characters. We were trying to both match and enhance Macrograph’s look, without making it so different the audience would notice. We only had a few stills of theirs to work with – I don’t think I ever saw a complete shot of theirs until I watched the final film. So we were working a little blind.
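
For readers curious about the mechanics, here is a minimal sketch of an ID/UV-pass glow of the kind Manning describes – all the inputs below are synthetic stand-ins for real render passes, and the lava pattern is invented purely for illustration:

```python
import numpy as np

# Synthetic stand-ins for render passes (tiny 4x4 "images").
H, W = 4, 4
beauty = np.zeros((H, W, 3))                          # the lit character render
uv_pass = np.random.default_rng(1).random((H, W, 2))  # surface UVs per pixel
id_pass = np.array([[1, 1, 0, 0]] * H, dtype=float)   # 1 = skin-crack region

def lava(u, v):
    """Cheap procedural lava colour: hot orange modulated by a pattern."""
    heat = 0.5 + 0.5 * np.sin(12 * u) * np.cos(9 * v)
    return np.stack([heat, heat * 0.35, heat * 0.05], axis=-1)

# Sample the lava texture through the UV pass, confine it to the cracks
# with the ID pass, then add the glow over the beauty render.
glow = lava(uv_pass[..., 0], uv_pass[..., 1]) * id_pass[..., None]
comp = beauty + glow
print(comp[..., 0].round(2))                          # red channel of the result
```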

"Zhong Kui" visual effects by Pixomondo

How did you approach the character animation?

Animation was extremely challenging. We were racing to get things finished so that all the character interaction FX could be done. I actually feel that in the end we didn’t quite get to where I wanted. Some shots are beautiful, but in others the weight of these giant warriors just doesn’t come across. The previs went well, but it probably took too long, and the refining of the animation just needed more time. But the circumstances were such that we didn’t have a lot of options.

For example, it would have been great to have done animation dev on things like the snakes, but we only got rigged snakes seven weeks out from delivery, and four weeks out they were re-supplied at half the length. Peter was great about this, and understanding of what we were trying to do in the time frame – for which I’m grateful – but it does make it hard to do work when you have no time to explore.

Did the deep-sea setting make it difficult to pace the action?

I would have loved to play more with the whole underwater feel, and to have integrated it better into the animation and storytelling. Sometimes we nailed the floaty feeling, other times the size and strength of the characters against the buoyancy of the water just doesn’t communicate well. If we’d had another month, I think we would have been able to make it more special.

Tell us more about the underwater environment. Did you build it all in 3D?

Yes, the underwater environment was fully 3D. One of the real challenges the Stuttgart team faced was staying flexible while being in such a rush. 3D made it easier – we could lower a few bricks here for better contact with animation, or add bits and pieces there for better shot composition, all on the fly as cameras changed.

"Zhong Kui" visual effects by Pixomondo

How about the lighting? Did the underwater setting complicate things?

Above water, our lighting was heavily simplified. Underwater, the lighting interacts a lot more with the environments. While the dust passes are faked, the non-volumetric passes are lit from the characters in reasonably complete scene files.

Compositing was pretty traditional from a technical point of view, but I think the art direction and look development from Thilo Ewers, our division VFX supervisor in Stuttgart, makes the underwater sequence very visually striking. The colour and tone are immersive, and they draw your eye to the key action, even when the scenes get murky. We went through several iterations, adjusting the amount of depth, and shots were individually tuned from templated look-dev.

You talked about simplifying the demands on the FX department. But there’s still a lot going on in those shots – fire, underwater debris and so on.

I’m really proud of what the Stuttgart FX team accomplished within the time. From final animation to the time the shots were delivered, they had about 4-5 weeks – and the last animation publishes came to them with less than two weeks to spare. Patrick Schuler, our division FX lead in Stuttgart, and his team did an amazing job.

What specific effects did they add to the shots?

For every shot in the two-minute-long underwater sequence, there’s a pass of bubbles for each character’s lava skin, bubbles from nose and mouth, interactive plankton in the water, a dust/suspension FX pass and, if there’s contact with any surface, another dust pass driven via contact.

On top of that, there’s fracturing destruction with added dust trails, fine particle passes, and large particle passes. Debris that hits the ground sinks into the ground mesh and emits further dust. There are also brick rigid bodies, brick fracturing and ground fracturing effects. Oh, and there’s blood and a complex series of water entry passes for the first water shot.

"Zhong Kui" visual effects by Pixomondo

Was it hard to get all those different simulations looking as if they were happening within the same body of water?

Well, one interesting problem was trying to consolidate the fields and underwater turbulence across multiple solvers. We accomplished this by being careful about the order in which we used the solvers, and by having emissions driven by the previous solve. For example, fracturing came first, then dust, then plankton. When one sim drives another, you get a stronger feeling that they are suspended in the same water body.
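
As a rough illustration of that chaining idea – the solver stand-ins below are entirely made up, not Pixomondo’s actual setup – each pass can take its emission from the output of the one before it:

```python
import numpy as np

GRID = (32, 32, 32)  # toy simulation grid

def solve_fracture(frame):
    """Stand-in for the rigid-body solve: returns a disturbance field."""
    rng = np.random.default_rng(frame)
    return rng.random(GRID) * 0.5        # pretend debris stirs up the water

def solve_dust(disturbance):
    """Dust emission is driven by the fracture solve's disturbance."""
    return np.where(disturbance > 0.4, disturbance, 0.0)

def solve_plankton(dust):
    """Plankton response keyed off the dust solve, one step downstream."""
    return np.clip(dust * 0.2, 0.0, 1.0)

for frame in range(1, 5):
    disturbance = solve_fracture(frame)  # 1. fracturing comes first
    dust = solve_dust(disturbance)       # 2. dust driven by the fracture
    plankton = solve_plankton(dust)      # 3. plankton driven by the dust
    print(frame, f"{dust.mean():.3f}", f"{plankton.mean():.3f}")
```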

Apart from this, the FX approach was reasonably traditional: low-res collision meshes were cached out, along with the higher resolution ones, from Maya. We then used Houdini to handle things like plankton, a lot of the fracturing and general destruction. 3DSMax, with Fume and thinkingParticles, was used extensively for the dust, blood, and other destruction effects. Where possible, Houdini meshes were brought back into Max for lighting, but some things – like the plankton – were rendered directly in Mantra. Cloth was done with nCloth, while hair was handled by Ornatrix attached to a proxy head in Max. We were happy that Zhong Kui has a very manly stiff beard – it made the sims a lot easier!

Bingbing Li stars as Snow Girl in “Zhong Kui: Snow Girl and the Dark Crystal”

How do you feel about the project, looking back?

Ah, my feelings are very complicated. It’s a Chinese fantasy film, and probably none of my family will ever see it. Even if they did, they would compare it with other VFX-heavy superhero tentpoles, and from that point of view it’s hard to see the strengths of the film. And it would be incorrect to say what we achieved is as good as the work we did on Star Trek Into Darkness, or what we’ll end up doing on the Fantastic Four reboot.

But, for 14 weeks of work, designing and building a full CG sequence that on paper was incredibly challenging – and for a Chinese market budget to boot – well, it’s pretty cool seeing it come together, having happy clients, paid artists and some genuinely cool shots for the reel.

On that note, I’m not sure how anyone ever really copes with projects that require justification in order for people to appreciate them fully. No supervisor wants to say “it was good considering the restrictions”, despite that often being the case.

But that’s a challenge that you continually face, working in China right now. We don’t have the budgets. We don’t have the same amount of time. But the ambition is certainly there and, given time, we’ll catch up.


Zhong Kui: Snow Girl and the Dark Crystal is released in the US on Blu-ray and DVD today, 4 August 2015.

Don’t miss next week’s accompanying article, where we’ll be talking to Wil Manning about the state of the art in Chinese visual effects, and discussing the challenges and benefits of working in one of the fastest-growing motion picture markets in the world.

Special thanks to Joni Jacobson, Christian Hermann and Sirena Ung.

“Pixels” – VFX Q&A

Pixels - Cinefex VFX Q&A

It’s game over for humanity. Well, it is by the end of Patrick Jean’s 2010 short film Pixels, in which a swarm of 8-bit videogame characters escapes from a trashed TV and proceeds to turn first New York City, then the entire planet, into multi-coloured cubes.

Now, Jean’s original concept has been expanded into a full-length feature directed by Chris Columbus. Sharing its title with its progenitor, Pixels stars Adam Sandler and Michelle Monaghan … not to mention a host of re-spawned retrogamer favourites like PAC-MAN, Donkey Kong and Q*bert.

Ironically, in order to create the desired old-school look for the movie’s digital characters, the filmmakers needed to deploy state-of-the-art visual effects. Heading the effects team were production VFX supervisor Matthew Butler and VFX producer Denise Davis. The majority of the effects shots were assigned to Digital Domain and Sony Pictures Imageworks, with nine other VFX companies playing supporting roles.

One of the biggest challenges faced by this extended visual effects team was how to level-up the 1980s game characters to become fully three-dimensional entities. The solution involved discarding traditional flat pixels, and instead constructing the characters using 3D cubes known as “volume pixels” – or “voxels” for short.

So how exactly were the voxelised characters of Pixels brought to life? To find out, we spoke to key artists at Digital Domain, Sony Pictures Imageworks, and a number of the other vendors who joined forces to craft the pixels of Pixels.

"Pixels" including visual effects by Trixter

Sony Pictures Imageworks – VFX supervisor, Daniel Kramer

How did Sony Pictures Imageworks get involved with Pixels?

Lori Furie from Sony Pictures invited us to bid on the work. I met with Matthew Butler and Denise Davis to talk about the challenges, and Matthew and I hit it off pretty quickly – we had similar ideas about how to approach the look of the characters. I was on the show for about a year, which included some on-set supervision for Imageworks’ portion of the work. In December 2014/January 2015 we started getting our first plate turnovers, so actual shot production lasted about five to six months.

What was the scope of your work?

We delivered about 246 shots in all, though some of those didn’t make it into the final cut. The bulk of our work was during the chaotic sequences towards the end of the film, where the aliens unleash all the videogame characters on to the streets of Washington D.C. We had to develop a large number of characters for that – 27 in all – as well as the alien mothership.

We also handled the shots in Guam, when the Galaga characters first arrive, as well as the digital White House and White House lawn extensions. And we were responsible for all the Q*bert shots, some of which we shared with Digital Domain.

For "Pixels", Sony Pictures Imageworks digitally re-created a number of hard-to-access Washington D.C. locations, including the White House and its surroundings.

For “Pixels”, Sony Pictures Imageworks digitally re-created a number of hard-to-access Washington D.C. locations, including the White House and its surroundings.

Describe your relationship with the director and production-side VFX team.

I worked very closely with Matthew, both on-set and during shot production, meeting several days a week. Fortunately, he was close by at Digital Domain, which is only about a 15-minute drive from Imageworks. We generally reviewed work in person, with only the occasional cineSync session.

Chris Columbus worked from his offices in San Francisco, and had daily reviews with Matthew and team over a high-speed connection to Digital Domain. It was a very slick system – the VFX production team in Playa del Rey could stream full 2k content to Chris’s projector, and Chris could stream Avid media back. When we had shots to review, I would head to Digital Domain with Christian Hejnal, our Imageworks VFX producer, and review our shots directly with Chris and Matthew.

Matthew and Denise were really great about including Imageworks as a peer in the process, so I was able to present work directly to Chris and hear his notes first-hand. That really tightened up the feedback loop.

Did you take visual cues from the original 2010 short film by Patrick Jean?

We studied Patrick’s short quite closely for inspiration. His short is really charming, and a lot of that charm comes from the very simple shapes and silhouettes of his characters. We quickly learned that over-detailing the characters destroyed what made the original game concepts so engaging, and so we always worked toward keeping the characters as low-res as possible, with just enough voxel resolution to read the animation clearly.

For each game, John Haley, our digital effects supervisor, was generally able to find original sprite sheets and YouTube videos of gameplay for the team to reference. We’d use the sprite sheets for modelling inspiration, and then Steve Nichols, our animation supervisor, would study the gameplay, working as many elements as possible into our characters’ motion.

Watch Patrick Jean’s original short film Pixels:

What challenges did you face when translating the 2D game characters into 3D?

The 3D “voxel” look was already established in Patrick Jean’s short, but there are many ways to go about voxelising a character, and to determine how those voxels track to the animation.

For example, should we model the characters with voxels directly, or build them procedurally? Should voxels be bound to the character like skin, or should characters move through an invisible voxel field, only revealing the voxels they intersect? This latter solution – “re-voxelisation” – is akin to rasterising a 2D game character on a CRT: as the sprite moves through screen space, the static pixels on the screen fire on and off.
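
To picture the static-field idea, here is a minimal sketch – with a simple animated sphere standing in for the character, an assumption made purely for illustration – of a fixed voxel grid whose cells fire only where the character currently intersects them:

```python
import numpy as np

N = 24                                    # grid resolution (assumed)
axis = (np.arange(N) + 0.5) / N           # voxel centres in [0, 1]
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")

def revealed_voxels(centre, radius=0.2):
    """True where the character (a sphere here) intersects the static grid."""
    d2 = (X - centre[0])**2 + (Y - centre[1])**2 + (Z - centre[2])**2
    return d2 < radius**2

# As the character drifts in x, static cells flicker on and off --
# the 3D analogue of a sprite lighting up fixed pixels on a CRT.
for frame in range(4):
    on = revealed_voxels((0.3 + 0.05 * frame, 0.5, 0.5))
    print(f"frame {frame}: {on.sum()} voxels fire")
```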

Which solution did you favour?

Chris and Matthew liked the notion that the characters would re-voxelise as they moved – it felt more digital. But our first attempts at a pure, static voxel field posed a few problems.

First, it proved impossible to control the orientation of a voxel relative to the character’s orientation, as the two were independent. On one frame, the voxel faces might be perpendicular to features on the character’s body; but after the character turns, those same voxels might have turned relative to the same feature. This made it difficult to keep the characters on-model.

Another issue was that even very small motions caused the whole character to re-voxelise as it intersected different parts of the static field, which was distracting.

The last big issue revealed itself in lighting. If the voxels were static, and simply turned on and off as the character moved, they never changed their relationship to the set lighting. This made it difficult to shape our characters and make them feel believably integrated. So, while we really liked the idea of a static field, in practice there were too many issues.

Since the static field option wasn’t working out, what did you opt for instead?

We ended up using a hybrid approach, parenting smaller voxel fields to different parts of a character’s body. So, one field might be tracked to the face, another to the chest, another to the upper arms, and so on. These fields moved independently with the rotations and translations of the skeleton. Any deformation – like squash and stretch – would cause re-voxelisation in that region. This calmed down the re-voxelisation to a pleasing level, gave us more control over how voxels were orientated to the characters’ features, and fixed our lighting issues by allowing voxels to rotate through space.
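
A toy version of that hybrid scheme – using a hypothetical two-joint rig rather than a full character – might parent a small voxel field to each joint like this:

```python
import numpy as np

def make_local_field(n=8, size=0.5):
    """Voxel centres laid out in a body part's local space."""
    a = (np.arange(n) + 0.5) / n * size - size / 2
    return np.stack(np.meshgrid(a, a, a, indexing="ij"), -1).reshape(-1, 3)

def joint_transform(angle, offset):
    """Rigid transform for one joint: rotation about z, then translation."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R, np.asarray(offset, dtype=float)

parts = {"chest": joint_transform(0.0, (0.0, 0.0, 0.0)),
         "upper_arm": joint_transform(0.6, (0.5, 0.0, 0.0))}

local = make_local_field()
for name, (R, t) in parts.items():
    world = local @ R.T + t      # voxels ride the joint rigidly, so pure
    print(name, world.mean(axis=0).round(3))  # rotation never re-voxelises
```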

With that decided, how did you then go about building and rigging the character models?

For most characters, we would build a smooth-skinned model with a simple rig. Our FX department, headed up by Charles-Felix Chabert, would build a procedural Houdini network to break up the character into sub-voxel fields.

Even though the characters looked quite simple, they were actually really heavy, with a solid volume of cubes, each cube with bevelled edges. The polygons added up fast! For large scenes with hundreds of characters, we quickly learned that the voxelisation process could take days to complete. Much of our further development was therefore about optimising the workflow. Our final pipeline ended up passing a single point per cube to the renderer, and instancing the cubes at render time.
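
The point-per-cube optimisation can be sketched as below – the file format and attribute names are invented for illustration, since each renderer spells its instancing attributes differently:

```python
def export_voxel_points(voxels, path="character_voxels.txt"):
    """Write one point per cube: position, uniform scale, orientation.

    voxels: iterable of (position, scale, quaternion) tuples.
    """
    with open(path, "w") as f:
        for p, s, q in voxels:
            f.write(f"P {p[0]} {p[1]} {p[2]} S {s} "
                    f"Q {q[0]} {q[1]} {q[2]} {q[3]}\n")

# At render time a single bevelled-cube asset is instanced onto every
# point, so scene size grows with point count, not polygon count.
export_voxel_points([((0.0, 0.0, 0.0), 0.1, (0.0, 0.0, 0.0, 1.0)),
                     ((0.1, 0.0, 0.0), 0.1, (0.0, 0.0, 0.0, 1.0))])
```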

What approach did you take with the lighting?

Chris Columbus didn’t want the characters to feel plastic. That said, there’s a lot of charm in keeping the characters simplistic and blocky, as seen in the original Pixels short. Chris, Matthew, and Peter Wenham, the production designer, came up with the idea of “light energy”, whereby the cubes emit light from an internal source. This allowed the cubes to retain a simple geometric shape, while still showing hints of complexity burning through the surface.

How did that work for scenes in bright sunlight?

Consider a light bulb outside in the sun – the internal light needs to be incredibly bright to be visible, and once you get there you’ve lost all shape on the object. That makes it really difficult to integrate it into the scene. After much trial and error, we settled on having only a subset of the cubes emit light at any one time. We also animated that attribute over time. This allowed the environment light to fall more naturally on the dormant voxels, thus anchoring the objects into the scene and giving good contrast against the lit voxels.
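
One plausible way to dial that behaviour – the threshold pattern here is an assumption, not Imageworks’ actual control – is to give every voxel a random phase and gate its emission on an animated window:

```python
import numpy as np

rng = np.random.default_rng(7)
phase = rng.random(500)                   # one random phase per voxel

def emission(frame, rate=0.08, duty=0.25):
    """Roughly a 'duty' fraction of voxels glow; membership drifts over time."""
    t = (phase + frame * rate) % 1.0
    return (t < duty).astype(float)       # 1.0 = emitting, 0.0 = dormant

for frame in (0, 10, 20):
    print(f"frame {frame}: {emission(frame).mean():.0%} of cubes emitting")
```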

Which was the most difficult character to develop?

Q*bert took the most effort. He’s really the only game character who needed to act and emote with a broad range. We started by pulling as much of the original game art as possible. The in-game sprites are incredibly low-res, but there’s a lot of detail in the original cabinet artwork – that was our main source of reference for features and proportions.

With basic approval on the smooth model, we moved on to voxelisation in Houdini. The first versions used a single voxel size for the whole body, but we quickly found that we needed more detail in areas like the eyes and brows, and less in areas like the skull. Each feature of Q*bert was dialled to get just the right voxel size and placement. Most of our trial and error in learning how to voxelise our characters happened during Q*bert’s development.

A number of techniques were used to soften the angular appearance of Q*bert’s voxel building blocks, including multiple voxel sizes, and transparent outer layers revealing smoother shapes beneath.

Q*bert is very round and cute. Did the blockiness of the voxels fight against that?

When we presented our first Q*bert lighting tests to Matthew and Chris, we had applied a simple waxy plastic shader to the model. Chris felt our shading treatment was too close to Lego. With all the hard cube edges, he said, “It looks like it hurts to be Q*bert!” This comment sent us on a long journey to figure out how to make a character built from hard, angular cubes look cute and soft.

We ended up doing literally hundreds of tests, adjusting all aspects of the model and shading to find a combination that would work. We introduced light energy to the interiors and edges of the cubes, dialling a pattern to control where and when cubes would emit light. We layered voxels at different scales into the interior of Q*bert, and adjusted the transparency of the top layer to reveal the depth.

We also introduced some of the underlying round shape of the smooth model into the cube shading – this allowed us to rim and shape Q*bert with softer graduations of light. The combination of all of these tweaks – and a lot of elbow grease by our look-dev team – finally found a look Chris and Matthew liked.

What approach did you take with Q*bert’s animation?

Animation for Q*bert was a lot of fun, with cartoon physics and lots of opportunities for gags. In one early test, we only allowed Q*bert to move through a scene by jumping in alternating 45° rotations, just like the videogame. We really liked this idea in theory, but in practice it wasn’t that interesting. Instead, Q*bert transitions from hopping to walking and running, varying his gait in a more natural way.

See Q*bert and a host of other videogame characters in this Pixels featurette:

How did you tackle the big action scenes towards the end of the movie, when the videogame characters are trashing Washington D.C.?

One of our more difficult shots was the attack on the Washington Monument, which opens our “D.C. Chaos” sequence. The camera tracks a group of Joust characters to the monument, and we circle the action as they begin to destroy it. The difficult part was the location – we’re flying right over the National Mall, next to the White House and Capitol Building. This is a strict no-fly zone. So, with no way to get the background plate in-camera, we knew we would need to create a photoreal 2½D matte painting of the whole area.

What reference were you able to get of the area around the Washington Monument?

We started with previs from the team headed up by Scott Meadows. This gave us the exact angles of D.C. we needed to plan for. We were also able to get a permit to fly a helicopter around the outside perimeter of the National Mall to acquire reference. We earmarked about four key locations where we could hover and acquire tile sets to use in our reconstruction.

In practice, none of these locations was really ideal – Homeland Security just wouldn’t allow us to get close enough to the monument. So, in addition to the helicopter footage, we acquired panoramas and hundreds of stills of the monument and surrounding buildings by walking up and down the Mall. We were also able to go inside the monument and capture stills through the top windows.

In order to show the destruction of the Washington Monument, Sony Pictures Imageworks created a 360° panoramic matte painting of the surrounding environment, to act as a backdrop for a digital model of the monument itself.

How did you then create the digital environment?

Once we had all the reference back at Imageworks, we started building a simple model of the area. We 3D-tracked stills from our helicopter shots, adding them to the model as needed. Jeremy Hoey, our matte painter, had the difficult task of bringing all these sources together to create one seamless, 360°, 2½D matte painting of Washington D.C. as seen from the monument location.
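
The heart of any 2½D setup of this kind is a projection lookup: a point on the simple proxy geometry maps back to a pixel in the stitched panorama. A minimal sketch of that lookup for a 360° lat-long painting – illustrative only, not Imageworks’ actual tools – might be:

```python
import math

def latlong_uv(point, cam_pos):
    """Map a 3D point on the proxy geometry to (u, v) in a 360-degree
    lat-long panorama centred on the camera - the core lookup behind a
    simple 2.5D projection setup."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 + math.asin(y / math.sqrt(x*x + y*y + z*z)) / math.pi
    return round(u, 4), round(v, 4)

# A point up and to the right of the camera lands in the upper half
# of the panorama, right of centre.
print(latlong_uv((10.0, 5.0, -10.0), (0.0, 0.0, 0.0)))
```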

What about the Washington Monument itself?

We built a 3D, photoreal version of the monument, which needed to be destroyed using a mixture of voxel shapes and natural destruction. As each Joust character strikes the monument, the area local to the hit is converted to large voxel chunks, with light energy spreading from the impact point. Then, as the destruction continues, large cube chunks begin to loosen and fall away from the monument.

We found that keeping the scale of the chunks really large looked a lot more interesting and stylised – smaller voxels started to look too much like normal destruction damage. FX artist Ruben Mayor designed all the sims in Houdini for the destruction shot, and Christian Schermerhorn did an excellent job compositing a completely synthetic shot to create a very photographic feel.
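
As a rough illustration of the voxel-conversion step those sims performed – a toy Python stand-in for the Houdini network, with every constant invented – here is a sketch that snaps geometry near an impact point onto a deliberately coarse grid and assigns a “light energy” value that fades with distance:

```python
import math

VOXEL_SIZE = 4.0   # large chunks on purpose - small voxels read as
                   # ordinary rubble rather than the stylised game look

def voxelise_hit(points, impact, radius):
    """Snap any point within 'radius' of the impact onto a coarse voxel
    grid, and give it an emissive energy value that falls off with
    distance from the impact point."""
    out = []
    for p in points:
        d = math.dist(p, impact)
        if d <= radius:
            cell = tuple(math.floor(c / VOXEL_SIZE) * VOXEL_SIZE for c in p)
            energy = 1.0 - d / radius
            out.append((cell, round(energy, 2)))
    return out

# One point close to the strike voxelises and glows; the far one is untouched.
print(voxelise_hit([(1.0, 2.0, 0.5), (9.0, 9.0, 9.0)], (0.0, 0.0, 0.0), 6.0))
```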

How do you feel about your work on Pixels, looking back?

At first blush, Pixels seems like a simple job, because the characters look so simple. Nothing could be further from the truth! Each process had to be invented, and every shot required a complicated Houdini setup to deliver the data to lighting. I underestimated the number of challenges we would encounter, many of which we just couldn’t predict until we dived in. I’m really proud of what the team was able to accomplish.

What’s your favourite retro videogame?

That’s a tough question! I played most of these games as a kid, plus numerous computer games on my Apple. For arcade machines, I really liked Pole Position and Zaxxon. On my Apple, I loved the original Castle Wolfenstein and Karateka.

"Pixels" including visual effects by Digital Domain

Digital Domain – VFX supervisor, Mårten Larsson

How did Digital Domain get involved with Pixels?

Patrick Jean, the creator of the original short, was actually represented by Digital Domain’s production company, Mothership, for a short time. After Chris Columbus came on board to direct the feature, his team reached out to Matthew Butler, senior VFX supervisor at Digital Domain, to work on the film with him.

I started doing tests for the show in October 2013, and we delivered our last shots in June 2015. Most of our shot production happened between September 2014 and June 2015.

Which sequences did you work on?

The bulk of our work was three sequences: PAC-MAN, Centipede and Donkey Kong. We created the characters, along with anything they interacted with and the environments they were in. We also did a few shots of people dissolving into voxels and reassembling from voxels, as well as some characters from other sequences.

Many of the Donkey Kong shots in “Pixels” deliberately use angles and compositions inspired by the original gameplay, as in this shot by Digital Domain.

How much contact did you have with the director?

We worked very closely with both Chris Columbus and Matthew Butler. Chris was based in San Francisco, so after the shoot we mainly interacted with him over video-conference. The production-side VFX supervisor and producer had space here at Digital Domain for the duration of the show, so we worked very closely with them.

What aspect of the show’s visual effects did you find most challenging?

I’d say the trickiest part was how to translate 2D characters into fully 3D characters that move in a physically plausible way, while still trying to retain the spirit of the simple video games that we have all come to know and love.

Take Donkey Kong, for example. He has a very iconic look that was fairly easy to match when seen from the front, in a pose that matches the videogame. But, when you start looking at that same model from a three-quarters angle, it looks less like the game. Add motion to that, and you’ll end up in poses that he never does in the game.

One of the big challenges was to keep Donkey Kong looking consistently like the original game sprite, even when seen from multiple angles in three dimensions.

How did you solve this problem?

There was no real silver bullet. We basically tried to get Donkey Kong looking as close to the game as possible, using iconic poses, and trying hard not to show him from angles that strayed too far from the game.

The characters in Pixels are quite visually complex, compared to the 8-bit originals. Was that deliberate?

The characters needed to be made out of boxes – or voxels – to resemble the pixels from the original low-res games. In order to make the characters look complex and more real, we added a lot of detail and pattern to both the individual boxes and the overall character. The idea is that, even though they’re made out of simple voxels, they are actually aliens with very complex technology. This approach also gave us an excuse to add some light-emitting energy, and make things look cooler and more interesting.

Early on, we ran into the issue of reflections in flat surfaces. If you look at PAC-MAN, he is a sphere. If you build that sphere out of boxes, your brain tells you that you are looking at a sphere, but you are actually seeing flat, mirror-like reflections on the surface. It looks really strange. We got around this on all the characters by blending some of the pre-voxelised model normal into the normal on the voxels.
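
That normal-blending trick is easy to demonstrate. In this sketch – a simplified stand-in for Digital Domain’s shader work, with `blend` as an invented dial – mixing a fraction of the smooth sphere normal into the flat cube-face normal visibly bends the reflection direction:

```python
import numpy as np

def reflect(view, normal):
    """Mirror the view vector about the shading normal."""
    return view - 2.0 * np.dot(view, normal) * normal

def shading_normal(voxel_n, smooth_n, blend=0.35):
    """Mix a fraction of the pre-voxelised (sphere) normal into the
    flat cube-face normal, then renormalise."""
    n = (1.0 - blend) * voxel_n + blend * smooth_n
    return n / np.linalg.norm(n)

view = np.array([0.0, 0.0, -1.0])        # camera looking down -Z
cube_face = np.array([0.0, 0.0, 1.0])    # flat face pointing back at us
sphere_n = np.array([0.3, 0.3, 0.906])   # smooth-model normal at same spot
sphere_n /= np.linalg.norm(sphere_n)

print(reflect(view, cube_face))                            # mirror-flat
print(reflect(view, shading_normal(cube_face, sphere_n)))  # gently curved
```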

Were the animators working with the basic, smooth-skinned characters, or with the final voxelised versions?

The animators worked with the pre-voxelised character, but had the ability to turn on the voxelisation process to check poses and facial expressions when needed. A lot of attributes were also transferred from the skinned character across to the voxels, tracking with its movement and sticking on the per-frame-generated voxelised version.

So the voxels were only generated after the animation was complete?

Yes – all things voxelising went through our FX department, and were passed on to lighting for rendering. We also had setups for the characters to go straight from animation to lighting via an automated voxelisation system. But, anytime we needed to do anything special for a character in terms of how it voxelised, the FX department picked up the animation publish of the regular skinned character and generated a new voxelised version for lighting.
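
The attribute hand-off described above boils down to copying values from the animated skin onto each frame’s freshly generated voxels. Here is a brute-force sketch; the attribute names are just examples, and a real pipeline would use a spatial acceleration structure rather than this O(n·m) scan.

```python
import math

def transfer_attributes(skin_points, voxel_centres):
    """For each per-frame voxel, copy attributes (here just a colour,
    'Cd') from the nearest point on the animated skinned mesh, so the
    values track and stick as the character moves."""
    result = []
    for v in voxel_centres:
        nearest = min(skin_points, key=lambda s: math.dist(s["P"], v))
        result.append({"P": v, "Cd": nearest["Cd"]})
    return result

skin = [{"P": (0, 0, 0), "Cd": (1, 0.8, 0)},
        {"P": (2, 0, 0), "Cd": (1, 0, 0)}]
voxels = [(0.2, 0.1, 0.0), (1.9, 0.0, 0.1)]
print(transfer_attributes(skin, voxels))
```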

Digital Domain’s Centipede character went through a number of design iterations, and includes many features inspired by the 1980s arcade game artwork.

How did you develop the look of the Centipede character?

Centipede started as a design that resembled an actual centipede. From there, it was tweaked to look much more like the art on the side of the arcade game, with claws for feet, a lizard-looking face and eyes, and snake-like teeth. We went with this look for a while, using different sizes of voxel to capture the smaller features.

How did it progress from there to the final design?

After a few rounds of testing looks and poses, we got a note from Chris – he thought the character had lost some of the simple charm of the game. At that point, we went back to a design much closer to the previs model. We still incorporated some features from the arcade game art look – for example, we made the eyes similar to the artwork, but didn’t put the pupils in. We also used the sharp teeth and the claws. We ended up with a character that looks mean, but is still similar to the game. I think where we landed in the end is a very successful mix.

"Pixels" including visual effects by Digital Domain

What happens in the PAC-MAN sequence?

PAC-MAN is terrorising the streets of New York, and being chased by our heroes in “ghosts” – Mini Coopers with special equipment that can kill PAC-MAN. The approach for this sequence was to use CG for PAC-MAN (obviously), and also for most of the things he interacted with. As much as possible, the rest would be shot practically.

For the PAC-MAN sequence, Digital Domain combined their CG character with practical effects and vehicle stunts shot on location in Toronto.

Tell us about the PAC-MAN location shoot.

The Mini Coopers were practical cars driven by the stunt team in downtown Toronto, which was made to look like New York. Since PAC-MAN is essentially a giant yellow light bulb in the middle of a street at night, we knew that he would throw out a lot of interactive light. To help with this, a Mini Cooper was rigged with big yellow light panels and generators on the roof, and used as a stand-in for PAC-MAN. The car also had a paint pole on the roof, with an LED light up top, to show the height of PAC-MAN and help with the framing of shots.

The decision was taken to sink PAC-MAN a little into the ground, bringing his mouth closer to street level and thus making it easier for him to bite down on his prey.

For some shots, the interactive light car was used only for lighting reference; in other cases it was present in the filmed plate. Where the timing of the car didn’t work with what Chris wanted PAC-MAN to do in the animation, we ended up painting out both the car and the light it threw. Overall, though, it was very helpful to have as a reference, and in some shots it provided all the interactive light we used.

Practical light rigs were used on location to simulate the yellow glow cast by PAC-MAN. In many shots this was enhanced, or replaced entirely, with digital interactive lighting.

Did you do 3D scans of the street environment?

Yes – we did lidar scans of most environments to help our modelling and tracking departments. We knew we’d be modelling a lot of geometry that would need to line up very closely to buildings and parked cars in the plates, in order to get the reflections and interactive light from PAC-MAN to look believable.

In the end, we modelled a bit more than we’d originally planned, but the interactive light helps so much to sell that PAC-MAN is really in the scene, since it’s pretty obvious that PAC-MAN is fake, no matter how good we make him look!

The lidar was also very helpful in creating geometry for collisions with all the voxels we threw around when PAC-MAN bites cars and objects. Our general rule was that anything he bites turns into voxels, while anything he bumps into is destroyed as if he were a normal earthbound object. Of course, we broke that rule a lot, and did whatever we thought looked more interesting.

How did you create the glowing trail left by PAC-MAN?

PAC-MAN had both a heat trail and a misty-looking energy trail. These were generated and rendered by the FX department.

The reason for the trail was to tie PAC-MAN into the ground a bit better. Because he had to be able to bite objects at ground level, and because we had restrictions on how much he could open his mouth, we ended up having to sink him into the ground. If we hadn’t, his lower lip wouldn’t have reached the ground, and he wouldn’t have been able to bite anything that touched the ground.

It looked a bit odd to just have him truncated at ground level and not have him influence the ground at all, so the trail element was added. I think it adds to PAC-MAN’s look. With all the white and warm lighting in the city, and with PAC-MAN being yellow, it was nice to get some other colours in there – that’s why we went with a slightly blue colour on the trail.

Some of Digital Domain’s PAC-MAN shots required fully digital vehicles and destruction effects, as seen in this partially rendered animation breakdown.

How do you feel about your work on Pixels?

I’m very proud of the work our team created. It’s a cool mix of really odd characters and, in my opinion, cool effects. PAC-MAN looks like something alien that we haven’t seen before. The end result is quite unique, and different from most movies out there. I like that.

What’s your favourite retro videogame?

We had a whole bunch of arcade games on set for the sequence when our young heroes are in an arcade hall in the ’80s. They all worked, so I played a lot of them. I think the highlights for me were Missile Command and Q*bert. Another highlight was meeting Professor Iwatani, the creator of PAC-MAN, and getting a photo standing next to him in front of a PAC-MAN arcade game!

Watch Eddie Plant (Peter Dinklage) battle PAC-MAN in this Pixels featurette:

Trixter – VFX supervisor, Alessandro Cioffi

When was Trixter invited to work on Pixels?

Simone Kraus, CEO and co-founder of Trixter, had been following the project from L.A. since the early days of pre-production, and we’d been looking forward to being a part of the show since then. Matthew Butler led us through the look and the vision on the VFX, with in-depth explanations on the intentions and the dynamics of every shot and sequence. We ended up doing some concept work, and what I call a “VFX cameo”.

How many shots were in your VFX cameo?

We worked on about 25 shots, in six different sequences. Along with some blaster effects and pixelated invaders, we also created the concept for Michael the Robot. We then built, rendered and integrated him into the lab sequence. Also worth a mention is the brief appearance of Nintendo’s Mario, for which we designed a three-dimensional version of the character, extrapolated out of the original game design. We rigged, animated and integrated him in one shot during the Washington D.C. attack sequence.

Was there much scope for creativity, or were you conforming to a look and feel that had already been established?

Mainly, our references were shots done previously by other vendors. However, for Michael the Robot, we referenced real mechanics, and for the Missile Command effect, the videogame itself.

Even though we had to blend into an already strongly established aesthetic, Matthew always encouraged us to come up with alternatives for the look of this or that effect. In fact, although he and the director had extremely clear ideas on what they were after, Matthew left the door open to improvements or variations. I love opportunities like these for creative work.

Tell us more about Michael the Robot.

We created a CG extension of Michael’s glassy head – which contains complex, hi-tech internal mechanisms and electronics. From an initial 2D concept, we developed a 3D digital concept model which served as our base for approval.

To integrate Michael into the live-action plates, we had to do intricate match-moves to ensure that his movement and action would fit seamlessly. We added procedural animation to the mechanism inside the glass dome to achieve fluid motion, with several lights constrained to that animation. Lighting and rendering were done in The Foundry’s Katana and Solid Angle’s Arnold.
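
A procedural setup like that can be as simple as phase-offset waves driving each light’s intensity. The sketch below is purely illustrative – every constant is invented – but it shows the flavour of flicker that needs no hand-keyed animation:

```python
import math

def light_intensity(light_index, frame, fps=24.0):
    """Procedural flicker for lights inside a glass dome: each light
    gets a phase-offset fast sine wave multiplied by a slower drift,
    so the mechanism reads as alive without keyframes."""
    t = frame / fps
    phase = light_index * 1.7
    fast = 0.5 + 0.5 * math.sin(6.0 * t + phase)
    slow = 0.5 + 0.5 * math.sin(0.8 * t + phase * 0.3)
    return round(0.3 + 0.7 * fast * slow, 3)

for frame in range(0, 48, 12):
    print(frame, [light_intensity(i, frame) for i in range(3)])
```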

What’s your favourite retro videogame?

Definitely Missile Command. I have spent innumerable coins on it!

"Pixels" including visual effects by Atomic Fiction

Atomic Fiction – VFX supervisor, Ryan Tudhope

Tell us how Atomic Fiction came to be part of the Pixels team.

We had been following Pixels since it was announced, and really wanted to find a way to help out on such a ground-breaking project. Having known Denise Davis and Matthew Butler for some time, we knew the visual effects team was top-notch, and wanted to collaborate with them in any way we could. Fortunately, they had a small sequence that was a great fit for us.

What did the sequence involve?

We came on board fairly late in the project’s schedule, and our involvement was on only a few shots. The work we did was really fun and complex, however, centring on several sequences requiring dialogue-capable CG face replacements of Ronald Reagan, Tammy Faye and Daryl Hall – in the story, they are all avatars of the alien invaders. All three characters leveraged Atomic Fiction’s digital human pipeline, which we’ve utilised on several projects, including Robert Zemeckis’ upcoming The Walk.

How long did you spend working on the shots?

Because of how involved the shots were from an asset and animation standpoint, our schedule spanned approximately four months. We interacted mainly with Matthew Butler. He is an amazing supervisor to work with, always on point with his feedback, and great at inspiring the team!

Were you familiar with Patrick Jean’s 2010 short film?

We’re huge fans of the original short, and love the fact that its popularity led to the development of this feature film. It’s a great Hollywood success story. While our digital human shots didn’t directly relate to the work in Patrick’s original, it was nonetheless a great inspiration for our team.

The cast of "Pixels"

Describe how you created these celebrity avatars.

Our mission was to add our visual effects work to VHS-quality captures from the 1980s. So, in contrast to other projects, our Pixels artists were starting from scratch, with no scan data, textures or on-set HDRIs of these celebrities. This required us to find a wide variety of photographic reference to sculpt, texture and light each celebrity by eye.

It was a really fun challenge, to set aside technology for a moment and just discuss what looks right (or not) about a digital model. It’s fun to find solutions in spite of limitations like these – it gets back to the art of it all.

Atomic Fiction’s CG supervisor, Rudy Grossman, led a lean team that included Mike Palleschi, senior modeller, and Julie Jaros, lead animator. Because we all knew exactly what our shots would need and what dialogue was required, we were extremely efficient during the face-shape modelling and rigging phase. Our asset work and shot work happened concurrently, as we were effectively evaluating both modelling and lighting in the same takes. Tom Hutchinson led the charge on look development and lighting, and Jason Arrieta on compositing.

What’s your favourite retro videogame?

That’s easy: Moon Patrol!

Adam Sandler in "Pixels"

Before flashing up the GAME OVER screen on this Q&A, there’s just time to check in with the other vendors who helped bring Pixels to the screen.

Storm Studios delivered a 20-shot sequence showing India’s Taj Mahal being attacked by characters from Atari’s Breakout. Storm’s VFX supervisor was Espen Nordahl (who named Super Mario Bros as his favourite retro videogame). Having determined that the graphics from the original Breakout were a little too rudimentary, Nordahl’s team incorporated design elements from later iterations of the game. This allowed them to give the characters more shape, yet retain the old-school pixelated look. A digital Taj Mahal asset was built, after which the Storm crew ran rigid body destruction simulations in Houdini. Additional effects were layered in, showing sections of the building turning into voxels, adding “light energy” to the voxels at the impact points, and creating holes in the structure where the balls bounced off.

“I’m very proud of the work we did,” Nordahl commented. “This was our first big international show, and I would like to thank Sony and Matthew Butler for trusting us with such a complex sequence.”

At Shade VFX, a team of CG and compositing artists was tasked with creating the illusion that the stars of Fantasy Island – Ricardo Montalban and Herve Villechaize – were delivering a congratulatory message from space. Led by VFX supervisor Bryan Godwin, Shade’s team reconstructed full-CG versions of both actors from archival photographic reference and clips from Fantasy Island. Even though the task only required new lip-sync to match the newly recorded dialogue, it was necessary to animate the entire cheek structure, eyelids and even the nose to react correctly to the new phonemes.

One More VFX, with Emilien Dessons supervising, worked on the arcade sequence in which young Eddie Plant challenges young Sam Brenner at Donkey Kong. Their goal was to re-create accurate Donkey Kong game graphics using MAME (Multiple Arcade Machine Emulator) software, as well as to re-create the high-score motion graphics.

As a point of interest, Johnny Alves, Matias Boucard and Benjamin Darras of One More VFX were also executive producers on Pixels (Patrick Jean was a 3D supervisor at One More VFX when he made the original short).

At Pixel Playground, VFX supervisor Don Lee oversaw the production of a number of shots involving 2D compositing, 3D tracking, greenscreen work, the integration of game footage into arcade machines, and a variety of cosmetic fixes.

Further VFX support was provided by Lola VFX, Cantina Creative and The Bond VFX.


Watch the trailer for Pixels:

Special thanks to Steven Argula, Rick Rhoades, Tiffany Tetrault, Franzisca Puppe, Geraldine Morales, Lisa Maher, Benjamin Darras and Kim Lee. “Pixels” photographs copyright © 2015 by Sony Pictures Entertainment and courtesy of Sony Pictures Imageworks and Digital Domain.