About Graham Edwards

I'm senior staff writer at Cinefex magazine. I also write novels. In a former life, I produced animated films for theme park rides and science centres. If you offer me a cold beer, I won't say no.

Avengers: Age of Ultron – Cinefex 142 Extract

Avengers: Age of Ultron - exclusive VFX coverage in Cinefex issue 142

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions.

Our final offering is Avengers: Age of Ultron. You might think that once you’ve seen one Hulk, you’ve seen ‘em all. Not so, as proved in this extract featuring interviews with some of the key players at ILM:

“We did some nips and tucks on the original model to make Hulk leaner,” explained ILM animation supervisor Marc Chu. “Hulk is a little more GQ handsome this time around.”

ILM also developed a new muscle rig for Hulk, based on real-life physiologies. “ILM has always taken an outside-in approach to this type of animation,” commented ILM visual effects supervisor Ben Snow. “We would create a detailed skin surface with a lot of the muscle shapes built in, and then we’d create a muscle rig underneath that – but, often, the rig was used mostly for getting fleshy sims to jiggle and bulge.

“We still relied very heavily on the work and detailing done by our sculptors. On this one, creature technical director Sean Comer proposed building an inside-out system that entailed a more realistic set of underlying muscles, which would allow us to transfer more correct motion back onto the skin.”

Read the complete article in Cinefex 142, which also features Jurassic World, Mad Max: Fury Road and San Andreas.

All content copyright © 2015 Cinefex LLC. All rights reserved.

San Andreas – Cinefex 142 Extract

San Andreas - exclusive visual effects coverage in Cinefex issue 142

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions.

This time it’s the turn of San Andreas. Destruction on the scale seen in San Andreas demands the use of digital techniques. However, when it comes to shaking a room set, you can’t beat practical effects, as described by special effects supervisor Brian Cox:

“Originally we discussed putting the whole set on a shaker rig. When I learned that Brad was planning to film the scene using a Steadicam – following Carla Gugino through the restaurant, up and out onto the roof – I pointed out that shaking the whole set would make it very difficult for the camera operator. Instead, we made everything move around the camera on individual shaker rigs.

“We put water features on rails and had air bags push and pull those to get the water moving. We rigged bottles coming off the bar, all the tables and the chairs. We had shakers everywhere, with debris dropping from the ceiling and a small explosion in the kitchen. It was quite a big deal. I had 30 effects technicians operating rigs all around that set.”

Read the complete article in Cinefex 142, which also features Jurassic World, Mad Max: Fury Road and Avengers: Age of Ultron.

All content copyright © 2015 Cinefex LLC. All rights reserved.

Mad Max: Fury Road – Cinefex 142 Extract

Mad Max: Fury Road - exclusive visual effects coverage in Cinefex issue 142

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions.

Today we’re looking at Mad Max: Fury Road. In this extract, stunt supervisor and action unit director Guy Norris describes the massive logistical operation behind the action-packed location shoot in Namibia:

“The desert base camp accommodated approximately 1,000 crew members. It was a Western on wheels. It was like when John Ford would get together 40 riders and they’d spend three months filming in Colorado. We had 65 stunt people on location for nine months. On our biggest days, we had more than 150 stunt performers.

“We had our training base and gym in a factory in Walvis Bay, and when we rolled out to location it was like a military operation with different divisions — bikes, trucks and cars. Every morning I used a big whiteboard to brief the crew, drawing roadmaps and using Matchbox toys to show what we were going to do.”

Read the complete article in Cinefex 142, which also features Jurassic World, San Andreas and Avengers: Age of Ultron.

All content copyright © 2015 Cinefex LLC. All rights reserved.

The VFX of “Jonathan Strange & Mr Norrell”

Bertie Carvel as Jonathan Strange and Eddie Marsan as Mr Norrell

This month, June 2015, marks the bicentenary of the Battle of Waterloo, the decisive European conflict of 1815 in which the French Emperor Napoleon and his army were defeated by British and Prussian forces.

It can be no coincidence that BBC Television has chosen this season of commemoration to air Jonathan Strange and Mr Norrell, a dark fantasy in which two rival magicians of the 19th century direct their otherworldly powers to aid the war effort at Waterloo. Adapted by Peter Harness from the award-winning novel by Susanna Clarke, the seven-episode show is directed by Toby Haynes, and features around 1,000 visual effects shots created by a team of 50 artists at Soho-based Milk VFX.

“The show’s producer, Nick Hirschkorn, is a long-standing client of ours. About four years ago he told me he’d just acquired the rights to Jonathan Strange and Mr Norrell,” recalled Milk CEO Will Cohen. “I got very excited – I’d read the book when it first came out, and loved it. The show went into preproduction in April 2013, at which point we sat down with the first scripts, and had initial meetings with the production designer [David Roger] and the director of photography [Stephan Pehrsson] to discuss how to bring it alive.”

Rain ships created by Milk VFX for "Jonathan Strange & Mr Norrell"

The Battle of Waterloo

Throughout the series, the magical powers of Jonathan Strange (Bertie Carvel) and Mr Norrell (Eddie Marsan) are made manifest through a number of supernatural set-pieces. One of the most spectacular of these occurs at the beginning of episode five, when Napoleon’s army attacks the British and allied forces at Château d’Hougoumont in Belgium – a key moment in the Battle of Waterloo.

During the sixty-second opening shot, the camera swoops over a smoke-covered battlefield filled with tens of thousands of warring soldiers and resounding with cannon-fire, before finally descending into the fortified garrison of Hougoumont, where Jonathan Strange is using his magic to help repel the enemy hordes.

“Instead of just seeing twenty extras in the scene, with the main battle happening off-camera – which is a very ‘television’ conceit – we wanted to get the full horror of the fighting,” remarked Cohen. “We’ve done similar scale shots for films, like Insurgent, so to me it represents a crossover to what you can do with high-end television in 2015. I’m really proud that the producer and director and execs backed up doing it.”

Watch a video breakdown of the Battle of Waterloo sequence:

Inspired by the battle scenes in Sergei Bondarchuk’s 1970 film Waterloo, the Milk team set about crafting the Waterloo sequence. “We worked out that in Waterloo there were about 40,000 extras,” commented visual effects supervisor Jean-Claude Deguara. “That film was also a great point of reference for us in terms of terrain and textures.”

To re-create the garrison at Hougoumont, a set was constructed at an airport location in Canada. The final frames of the grand opening shot were photographed as a crane move, with the camera descending from high level down into the set. Deguara and his team incorporated this live-action into the latter half of their shot, digitally replacing the airport surroundings with the appropriate terrain. The front end of the shot was fully digital.

For the most part, the terrain was historically accurate, based on both documents from the period and data gleaned from Google Maps. Further research served up facts and figures about how the troops were deployed on the day. While the environment and troop movements were reproduced as accurately as possible, artistic and technical considerations made necessary the occasional bit of historical revisionism. “We had to cheat it slightly to get everything in,” Deguara confessed.

Editorial and budgetary pressures meant that a shot as ambitious as the Waterloo flypast was constantly under threat of being trimmed – or even cut altogether. “We knew that this shot could potentially be chopped at any point,” remarked Deguara. “So we started off at a very basic previs level – the armies were just blue squares and red squares. We began with the movement of the camera, then gradually built it up, step by step. With a TV budget, you have to be so regimented in how you do it.”

CG supervisor Nicolas Hernandez added: “On a feature film, you can create an asset straight away that will be photorealistic for a full-screen close-up. On this show, we had to put in more time towards the end to get the assets looking nice.”

"Jonathan Strange & Mr Norrell" visual effects by Milk VFX

The massive 60-second aerial shot that opens the Battle of Waterloo sequence took around three months to complete. The 50,000 digital soldiers were controlled using Golaem Crowd.

Trumping Bondarchuk, the Milk team populated their digital battlefield with no fewer than 50,000 soldiers. During production, performers and extras were photoscanned wearing period costume, with the resulting data used to create multiple types of CG double, ranging from Napoleonic grunt to Regency officer.

To control these huge numbers of digital extras, Milk turned to Golaem Crowd, their crowd management tool of choice, which they had first used on Brett Ratner’s 2014 feature Hercules. “We had soldiers, captains, all with different guns, different uniforms, different props,” said Hernandez. “They were all procedurally managed by Golaem.”

Similar attention to detail was used when re-creating the firepower of the digital troops. “Every cannon had five soldiers around it, which was historically correct,” observed Deguara.

Simulation systems permitted each cannon to “fire” automatically. “The simulation worked out the projection of the cannonball, and where it would hit the ground,” explained Hernandez. “We imported it into Maya, where we used a library of explosions that triggered procedurally. The first version we did of the shot looked like a Michael Bay film! We had 25,000 explosions, and the whole screen was covered in smoke!”

Ultimately, the choreography of the cannon-fire – together with the rest of the action in the shot – was determined through a number of meetings with the director. “It was all about Toby’s subjective taste,” Cohen commented. “That included questions about whether the camera should get interfered with by the explosions and cannon fire, and at what point in the shot that should happen.”

The opening Battle of Waterloo shot took around three months to complete. During the refining process, the shot proved too complex for one person alone to check. “We had four people watching it back in dailies,” revealed Deguara. “They each took a quadrant of the screen. Each time we’d notice little mistakes and glitches – like a soldier who’s running through a tree – and we’d go back and fix them. We could still be working on that shot today!”

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

The digital matte painting of the battlefield seen at the end of the Battle of Waterloo was inspired by wartime paintings in London’s National Gallery.

Magic at Hougoumont

Once the camera has finished its sweep of the battlefield, the action continues inside the Hougoumont garrison. As the French soldiers attack, Jonathan Strange uses his magical powers to help counter the onslaught. First, he brings to life the vines which ramble over the garrison’s fortified walls. Snake-like, one of the vines plucks an enemy soldier from the ground and flings him to his doom.

To achieve the effect, a stuntman was photographed on location, suspended from a wire rig. Hero vines, modelled and animated in Autodesk Maya, were tracked to his body. Just before the throw, the live-action performer was replaced by a digital counterpart.

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

Milk used both procedural and keyframe animation to bring the vines of Hougoumont to life.

In a classic piece of movie misdirection, the transition from stuntman to digi-double was concealed by the body of another soldier moving briefly in front of the camera. “You see the top part of the vine gearing up for the throw, then as soon as the other soldier wipes across the frame, we’re into our digi-double getting flung out of the way,” explained Deguara. “Actually, that was a reference to an episode of The Simpsons, when Bart and Homer are trying to trap a rabbit! It’s a comedy moment that we tried to slip in there.”

When the building behind him catches fire, Strange summons a giant waterspout from a well and uses it to extinguish the flames. “We had big fire hydrants and hoses there on the day,” recalled Deguara, “and we had real water pitching down over the doorway – interactive elements for when the soldiers run out. We did the shot with two camera moves, which we joined together. Our digital waterspout comes out of the well, then splits into five sections to put out the fire.”

The main body of the waterspout was procedurally generated in Houdini, with extra detail built up using liquid FLIP and whitewater simulations, plus layers of mist. Additional procedural tweaks were used subsequently to allow artists to choreograph the animation of the liquid. The central column was rendered in Maya using Arnold, while Mantra handled the more finely detailed effects in Houdini.

As the French soldiers press home their attack, the resulting hand-to-hand combat was enacted by actors and stunt performers. Weapon strikes were enhanced by Milk, using a combination of practical and CG effects to increase the blood quotient. “We got them to send costumes over,” stated Cohen. “We would put a costume on a dummy, and cut it as if it had been hit by an axe. Then we would photograph that and comp it into the shot.”

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

At the height of the Battle of Waterloo, Jonathan Strange combats a French soldier using a giant replica of his own hand made from mud.

Towards the end of the sequence, Jonathan Strange finds himself at the mercy of a French soldier. Staring death in the face, he causes a giant facsimile of his own hand to rise up from the ground and uses it to crush his attacker to death. By the time the live-action for the scene was shot, however, the Hougoumont set had become a quagmire, causing Milk to reconsider their initial design for the effect.

“We originally thought the hand would be made of dry, crumbled mud,” Deguara commented. “When we got the plate back the place looked like a swamp, so we had to re-evaluate it. We turned the hand into something more muddy and slimy. I think it worked in our favour, that fluid look.”

The geometry of the CG hand was deformed to give it a liquid appearance, with viscous fluid particles being emitted from the more distorted areas to produce falling chunks and streams of mud. Additional displacements and variations in colour, applied at render time, added the interactive effects of the rain which falls throughout the scene.

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

The mud hand was animated to match the movement of the performer playing the French soldier, who was suspended on a wire rig.

As with the vine sequence, the actor playing the doomed French soldier was held aloft using a wire rig. “We couldn’t afford a digi-double for that shot,” noted Hernandez. “The hand is animated to the guy on a wire – it was a bit tricky, but we did get it to work in the end.”

Equally tricky was making the soldier’s death gruesome, yet without upsetting squeamish viewers. “There were a lot of taste decisions to be made,” Cohen remarked. “Originally, as Jonathan Strange squeezes the life out of the soldier, his head was going to pop off. But even though it’s after the nine o’clock watershed, there’s still a line you can’t cross!”

The Waterloo scenes are augmented by a number of environment extensions and matte paintings, including the final shot in which the camera cranes up to reveal a wide shot of the battlefield. “It’s very painterly, inspired by the wartime paintings in the National Gallery,” commented Cohen. “We rendered out some of the topography that was created for the battle sequence, and then painted on top of that.”

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

The burning windmill scene was one of many digital environments created by Milk VFX for “Jonathan Strange & Mr Norrell”.

A Round-Up of Regency Magic

Jonathan Strange’s manipulation of the elements during the Battle of Waterloo is characteristic of the way magic is portrayed throughout the series. Referred to by the characters as “practical magic”, these supernatural spectacles of the Regency era are earthy and visceral – no Harry Potter pyrotechnics here.

“That comes from the book,” Deguara asserted. “It was a key part all the way through: if you want to use magic, you have to negotiate with the elements. Norrell has to read about it from a book, whereas Strange has more of a natural feel for it.”

Some of the other magical set-pieces featured in Jonathan Strange and Mr Norrell include:

Mr Norrell brings the statues of York Minster to life

Milk delivered thirty shots for the York Minster sequence. Their cast of animated statues included a line of seven stone kings, musicians and a Latin-speaking bishop. Bertie Carvel stepped briefly out of his role as Jonathan Strange to perform as the bishop while wearing a green chromakey leotard. His presence on set gave Dr Foxcastle (Martyn Ellis) something physical to react to, and provided Milk’s animators with crucial reference for the statue’s performance.

Jonathan Strange conjures a herd of giant sand-horses

Though comprising just eight shots, the sequence offers considerable spectacle, from an aerial shot pursuing cracks racing across the beach, to the explosive impact of the galloping sand-horses with a stranded warship. The sand-horses themselves – modelled to combine the features of both sleek racehorses and sturdy shire horses – were rendered as volumes, enhanced with particle effects and rigid body simulations.

Sandhorses created by Milk VFX for "Jonathan Strange & Mr Norrell"

Jonathan Strange’s sand-horses emerge from an English beach and gallop towards a stranded warship.

Mr Norrell creates a fleet of illusory rain-ships

Answering director Toby Haynes’s request for a sequence that resembled a Turner painting, the Milk team combined live-action rowing boats (shot against greenscreen in a Yorkshire pond), simulated ocean environments and CG ships. Swirling displacement effects and interactive rain reinforced the ghostly navy’s subtle, dreamlike appearance.

Rain ships created by Milk VFX for "Jonathan Strange & Mr Norrell"

French scouts cautiously approach the illusory fleet of rain-ships conjured by English magician Mr Norrell.

Jonathan Strange creates a road for British troops

A close-up shot of stones spawning outwards from a central point – dubbed the “popcorn shot” – was created with a rigid-body simulation in Houdini. Matte painting techniques, combined with a CG dust trail, were used to show the road extending rapidly over distant hills.

Milk created the eerie netherworld of “Lost Hope” as a fully 3D environment, enhanced with rolling mist and illuminated with volumetric lighting.

Small Screen, Big Ambitions

Jonathan Strange and Mr Norrell contains more visual effects shots than many feature films, even allowing for the show’s seven-hour running time. Squeezing such a VFX quart into the pint pot of a television budget demanded precision planning.

“It’s a massive logistics and communications exercise,” Cohen observed. “We had pockets of people working all over the place for months at a time. They would park a certain piece of work, then come back to it weeks later.”

Just as important as being organised was being able to roll with the punches. “You have to pre-plan as much as you can,” Deguara agreed. “But when you get out on set, obviously things change and you have to adapt. Then we would all get in a room and have serious chats about an edit that might be five or six shots over what it should be. That’s when you have to make hard decisions.”

Hernandez added: “We were compromising to stay in budget, so whenever the director said, ‘I want this,’ half the time we would have to say, ‘You can’t afford it.’ We did a lot of stretching to make everything look good.”

Reflecting on the series as a whole, Cohen concluded, “We went on quite a journey. It was a very collaborative and enjoyable process for eighteen months, and we’re all very proud of it. From Peter Harness putting the words on the page, to Toby’s seven-hour marathon of shooting it, there was a feeling all the way through of serving the material, and not letting it down.”


Jonathan Strange and Mr Norrell is currently airing on BBC America in the US, and BBC One in the UK.

Special thanks to Jenny Burbage. “Jonathan Strange & Mr Norrell” photographs copyright © 2015 by BBC Television.

Jurassic World – Cinefex 142 Extract

Cinefex 142 "Jurassic World" cover

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions. First up is Jurassic World.

Describing a scene in which Owen Grady (Chris Pratt) releases the genetically engineered velociraptor Blue from her harness, Image Engine visual effects supervisor Martyn Culpitt explains the attention to detail required to ensure close interaction between the human and digital performers:

“There was a gray-shaded raptor maquette with a harness on it in the plate, and we replaced that with our animated raptor. Our biggest challenge was creating the very subtle interaction between the raptor and Chris Pratt, especially in closeups, which had to show Blue’s connection to Chris Pratt’s character. We were always pointed toward nature reference of real animals in creating raptor facial expressions.

“The difficult thing with the anatomy of the raptor’s head is that there is a lot of bone above its eyebrow line and along the snout – so all of that had to remain rigid. What we could move was the soft tissue around the edges of the mouth and eyes. We could make the nostrils flare to suggest emotion, too. We couldn’t push any of this too far, though, because they had to look like animals. The facial animation was very subtle.”

Read the complete article in Cinefex 142, which also features Mad Max: Fury Road, San Andreas and Avengers: Age of Ultron.

All content copyright © 2015 Cinefex LLC. All rights reserved.

Now Showing – Cinefex 142

Cinefex 142 - From the Editor's Desk

Roll up! Roll up! The new issue of Cinefex is now officially open for business!

Just like that theme park with all the dinosaurs, issue 142 of the world’s premier visual effects magazine is filled with white knuckle rides – sorry, make that “reads”.

First up is Jurassic World. For the safety of our readers, we’ve caged this article behind a ninety-foot titanium-steel electrified fence, so that none of the in-depth details about the making of the movie can possibly escape. We think.

Also on parade in Cinefex 142 are Avengers: Age of Ultron, the earth-shaking San Andreas and the eye-popping Mad Max: Fury Road. We’re 100% committed to your security so rest assured that, as you read these articles, there’s only a very small chance you’ll be smashed by the Hulk, swallowed by a hole in the ground, or torched by a gang of crazed post-apocalyptic punks.

Here’s Cinefex editor-in-chief Jody Duncan to talk about the high-octane attractions igniting the interior of issue 142 …

Jody Duncan – From the Editor’s Desk

It was 1974 when a date and I drove the 45 minutes to Palm Springs to experience Earthquake in the highly marketed “Sensurround” audio system that had our seats, teeth and nerves rattling. San Andreas, one of the four films covered in Cinefex 142, is probably the biggest earthquake movie to hit the theaters since that time, and as a resident of Southern California who has often driven over sections of the San Andreas fault line, I look forward to the film with particular interest. Joe Fordham wrote our San Andreas story, as well as our coverage of Mad Max: Fury Road, the first George Miller-directed “Mad Max” film in 30 years.

I spent the past three months writing the effects story for Avengers: Age of Ultron, which Marvel Studios allowed us to see twice before the film’s opening. The first screening was so early, there were virtually no completed visual effects shots (which is why I had to screen it again, many weeks later) – but at least I got the gist of the storyline.

I believe Avengers: Age of Ultron set my personal record for the number of visual effects companies interviewed for one article: 20 effects studios, and a total of 26 interview subjects! The large pile of transcripts was daunting, but the resulting story leaves no Sokovian cobblestone unturned, no Ultron nuance unexplored, no Hulk muscle twitch unexplained.

Then there was Jurassic World.

Jurassic World felt a bit like a homecoming to me. I wrote the cover story for the first Jurassic Park in Cinefex 55, as well as the books The Making of Jurassic Park, which I wrote with Don Shay, and The Making of The Lost World, for which I went solo. I’ve been tracking those genetically engineered dinosaurs for a very long time …

So, I felt pleasure – and a small measure of relief – when the visual effects artists at ILM spoke of those films with such respect and reverence. I had wondered if a new generation of effects artists – people accustomed to more digital firepower on their cell phones than the Jurassic Park crew had in their entire arsenal – might dismiss that pioneering work as “less than.” To the contrary, they all spoke as if fully aware that they were standing on the shoulders of giants.

Thanks, Jody! All that remains is for me to open the gates and declare this issue of Cinefex well and truly … wait a second … is that a hole in the fence? Where did those gigantic footprints come from? And why are all those people screaming?

There’s only one thing for it. Grab your copy of Cinefex 142 right now … and run!!!


The Visual Effects of “Sense8”

Sense8 - a Netflix original

Ever since the release of The Matrix in 1999, filmmaking siblings the Wachowskis have constantly pushed the boundaries of the motion picture medium. Always ready to embrace innovative – and frequently mind-bending – narrative techniques, they have now turned their attention from big screen to small, with the release of the new Netflix series, Sense8.

Created by the Wachowskis and World War Z screenwriter J. Michael Straczynski, Sense8 explores the interwoven lives of eight people whose minds are irrevocably linked by a single, extraordinary event. Throughout the course of the show’s twelve episodes, these interconnected individuals – known as “sensates” – must not only come to terms with seeing the world through each other’s eyes, but also evade the hunters who are trying to track them down.

Visual effects for Sense8 were overseen by production VFX supervisors Dan Glass (who also directed the story set in Seoul) and Jim Mitchell. The production planned to cover most of the work with an in-house team, with a few larger sequences commissioned from Deluxe’s Encore TV. However, as the show’s visual requirements grew, additional vendors were brought on board.

Watch the Sense8 trailer:

VFX Q&A – Dan Glass

When did you get involved with the project?

The first discussions of involvement began in 2012, while we were in pre-production on Jupiter Ascending.

Creatively speaking, what was your overall approach to the show’s VFX?

The show had an incredibly tight budget and timeline, so we made every effort to capture what we could in-camera. Whilst the Wachowskis are known for their epic visual style, they are actually very pragmatic, and this show ran incredibly smoothly as a result. We decided early on to play the transitions between the sensates as largely sleight of hand, having them physically in each other’s spaces when communicating. Whilst this made for simpler VFX, the shooting style often required covering the dialogue up to four times – once in each location with the two sensates interacting, and again with each individual sensate acting alone in their respective environment.

How did you go about assembling the VFX team?

We set up an in-house team in Chicago, where editorial was based. This was led by digital effects supervisor Ryan Urban, who has worked with us for a number of years. The in-house team ended up completing over 700 shots for the series. For a small number of sequences, we knew early on that we would need the help of an external vendor: the Nairobi bus chase, and weather augmentation in Iceland for the finale. For the more complex work, and to help with bandwidth for the quick turnarounds, we enlisted the support of Encore VFX, Locktix and Technicolor VFX, with additional help from Studio 8 FX, Trace VFX and Almost Gold.

What were the key VFX scenes?

Most of the VFX is pretty invisible: split-screens (some very complex!), crew and rig removal, weather augmentation and screen inserts. The more visible work includes age manipulation of actors, more dramatic weather, a few greenscreens, CG blades, blood and wounds. For the most part, we aimed to shoot everything for real, and enhance later where appropriate.

How did you find the transition from VFX supervisor to director?

Directing was a great and satisfying challenge – if nerve-racking at times! There are a lot of skills you learn as a VFX supervisor towards the craft of telling stories, but nothing compares with showing up in a country where limited English is spoken, and having to deal with the feature film ambitions of a project on a TV shooting schedule.

Would you do it again?

Of course!

Daryl Hannah in "Sense8". Photograph by Murray Close.

VFX Case Study – Locktix

One of the vendors hired to accommodate the growing needs of Sense8 was L.A.-based Locktix. “They got in touch with us and said they needed extra firepower,” explained Locktix VFX producer Gresham Lochner. “Originally it was only for one episode – maybe five or ten shots – but then they realised how much the shot count was growing. It didn’t make sense for them to scale up internally, so when they started to divvy out work, we were kind of a shoo-in to continue on with the rest of the episodes. We’d love to thank specifically Dan Glass and Ryan Urban – they were some of the best clients we’ve ever worked with.”

Locktix was set up in 2011 by Lochner, following a stint as senior compositor at Digital Domain. Lochner had previously worked at a number of other effects facilities including Rhythm & Hues, Method Studios, Rising Sun Pictures and MPC. Matthew Bramante became his business partner in 2013, following a similar tour of duties around the world. “We’ve been growing steadily for the past couple of years,” Lochner stated. “We’ve just moved into a new space in downtown L.A., about five times the size of the original office space we had in Santa Monica.”

Fire enhancement was among the many invisible effects used by Locktix to enhance scenes in “Sense8”.

With just six full-time staff, ramping up to around twenty-four during busy periods, Locktix encourages its VFX artists to take on multiple roles. “If we have, say, a comp shot that requires some 3D tracking, I absolutely want to have an artist who can shepherd it all the way through,” Bramante remarked. “Also, it gives the artists ownership of their work, and their own creative input. That’s something that was really important to me when I was coming up as an artist, so I try to give it back to them.”

Ultimately, Locktix found themselves working on every episode of Sense8, with work ranging from digital fixes such as wire removal and split-screen effects to combine different actor performances from multiple takes of a scene, through to set extensions and atmospheric enhancements. “It definitely grew in scope as we went along,” commented Bramante, Locktix’s VFX supervisor on the show. “By the end we were up to 160-180 shots across all twelve episodes.”

To fill out this “Sense8” scene at a movie premiere, Locktix duplicated extras from the plate to expand the crowd, as well as adding extra dressing including spotlights and limos.

In true Netflix fashion, all twelve episodes of Sense8 were released simultaneously in all Netflix territories around the world. Indeed, in a recent interview with TVLine, star Daryl Hannah stated, “You may want to refrain from calling Sense8 a TV series at all. It was shot like a twelve-part movie. It is an incredibly cinematic, massive, epic-scale film.”

Despite the massive scale, however, Sense8 was delivered according to a relatively conventional television production schedule. “We’ve now done a couple of shows with Netflix, and they keep to a weekly or bi-weekly schedule, depending on what the production decides, and when editorial starts locking,” Bramante revealed. “On Sense8, we had a general understanding of how much was going to happen across the series, so we could hedge our bets and start work early on effects-intensive stuff that might be coming down the pipe later on. But schedule-wise it was still handled and delivered episodically, week by week.”

One of the major sequences Locktix worked on features a hazardous car journey to a hospital in blizzard conditions. While the sequence forms an important part of the series finale, time-shifts within the show’s overall narrative mean that references to it appear throughout the entire series.

Locktix added falling snow and atmospheric effects to around 40 shots in the climactic “Sense8” blizzard sequence. Top: original production plate. Bottom: final composite shot.

“They wanted to increase the danger of the scene, to add some more snow and really make it look like there’s a blizzard going on,” commented Bramante. “That was definitely our biggest sequence – somewhere in the vicinity of forty shots, from the beginning of the show all the way through to the end.”

One of the main tasks facing the Locktix team was filling the air with snowflakes. “There was snow on the sides of the roads, and there were one or two shots where they got some practical snow to fall, but for the most part the falling snow was added in,” Bramante stated. “We went through a bunch of iterations with our CG effects, exploring different types of snow and asking, ‘Does this look like just a flurry? Is this too blizzardy?’ We also added skies, clouds and various atmospheric effects.”

Demanding though the blizzard sequence was, the shots in which Locktix combined actors from multiple takes brought challenges of their own. “Funnily enough, most of the ‘straightforward fixes’ ended up being some of the more difficult shots to do!” observed Bramante. “As with most modern filmmaking, the cameras were moving all over the place, so with the split-screens there might be one take that really worked, and another take that really worked, but the cameras would be in wildly different places for each one. That was where we had to get creative.”

However, the ultimate challenge for the Locktix team came in the form of a humble chicken.

“There was this chicken, and it was supposed to be the same bird in two different shots,” Bramante recalled. “But they couldn’t get the same chicken both times, so in one shot it was all white, and in the other it was white with black feathers.”

Bramante’s solution was to push the 2D distortion tools in compositing software Nuke to the limit. “We have one or two Nuke guys who love to come up with new tools,” explained Bramante. “They created a whole suite of distortion tools based on Nuke’s IDistort, which let us do different skews and transforms, all connected to multiple trackers. We used those tools to create some fantastic-looking feathers. For something that nobody will ever think of as being visual effects, it was actually a pretty big shot!”
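Locktix’s in-house suite isn’t public, but the core idea behind Nuke’s IDistort is simple enough to sketch: an offset map tells each output pixel where in the source image to go looking for its value. Here’s a toy Python/NumPy version – the image and offsets are made up for illustration, and real tools would filter the samples rather than snap to the nearest pixel:

```python
import numpy as np

def idistort(image, offset_u, offset_v):
    """Warp a grayscale image by per-pixel offsets, in the spirit of Nuke's
    IDistort: each output pixel samples the source at (x + u, y + v).
    Nearest-neighbour sampling, clamped at the frame edges."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + offset_u).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + offset_v).astype(int), 0, h - 1)
    return image[src_y, src_x]

# A constant offset field of (1, 0): every output pixel samples the source
# one column to its right. Tracker data would normally drive these offsets.
img = np.arange(16, dtype=float).reshape(4, 4)
warped = idistort(img, np.full((4, 4), 1.0), np.zeros((4, 4)))
```

In production the offset maps would be built from tracker curves and painted distortion fields, which is what lets this kind of tool reshape feathers on a moving chicken.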

To create this composite shot for “Sense8”, Locktix integrated the wintry exterior view with a greenscreen shot of the boy, adding the window glass and the falling snow beyond.

VFX in Los Angeles

Working as they do in Los Angeles, the Locktix team are uncharacteristically optimistic about the future for the visual effects scene in a city whose once-thriving VFX industry has suffered a decline, with major effects facilities moving elsewhere or even closing down altogether. Lucrative subsidies continue to attract studios to whatever country offers the biggest financial advantage, forcing VFX houses – not to mention the “pixel gypsy” artists who are employed by them – to set up shop wherever the work is.

“Contrary to popular belief, there’s plenty of work here,” Bramante asserted. “There’s a lot of stuff that’s really quick turnaround, which definitely suits our particular company ethos. So, in the past couple of years, we’ve been specialising in doing what we call ‘911 effects work’.”

While providing an emergency service to filmmakers can be stressful, due in no small part to the short notice and tight deadlines, being local can be a distinct advantage. “A lot of the filmmakers are here in L.A.,” Bramante explained, “and what happens is that, at the last minute, editorial finds fifty shots that have to be done in a week and a half. That’s happened for us on some pretty major movies. On the show we’re working on right now, we’re going back and forth with editorial, and they’re just a twenty-minute drive across town. So it’s really easy to interface with them. With a phone call, or even on Skype, things can get lost in the shuffle.”

Lochner added, “When you get a larger company, there’s maybe twelve people who have to touch something before you can bring in work and get it back out. At Locktix we’re lean and mean. So just one person can bring a shot all the way into the pipeline and back out again.”

For “Sense8”, Locktix artists digitally augmented this facial makeup, adding texture to the wound and skin detail to the prosthetic patch, and matching the colour of the prosthetic to the actor’s skin as his face became flushed.

Personal contact also brings creative benefits. “I like to be in the room with the editor, or creative director, or producer,” Bramante commented. “I love coming up with creative options that give people other ways to achieve their goals, so at the end of the day they’re not saying, ‘Oh, these guys just want to make money off of us.’ I really want our visual effects to help the production.”

Having built a business in a city which many others have fled, Lochner has his own views on how best to tackle the thorny subject of subsidies. “My personal opinion is that it’s just a symptom of a poor business model – which is kind of rhetorical at this point,” he observed. “But we’ve structured things internally to get around that, so we can still compete even when people are being subsidised. So when those subsidies go away, and people are chasing them to a new area, we’re still going to be right here. It won’t disrupt our operation at all.”

Sense8 is now exclusively streaming on Netflix.

“Sense8” photographs copyright © 2015 by Netflix.

Lucasfilm & ILM Launch “ILMxLAB”

ILMxLAB

Lucasfilm Ltd. have just announced the formation of ILM Experience Lab. That’s ILMxLAB to you and me.

Drawing upon the talents of Lucasfilm, ILM and Skywalker Sound, this new division is on a mission to create immersive entertainment experiences at a fidelity never seen before. ILMxLAB’s ambitious plans include the development of virtual reality, augmented reality, real-time cinema, theme park entertainment and narrative-based experiences for a range of future platforms.

Watch the ILMxLAB launch video:

The official ILM press release includes comments from some of the key players, starting with Lucasfilm Executive Vice President and ILM President Lynwen Brennan, who stated:

“The combination of ILM, Skywalker Sound and Lucasfilm’s story group is unique and that creative collaboration will lead to captivating immersive experiences in the Star Wars universe and beyond. ILMxLAB brings together an incredible group of creatives and technologists to push the boundaries and explore new ways to tell stories. We have a long history of collaborating with the most visionary filmmakers and storytellers and we look forward to continuing these partnerships in this exciting space.”

Vice President of New Media for Lucasfilm Rob Bredow added,

“The pioneering spirit that inspired storytellers and technical artists to improvise, innovate and help imagine a galaxy far, far away is in the DNA of ILMxLAB. We see xLAB as a laboratory for immersive entertainment. It’s amazing to be working in a new medium where we get to help invent how stories are told and experienced, connecting artists with their audiences like never before.”

Lucasfilm President Kathleen Kennedy stated:

“The people who work here have been investing in achieving the unachievable for more than 40 years. Creative storytelling was something that George Lucas instilled in each of the companies from their earliest days and out of that came the incredible innovation that continues to this day. We are currently exploring the fictional universes of Star Wars, and I think a lot of people would like to be immersed in them. The challenge of ILMxLAB will be to find out what storytelling looks like in this new space.”

ILMxLAB Creative Director John Gaeta noted:

“Cinema is a master storyteller’s art form. Until recently, a “4th wall” has contained this form. Soon, however, we will break through this 4th wall and cinema will become a portal leading to new and immersive platforms for expression. ILMxLAB is a platform for this expansion. We want you to step inside our stories.”

ILMxLAB is currently collaborating on a number of projects, all of which are currently under wraps. But, given the steady build-up towards a certain cinematic event in December, it comes as no surprise to learn that the division has already promised to announce “exclusive Star Wars-based experiences later in the year”.


M is for Matte Painting

In the VFX ABC, the letter “M” stands for “Matte Painting”.

Take any film aficionado’s top ten list of favourite movie tricks, and the chances are you’ll find the venerable art of matte painting near the top. But what actually is matte painting, and what makes it so special?

To put it in a nutshell, a matte painting is a piece of artwork used to fill in part of a scene that can’t otherwise be photographed. Take a cathedral interior, for example. Assuming you can’t find a real cathedral to shoot in, do you really want to shell out half your precious budget on constructing that mile-high vaulted ceiling? Wouldn’t you prefer to build your set up to a convenient height of, say, ten feet, then use a painting to patch in the rest?

Or, let’s say you want to photograph Count Dracula’s castle perched precipitously on top of a mountain. Are you prepared to ship a construction crew all the way out to the Bavarian Alps? Are you ready to face a mob of locals with torches and pitchforks protesting about how you’re defacing the landscape? Doesn’t it make more sense to photograph a suitably rugged portion of rocky terrain, then hire a skilled artist to paint in the vampire’s looming lair?

In short, isn’t the most straightforward solution to use a matte painting? Of course it is.

Unfortunately, matte painting isn’t quite as simple as that …

Top: a scene from “Dancing Pirate” (1936) showing original stage photography. Bottom: final shot composited with matte painting background. Chief technician on the film was Willis O’Brien. Frame enlargements first published in “Movie Makers”, November 1936.

What’s a Matte

Matte painting has been around since the dawn of cinema. To understand its origins, we first have to understand the use of the word “matte”, which in visual effects terminology is really just another word for “mask”.

The original mattes were nothing more than pieces of black material, cut to shape and positioned in front of a camera in order to blank out part of a frame for later enhancement – the top half of a cathedral interior, for example. Thus the use of the term “matte painting” to describe the artwork created to fill in the blank.

As for combining the painting with the live-action, a common solution was to double-expose the artwork into the blank space left in the original footage, while masking the already-exposed portion of the frame with a counter-matte to protect it from further exposure. Another option was the “glass shot”, in which the vaulted ceiling of our notional cathedral would be painted in situ on a piece of glass positioned between the camera and the partial set.
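In modern digital terms, that matte and counter-matte double exposure boils down to a per-pixel blend. Here’s a toy sketch in Python – the image values are placeholders, but the arithmetic is the classic matte equation:

```python
import numpy as np

# Stand-in grayscale images: a live-action plate and a matte painting.
plate = np.full((4, 4), 0.8)      # original stage photography
painting = np.full((4, 4), 0.3)   # artwork to fill in the blanked region

# The matte: 1 where the plate is kept, 0 where it was blanked out.
# Here the top half of the frame is masked off, as for our cathedral ceiling.
matte = np.ones((4, 4))
matte[:2, :] = 0.0

# The counter-matte is simply the inverse; the two "exposures" sum to the
# final frame, just as they did photochemically.
composite = plate * matte + painting * (1.0 - matte)
```

The photochemical version had to get this right in-camera or on an optical printer, with no second chances; digital compositing makes the same blend endlessly revisable.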

Original photography for a shot in “The Last Days of Pompeii” (1935), showing the partial stage set before the addition of a foreground glass painting.

Completed shot, with the background added via a glass painting positioned between camera and set. Chief technician on “The Last Days of Pompeii” was Willis O’Brien. Photographs originally published in “American Cinematographer”, January 1940.

Ah, but the artwork used in a glass shot isn’t strictly speaking a matte painting, because no masks are involved.

To split such hairs is to open a much wider debate on the subject of process photography – the craft of taking multiple elements and combining them into a seamless, composite shot. There are many ways this can be done. Indeed, in the early days of the movies, hardly a year went by without a legion of competing special effects technicians filing one patent after another, each trying to corner the process photography market.

Here’s just a small selection, as chronicled by Earl Theisen in the June 1934 edition of The International Photographer:

  • 1874 – C. M. Coolidge was granted patent No. 149,724 for a process of making composite prints by masking
  • 1912 – A. Engelsmann was granted patent No. 1,019,141 for a system of combining actors and artwork on a glass plate placed in front of a painted backdrop
  • 1917 – R. V. Stanbaugh was granted patent No. 1,226,135 for a process in which a traveling mask was threaded in the camera together with an unexposed film
  • 1918 – Norman Dawn was granted patent No. 1,269,061 for a process using photographs of the foreground as a cut-out mask, behind which a background was then added
  • 1918 – Frank D. Williams was granted patent No. 1,273,435 for a bi-pack matte process
  • 1923 – D. W. Griffith was granted patent No. 1,476,885 on a process using a painted screen with a hole cut in it, with actors performing behind
  • 1925 – Ralph Hammeras was granted patent No. 1,540,213 for a new glass shot technique
  • 1926 – Eugene Schufftan was granted patent No. 1,569,789 for a variant of the glass shot which involved photographing through a transmission mirror
  • 1927 – C. D. Dunning was granted patent No. 1,613,163 for a traveling matte process using colour separations

I’ll spare you the long version of the above list. It goes on for a very long time.

The matte photography room at Warner Bros. circa 1940. Photograph first published in “American Cinematographer”, January 1940.

The Golden Era of Matte Painting

So how does a matte artist actually go about his business? Here’s an account from the July 1929 edition of American Cinematographer, written by Fred W. Sersen, who at the time was chief of the art department at William Fox Studios, and bearing the charming subtitle Some of the Intricacies of Making Things Seem What They Are Not Explained for the Amateur by an Expert with Years of Experience:

“It is best for the matte to be placed about thirty inches from the camera, even further if the glass or the material to be used for the matte is easy to obtain. When matting to the footage (especially when there is any wind blowing) or when dust is created by action in the scene, in a rain or snow storm, very soft blend is desirable and the matte should be placed four to six inches from the lens.”

Sersen goes on to explain various ways in which the painted part of the frame can be matched to the original photography. One of these involves projecting a test section of the original photography on to a coloured surface, and tracing the outlines of the scene:

“After the drawing is completed, it is laid in with oil paint in black and white, and on the artist’s ability and experiences depends the matching of the tones of the first exposure, which is ascertained by making the hand test and comparing the tones. He does this repeatedly by correcting the painting until the match is perfect.”

No sooner had artists perfected such techniques within the monochromatic world of early cinema, than along came colour. Now it wasn’t just a matter of matching light and shade, but also hue.

In the January 1940 edition of American Cinematographer, Byron Haskin (who went on to direct The War of the Worlds in 1953) made the following observations about the use of colour in matte paintings:

“It is obvious that the coloring of the actual set or landscape of the live-action portion of the shot must be precisely matched by the coloring of their respective continuations in the painting. This is by no means easy.

“It is entirely possible that the pigments used to paint a set may not photograph with the same Technicolor values as will visually identical pigments used in producing the matte painting. Therefore … the matte painter must not only know what colors were used in the set, but what paints were used to produce those colors. Where it is possible, he should have samples of the colors and paints used. The same, of course, is also true of fabrics and the like where they enter the matte painter’s problem.

“Equally important is the color of the lighting used in photographing the matte painting. Most Technicolor interiors are lit with special arc equipment which gives a light very closely matched to natural daylight.”

Because of these and other technical constraints, matte painters were forced to develop skills quite different to those of the average artist. They had to mix their colours and make their brushstrokes while all the time bearing in mind the myriad quirks and vagaries of the photochemical filmmaking process.

Peter Ellenshaw produced between 30 and 40 matte paintings for Richard Fleischer’s fantasy adventure “20,000 Leagues Under the Sea”.

Once the colours were matched and the composition approved, the matte painting was finally ready to be combined with the original negative. Alignment of the two was critical, so as to avoid unwanted black lines at the point where the two images came together. Feathered edges were frequently used to soften the joins. Equally critical was the stability of both camera and projector, as the slightest movement would cause the painted part of the frame to judder against the original photography.

By now it should be evident that these pioneers of matte painting faced some stiff challenges. As Earl Theisen remarked in the November 1936 edition of Movie Makers:

“Glass paintings are not simple, since the picture must be photographic in technique. The details, tone values and harmony of all parts of the painting must resemble a photograph and must match the perspective and photographic values of the setting. Very few artists have the ability necessary to do a glass painting.”

If you want to dig a little deeper into the history of the craft, check out Matte Shot, Peter Cook’s expansive repository of screenshots and commentary exploring the golden era of matte painting.

Mike Pangrazio of ILM works on a matte painting for “Return of the Jedi”. Live-action stage photography was rear-projected into the areas of glass left blank.

The Matte Painting Evolves

Throughout the 20th century, matte painting continued to thrive, enhancing the look of Biblical epics, Westerns, thrillers and period dramas alike. Got a scene in a lavish ballroom? Use the cathedral trick and just build the bottom half of the room – those matte boys just love painting chandeliers. Need to see a Spanish galleon anchored just off-shore? Forget shipbuilders – call the art department instead. And even if you’re forced to shoot on an overcast day, there’s no need to fret. It’s the work of a moment to paint a dramatic sky filled with ominous clouds.

As they developed their craft, matte painters were constantly trying out new techniques to stop their paintings looking like, well, paintings. They used layered artwork to create a greater illusion of depth. They devised cunning animated gags to simulate movement in the waves of a painted sea, or used backlighting to create convincing flares around a setting sun.

During the 1980s – that ambitious age when traditional photochemical effects techniques were being stretched to the limit – matte painters continued to push the envelope. For many of the spectacular wide shots in Return of the Jedi, for example, multiple sections of live-action were rear-projected into gaps left deliberately empty in gigantic glass paintings.

Top: A matte painting of Pankot Palace by Mike Pangrazio and Christopher Evans for a scene in “Indiana Jones and the Temple of Doom”. The painting was ultimately rejected by the artists. Bottom: The final shot was created using photography of a cut-out silhouette positioned on a hilltop, enhanced by matte painted highlights and additional architectural details.

At the same time, artists were also exploring more deeply the idea of combining “straight” matte paintings with other visual effects techniques. Disappointed by the look of their original matte painting of Pankot Palace for Indiana Jones and the Temple of Doom, ILM’s Mike Pangrazio and Christopher Evans prepared a cut-out silhouette of the building, which they photographed on a convenient hilltop at sunset. Matte painting techniques were then used to enhance the resulting image with highlights and other architectural details.

By the time ILM was working on the 1988 fantasy Willow, the line between matte painting and miniature had blurred still further. Speaking about Willow in Cinefex 35, matte department supervisor Christopher Evans remarked:

“There are real advantages to using miniatures with matte paintings. Miniatures have incredible perspective, but matte painting can create a sense of atmosphere and distance better than a model. Combining the two techniques is like making an alloy in metallurgy – the combination of the two ingredients is stronger than either by itself. The miniature is shot latent image and then we add the paint to it – so it is actually a blending and intermixing of the two.”

Yet, in this same decade, digital techniques were also on the rise. In this brave new world of photorealistic visual effects, would there still be a place for the traditional matte painting?

A Bigger Canvas

Astonishingly, despite all the changes that have swept through the effects business over the decades, matte painting is still a recognised – and thriving – discipline. Nowadays, of course, it’s called “digital matte painting” – or DMP – and on the face of it bears little resemblance to what was practised by those early paintbrush-wielding pathfinders.

In an era when the movie camera can (and frequently does) go anywhere, the new watchword for all visual effects artists is flexibility. It’s no longer good enough to lock down the camera and let the action play out across a static canvas – even a digital one.

What’s more, it’s now not unusual for a visual effects shot to be tweaked ten, twenty, even hundreds of times, with notes and comments flying endlessly back and forth between artist, supervisor and director. Faced with the need to generate multiple iterations of a shot, what artist would be crazy enough to use a paintbrush?

That’s why matte painters have had to enter not only the digital realm, but also the third dimension.

This Rodeo FX Game of Thrones reel contains many great examples of digital matte paintings and environment extensions:

A modern digital matte painting is created not by daubing paint on to a piece of glass, but by mapping artwork on to CG geometry. In a typical DMP environment, the models used might be relatively crude – perhaps just a series of flat planes or geometry layered up in virtual space. However, the devil is in the detail, and the detail is in the textures.

Textures for a DMP might be collages assembled from reference photography, perhaps taken on location or on the set, or artwork created from scratch. The digital matte artist refines these with retouching and hand-painting techniques, using software such as Adobe Photoshop or The Foundry’s MARI to create files that can subsequently be wrapped around the necessary geometry.

The result? A rich, dimensional environment that can be viewed from a number of key angles or, if necessary, from all sides.
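The projection step that wraps painted artwork on to that geometry can be sketched in a few lines. This toy Python example – with made-up camera numbers – shows the pinhole projection used to work out which part of the painted frame lands on a given point of the CG geometry:

```python
import numpy as np

def project_uv(points, focal, width, height):
    """Project 3D points (camera space, +Z forward) through a pinhole camera
    and return texture coordinates in the painted frame, normalised so that
    (0.5, 0.5) is the centre of the painting. Units: focal and sensor
    width/height in millimetres, points in scene units."""
    x = points[:, 0] / points[:, 2]   # perspective divide
    y = points[:, 1] / points[:, 2]
    u = 0.5 + focal * x / width
    v = 0.5 + focal * y / height
    return np.stack([u, v], axis=1)

# A vertex sitting on the camera axis picks up the centre of the painting.
# Camera values here are arbitrary stand-ins (35mm lens, 36x24mm back).
uv = project_uv(np.array([[0.0, 0.0, 10.0]]), focal=35.0, width=36.0, height=24.0)
```

This is the essence of camera projection: once every vertex has its UV, the painting sticks to the geometry, and the virtual camera is free to move – within limits – without the illusion falling apart.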

Yet, even in this integrated digital world, the artistry of the matte painter still holds sway. And on the rare occasion a matte painter actually finds himself in the director’s chair – as Robert Stromberg did recently when he helmed Disney’s Maleficent – the possibilities are endless.

Talking about his directorial debut in Cinefex 138, Stromberg revealed:

“I did about 100 matte paintings [for Maleficent] … I’ve been lucky enough to make people feel a certain way through background art or matte painting. This was an opportunity to explore other ways to create emotion. As a movie director, it’s a bigger canvas.”

A composite shot from “Maleficent” integrating a physical clifftop set with a digital environment by MPC. The latter included matte paintings, terrain projections and a digital model of the distant castle.

Matte Painting in a Nutshell

As we’ve learned, in the context of visual effects, “matte” means “mask”.

According to the Oxford English Dictionary, “paint” is “a coloured substance which is spread over a surface and dries to leave a thin decorative or protective coating”.

Taking those two definitions at face value, traditionalists might argue that modern DMP isn’t matte painting at all. I can see their point. Since it began, the craft has changed out of all recognition, hasn’t it?

Yes, and no.

Take another look at that list of patents from the first few decades of the 20th century. What do they prove if not that change has been with us since the very beginning? Each generation strives to better the last, and will eventually develop the tools to do so. Either every age is a golden age, or none of them is.

But it’s really only the tools that change, isn’t it? Everything else stays the same: the intent of the artists; the commitment they show to their task; the skill with which they wield their tools.

That’s why I don’t believe matte painting has changed at all. The skill-sets of the people doing the work may have altered, but their purpose remains what it has always been.

What is that purpose? To extend reality beyond its natural borders. To create the solid gold setting in which the jewel of performance can shine. To enhance what exists with what we can only imagine, and in doing so make the mundane beautiful.

Now that’s matte painting in a nutshell. What do you know? It turned out to be simple after all.


“Return of the Jedi” photograph copyright © 1983 by Lucasfilm Ltd. “Indiana Jones and the Temple of Doom” photographs copyright © 1984 by Lucasfilm Ltd. “Maleficent” photograph copyright © 2014 by Walt Disney Pictures.

“Avengers: Age of Ultron” – Stereo Q&A

Marvel’s “Avengers: Age of Ultron” – L to R: Hulk (Mark Ruffalo), Captain America (Chris Evans), Iron Man (Robert Downey Jr.), Hawkeye (Jeremy Renner), Black Widow (Scarlett Johansson), and Thor (Chris Hemsworth). Photograph: Film Frame. Copyright © 2015 by Marvel.

When booking your tickets for the latest summer blockbuster, you’ll probably be faced with a choice: 2D or 3D? With Avengers: Age of Ultron, the question is boiled down to its most visceral form. Is it enough just to see Hulk smash? Or do you want to see Hulk smash in stereo?

According to the latest MPAA report, Theatrical Marketing Statistics, nine out of the top ten box office hits in US/Canada in 2014 boasted a 3D theatrical release. The same year saw the global proportion of 3D digital screens increase to 51% (70% in the Asia Pacific region). Pundits continue to debate the pros and cons of 3D, but as long as the major studios continue to pump out big stereo movies, the desire will remain to make the 3D experience as punchy as possible.

Satisfying this desire on Avengers: Age of Ultron were two stereo conversion facilities: Prime Focus World and Stereo-D. In total, Prime Focus World converted 830 shots for the movie, with production running for three months, and the number of team members peaking at 613 across their London, Mumbai and Vancouver offices.

Cinefex spoke to Richard Baker, senior stereo supervisor at Prime Focus World, about the state of the art in stereo conversion, and about the company’s work on Avengers: Age of Ultron.

Marvel's Avengers: Age of Ultron - Hulk

So how about Hulk? He’s a big guy – does that make him a natural subject for 3D?

The issue of Hulk’s size was an interesting one. We obviously wanted to use the stereo to emphasise his scale, and the natural tendency would have been to pump him up in the stereo conversion. But this actually has the opposite effect, and tends to minimise scale. To increase the feeling that Hulk is much bigger than the other characters, we actually flattened him off a little, slightly reducing his internal depth and ensuring that he was never too separated from the background.

Okay, before we get too deep into Avengers: Age of Ultron, let’s fill in the background. First, is the demand for stereo as big now as it was shortly after Avatar brought it back to the mainstream?

There still is a big demand, although the market has now consolidated, with four main service providers in the conversion industry. Marvel is a great example of a studio committed to 3D: all of their films are planned to release in 3D, and they have recently added more films to their slate. Disney, Warner Bros., Fox, Lionsgate … most of the studios have 3D releases in their line-ups.

Have the techniques changed much since those early days?

In the early days of stereo conversion, it was assumed that linear stereo was always preferable – that it was more natural. In fact the industry learned that linear was not always acceptable – that sometimes it just didn’t look right. Also, filmmakers wanted full control over depth, in order to use the stereo as another storytelling device, alongside the grade, edit and sound mix.

This is where stereo conversion comes in. We have developed techniques to incorporate linear and non-linear elements into the same scene, including the generation of virtual stereo camera rigs from our converted scenes and delivering them to VFX to allow them to render CG elements that will slot straight into the conversion.

Editor’s note: The terms “linear” and “non-linear” refer to the way the stereo offset – in other words, the amount of “3D-ness” – reduces from the foreground to the background of an image. With linear stereo, the offset reduces at a constant rate. With non-linear stereo, the rate of decrease follows a curve, to more comfortably emulate what the human eye naturally perceives.
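The distinction is easy to picture with a toy depth-to-offset function. The following Python sketch is purely illustrative – the function names, the 20-pixel maximum offset and the exponent are assumptions for the example, not anything from Prime Focus World’s pipeline:

```python
# Illustrative sketch (hypothetical names and values): mapping a normalised
# depth (0.0 = nearest, 1.0 = farthest) to a horizontal stereo offset in
# pixels, comparing linear and non-linear falloff.

def linear_offset(depth, max_offset=20.0):
    """Offset shrinks at a constant rate from foreground to background."""
    return max_offset * (1.0 - depth)

def nonlinear_offset(depth, max_offset=20.0, exponent=2.2):
    """Offset follows a curve, compressing depth differences in the
    distance to sit more comfortably with how the eye perceives
    receding space."""
    return max_offset * (1.0 - depth) ** exponent

# Both agree at the extremes; in the midground the non-linear curve
# gives noticeably less offset than the linear one.
mid_linear = linear_offset(0.5)
mid_nonlinear = nonlinear_offset(0.5)
```

With both mappings, the nearest point gets the full offset and the farthest gets none; the curve only changes how quickly the “3D-ness” drains away in between.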

You use a number of proprietary tools that you’ve developed in-house. What are the advantages of doing this?

When we create tools, it is generally to cater to a specific requirement that off-the-shelf tools can’t handle, or don’t handle well. One of the benefits of maintaining an R&D department and developing tools in-house is that you can create them to be exactly what you want them to be. And sometimes we need to do things that no-one else is doing, or has thought to do. We specify, build and rigorously test all our proprietary tools, then roll them out across our global network.

The conversion process itself is constantly evolving also. My background is in VFX, so it always seemed sensible to me to use geometry in the conversion process – for example, with characters’ heads, and with certain environments where it is important to maintain consistency and accuracy of depth across shots. We use cyberscans of the main actors to produce head geometry for all our shows, and we also request lidar for environments that we know would benefit from geometry.

Watch a video about DepthGen, one of Prime Focus World’s proprietary stereo conversion tools:

What kind of background or training do you need to become a stereo artist?

As with VFX, the best stereo supervisors and artists understand and embrace both the creative and the technical aspects of what they are doing. Conversion is very much a VFX process, and we have a range of levels of compositing artists in our stereo teams – seniors, mids and juniors. Being able to really see the subtleties in a stereo shot takes months of training, of looking at stereo, and developing an eye for what works and what doesn’t.

Stereo conversion can provide a great grounding for junior artists who may decide to move into VFX later, and with most of the big blockbuster movies being released in 3D, this training can prove invaluable, not just in learning to recognise good stereo but also in learning about pipelines, colour space, working in Nuke or Fusion and a whole host of other areas.

Watch a video about another of Prime Focus World’s stereo conversion tools – PFLive:

Let’s get back to Avengers: Age of Ultron. How did Prime Focus World get involved in the show?

We were awarded the work on Avengers: Age of Ultron on the back of our stereo conversion work for Marvel on Guardians of the Galaxy. That was a great project to be involved with creatively, and it went on to win an award for “Best 3D Live Action Feature” from the Advanced Imaging Society.

At what stage in the production did you actually start to work on the show?

I was invited out to view an early cut of Avengers: Age of Ultron at the end of 2014. Even though it was all still previs and greenscreen at that point, the edit was already in great shape. I sat with Evan Jacobs, Marvel’s stereographer, and Mike May, Marvel’s stereo producer, to talk through the creative aspects of the conversion and discuss production details. This allowed me to put together my depth summary for the show – an internal guide for our international teams that describes how we intend to approach the stereo, how it will play out across the sequences we are working on, and where we can use the depth to create “3D moments”.

Marvel's Avengers: Age of Ultron - Scarlet Witch

Was any of the film shot with a stereo camera rig?

No, the whole film was stereo converted, although there were some stereo renders – for example, the Iron Man HUD shots. Robert Downey Jr. was shot mono on greenscreen for these sequences, and this footage was subsequently stereo converted. The converted shots were then delivered to Cantina, who comped the stereo HUD elements into the shot. Stereo renders of graphics content and transparencies always look better.

Does it make for more work, doing the stereo conversion in post?

To shoot Avengers: Age of Ultron native would have been nigh-on impossible. There was so much greenscreen content that making depth choices for the live-action characters would have been tantamount to setting the stereo blind, because the backgrounds hadn’t even been created yet. Also, for the VFX houses to work in and render stereo is a big overhead. Adding this to a tight production schedule and hugely complicated creative work would have been an extra hit they didn’t need.

Do you get a superior end result this way?

Ultimately, being able to control the stereo in post is the best way to go. The quality of the stereo result is no longer in question. In fact, it is arguably superior to native, unless native has been through its own post process treatment. Perhaps most importantly, it gives studios and directors the flexibility to make changes to the depth as they see the film come together. With native shooting, you are locked in to the decisions you made on set, so you’d better hope that they were right!

Tell us more about how the stereo “look” of the film was planned.

Stereo decisions are generally made by the show stereo supervisor – Evan Jacobs in this case – in conjunction with the director. For Avengers: Age of Ultron, Evan and I discussed the stereo up-front, making a lot of references to the style and direction that we had set up for Guardians of the Galaxy. Once the brief was clear, we started working up the depth in individual shots. Specific comments or decisions would come up later during client review. As the stereo supervisor across the entire show, Evan was seeing the big picture, and thinking about how the depth was coming together for the whole movie, rather than just for any one sequence.

How closely do you liaise with the VFX vendors during the stereo conversion process?

Our production team has a great rapport with the Marvel production team, which is invaluable on a big, complicated show like this. We also have great relationships with the VFX vendors, which is crucial when we need to harvest VFX elements early. ILM couldn’t have been more helpful. Their incredible minute-long opening shot for Avengers: Age of Ultron was one of the last renders to come through to us, and they were there 24 hours a day to assist us with breaking it out for conversion.

Marvel's Avengers: Age of Ultron - Church battle

Describe your workflow for stereo converting the film’s big VFX sequences.

The huge battle in the abandoned church towards the end of the movie is a good example of a massive, elements-based VFX sequence. We started working with ILM early in the conversion process to get element passes so that we could start to set up the scene in stereo, even though we knew the animation was likely to change. This benefited Marvel, as it gave them an early temp stereo version of the shot so they could see how it was going to play in the cut, and for DI. As newer versions of the shots came in, we incorporated the changes using our internal tools. This kind of collaboration is crucial to delivering shots like this, with hundreds of VFX layers, within the allocated production timeline.

In terms of the elements pipeline, the VFX work was made available to us in comp scripts containing all the constituent elements and layers that we needed for the conversion. We used our proprietary tool, AssistedBreakout, to render out smaller “minicomp” versions of the huge VFX files, giving us a more manageable script. Without this, the breakout process would have been manual, and taken skilled artists much longer to perform.

Watch a video explaining how Prime Focus World’s AssistedBreakout operates:

The depth generation was handled through our View-D conversion process, using depth mattes, Z-depths (where available), geometry and hand-sculpting to create a depth map which gave us the stereo offset. I reviewed every shot first here in London and delivered any internal notes. Once I was happy, the shot went forward to the client for approval, and any client notes were reviewed over TVIPs, our live stereo review system, with Evan Jacobs in LA and me in London. Once the shots were finaled, they were delivered as 2K log DPX files.
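As a rough illustration of why occluded areas matter in conversion, here is a hypothetical Python sketch of the core idea behind depth-based view synthesis: shift each pixel horizontally by an offset derived from its depth to fabricate the second eye, and holes open up where the foreground used to hide the background. The names are invented for the example and are not View-D internals:

```python
# Illustrative sketch (hypothetical names): synthesising one row of a
# second-eye image from a mono row plus a depth map. None marks an
# occlusion hole -- the revealed area that paint work, or VFX elements,
# must fill in a real conversion.

def synthesise_eye(row, depth_row, max_offset=3):
    """row: list of pixel values; depth_row: 0.0 (near) .. 1.0 (far)."""
    out = [None] * len(row)               # start with an empty row of holes
    out_depth = [float("inf")] * len(row)  # track what is nearest so far
    for x, (pixel, depth) in enumerate(zip(row, depth_row)):
        shift = round(max_offset * (1.0 - depth))  # near pixels shift most
        tx = x + shift
        if 0 <= tx < len(out) and depth < out_depth[tx]:
            out[tx] = pixel        # nearer content wins where pixels collide
            out_depth[tx] = depth
    return out

# A background ("B") with a foreground object ("F") in the middle:
left = ["B", "B", "B", "F", "F", "B", "B", "B"]
depth = [1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0]
right = synthesise_eye(left, depth)
# → ['B', 'B', 'B', None, None, 'B', 'F', 'F']
# The foreground has shifted, uncovering two holes where the camera
# never saw the background.
```

Those `None` holes are exactly the regions the interview describes: in a live-action shot they must be painted, while rendered VFX elements can supply the hidden background directly.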

Is it easier to create effective stereo for an element-based VFX shot, rather than live-action straight out of the camera?

Let’s say you had two similar shots: one with digital characters and lots of CG dust, smoke and explosions, and the other shot live-action with practical effects. The benefits of having access to the VFX elements would be most apparent in the cleanliness of the image. The live-action shot would require lots of paint work to fill the occluded areas – which can absolutely be done, and which we have done many, many times – but with VFX elements available we would be able to approach the shot in a different way. And it would be quicker.

On VFX-heavy shows – the majority of the shows that we work on – having the VFX elements allows us to extract Z-depths for the CG characters, and to separate the various layers of smoke, dust, particles and explosions. This reduces the amount of paint required and allows us to work at a higher quality with even more detail and accuracy.

Which shot in Avengers: Age of Ultron gave you the biggest challenge?

I guess if I had to choose the most challenging shot it would be the opening sequence. It was over a minute long. It was a continuous shot. It was one of the last to deliver. ILM had split it into three parts, and we had to split it into eight parts to make it more manageable in the time-frame. It was challenging from an editorial point of view, from a production point of view and artistically too. But we’re used to working with long shots after our work on Gravity!

Which shot do you think gives the most stereo bang for buck?

I think probably the two slow-motion shots that we worked on – the iconic tableau of the Avengers flying across the screen in the opening forest sequence, and the big rotating battle shot in the abandoned church. These are both perfect 3D moments, giving the viewer time to really take a look at the stereo. In the abandoned church shot, you have characters flying in and out of the action, towards the screen, plus there’s a really nice camera move, lots of atmospherics and dust and lasers and lightning … it’s all going on!

Marvel's Avengers: Age Of Ultron  L to R: Hulk (Mark Ruffalo), Captain America (Chris Evans), Thor (Chris Hemsworth), Iron Man (Robert Downey Jr.), Black Widow (Scarlett Johansson), and Hawkeye (Jeremy Renner)  Ph: Film Frame  ©Marvel 2015

You talked about big VFX shots coming in very late in the day. Since stereo conversion is one of the last jobs to be done, how do you cope with the time pressures?

In terms of the VFX, yes, we’re one of the last to touch the film, which means there is huge pressure on the stereo team. By the time we receive turnover, many departments have overrun – through no fault of their own – either because of reshoots, or edit changes, or other delays. It’s an inevitable aspect of the process. We know that the biggest VFX shots are the ones that will deliver last because they are the most complicated. It’s just the way it is, and we anticipate this in our schedules. We know that the last month of a big project like Avengers: Age of Ultron is going to be super-intense!

And yet, working with Marvel, this push to the finish line is also enjoyable. We have weekly calls with the entire post team, and everyone is working towards the common goal of delivering the best quality work and hitting their delivery dates. This openness and transparency is so important. One of Prime Focus World’s great strengths is that we can mobilise our global resources very quickly to ensure that we can handle any situation.

Marvel's "Ant-Man"

Prime Focus World is continuing its relationship with Marvel on Ant-Man. Does the film’s microscopic hero make it a good subject for stereo conversion?

Ant-Man is going to be a whole new challenge, with its own stereo style driven by the scale considerations of the main character. We’re talking to Marvel a lot about how we develop the stereo language for this show. How does our hero look when he is transformed into his tiny form? Does he feel small in a big environment? Or do you imagine the camera has shrunk with him so that he feels normal-sized? This movie is perfect for 3D, because the stereo can play such a huge part in creating a feel for the show, and we can use it to support the storyline. We’re really looking forward to this!

Finally, are there any new advances in stereo on the horizon?

Deep compositing is really exciting for us. Not all VFX houses are working with deep yet, but companies such as ILM, Double Negative and Weta Digital are. Receiving deep comps from VFX allows us to use the depth data to create the stereo offset, because the occluded areas are present within the deep information. Once we create the left eye/right eye offset, the “missing” information is automatically filled in. The development of our DeepGen tool to take full advantage of deep comps in the conversion process is probably one of the biggest areas of advancement on the tech side for us at the moment.
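A minimal sketch of that idea, with invented names and data: unlike a flat pixel, which keeps only one composited colour, a deep pixel stores every sample along the camera ray with its depth, so the content a stereo offset would reveal is already on hand instead of being a hole to paint:

```python
# Illustrative sketch (hypothetical data, not DeepGen): a deep pixel is a
# list of (depth, content) samples along one camera ray, nearest first.

deep_pixel = [
    (1.5, "dust"),          # nearest sample
    (4.0, "character"),
    (30.0, "church wall"),  # kept even though occluded in a flat comp
]

def visible_behind(samples, min_depth):
    """Return the nearest sample farther than min_depth -- the content a
    stereo offset would expose behind the foreground."""
    behind = [s for s in samples if s[0] > min_depth]
    return min(behind)[1] if behind else None

# Peeling past the dust reveals the character; past the character,
# the church wall -- no fill required.
```

In a flat comp only the front-most result survives, which is why occluded areas normally need paint; with deep data the “missing” information is simply read out of the samples.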

Also, our merger with Double Negative last year was a big step forward for the company, and we’re already seeing the benefits of this relationship in our collaboration on Avengers: Age of Ultron, and on forthcoming projects such as Terminator Genisys and Ant-Man. The close working relationship of our production teams, the ability to access VFX elements earlier in the process, the time and cost saving benefits to our clients – it’s a big step forward in terms of our efficiency.


Read the full story behind the visual effects of Avengers: Age of Ultron in Cinefex 142. For this latest Marvel spectacular, VFX supervisor Christopher Townsend oversaw a visual effects team that included ILM, Double Negative, Animal Logic, Luma Pictures and Framestore, with special effects supervisor Paul Corbould and the mechanical wizards at Legacy Effects lending practical effects support. Buy your copy of Cinefex 142 now!

Special thanks to Tony Bradley. “Avengers: Age of Ultron” and “Ant-Man” photographs copyright © 2015 by Marvel Entertainment.