About Graham Edwards

I'm senior staff writer at Cinefex magazine. I also write novels. In a former life, I produced animated films for theme park rides and science centres. If you offer me a cold beer, I won't say no.

“Ted 2” – VFX Q&A

"Ted 2" animation and visual effects by Tippett Studio and Iloura

In 2012, unsuspecting movie audiences were introduced to Ted, the animated teddy bear star of Seth MacFarlane’s irreverent comedy of the same name. Ted was a hit at the box office, rapidly gaining status as one of the biggest-grossing R-rated comedies of all time.

Now, the potty-mouthed plush toy is back in the sequel, Ted 2. The new film sees Mark Wahlberg reprising his role as Ted’s buddy and original owner, John Bennett, and chronicles the bawdy bear’s attempts to prove that he has the same legal rights as a regular human being … while amusing and offending pretty much everyone he encounters along the way.

Key to the success of both films was the convincing creation of Ted himself. For the sequel, as for the original movie, animation and visual effects duties were divided between Tippett Studio and Iloura, with additional support from Weta Digital, and with Tippett Studio’s Blair Clark fulfilling the role of production VFX supervisor.

Tippett Studio delivered around 600 shots of Ted, including a break-in sequence, a scuba dive, and a parody of John Candy’s singing performance with Ray Charles in Planes, Trains and Automobiles. Iloura delivered approximately 1,000 animation shots during their nine-month assignment.

For this exclusive roundtable Q&A session, Cinefex brings together insights from the following key artists at Iloura and Tippett Studio:

  • Eric Leven, Visual Effects Supervisor, Tippett Studio
  • Glenn Melenhorst, Visual Effects Supervisor, Iloura
  • Colin Epstein, Senior Compositor, Tippett Studio
  • Brian Mendenhall, Animation Supervisor, Tippett Studio
  • Jeff Price, VFX Editor, Tippett Studio
  • Howard Campbell, Lead Technical Director, Tippett Studio
  • Niketa Roman, PR Specialist, Tippett Studio

Now, without further ado, let’s talk teddy bears!

"Ted 2" animation and visual effects by Tippett Studio and Iloura

On the original Ted, the visual effects load was shared between Iloura and Tippett Studio. Is that how it worked on the sequel, and was the original team an automatic choice for the production?

GLENN MELENHORST: The old team were definitely brought back together for Ted 2. On Ted, we shared Ted around fifty-fifty with Tippett. This time around, Iloura completed twice as many shots as we did on the first film, sharing the rest with Tippett and a small portion with Weta.

ERIC LEVEN: I think there’s always a hope that you’ll be asked back, but you can never assume it’ll be so – ask Boss Films about Ghostbusters 2! There’s always love and friendship in Hollywood, but there’s always money, too. So we had to bid for the work and meet their price. Certainly Tippett and Iloura had a leg-up, but it was never a given.

GLENN MELENHORST: We were approached from the start to be part of Ted 2, and it was down to our availability and the bid. Seth was so happy with the work on the first film it’s no surprise he wanted us and Tippett back on this next adventure. As with the first film, Blair Clark was show VFX supervisor. He is an awesome supervisor from Tippett and he really understands both Seth’s aesthetic and how we work here at Iloura. As a company with a strong creature animation focus, we have always felt culturally aligned to Tippett and really enjoy working with them.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

How closely involved was Seth MacFarlane with the visual effects process?

ERIC LEVEN: I think it’s fair to say that Seth IS Ted in many respects, so he was instrumental in Ted’s performance. He not only provided motion capture, voice acting, and video reference, but also detailed animation performance notes. In addition, he always made sure that Ted stayed on-model.

GLENN MELENHORST: Seth was closely involved in reviewing our dailies from start to finish. Given his background in 2D animation, his focus was more on the subtleties of the animation than the nuts-and-bolts FX side of things, while Blair made sure the work between the studios matched and all the technical aspects of the job were on track.

Jumping back briefly to the first film, how was the original Ted character designed and rigged to give him that authentic teddy bear look?

GLENN MELENHORST: The original Ted was based on some rough designs from Seth. When it came to making the actual model, we collected several teddy bears and studied them to make sure our asset had all the right seams and panels. We had to make sure the whole way from model to rigging to grooming that we were creating a plush toy and not an animal or cartoon character.

ERIC LEVEN: The model and rig are generally pretty simple. The face rig is a bit more complex to make sure that we can get the correct range of expressions from Ted. What makes his face challenging is that he can’t move his eyes, his nose doesn’t change shape, and he only uses his eyelids for a very occasional blink. We rely much more on the eyebrows and mouth but we – and Seth – are constantly watching to make sure his face isn’t too misshapen or otherwise off-model.

GLENN MELENHORST: We also put a lot of irregularity and asymmetry into him so he would feel like a toy who’d been around for thirty-odd years and been beaten up a bit.

Did you develop a new Ted for the sequel, or did you just bring the old bear out of storage?

GLENN MELENHORST: I guess from a production perspective, and an audience expectation standpoint, Ted needed to be his old lovable self. But, as you can imagine, every film teaches you things about animation and pipeline, and after the first film we were keen to reinvent some of our workflows and rendering tech to make shots easier to turn out, as well as to step up the look a notch.

ERIC LEVEN: We were re-treading old ground in the sense that this is a different story about the same characters. Mark Wahlberg is back, and Ted is back. We don’t want a different Mark Wahlberg, and we don’t want a different Ted.

GLENN MELENHORST: Another thing to keep in mind is that software and hardware continue to march forward, speeding things up and allowing us to discard some of the cheats and workarounds and use a more unified raytrace lighting pipeline. In terms of animation, the process was much the same, although this time we went into the show already understanding much of the nuance that Seth was after for the character of Ted.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

Specifically, what changes did you make to the digital teddy bear for Ted 2?

GLENN MELENHORST: We made a few nips and tucks, mainly just tidying up a couple of things that were bugging us in the first film, such as messy hair around the lips which interfered with lip-sync. We reworked his facial rig slightly to make him a little easier to animate, and his hands to let him articulate a bit more easily.

ERIC LEVEN: The real changes we made at Tippett Studio involved upgrading our shading model to allow for more realistic fur lighting. Because of this, we didn’t need to futz with Ted too much in the comp, and he looked basically correct right out of the render. On Ted, there was a lot more playing with the look in the comp, which was obviously not ideal. Tippett Studio also transitioned to Katana on Ted 2 – that provided a great deal more flexibility for the TDs, and much faster turnaround of lighting tests and changes.

HOWARD CAMPBELL: We used environmental images captured on-set to mimic subtle variations in light and colour. This really improved our ability to integrate Ted into the scene and tie him in with the actors. We saw real leaps in details such as fur quality and reflections, but were still able to maintain the look and feel of the bear from the previous film. It’s the same bear, only better.

GLENN MELENHORST: The biggest changes we made at Iloura were improvements in our rendering technology. On Ted, we used a hybrid pipeline of raytracing in V-Ray and REYES in 3Delight. This time around, we still opted for a hybrid approach but put more emphasis on 3Delight, which we now also used for raytracing. We also completely reworked our cloth pipeline, using Marvelous Designer to build as well as simulate our clothing. This gave a very robust and realistic result, even when Ted was performing outrageous actions such as the main-title dance routine.


Animated characters have benefited hugely from improved flesh and muscle simulations. Is that of any use when you’re animating a stuffed toy?

ERIC LEVEN: For a teddy bear, we didn’t need any muscle sims. What brought the model to life was the use of cloth simulations to get the right shape and weight of his body beneath the fur.

GLENN MELENHORST: We did do a full cloth solve to simulate the effect of his body being made of fabric. This meant if Ted bent over or twisted his body, you would get creases and folds appearing.

Were there any special requirements for Ted’s fur?

GLENN MELENHORST: On the original Ted, we had a scene with Ted in the bath. That required custom groom and shaders. This time, there were no real special requirements for his fur … other than to look real!

ERIC LEVEN: The effects model was basically the same on this film. There were new costumes to deal with – a raincoat, a scuba suit, a hooker outfit – that provided their own aesthetic challenges, but nothing technically new.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

Stuffed toys often pop up in horror films – there’s something inherently creepy about seeing them come to life! How did you get around this problem?

GLENN MELENHORST: I think dolls in horror films are either motionless or spin their heads slowly with creepy music, which makes them seem more threatening. Ted moves more or less like a human, and he talks and cracks jokes, which is enough to give him empathy.

ERIC LEVEN: Our very first test of Ted for the first movie DID look creepy! We wanted Ted to appear worn and ragged – the way you might imagine a thirty-year-old stuffed animal would become. So we had a few rips that had been sewn up and a big ratty cloth patch that covered a hole. His fur was matted and his expression was just generally pissed off, even when he was joking. Looking back, all these things made Ted look pretty creepy. The key was to make him look like a more ordinary stuffed animal that wasn’t quite so raggedy. Worn and old, yes, but not dirty. We also played his expression more sardonic and less angry. That helped us relate to him instead of being repulsed.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

What about Ted’s eyes? They’re really just blanks. How do you stop them looking dead?

COLIN EPSTEIN: It would be fair to say that Ted’s eyes got more attention per shot than just about any other element – except perhaps his fur. Since we’re so trained to focus on someone’s eyes to glean emotion, we did everything we could to make his eyes not just look physically real, but to really add to the intent of the scene. Every choice was based on that goal, whether he needed an angry glare, a stoned vacancy, or a mischievous twinkle.

GLENN MELENHORST: Naturally, having no whites in the eyes means Ted has to turn his whole head to make eye-lines work. With no moving parts to his eyes, we had to achieve sidelong glances, stares, astonishment, and so on, with body language and subtle eyebrow animation. It’s very much like Snoopy or other characters who have dots for eyes: the tilt of a brow can really communicate a lot. With a character like Ted, you can have him just stare, completely motionless, and if you add welling music and a slow push in, the audience will conclude that he’s sad. You animate your intention, and the audience fills in the blanks.

How important is the way the eyes reflect their surroundings?

COLIN EPSTEIN: We had reference from a “stuffy” for almost every shot, but we only used that as a starting point. Each sequence environment got its own reflection set-up for Ted’s eyes, using set data and images, and then we’d manipulate that based on the feel his eyes needed. We’d often remove reflected details that were really there but proved visually distracting. For instance, the Comic-Con sequence was predominantly lit with a grid of lights on the stage ceiling. But accurately reflecting those rows of bright pinpoints in Ted’s eyes fought with his eyeline and cluttered things up visually, so we played those down. A key look note throughout the show was to make sure Ted’s eyes looked like old plastic, instead of brand new and shiny, or biologically alive. The practical bear had very clean and shiny plastic eyes, so we used them mostly for reflection and kick position reference.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

How did you achieve the “old plastic” look in the renders?

COLIN EPSTEIN: Ted’s eyes were heavily treated in the comp passes. Since we knew going in that adjusting Ted’s eyes was always going to need specific attention, the comp scripts had a standard set-up that let the compositors control just about every aspect of each eye separately. A stock row of blurs, colour grades and transforms was there in Nuke when a shot was started. If the kicks landed smack in the centre of Ted’s eyes because of the angle of his head, it often gave him a creepy look we called “devil eyes”. In those cases, we would push the kicks off-centre in the comp to keep him appealing. Another issue was that Ted is modelled wall-eyed, so kicks and reflections in one eye would look drastically different in another. This sometimes made it hard for him to look like he was focused on something specific. So elements in his eyes would be pushed around a bit to counter that. Little adjustments like these happened throughout the show to strengthen Ted’s performance.
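Stripped of the Nuke specifics, the per-eye template Epstein describes – a stock chain of grades and transforms that lets each eye be adjusted independently – amounts to something like this toy sketch (hypothetical Python for illustration, not Tippett Studio's actual comp scripts):

```python
import numpy as np

def adjust_eye(eye_rgb, gain=1.0, kick_offset=(0, 0)):
    """Toy per-eye adjustment: apply a colour grade, then slide the
    eye patch so a dead-centre reflection 'kick' sits off-centre,
    avoiding the 'devil eyes' look described above."""
    graded = np.clip(eye_rgb * gain, 0.0, 1.0)   # simple gain grade
    dy, dx = kick_offset
    return np.roll(graded, shift=(dy, dx), axis=(0, 1))
```

Because each eye gets its own parameters, a compositor can counter the wall-eyed model by nudging the kicks in the two eyes toward a common focus point.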

For the original Ted, Seth MacFarlane performed scenes off-camera in a motion capture suit, with a stuffy used as a placeholder in shot. Was the same methodology used for Ted 2?

GLENN MELENHORST: Yes, it was very much the same this time around. Seth’s mocap setup – called Moven – provided us with reference for Seth’s upper body, principally his arms and the angle of his head. The system captured no facial data, nor any leg or body motion. We used the mocap as a basis for keyframing, adjusting the data to fit Ted’s body and modifying it where needed.

JEFF PRICE: For each camera angle, we would receive both a stuffy pass – with the VFX supervisor acting out the scene with the stuffed bear – as well as a greyball pass. We could then compare the stuffy size and placement in each shot with our animated Ted, as well as the light and shadows with the greyball in the scene.

GLENN MELENHORST: Whereas Seth was able to shoot a sideways glance, Ted needed to turn his whole head to look sideways. Often, therefore, the mocap was used only as reference. Combined with captured video, it gave the animators all they needed to understand Seth’s intention for the shot.

JEFF PRICE: As was the case with the first Ted movie, Seth MacFarlane’s mocap performance generally served as a rough first animation pass, with our animators altering and fine-tuning Ted’s movements into a believable performance.

"Ted 2" animation and visual effects by Tippett Studio and Iloura

Animators are also actors. Is it important to “cast” the right animator for a particular shot?

GLENN MELENHORST: Animators are definitely actors. That’s something I believe quite strongly. We find a lot of animators who have skilled-up through online training courses, or through some of the bigger animation studios, tend to have learned what I can only describe as “Chaplin-esque 1920s silent-era acting” – overly expressive, hyper-extended, vaudevillian extremes. This has a place in many children’s films, but is not so useful when animating Ted. One of the best animation reels I ever saw was a study of a guy reading a newspaper. The subtlety was so well-observed – I loved it. Ted required that sort of observation, because his performance was so stripped back that every tiny key mattered.

BRIAN MENDENHALL: We spend a lot of time trying to keep consistency between performances. It is very important to the believability of the character as a whole. The worst note you could ever get from the director is to say that he wants to see the animator from one shot handle the animation on another (which did happen once). We don’t want the client to ever think there are any individuals behind the scenes. I’d like to think you can’t spot the difference when you watch the movie. Maybe I can – but not you!

GLENN MELENHORST: More importantly than spotting differences between animators, we worked to not let the audience detect differences in Ted between studios. Matching renders and the quality of light and integration is one thing, but continuity in animation across vendors is another beast altogether. Naturally, at Iloura we can tell a Tippett shot because we didn’t do it – but I hope it’s a harder task for the audience.

What was the most difficult scene you had to work on involving Ted?

GLENN MELENHORST: One of the greatest challenges was a shot where Ted fights a goose. The shot was turned over in the last two weeks of delivery. We had a simple goose asset provided, but we had to rig, animate and feather the goose in short order – no small feat. It was also a lengthy shot. We accomplished it by dividing the animation between animators and splicing the acting together. On top of it being a difficult shot to turn around, Seth was still actively blocking out ideas during the last week. That meant we needed to run everything in parallel as much as we could, building our lighting pipeline on blocking animation, re-rendering as new animation was fed through the pipe, and compositing with whatever was at hand, just to get the thing done.

And the most complex?

GLENN MELENHORST: The title sequence of the film centres on a Busby Berkeley-style dance number with Ted out front of a hundred or so dancers. The main set piece – an oversized plywood wedding cake – needed clean-up and roto to remove screws and scuffmarks, so we spent time making it look pristine. The same was true of the dance floor, which was mirrored but scuffed, and while we didn’t clean it out entirely, we did remove a lot of the visual clutter while retaining the complex reflections of the dancers. This extended to painting out camera rigs and light rigs, and digitally extending the curtains and floor.

As well as cleaning up the environment, did you also have to edit any of the dancers’ performances?

GLENN MELENHORST: In the last shot of the sequence, the camera tracks back and up as the dancers form a perfect triangle. The girl on the furthest right didn’t hit her mark, and so the back right point of the triangle was a mess. One of our 2D guys, Johnathan Sumner, spent about a month reconstructing the shot by moving the girl, restoring all the arms, legs and flowing gowns that were originally across her, filling in the gaps left by her being slid sideways, and fixing the reflections.

Describe how you got Ted dancing in the opening number.

GLENN MELENHORST: Ted’s animation for the sequence had three real influences. Firstly, we matched mocap provided by the choreographer for the sections where they knew what they wanted. Secondly, there were times when we needed Ted to copy the dancers behind him and dance in sync. Thirdly, totally custom animation was needed when Ted was not the focus of the shot, when he needed to backflip or tumble or act in a way not possible to capture, or simply when the choreographer left it up to us to come up with the performance. We blocked the dance, refining and revising as the choreographer and Seth edited and revised the sequence.

One member of our animation team, David Ward, has a strong background in dance and really took to the sequence. It was a perfect fit, and he and the others on the sequence turned out some really great animation. It was not enough simply to match the dancers or use the choreographer’s mocap, because Ted is so small. We needed to exaggerate his silhouette, poses and timing, snapping up large gestures to help him really pop.

Our lighting team had the task of not only matching the studio lighting, but also creating believable reflections in the scuffed floor. They had to make Ted glow as a bright dress passed him, or darken him when he was shadowed by the passing dancers – we rotoscoped many of the dancers to isolate the jungle of legs. The studio lighting also changed colour and intensity through the shots, all of which needed to be tracked and accounted for. The compositing team had the arduous task of interleaving Ted behind all the dancers, blending shadows and putting the final spit and polish on the shots. Overall it was a unique sequence and a great team effort.

We’ve heard a lot about Ted himself. Are there any memorable VFX shots that don’t feature the bear?

GLENN MELENHORST: The opening shot of the film is a cosmic zoom from the Universal logo orbiting the earth down into a stained glass window of a church in Boston. The shot was one of the first to be turned over and one of the last to be finished. As in the first film, we wanted the camera move to be a single unbroken shot, with no wipes or cheats between takes as is common in these things.

How was the cosmic zoom shot put together?

GLENN MELENHORST: Satellite and aerial photography was sourced and purchased, and our FX lead, Paul Buckley, began to block the camera move along with our match-move department. As the camera approached the city, we decided on what to build in 3D and what to leave as 2D matte painted elements. Eventually, we settled on building ten city blocks of Boston, along with some outlying skyscrapers. This build included all the vegetation, cars, pedestrians, kids on swings, debris in gutters, and so on. Many months of modelling, texturing, animation and matte painting went into the shot, and we are pretty proud of how it turned out.

Thanks to everyone for contributing to this Q&A. Any closing thoughts on Ted 2?

GLENN MELENHORST: Well, I do want to say what a pleasure it was for us at Iloura to work with Seth and Blair again. Iloura and Tippett are like sister companies – we really enjoy working together. Seth is always so complimentary about our work, and it is amazing how much that enthuses everyone in every department to put in a huge effort. Likewise, Blair was always so generous with his praise and support, and we all count him as an honorary Ilourian!

NIKETA ROMAN: I know it’s been said, but we at Tippett Studio really want to emphasise what a strong director Seth MacFarlane is and how great it is to work with him. He’s got a background in animation and a very strong vision of who Ted is as a character. He speaks the language of an animator and communicates what he wants very clearly. That clarity and ease of collaboration is really what makes Ted so successful … it’s also just plain fun to work on a movie where you spend so much time laughing. We had a great time!

Special thanks to Niketa Roman, Fiona Chilton, Simon Rosenthal, Ineke Majoor and Anna Hildebrandt. “Ted 2” photographs copyright © 2015 by and courtesy of Universal Pictures, Media Rights Capital, Tippett Studio and Iloura.

“Maggie” – VFX Q&A

Maggie - VFX Q&A with Aymeric Perceval of Cinesite

Buy a ticket for a movie about zombies starring Arnold Schwarzenegger, and you could be forgiven for thinking you’re in for two hours of high-octane, undead action.

Buy a ticket for Maggie, however, and what you’ll get instead is a contemplative independent feature in which Arnie exchanges muscles for melancholy in a dramatic role as devoted father Wade Vogel.

As the world is gripped by a viral epidemic which gradually turns its victims into flesh-eating monsters, Vogel rescues his infected daughter, Maggie (Abigail Breslin), from a clinical “execution” at the hands of the authorities. Devoting himself to Maggie’s care, Vogel is forced to witness her steady decline … and accept the dreadful truth of her eventual fate.

Maggie is directed by Henry Hobson, with Ed Chapman in the role of production VFX supervisor. A number of key visual effects scenes were handled by Cinesite, under the supervision of Aymeric Perceval.

Cinefex spoke to Perceval about the super-subtle digital makeup techniques used to enhance the look of Maggie, and the challenges of creating the broken-down city environments seen at the start of the film.

Arnold Schwarzenegger and Abigail Breslin star in "Maggie"

How did you come to be involved with Maggie?

I had been involved with the bidding team as compositing supervisor, when I was asked to have a look at Maggie and come up with a methodology. Henry and I quickly found ourselves on the same frequency and, because it was mostly 2D-orientated, I was appointed as Cinesite’s VFX supervisor.

This was my first VFX supervisor credit, so they paired me with senior VFX producer Diane Kingston to make sure the house would not burn down! At times I also had the input of VFX supervisors Andy Morley (who also jumped in as CG Supervisor) and Simon Stanley-Clamp.

How closely did you work with the director, Henry Hobson?

We worked very closely. We had bi-weekly cineSync sessions with Henry and Ed to catch up on the work, and emails were flying every day for all the extra questions. It was a very constructive and positive atmosphere. Henry’s extra requests made so much sense within the grand scheme of things that we went with the flow and tried to accommodate as much as we could. Even though there was a lot to do in so little time, everybody managed to keep it very smooth, and I’m massively thankful for that.

What was the scope of Cinesite’s work on the show?

The shots can be split into two bodies of work. The first was applying a “Dead World” look to 49 shots. The second was 81 shots that involved zombifying Maggie and other characters.

In total, 60 artists helped to deliver these 130 shots over a period of two months. The work mostly happened in London, but our Montreal office gave us a very helpful hand with prep and tracking, as the first few weeks were quite a rush. We had been asked to deliver final versions of the 70 most complicated shots in only one month for the Toronto Film Festival. However, Lionsgate picked up the distribution of the film and this deadline was cancelled.

Cinesite enhanced location footage with visual effects to create 49 "dead world" shots for "Maggie" – original plate photography.


This final composite "dead world" shot includes digital fire and atmospheric effects.


What part do the Dead World shots play in the film?

The movie starts with establishing shots of Maggie wandering at night in the streets of an abandoned Kansas City. When she gets arrested, her father Wade travels in from the countryside to bring her back home. Because the rest of the film is focused on the characters, it was very important to make sure the universe of Maggie was clearly defined by the end of this establishing sequence. The worst of the disease has passed and the world has been left in a state of decay and abandonment.

How did you go about creating the Dead World shots?

Because every shot is happening in a different place, we decided to use a 2½D approach. Photoshop matte paintings were projected in Nuke on to cards, sky domes and basic geometry which we modeled with Maya. We created the cameras with 3DEqualizer, using Google Maps as much as we could as we had very little information from the set.
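At its core, the 2½D trick is plain pinhole projection: the matte painting is "fired" through the original camera onto simple geometry, and a second camera can then move around it to generate parallax. A minimal sketch of that projection math (illustrative Python under simplifying assumptions – a camera looking down -Z with no rotation – not the actual Nuke/Maya/3DEqualizer setup):

```python
import numpy as np

def project_to_pixel(p_world, cam_pos, focal, width, height):
    """Project a world-space point through a pinhole camera at cam_pos,
    looking down -Z, into pixel coordinates of the painted frame.
    Returns None if the point is behind the camera."""
    x, y, z = np.asarray(p_world, dtype=float) - np.asarray(cam_pos, dtype=float)
    if z >= 0:
        return None  # behind the camera: nothing to project
    u = width / 2 + focal * x / -z   # perspective divide onto the image plane
    v = height / 2 + focal * y / -z
    return u, v
```

Each vertex of a projection card looks up its colour at its projected pixel; render the textured cards from an offset camera and the flat painting picks up believable depth.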

Environment lead Roger Gibbon and his team destroyed houses, burned cars, tagged walls, killed every hint of life and re-invented locations. They also added abandoned football fields, rusty water towers, empty flyovers, dusty debris on the road and a Kansas City skyline. This allowed us to build a Dead World library, which we then re-used in other shots to make maximum use of the small budget.

How were the digital matte paintings composited into the production plates?

The DMP work was passed to comp, who integrated it into the cleaned-up scans using roto and additional elements from our SFX library. One of the most complex shots required us to burn an entire field. Senior compositor Dan Harrod did a brilliant job using fire and smoke on cards, and Nuke particles for the embers.

The team also worked on establishing a palette and defining a look for the digital intermediate, which turned the beautiful green landscapes of the original photography into darker, more ominous tones.


There’s a fully CG shot looking down on a freeway interchange. Did you do many shots like it?

No, this is the only one.

Can you describe what part it plays in the sequence?

As Wade drives closer to Kansas City, Henry needed a final shot to mark the transition between the empty highway and the decayed urban area. So he had the idea of using some altitude and showing the desolation from above.

We first approached this shot thinking of degrading an actual area of Kansas City – we would have rebuilt it from satellite views. But we quickly moved towards mixing different areas so as to be sure the transition idea was conveyed.

How did you build up the shot?

Our DMP artist Marie Tricart blocked out the view and the core additional elements. We then projected these on basic geometry in Nuke to work out the camera and the length of the shot. Once blocked, we started layering the interchange, introducing multiple levels and heights to make the parallax interesting.

While Marie was adding more elements, changing, refining and destroying areas in Photoshop, modeller Michael Lorenzo added geometry to project on to. Finally, senior compositor Ruggero Tomasino gave the shot some extra 2D love by adding atmospheric elements – like flying newspapers (Nuke particles) and an animated 2D truck which the production team had shot.

Okay, let’s talk zombies. How does the disease affect people in Maggie’s world?

The zombification is a very slow process. Weeks pass by between the moment Maggie gets bitten on the arm and when she finally loses control. Henry’s idea was to show the disease spreading from the wound by a network of dark veins gradually covering Maggie’s whole arm, her shoulder, her neck and finally her face. At the same time, her eyes would start to cloud, the skin around them would begin to rot, and dry scabs would appear around the veins.

The whole idea is more about disease than gore. With the visual aesthetic of the movie, it had to be nearly poetic. Not a word you often use on zombie movies!

Abigail Breslin as Maggie

How much of the zombification had been done using makeup?

Because we arrived on the project after filming, we had to start from what had been done on set. The makeup team had painted veins on Abigail and used liquid latex for the scabs and the rot. Henry was happy with how the first stages were showing in camera but he felt that the last ones needed more work.

What did you add to the makeup?

The budget and timing did not allow for CG skin, so we worked around the existing makeup. We used the painted veins as a base, adding more and covering them with multiple layers of bruises and scabs. About twenty displacement layers were sculpted in Mudbox, together with additional displacement using textures created by senior texture artist Laurent Cordier. This gave us a flexible and non-destructive approach. Throughout our work, we kept track of the spread of the disease so as to keep the chronology accurate.
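The layered, non-destructive setup Perceval describes – separate sculpted displacement layers, each of which can be dialled up or down – boils down to a weighted sum over an untouched base. A toy illustration (hypothetical code, not Cinesite's pipeline):

```python
import numpy as np

def blend_displacement(base, layers, weights):
    """Combine sculpted displacement layers non-destructively:
    the base map is never modified, and each layer contributes
    according to its own adjustable weight."""
    out = np.asarray(base, dtype=float).copy()
    for layer, w in zip(layers, weights):
        out = out + w * np.asarray(layer, dtype=float)
    return out
```

Keeping the disease chronology accurate then just means storing one set of weights per story beat and re-blending the same layers.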

Cinesite delivered 81 shots in which subtle visual effects were used to enhance the effects of the zombifying disease on Maggie and other characters.

There are lots of extreme close-ups of Maggie’s body, often with low depth of field. Was this particularly challenging for you when applying digital makeup?

That visual style was one of the reasons why we enjoyed the project so much. It is very intimate – it felt like fresh air to our greenscreen eyes! But yes, it was definitely challenging. Abigail’s skin was completely baby smooth, so the tracking team did an incredible job, frame by frame. Imagine all these close-ups of body parts subtly twitching, with a focus point constantly traveling … and no tracking markers!  As the disease progressed and more painted veins appeared on Maggie’s skin, this task became a little easier.

Close-ups of Maggie’s feet going in and out of focus, with their subtle twitching, were massively tricky to track, even with the existing makeup. Senior matchmover Matt Boyer came up with sets of distorted planes which proved very helpful, and matchmove lead Arron Turnbull and his team did a magnificent job.

For some parts, we created geometry from the on-set pictures. Our head of assets, James Stone, modeled Abigail’s head based on photos provided to us. Incidentally, when we did receive measurements, our digital model was only 2mm out!

Abigail Breslin in "Maggie"

How many stages of decay were there altogether?

In total, we ended up with five Maggies. “Maggie 0” is the Maggie we meet at the beginning of the movie. She has just a tiny bit of red, flaky skin around the eyes. We did no work on her. “Maggie 1” sees her iris starting to cloud, some veins appearing on her forehead and in her neck, and the redness around her eye getting stronger.

“Maggie 1” is the stage that was lacking the most consistency on set, and so required the most invisible work from us. The effects had to be soft and discreet – very difficult. Because we had all the reels in-house, we managed to create templates to grade the scan with textured irises, veins and marks, based on the shots we were working on as well as the ones we would not touch.

How did the decay progress beyond “Maggie 1”?

“Maggie 2” takes over after she starts eating meat. At that point, we had to start the transition to our final stage Maggie. We pushed the eye cataracts further and used the projected veins pattern of a “Maggie 2.5” stage as mattes to grade, distort and fake a gentle 3D effect on her skin.

“Maggie 2.5” was at first supposed to be a more advanced version of “Maggie 2” – just before she turns into a zombie – but it soon became obvious that at this stage we would have to start using CG, so it earned its own codename. To create it, Laurent simplified the texture and displacement from “Maggie 3”.

So “Maggie 3” was the final stage?

Yes. “Maggie 3” was the properly CG-enhanced version which we designed as a last stage. Because CG skin wasn’t an option, we blended our V-Ray-rendered CG veins with the live action plate. The lighting had to be spot-on for the transition to work.

Did you pay particular attention to the eyes – those windows of the soul?

Yes, Maggie’s eyes go through a whole evolution over the course of the movie. We would often need to apply some effect to them at the same time as we would apply effects to her skin. We rigged the eyes of our digital model to help the match-move of the skin, so we ended up with usable animated geometry and UV-ed eyeballs to play with.

The eyes of “Maggie 0” have a soft white veil – this was achieved during the shoot with lenses, or in post with grading. We pushed the lens-work a bit further by making a cloudy layer appear within the iris of “Maggie 1” and adding more pronounced little red veins on the edges.

“Maggie 2” goes full-on cataract. These are the most visually surprising. The effect was achieved by mixing the eyeballs with a distorted version of themselves using a network of organic shapes. This allowed us to keep a good part of the performance. We then added some tiny localised grading to give it a warm and humid “yolk” effect.
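Mixing the eyeballs with a distorted copy of themselves through a mask is essentially a keymix operation. A minimal numpy sketch – with tiny hypothetical arrays standing in for the actual Nuke comp – shows how the mask controls where the cataract reads while the original performance survives underneath:

```python
import numpy as np

def keymix(plate, distorted, mask):
    """Per-pixel mix of the original eyeball with its distorted copy.

    mask runs 0..1 (driven by the network of organic shapes); wherever
    it is 0, the original performance shows through untouched – which is
    how a good part of the actor's eye performance can be preserved.
    """
    return plate * (1.0 - mask) + distorted * mask

# Tiny stand-in images: 2x2 luminance patches.
plate     = np.full((2, 2), 0.8)   # original iris
distorted = np.full((2, 2), 0.2)   # warped, clouded copy
mask      = np.array([[0.0, 1.0],
                      [0.5, 0.5]])
out = keymix(plate, distorted, mask)
```

The localised "yolk" grade described above would then be a second, similarly masked operation layered on top of this result.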

“Maggie 2.5” and “Maggie 3” see the white cataract disappear. Hundreds of little black veins invade her eyeballs to make it look like the disease is now taking control over her body.

Cinesite tracked production footage of Abigail Breslin as Maggie, then used digital techniques to enhance the practical makeup effects used on-set. Left: original plate. Right: final composite.

Tell us about the scene in which Maggie loses her finger.

After Maggie falls and cuts her finger, she starts bleeding a lot; because she doesn’t feel anything any more, she simply cuts it off. On set, they had used a finger prosthetic for the first shots. We started by removing the rig, cleaning up the junction with the hand and removing the wobble. Then we match-moved a cut, sliced a hole at the bottom and regraded the whole thing to make it look like the inside of the finger was getting soaked with blood. When she cuts it off, we went for a CG stump.

They had done a good job of hiding most of Abigail’s finger on set, but the amount and the dryness of the blood on her hand was inconsistent. So James and Laurent modelled the finger, and lighting and texture artist Peter Aversten created multiple layers of blood textures to aid continuity.

Did you also apply digital make-up to the other zombies seen in the film?

There are not that many of them, but Maggie is certainly not the only zombie in the movie, and they all required enhancement. The best ones to talk about are maybe Nathan and Julia – a father and daughter we meet in the forest just after Maggie cuts her finger. The makeup was good, but unfortunately it was not showing enough on camera. We painted veins over their faces and their hands and match-moved them. And just because we could, we also warped and regraded them to make them look much skinnier and way less healthy.

Maggie was your first project as VFX supervisor. Was it a big learning curve?

Well, because it was only two months, there was little room for error. Luckily, the Cinesite team has been very supportive. Thanks to their talent and positive approach, the ride has been a pretty good one. Hopefully, they’ve enjoyed it too!

Maggie is released on Blu-ray and DVD today, 7 July 2015.

Special thanks to Helen Moody. “Maggie” photographs copyright © 2015 Lionsgate and Lotus Entertainment.

Frankenstein’s Spawn

Frontispiece to the 1831 edition of “Frankenstein; or, The Modern Prometheus” by Theodor von Holst. From a private collection at Tate Britain, Bath. Public domain via Wikimedia Commons.

When Mary Shelley’s novel Frankenstein; or, The Modern Prometheus was first published in 1818, few people could have predicted how deeply its central theme of “man creates monster, monster runs amok” would embed itself into popular culture … especially the movies.

Whether it’s James Whale’s presentation of Frankenstein in 1931, or Steven Spielberg’s adaptation of Michael Crichton’s novel Jurassic Park in 1993, filmmakers have delighted in telling stories about scientists doggedly seeking that elusive “Eureka!” moment … only to be undone by their own misguided ambitions.

In fact, the only thing filmmakers love more than reflecting on human hubris is reaching that part of the movie where they actually get to unleash the monsters!

The familiar Frankenstein conceit is back on our screens this summer, as the dinosaurs of Jurassic World run riot through the lush landscapes of Isla Nublar. Thanks to the first three Jurassic films, we’re already familiar with the idea of genetically engineered velociraptors, but when it comes to man-made monsters, prehistoric super-critters are just the tip of the iceberg.

At Fox Studios Baja, in Mexico, “Deep Blue Sea” director Renny Harlin and special effects supervisor Walt Conti attend to one of the production’s three full-size mechanical sharks.

In Renny Harlin’s Deep Blue Sea, the remote ocean base of Aquatica is home to three hyper-intelligent – and super-violent – mako sharks. The product of a botched attempt to cure Alzheimer’s disease, this toothsome trio – a bubbly blend of both mechanical and digital sharks – chew their way through the majority of the film’s cast before expiring in suitably explosive fashion.

Special makeup effects artist and were-sheep performer Kevin McTurk endures a costume fitting for “Black Sheep” at Weta Workshop.

Genetic manipulation can transform the most unlikely of creatures. In Jonathan King’s comedy horror Black Sheep, a series of secret experiments transforms the ovine inhabitants of a New Zealand sheep farm into bloodthirsty carnivores … with a little assistance from the mechanical and make-up effects maestros at Weta Workshop.

Actress Delphine Chanéac enacted scenes as the adult Dren in “Splice” wearing blue- or greenscreen stockings, padding on her thighs and a prosthetic tail nub. Visual effects by C.O.R.E. integrated the live-action with animation of a CG tail and legs.

All that messing around with the genetic code can bring about far more disturbing offspring than flesh-eating sheep. In Vincenzo Natali’s Splice, a pair of genetic engineers get unnaturally creative by combining animal and human DNA. The result? A bizarre and disturbingly sexy hybrid called Dren who, you guessed it, goes on the rampage and proves that the inevitable by-product of creating life – at least as far as Hollywood is concerned – is death.

As the sharks of Deep Blue Sea prove, even when the genetic meddling is intended to increase mental ability, physical violence invariably ensues. The enhanced chimpanzee Caesar, simian star of Rise of the Planet of the Apes, is brighter than many human beings (not to mention tortured by a very human mix of morality and conscience). For all his smarts, however – or perhaps because of them – Caesar is not above leading a revolution against the very species that raised him above the animals.

In “Rise of the Planet of the Apes”, actor Andy Serkis played the pivotal role of Caesar wearing a facial tracking camera on a headset and a grey suit dotted with LEDs. His performance was closely matched by a team of animators at Weta Digital.

If you’re a movie scientist, why draw the line at animal experiments? Why not advance computer technology to the point where the machines can think for themselves?

For waist-up shots of the T-800 in “The Terminator”, a puppet torso created by Stan Winston Studio was puppeteered by Shane Mahan using a rig similar to a backpack.

Cinema is full of thinking machines inspired by Mary Shelley’s seminal novel. HAL 9000, that murderous box of microprocessors from 2001: A Space Odyssey, is a Frankenstein’s monster if ever there was one. So too are the power-crazed computers in Colossus: The Forbin Project and War Games.

More deadly still are the machines that can actually move around. Ultron, the hyper-intelligent mechanoid who seeks to destroy mankind in Avengers: Age of Ultron, even makes a Victor Frankenstein out of good old Tony Stark.

As for the machine intelligences in the Terminator and Matrix franchises, they represent the most terrifying Frankenstein story of all: the spawning not just of a monster that will destroy its maker, but of an entire race of fabricated fiends hell-bent on wiping mankind from the face of planet Earth.

In the face of such a threat, maybe we should step away from planet Earth altogether and seek safety on some distant world. Yet, even in the depths of space, Frankenstein’s monster thrives … and loves to hear you scream.

Ridley Scott’s Prometheus (the title of which borrows more than a little DNA from that of Mary Shelley’s novel) tells of a godlike alien race whose experiments with bio-weapons go horribly wrong. And what’s the end result of the tortuous chain of genetic mutations triggered by the godlike Engineers? Only the most iconic Frankenstein’s monster of the modern age: the sleek and deadly xenomorph seen in the classic sci-fi horror Alien and its sequels.

In this climactic scene from “Prometheus”, Daniel James as the Engineer grapples with Weta Digital’s digital trilobite monster.

Hmm. Considering the jaws on that thing, perhaps we’re better off on Earth after all. At least here we stand a chance of reasoning with the monsters we make. Perhaps the monsters will even reason with us.

That’s exactly what happens in Blade Runner, one of the best cinematic retellings of the Frankenstein story … and the one that most closely echoes Mary Shelley’s original intent.

Rutger Hauer as Roy Batty in Ridley Scott’s “Blade Runner”.

Like the monster in Shelley’s novel (which in reality bears little resemblance to the hulking ogres seen in many of the film adaptations) the genetically engineered replicants of Blade Runner are not shambling beasts but articulate innocents, misfits cast adrift in a world they do not understand.

Okay, so Blade Runner‘s replicants aren’t averse to a little physical violence. But they resort to it only because they don’t understand the world into which they have been brought. In that respect, they’re just like the dinosaurs of Jurassic World.

As the monster says to his creator in Mary Shelley’s Frankenstein; or, The Modern Prometheus: “I ought to be thy Adam; but I am rather the fallen angel.”

Fallen angels. That’s exactly what all these monsters are, from the sharks of Deep Blue Sea to HAL 9000, from the ape leader Caesar to Roy Batty and his replicant friends … and yes, even the Indominus Rex currently busting the sub-woofers at an IMAX screen near you.

Frankenstein’s monsters. We create them. We worship them. Yet we have the temerity to grumble when they attempt to rip out our throats.

Will we never learn?


Does the Frankenstein theme still have the power to chill your blood? What Frankenstein-inspired movies would you add to the list? Unleash your genetically modified opinions in the comments box!


“The Terminator” photograph copyright © 1985 by Orion Pictures Corporation. “Deep Blue Sea” photograph copyright © 1999 by Warner Bros. Pictures. “The Matrix Reloaded” photograph copyright © 2003 by Warner Bros. Pictures. “Black Sheep” photograph copyright © 2006 by Live Stock Films and courtesy of Weta Workshop. “Splice” photograph copyright © 2010 by Copperheart Entertainment and Warner Bros. Pictures. “Rise of the Planet of the Apes” photograph copyright © 2011 by Twentieth Century Fox. “Prometheus” photograph copyright © 2012 by Twentieth Century Fox.

Avengers: Age of Ultron – Cinefex 142 Extract

Avengers: Age of Ultron - exclusive VFX coverage in Cinefex issue 142

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions.

Our final offering is Avengers: Age of Ultron. You might think that once you’ve seen one Hulk, you’ve seen ‘em all. Not so, as proved in this extract featuring interviews with some of the key players at ILM:

“We did some nips and tucks on the original model to make Hulk leaner,” explained ILM animation supervisor Marc Chu. “Hulk is a little more GQ handsome this time around.”

ILM also developed a new muscle rig for Hulk, based on real-life physiologies. “ILM has always taken an outside-in approach to this type of animation,” commented ILM visual effects supervisor Ben Snow. “We would create a detailed skin surface with a lot of the muscle shapes built in, and then we’d create a muscle rig underneath that – but, often, the rig was used mostly for getting fleshy sims to jiggle and bulge.

“We still relied very heavily on the work and detailing done by our sculptors. On this one, creature technical director Sean Comer proposed building an inside-out system that entailed a more realistic set of underlying muscles, which would allow us to transfer more correct motion back onto the skin.”

Read the complete article in Cinefex 142, which also features Jurassic World, Mad Max: Fury Road and San Andreas.

All content copyright © 2015 Cinefex LLC. All rights reserved.

San Andreas – Cinefex 142 Extract

San Andreas - exclusive visual effects coverage in Cinefex issue 142

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions.

This time it’s the turn of San Andreas. Destruction on the scale seen in San Andreas demands the use of digital techniques. However, when it comes to making a room set, you can’t beat practical effects, as described by special effects supervisor Brian Cox:

“Originally we discussed putting the whole set on a shaker rig. When I learned that Brad was planning to film the scene using a Steadicam – following Carla Gugino through the restaurant, up and out onto the roof – I pointed out that shaking the whole set would make it very difficult for the camera operator. Instead, we made everything move around the camera on individual shaker rigs.

“We put water features on rails and had air bags push and pull those to get the water moving. We rigged bottles coming off the bar, all the tables and the chairs. We had shakers everywhere, with debris dropping from the ceiling and a small explosion in the kitchen. It was quite a big deal. I had 30 effects technicians operating rigs all around that set.”

Read the complete article in Cinefex 142, which also features Jurassic World, Mad Max: Fury Road and Avengers: Age of Ultron.

All content copyright © 2015 Cinefex LLC. All rights reserved.

Mad Max: Fury Road – Cinefex 142 Extract

Mad Max: Fury Road - exclusive visual effects coverage in Cinefex issue 142

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions.

Today we’re looking at Mad Max: Fury Road. In this extract, stunt supervisor and action unit director Guy Norris describes the massive logistical operation behind the action-packed location shoot in Namibia:

The desert base camp accommodated approximately 1,000 crew members. “It was a Western on wheels. It was like when John Ford would get together 40 riders and they’d spend three months filming in Colorado. We had 65 stunt people on location for nine months. On our biggest days, we had more than 150 stunt performers.

“We had our training base and gym in a factory in Walvis Bay, and when we rolled out to location it was like a military operation with different divisions — bikes, trucks and cars. Every morning I used a big whiteboard to brief the crew, drawing roadmaps and using Matchbox toys to show what we were going to do.”

Read the complete article in Cinefex 142, which also features Jurassic World, San Andreas and Avengers: Age of Ultron.

All content copyright © 2015 Cinefex LLC. All rights reserved.

The VFX of “Jonathan Strange & Mr Norrell”

Bertie Carvel as Jonathan Strange and Eddie Marsan as Mr Norrell

This month, June 2015, marks the bicentenary of the Battle of Waterloo, the decisive European conflict of 1815 in which the French Emperor Napoleon and his army were defeated by British and Prussian forces.

It can be no coincidence that BBC Television has chosen this season of commemoration to air Jonathan Strange and Mr Norrell, a dark fantasy in which two rival magicians of the 19th century direct their otherworldly powers to aid the war effort at Waterloo. Adapted by Peter Harness from the award-winning novel by Susanna Clarke, the seven-episode show is directed by Toby Haynes, and features around 1,000 visual effects shots created by a team of 50 artists at Soho-based Milk VFX.

“The show’s producer, Nick Hirschkorn, is a long-standing client of ours. About four years ago he told me he’d just acquired the rights to Jonathan Strange and Mr Norrell,” recalled Milk CEO Will Cohen. “I got very excited – I’d read the book when it first came out, and loved it. The show went into preproduction in April 2013, at which point we sat down with the first scripts, and had initial meetings with the production designer [David Roger] and the director of photography [Stephan Pehrsson] to discuss how to bring it alive.”

Rain ships created by Milk VFX for "Jonathan Strange & Mr Norrell"

The Battle of Waterloo

Throughout the series, the magical powers of Jonathan Strange (Bertie Carvel) and Mr Norrell (Eddie Marsan) are made manifest through a number of supernatural set-pieces. One of the most spectacular of these occurs at the beginning of episode five, when Napoleon’s army attacks the British and allied forces at Château d’Hougoumont in Belgium – a key moment in the Battle of Waterloo.

During the sixty-second opening shot, the camera swoops over a smoke-covered battlefield filled with tens of thousands of warring soldiers and resounding with cannon-fire, before finally descending into the fortified garrison of Hougoumont, where Jonathan Strange is using his magic to help repel the enemy hordes.

“Instead of just seeing twenty extras in the scene, with the main battle happening off-camera – which is a very ‘television’ conceit – we wanted to get the full horror of the fighting,” remarked Cohen. “We’ve done similar scale shots for films, like Insurgent, so to me it represents a crossover to what you can do with high-end television in 2015. I’m really proud that the producer and director and execs backed up doing it.”

Watch a video breakdown of the Battle of Waterloo sequence:

Inspired by the battle scenes in Sergei Bondarchuk’s 1970 film Waterloo, the Milk team set about crafting the Waterloo sequence. “We worked out that in Waterloo there were about 40,000 extras,” commented visual effects supervisor Jean-Claude Deguara. “That film was also a great point of reference for us in terms of terrain and textures.”

To re-create the garrison at Hougoumont, a set was constructed at an airport location in Canada. The final frames of the grand opening shot were photographed as a crane move, with the camera descending from high level down into the set. Deguara and his team incorporated this live-action into the latter half of their shot, digitally replacing the airport surroundings with the appropriate terrain. The front end of the shot was fully digital.

For the most part, the terrain was historically accurate, based on both documents from the period and data gleaned from Google Maps. Further research served up facts and figures about how the troops were deployed on the day. While the environment and troop movements were reproduced as accurately as possible, artistic and technical considerations made necessary the occasional bit of historical revisionism. “We had to cheat it slightly to get everything in,” Deguara confessed.

Editorial and budgetary pressures meant that a shot as ambitious as the Waterloo flypast was constantly under threat of being trimmed – or even cut altogether. “We knew that this shot could potentially be chopped at any point,” remarked Deguara. “So we started off at a very basic previs level – the armies were just blue squares and red squares. We began with the movement of the camera, then gradually built it up, step by step. With a TV budget, you have to be so regimented in how you do it.”

CG supervisor Nicolas Hernandez added: “On a feature film, you can create an asset straight away that will be photorealistic for a full-screen close-up. On this show, we had to put in more time towards the end to get the assets looking nice.”

"Jonathan Strange & Mr Norrell" visual effects by Milk VFX

The massive 60-second aerial shot that opens the Battle of Waterloo sequence took around three months to complete. The 50,000 digital soldiers were controlled using Golaem Crowd.

Trumping Bondarchuk, the Milk team populated their digital battlefield with no fewer than 50,000 soldiers. During production, performers and extras were photoscanned wearing period costume, with the resulting data being used to create multiple types of CG double, ranging from Napoleonic grunt to Regency officer.

To control these huge numbers of digital extras, Milk turned to Golaem Crowd, their crowd management tool of choice, which they had first used on Brett Ratner’s 2014 feature Hercules. “We had soldiers, captains, all with different guns, different uniforms, different props,” said Hernandez. “They were all procedurally managed by Golaem.”
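Procedural management of this kind typically keys each agent's look off its own identifier. The sketch below is not Golaem's actual API – the variation tables and function are invented for illustration – but it shows the principle of deterministically dressing tens of thousands of agents from a handful of scanned assets:

```python
import random

# Illustrative variation tables – the names are invented, not Golaem's.
UNIFORMS = ["french_line", "french_guard", "british_line", "prussian_line"]
WEAPONS  = ["musket", "rifle", "sabre"]
RANKS    = ["grunt", "captain", "officer"]

def dress_agent(agent_id, seed=1815):
    """Deterministically pick one costume/prop/rank combination per agent.

    Deriving the RNG state from the agent id means re-running the crowd
    gives every one of the 50,000 soldiers the same look each time, so
    renders stay reproducible from iteration to iteration.
    """
    rng = random.Random(seed * 1_000_003 + agent_id)
    return {
        "uniform": rng.choice(UNIFORMS),
        "weapon":  rng.choice(WEAPONS),
        "rank":    rng.choices(RANKS, weights=[90, 7, 3])[0],
    }

army = [dress_agent(i) for i in range(50_000)]
```

Weighting the rank choice keeps the mix plausible: a battlefield of mostly grunts, with captains and officers sprinkled through at realistic ratios.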

Similar attention to detail was used when re-creating the firepower of the digital troops. “Every cannon had five soldiers around it, which was historically correct,” observed Deguara.

Simulation systems permitted each cannon to “fire” automatically. “The simulation worked out the projection of the cannonball, and where it would hit the ground,” explained Hernandez. “We imported it into Maya, where we used a library of explosions that triggered procedurally. The first version we did of the shot looked like a Michael Bay film! We had 25,000 explosions, and the whole screen was covered in smoke!”
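Working out a cannonball's projection and ground hit is simple projectile geometry once drag is ignored. A minimal sketch under that assumption – the muzzle speed and elevation below are made-up numbers, and a production simulation would also account for drag and terrain height – looks like this:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def cannonball_impact(muzzle_speed, elevation_deg, muzzle_height=1.0):
    """Return (range_m, flight_time_s) for a shot over flat ground.

    Drag-free projectile motion: solve y(t) = h + vy*t - G*t^2/2 = 0
    for the positive root, then range = vx * t. This gives the impact
    point where an explosion event could be triggered procedurally.
    """
    theta = math.radians(elevation_deg)
    vx = muzzle_speed * math.cos(theta)
    vy = muzzle_speed * math.sin(theta)
    # positive root of the quadratic in t
    t = (vy + math.sqrt(vy * vy + 2.0 * G * muzzle_height)) / G
    return vx * t, t

# e.g. a hypothetical 400 m/s shot at 5 degrees of elevation
dist_m, t_s = cannonball_impact(400.0, 5.0)
```

Each impact point computed this way could then index into a library of explosion elements, triggered at the corresponding frame – which is how 25,000 of them can fire without anyone hand-placing a single one.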

Ultimately, the choreography of the cannon-fire – together with the rest of the action in the shot – was determined through a number of meetings with the director. “It was all about Toby’s subjective taste,” Cohen commented. “That included questions about whether the camera should get interfered with by the explosions and cannon fire, and at what point in the shot that should happen.”

The opening Battle of Waterloo shot took around three months to complete. During the refining process, the shot proved too complex for one person alone to check. “We had four people watching it back in dailies,” revealed Deguara. “They each took a quadrant of the screen. Each time we’d notice little mistakes and glitches – like a soldier who’s running through a tree – and we’d go back and fix them. We could still be working on that shot today!”

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

The digital matte painting of the battlefield seen at the end of the Battle of Waterloo was inspired by wartime paintings in London’s National Gallery.

Magic at Hougoumont

Once the camera has finished its sweep of the battlefield, the action continues inside the Hougoumont garrison. As the French soldiers attack, Jonathan Strange uses his magical powers to help counter the onslaught. First, he brings to life the vines which ramble over the garrison’s fortified walls. Snake-like, one of the vines plucks an enemy soldier from the ground and flings him to his doom.

To achieve the effect, a stuntman was photographed on location, suspended from a wire rig. Hero vines, modelled and animated in Autodesk Maya, were tracked to his body. Just before the throw, the live-action performer was replaced by a digital counterpart.

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

Milk used both procedural and keyframe animation to bring the vines of Hougoumont to life.

In a classic piece of movie misdirection, the transition from stuntman to digi-double was concealed by the body of another soldier moving briefly in front of the camera. “You see the top part of the vine gearing up for the throw, then as soon as the other soldier wipes across the frame, we’re into our digi-double getting flung out of the way,” explained Deguara. “Actually, that was a reference to an episode of The Simpsons, when Bart and Homer are trying to trap a rabbit! It’s a comedy moment that we tried to slip in there.”

When the building behind him catches fire, Strange summons a giant waterspout from a well and uses it to extinguish the flames. “We had big fire hydrants and hoses there on the day,” recalled Deguara, “and we had real water pitching down over the doorway – interactive elements for when the soldiers run out. We did the shot with two camera moves, which we joined together. Our digital waterspout comes out of the well, then splits into five sections to put out the fire.”

The main body of the waterspout was procedurally generated in Houdini, with extra detail built up using FLIP liquid and whitewater simulations, plus layers of mist. Additional procedural tweaks subsequently allowed artists to choreograph the animation of the liquid. The central column was rendered in Maya using Arnold, while Mantra handled the more finely detailed effects in Houdini.

As the French soldiers press home their attack, the resulting hand-to-hand combat was enacted by actors and stunt performers. Weapon strikes were enhanced by Milk, using a combination of practical and CG effects to increase the blood quotient. “We got them to send costumes over,” stated Cohen. “We would put a costume on a dummy, and cut it as if it had been hit by an axe. Then we would photograph that and comp it into the shot.”

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

At the height of the Battle of Waterloo, Jonathan Strange combats a French soldier using a giant replica of his own hand made from mud.

Towards the end of the sequence, Jonathan Strange finds himself at the mercy of a French soldier. Staring death in the face, he causes a giant facsimile of his own hand to rise up from the ground and uses it to crush his attacker to death. By the time the live-action for the scene was shot, however, the Hougoumont set had become a quagmire, causing Milk to reconsider their initial design for the effect.

“We originally thought the hand would be made of dry, crumpled mud,” Deguara commented. “When we got the plate back the place looked like a swamp, so we had to re-evaluate it. We turned the hand into something more muddy and slimy. I think it worked in our favour, that fluid look.”

The geometry of the CG hand was deformed to give it a liquid appearance, with viscous fluid particles emitted from the more distorted areas to produce falling chunks and streams of mud. Additional displacements and variations in colour, applied at render time, added the interactive effects of the rain which falls throughout the scene.

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

The mud hand was animated to match the movement of the performer playing the French soldier, who was suspended on a wire rig.

As with the vine sequence, the actor playing the doomed French soldier was held aloft using a wire rig. “We couldn’t afford a digi-double for that shot,” noted Hernandez. “The hand is animated to the guy on a wire – it was a bit tricky, but we did get it to work in the end.”

Equally tricky was making the soldier’s death gruesome, yet without upsetting squeamish viewers. “There were a lot of taste decisions to be made,” Cohen remarked. “Originally, as Jonathan Strange squeezes the life out of the soldier, his head was going to pop off. But even though it’s after the nine o’clock watershed, there’s still a line you can’t cross!”

The Waterloo scenes are augmented by a number of environment extensions and matte paintings, including the final shot in which the camera cranes up to reveal a wide shot of the battlefield. “It’s very painterly, inspired by the wartime paintings in the National Gallery,” commented Cohen. “We rendered out some of the topography that was created for the battle sequence, and then painted on top of that.”

Jonathan Strange & Mr Norrell - visual effects by Milk VFX

The burning windmill scene was one of many digital environments created by Milk VFX for “Jonathan Strange & Mr Norrell”.

A Round-Up of Regency Magic

Jonathan Strange’s manipulation of the elements during the Battle of Waterloo is characteristic of the way magic is portrayed throughout the series. Referred to by the characters as “practical magic”, these supernatural spectacles of the Regency era are earthy and visceral – no Harry Potter pyrotechnics here.

“That comes from the book,” Deguara asserted. “It was a key part all the way through: if you want to use magic, you have to negotiate with the elements. Norrell has to read about it from a book, whereas Strange has more of a natural feel for it.”

Some of the other magical set-pieces featured in Jonathan Strange & Mr Norrell include:

Mr Norrell brings the statues of York Minster to life

Milk delivered thirty shots for the York Minster sequence. Their cast of animated statues included a line of seven stone kings, musicians and a Latin-speaking bishop. Bertie Carvel stepped briefly out of his role as Jonathan Strange to perform as the bishop while wearing a green chromakey leotard. His presence on set gave Dr Foxcastle (Martyn Ellis) something physical to react to, and provided Milk’s animators with crucial reference for the statue’s performance.

Jonathan Strange conjures a herd of giant sand-horses

Though comprising just eight shots, the sequence offers considerable spectacle, from an aerial shot pursuing cracks racing across the beach, to the explosive impact of the galloping sand-horses with a stranded warship. The sand-horses themselves – modelled to combine the features of both sleek racehorses and sturdy shire horses – were rendered as volumes, enhanced with particle effects and rigid body simulations.

Sand-horses created by Milk VFX for "Jonathan Strange & Mr Norrell"

Jonathan Strange’s sand-horses emerge from an English beach and gallop towards a stranded warship.

Mr Norrell creates a fleet of illusory rain-ships

Answering director Toby Haynes’s request for a sequence that resembled a Turner painting, the Milk team combined live-action rowing boats (shot against greenscreen in a Yorkshire pond), simulated ocean environments and CG ships. Swirling displacement effects and interactive rain reinforced the ghostly navy’s subtle, dreamlike appearance.

Rain ships created by Milk VFX for "Jonathan Strange & Mr Norrell"

French scouts cautiously approach the illusory fleet of rain-ships conjured by English magician Mr Norrell.

Jonathan Strange creates a road for British troops

A close-up shot of stones spawning outwards from a central point – dubbed the “popcorn shot” – was created with a rigid-body simulation in Houdini. Matte painting techniques, combined with a CG dust trail, were used to show the road extending rapidly over distant hills.
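The radial growth of the “popcorn shot” can be illustrated independently of Houdini: each stone simply activates when an expanding front reaches it. A minimal Python sketch of that timing idea (the function name and parameters are invented for illustration; the actual effect was a Houdini rigid-body simulation):

```python
import math

def activation_times(stones, centre, front_speed):
    """Time at which each stone 'pops', as a circular front sweeps
    outwards from `centre` at `front_speed` units per second."""
    cx, cy = centre
    return [math.hypot(x - cx, y - cy) / front_speed for x, y in stones]
```

In a simulation, each stone’s rigid body would be held inactive until its activation time, producing the outward-rippling spawn seen in the shot.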


Milk created the eerie netherworld of “Lost Hope” as a fully 3D environment, enhanced with rolling mist and illuminated with volumetric lighting.

Small Screen, Big Ambitions

Jonathan Strange & Mr Norrell contains more visual effects shots than many feature films, even allowing for the show’s seven-hour running time. Squeezing such a VFX quart into the pint pot of a television budget demanded precision planning.

“It’s a massive logistics and communications exercise,” Cohen observed. “We had pockets of people working all over the place for months at a time. They would park a certain piece of work, then come back to it weeks later.”

Just as important as being organised was being able to roll with the punches. “You have to pre-plan as much as you can,” Deguara agreed. “But when you get out on set, obviously things change and you have to adapt. Then we would all get in a room and have serious chats about an edit that might be five or six shots over what it should be. That’s when you have to make hard decisions.”

Hernandez added: “We were compromising to stay in budget, so whenever the director said, ‘I want this,’ half the time we would have to say, ‘You can’t afford it.’ We did a lot of stretching to make everything look good.”

Reflecting on the series as a whole, Cohen concluded, “We went on quite a journey. It was a very collaborative and enjoyable process for eighteen months, and we’re all very proud of it. From Peter Harness putting the words on the page, to Toby’s seven-hour marathon of shooting it, there was a feeling all the way through of serving the material, and not letting it down.”


Jonathan Strange & Mr Norrell is currently airing on BBC America in the US, and BBC One in the UK.

Special thanks to Jenny Burbage. “Jonathan Strange & Mr Norrell” photographs copyright © 2015 by BBC Television.

Jurassic World – Cinefex 142 Extract

Cinefex 142 "Jurassic World" cover

All this week we’re featuring exclusive extracts from the articles in our latest issue – Cinefex 142, now available in both print and digital editions. First up is Jurassic World.

Describing a scene in which Owen Grady (Chris Pratt) releases the genetically engineered velociraptor Blue from her harness, Image Engine visual effects supervisor Martyn Culpitt recalls the attention to detail required to ensure close interaction between the human and digital performers:

“There was a gray-shaded raptor maquette with a harness on it in the plate, and we replaced that with our animated raptor. Our biggest challenge was creating the very subtle interaction between the raptor and Chris Pratt, especially in closeups, which had to show Blue’s connection to Chris Pratt’s character. We were always pointed toward nature reference of real animals in creating raptor facial expressions.

“The difficult thing with the anatomy of the raptor’s head is that there is a lot of bone above its eyebrow line and along the snout – so all of that had to remain rigid. What we could move was the soft tissue around the edges of the mouth and eyes. We could make the nostrils flare to suggest emotion, too. We couldn’t push any of this too far, though, because they had to look like animals. The facial animation was very subtle.”

Read the complete article in Cinefex 142, which also features Mad Max: Fury Road, San Andreas and Avengers: Age of Ultron.

All content copyright © 2015 Cinefex LLC. All rights reserved.

Now Showing – Cinefex 142

Cinefex 142 - From the Editor's Desk

Roll up! Roll up! The new issue of Cinefex is now officially open for business!

Just like that theme park with all the dinosaurs, issue 142 of the world’s premier visual effects magazine is filled with white knuckle rides – sorry, make that “reads”.

First up is Jurassic World. For the safety of our readers, we’ve caged this article behind a ninety-foot titanium-steel electrified fence, so that none of the in-depth details about the making of the movie can possibly escape. We think.

Also on parade in Cinefex 142 are Avengers: Age of Ultron, the earth-shaking San Andreas and the eye-popping Mad Max: Fury Road. We’re 100% committed to your security, so rest assured that, as you read these articles, there’s only a very small chance you’ll be smashed by the Hulk, swallowed by a hole in the ground, or torched by a gang of crazed post-apocalyptic punks.

Here’s Cinefex editor-in-chief Jody Duncan to talk about the high-octane attractions igniting the interior of issue 142 …

Jody Duncan – From the Editor’s Desk

It was 1974 when a date and I drove the 45 minutes to Palm Springs to experience Earthquake in the highly marketed “Sensurround” audio system that had our seats, teeth and nerves rattling. San Andreas, one of the four films covered in Cinefex 142, is probably the biggest earthquake movie to hit the theaters since that time, and as a resident of Southern California who has often driven over sections of the San Andreas fault line, I look forward to the film with particular interest. Joe Fordham wrote our San Andreas story, as well as our coverage of Mad Max: Fury Road, the first George Miller-directed “Mad Max” film in 30 years.

I spent the past three months writing the effects story for Avengers: Age of Ultron, which Marvel Studios allowed us to see twice before the film’s opening. The first screening was so early, there were virtually no completed visual effects shots (which is why I had to screen it again, many weeks later) – but at least I got the gist of the storyline.

I believe Avengers: Age of Ultron set my personal record for the number of visual effects companies interviewed for one article: 20 effects studios, and a total of 26 interview subjects! The large pile of transcripts was daunting, but the resulting story leaves no Sokovian cobblestone unturned, no Ultron nuance unexplored, no Hulk muscle twitch unexplained.

Then there was Jurassic World.

Jurassic World felt a bit like a homecoming to me. I wrote the cover story for the first Jurassic Park in Cinefex 55, as well as the books The Making of Jurassic Park, which I wrote with Don Shay, and The Making of The Lost World, for which I went solo. I’ve been tracking those genetically engineered dinosaurs for a very long time …

So, I felt pleasure – and a small measure of relief – when the visual effects artists at ILM spoke of those films with such respect and reverence. I had wondered if a new generation of effects artists – people accustomed to more digital firepower on their cell phones than the Jurassic Park crew had in their entire arsenal – might dismiss that pioneering work as “less than.” To the contrary, they all spoke as if fully aware that they were standing on the shoulders of giants.

Thanks, Jody! All that remains is for me to open the gates and declare this issue of Cinefex well and truly … wait a second … is that a hole in the fence? Where did those gigantic footprints come from? And why are all those people screaming?

There’s only one thing for it. Grab your copy of Cinefex 142 right now … and run!!!


The Visual Effects of “Sense8”

Sense8 - a Netflix original

Ever since the release of The Matrix in 1999, filmmaking siblings the Wachowskis have constantly pushed the boundaries of the motion picture medium. Always ready to embrace innovative – and frequently mind-bending – narrative techniques, they have now turned their attention from big screen to small, with the release of the new Netflix series, Sense8.

Created by the Wachowskis and World War Z screenwriter J. Michael Straczynski, Sense8 explores the interwoven lives of eight people whose minds are irrevocably linked by a single, extraordinary event. Over the course of the show’s twelve episodes, these interconnected individuals – known as “sensates” – must not only come to terms with seeing the world through each other’s eyes, but also evade the hunters who are trying to track them down.

Visual effects for Sense8 were overseen by production VFX supervisors Dan Glass (who also directed the story set in Seoul) and Jim Mitchell. The production planned to cover most of the work with an in-house team, with a few larger sequences assigned to Deluxe’s Encore TV. However, as the show’s visual requirements grew, additional vendors were brought on board.

Watch the Sense8 trailer:

VFX Q&A – Dan Glass

When did you get involved with the project?

The first discussions of involvement began in 2012, while we were in pre-production on Jupiter Ascending.

Creatively speaking, what was your overall approach to the show’s VFX?

The show had an incredibly tight budget and timeline, so we made every effort to capture what we could in-camera. Whilst the Wachowskis are known for their epic visual style, they are actually very pragmatic, and this show ran incredibly smoothly as a result. We decided early on to play the transitions between the sensates largely as sleight of hand, having them physically in each other’s spaces when communicating. Whilst this made for simpler VFX, the shooting style often required covering the dialogue up to four times – once in each location with the two sensates interacting, and again with each individual sensate acting alone in their respective environment.

How did you go about assembling the VFX team?

We set up an in-house team in Chicago, where editorial was based. This was led by digital effects supervisor Ryan Urban, who has worked with us for a number of years. The in-house team ended up completing over 700 shots for the series. For a small number of sequences, we knew early on that we would need the help of an external vendor: the Nairobi bus chase, and weather augmentation in Iceland for the finale. For the more complex work, and to help with bandwidth for the quick turnarounds, we enlisted the support of Encore VFX, Locktix and Technicolor VFX, with additional help from Studio 8 FX, Trace VFX and Almost Gold.

What were the key VFX scenes?

Most of the VFX is pretty invisible: split-screens (some very complex!), crew and rig removal, weather augmentation and screen inserts. The more visible work includes age manipulation of actors, more dramatic weather, a few greenscreens, CG blades, blood and wounds. For the most part, we aimed to shoot everything for real, and enhance later where appropriate.

How did you find the transition from VFX supervisor to director?

Directing was a great and satisfying challenge – if nerve-racking at times! There are a lot of skills you learn as a VFX supervisor towards the craft of telling stories, but nothing compares with showing up in a country where limited English is spoken, and having to deal with the feature film ambitions of a project on a TV shooting schedule.

Would you do it again?

Of course!

Daryl Hannah in "Sense8". Photograph by Murray Close.

VFX Case Study – Locktix

One of the vendors hired to accommodate the growing needs of Sense8 was L.A.-based Locktix. “They got in touch with us and said they needed extra firepower,” explained Locktix VFX producer Gresham Lochner. “Originally it was only for one episode – maybe five or ten shots – but then they realised how much the shot count was growing. It didn’t make sense for them to scale up internally, so when they started to divvy out work, we were kind of a shoo-in to continue on with the rest of the episodes. We’d love to thank specifically Dan Glass and Ryan Urban – they were some of the best clients we’ve ever worked with.”

Locktix was set up in 2011 by Lochner, following a stint as senior compositor at Digital Domain. Lochner had previously worked at a number of other effects facilities including Rhythm & Hues, Method Studios, Rising Sun Pictures and MPC. Matthew Bramante became his business partner in 2013, following a similar tour of duties around the world. “We’ve been growing steadily for the past couple of years,” Lochner stated. “We’ve just moved into a new space in downtown L.A., about five times the size of the original office space we had in Santa Monica.”


Fire enhancement was among the many invisible effects used by Locktix to enhance scenes in “Sense8”.

With just six full-time staff, ramping up to around twenty-four during busy periods, Locktix encourages its VFX artists to take on multiple roles. “If we have, say, a comp shot that requires some 3D tracking, I absolutely want to have an artist who can shepherd it all the way through,” Bramante remarked. “Also, it gives the artists ownership of their work, and their own creative input. That’s something that was really important to me when I was coming up as an artist, so I try to give it back to them.”

Ultimately, Locktix found themselves working on every episode of Sense8, with work ranging from digital fixes such as wire removal and split-screen effects to combine different actor performances from multiple takes of a scene, through to set extensions and atmospheric enhancements. “It definitely grew in scope as we went along,” commented Bramante, Locktix’s VFX supervisor on the show. “By the end we were up to 160-180 shots across all twelve episodes.”


To fill out this “Sense8” scene at a movie premiere, Locktix duplicated extras from the plate to expand the crowd, as well as adding extra dressing including spotlights and limos.

In true Netflix fashion, all twelve episodes of Sense8 were released simultaneously in all Netflix territories around the world. Indeed, in a recent interview with TVLine, star Daryl Hannah stated, “You may want to refrain from calling Sense8 a TV series at all. It was shot like a twelve-part movie. It is an incredibly cinematic, massive, epic-scale film.”

Despite the massive scale, however, Sense8 was delivered according to a relatively conventional television production schedule. “We’ve now done a couple of shows with Netflix, and they keep to a weekly or bi-weekly schedule, depending on what the production decides, and when editorial starts locking,” Bramante revealed. “On Sense8, we had a general understanding of how much was going to happen across the series, so we could hedge our bets and start work early on effects-intensive stuff that might be coming down the pipe later on. But schedule-wise it was still handled and delivered episodically, week by week.”

One of the major sequences Locktix worked on features a hazardous car journey to a hospital in blizzard conditions. While the sequence forms an important part of the series finale, time-shifts within the show’s overall narrative mean that references to it appear throughout the entire series.


Locktix added falling snow and atmospheric effects to around 40 shots in the climactic “Sense8” blizzard sequence. Top: original production plate. Bottom: final composite shot.

“They wanted to increase the danger of the scene, to add some more snow and really make it look like there’s a blizzard going on,” commented Bramante. “That was definitely our biggest sequence – somewhere in the vicinity of forty shots, from the beginning of the show all the way through to the end.”

One of the main tasks facing the Locktix team was filling the air with snowflakes. “There was snow on the sides of the roads, and there were one or two shots where they got some practical snow to fall, but for the most part the falling snow was added in,” Bramante stated. “We went through a bunch of iterations with our CG effects, exploring different types of snow and asking, ‘Does this look like just a flurry? Is this too blizzardy?’ We also added skies, clouds and various atmospheric effects.”

Demanding though the blizzard sequence was, shots for which Locktix were asked to combine actors from multiple takes were not without their challenges. “Funnily enough, most of the ‘straightforward fixes’ ended up being some of the more difficult shots to do!” observed Bramante. “As with most modern filmmaking, the cameras were moving all over the place, so with the split-screens there might be one take that really worked, and another take that really worked, but the cameras would be in wildly different places for each one. That was where we had to get creative.”
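At its simplest – before moving cameras force more creative solutions – a split-screen is a feathered blend between two aligned takes. The following is only a rough numpy illustration of that idea, not Locktix’s pipeline; the function name and plate shapes are invented for the sketch:

```python
import numpy as np

def soft_split(plate_a, plate_b, split_x, softness=20):
    """Combine the left side of plate_a with the right side of plate_b
    across a feathered vertical seam at column `split_x`."""
    h, w = plate_a.shape[:2]
    xs = np.arange(w, dtype=float)
    # 0 -> take plate_a, 1 -> take plate_b, ramping across the seam
    matte = np.clip((xs - (split_x - softness / 2)) / softness, 0.0, 1.0)
    matte = matte[None, :, None]  # broadcast over rows and channels
    return plate_a * (1.0 - matte) + plate_b * matte
```

With tracked, moving cameras, the seam itself must be animated and the plates stabilised or warped to line up – which is where, as Bramante notes, the real work lies.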

However, the ultimate challenge for the Locktix team came in the form of a humble chicken.

“There was this chicken, and it was supposed to be the same bird in two different shots,” Bramante recalled. “But they couldn’t get the same chicken both times, so in one shot it was all white, and in the other it was white with black feathers.”

Bramante’s solution was to push the 2D distortion tools in compositing software Nuke to the limit. “We have one or two Nuke guys who love to come up with new tools,” explained Bramante. “They created a whole suite of distortion tools based on Nuke’s IDistort, which let us do different skews and transforms, all connected to multiple trackers. We used those tools to create some fantastic looking feathers. For something that nobody will ever think of as being visual effects, it was actually a pretty big shot!”
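Nuke’s IDistort node warps an image by per-pixel UV offsets supplied alongside it, and tools like the ones described build on that operation. As a simplified sketch of the underlying idea in plain numpy (nearest-neighbour sampling, invented function name – Nuke itself filters far more carefully):

```python
import numpy as np

def idistort(image, uv, scale=1.0):
    """Warp `image` by per-pixel offsets, in the spirit of Nuke's IDistort.

    image: (H, W, C) float array
    uv:    (H, W, 2) array of x/y offsets in pixels
    scale: global multiplier on the offsets
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples the input at its own position plus the
    # offset (nearest-neighbour, clamped at the frame edges).
    sx = np.clip(np.round(xs + uv[..., 0] * scale).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + uv[..., 1] * scale).astype(int), 0, h - 1)
    return image[sy, sx]
```

In production, the offset maps would be driven by the animated trackers Bramante mentions, rather than built by hand.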


To create this composite shot for “Sense8”, Locktix integrated the wintry exterior view with a greenscreen shot of the boy, adding the window glass and the falling snow beyond.

VFX in Los Angeles

Working as they do in Los Angeles, the Locktix team are uncharacteristically optimistic about the future of the visual effects scene in a city whose once-thriving VFX industry has suffered a decline, with major effects facilities moving elsewhere or even closing down altogether. Lucrative subsidies continue to attract studios to whichever country offers the biggest financial advantage, forcing VFX houses – not to mention the “pixel gypsy” artists they employ – to set up shop wherever the work is.

“Contrary to popular belief, there’s plenty of work here,” Bramante asserted. “There’s a lot of stuff that’s really quick turnaround, which definitely suits our particular company ethos. So, in the past couple of years, we’ve been specialising in doing what we call ‘911 effects work’.”

While providing an emergency service to filmmakers can be stressful, due in no small part to the short notice and tight deadlines, being local can be a distinct advantage. “A lot of the filmmakers are here in L.A.,” Bramante explained, “and what happens is that, at the last minute, editorial finds fifty shots that have to be done in a week and a half. That’s happened for us on some pretty major movies. On the show we’re working on right now, we’re going back and forth with editorial, and they’re just a twenty-minute drive across town. So it’s really easy to interface with them. With a phone call, or even on Skype, things can get lost in the shuffle.”

Lochner added, “When you get a larger company, there’s maybe twelve people who have to touch something before you can bring in work and get it back out. At Locktix we’re lean and mean. So just one person can bring a shot all the way into the pipeline and back out again.”


For “Sense8”, Locktix artists digitally augmented this facial makeup, adding texture to the wound and skin detail to the prosthetic patch, and matching the colour of the prosthetic to the actor’s skin as his face became flushed.

Personal contact also brings creative benefits. “I like to be in the room with the editor, or creative director, or producer,” Bramante commented. “I love coming up with creative options that give people other ways to achieve their goals, so at the end of the day they’re not saying, ‘Oh, these guys just want to make money off of us.’ I really want our visual effects to help the production.”

Having built a business in a city which many others have fled, Lochner has his own views on how best to tackle the thorny subject of subsidies. “My personal opinion is that it’s just a symptom of a poor business model – which is kind of rhetorical at this point,” he observed. “But we’ve structured things internally to get around that, so we can still compete even when people are being subsidised. So when those subsidies go away, and people are chasing them to a new area, we’re still going to be right here. It won’t disrupt our operation at all.”

Sense8 is now exclusively streaming on Netflix.

“Sense8” photographs copyright © 2015 by Netflix.