The Making of Robot Overlords

Callan McAuliffe in "Robot Overlords"

In the low-budget British sci-fi adventure Robot Overlords, alien automatons rule the streets and ordinary citizens are locked under curfew in their homes. Just by stepping outside, you risk being vaporised by a hulking Sentry or picked off by a lethal Sniper.

Amid the ruins of civilisation, Sean Flynn (Callan McAuliffe) leads his group of young friends on a quest to join the resistance forces that are standing against the robot invaders. Hot on their heels, however, is their old teacher – now treacherous robot collaborator – Robin Smythe (Ben Kingsley) and his captive Kate (Gillian Anderson). Who will prevail? Man or machine?

Directed by Jon Wright, Robot Overlords features some 265 visual effects shots delivered by Soho-based Nvizible, supervised by the company’s founder and co-owner, Paddy Eason. The film’s complex tracking requirements were fulfilled by Peanut FX, with additional compositing support provided by Boundary VFX, under the supervision of Nick Lambert.

Watch a wide selection of Nvizible’s VFX shots in the official video for Robots Never Lie, written by Matt Zo for the Robot Overlords end credits:

Origins of the Overlords

The original story concept for Robot Overlords came to director and co-writer, Jon Wright, in a dream. “I dreamed that I was playing cowboys and Indians with a little boy,” Wright recalled. “A huge two-storey robot came marching up the street and swung its laser cannon arm towards him, and a voice boomed out, ‘Citizen, drop your weapon immediately!’ I assumed I was just recycling a movie that I’d already seen, but eventually, I came to the conclusion that maybe it was an original idea.”

After expanding the dream concept into a two-page treatment, Wright began developing a script with co-writer Mark Stay. “Jon and I are both big fans of those 1980s Amblin movies like E.T.: The Extra-Terrestrial and The Goonies,” said Stay. “But those films were always set in American suburbia – we wondered why nobody had ever set a film like that in the UK.”

“It came together very quickly, especially for a British film,” Stay continued. “Our elevator pitch was, ‘It’s like The Goonies, but with robots blowing stuff up!’ We took Jon’s treatment to Piers Tempest – one of the producers from Jon’s previous film, Grabbers. The BFI put us under their wing, and before we knew it we had some development money.”

VFX supervisor Paddy Eason was also consulted at this early stage. “I knew Jon because we did the effects for his first two films – Tormented, and an Irish monster movie called Grabbers,” Eason commented. “The first thing I received was the treatment, and the next throw of the dice for me was getting invited to be part of a group that critiqued the script – Jon calls it the ‘Tiny Think Tank’.”

Sentry – "Robot Overlords"

The Tiny Think Tank

The Tiny Think Tank is an initiative conceived by Wright as a means by which a group of filmmakers can gather together periodically, in order to review scripts in development. “It’s a gang of writers, actors, directors, visual effects people,” Wright explained. “There aren’t any producers or financiers involved – it’s purely creatives. We come together, read a script completely cold, then have a round-table discussion. It was inspired by Pixar. They have their team of creatives – who are also shareholders – who come together and critique each other’s films. The Tiny Think Tank is our low-budget UK version of that, and I think it’s one of the reasons Robot Overlords got made. Our script accelerated in terms of quality much more quickly than it would have done had we been writing it in isolation.”

Nvizible’s VFX breakdown for a key sequence in “Robot Overlords”

As part of the Tiny Think Tank, Paddy Eason was able to offer a level of opinion and criticism not commonly afforded people in the visual effects profession. “I had some suggestions from my own visual effects point of view,” Eason remarked. “But I also put some story ideas out to Jon and Mark. As a visual effects person, that’s not normally part of your remit. In fact, on many shows, to put forward story ideas would be viewed as a breach of etiquette. So it was delightful to be invited to give that kind of feedback.”

Nvizible’s association with Robot Overlords developed further when the company became a production partner on the project. Nevertheless, they were still required to bid for the work. “Robot Overlords was a regular production with a bond company and other financiers, so it had to be seen not only that our bid was value for money, but also that our costs weren’t unrealistically low,” Eason pointed out. “Also that, if the unthinkable happened and we dropped the ball, there would be somewhere else to go to get the work done. Obviously it didn’t come to that!”

Robin Smythe (Ben Kingsley) and the Mediator (Craig Garner) visit the alien Cube in “Robot Overlords”.

How to Design a Robot

Key to the success of Robot Overlords was the convincing creation of the alien machines that have descended from outer space to occupy planet Earth. Design concepts for the mechanical marauders developed naturally alongside the story.

“The Robot Empire is stretched across the galaxy,” explained Stay. “If you compare it to the Roman Empire, the Earth is like Caledonia – it’s the last place any of them want to be. Resources are stretched thin. So the Sentries, for example, are quite sorry figures. They walk with a kind of stoop, and they’ve got chips and dents and dinks in their armour, because they’ve been here for three years and survived a war.”

According to the storyline, the Robot Empire studies the inhabitants of each planet they plan to subjugate, and designs an automated invasion force specifically to tap into the psychological profiles of that planet’s inhabitants. This bespoke engineering approach is evident in the design of the Sentries: two-storey, bipedal robots with small heads and enormous, hulking arms.

According to co-writer Mark Stay, the robot Sentries are designed to tap into our instinctive fear of “people with tiny brains and big muscles”.

“In studying us, the robots have discovered that we’re frightened of people with tiny brains and big muscles,” explained Stay. “That informed the design of the Sentry. Our other robots – like the Snipers and Air Drones – are very functional, each with a specific purpose.”

A different psychological insight led to the design of the Mediator – a kind of robot diplomat which acts as an interface between human beings and their automated oppressors. “The Mediator is made to resemble a child, because they’ve observed that, on the whole, adults don’t behave aggressively towards children,” Jon Wright remarked. “But it’s been designed by an alien species who aren’t familiar with the minutiae of how we react, how our culture is organised. So they’ve got it a bit wrong. They’ve attempted to make something likeable, but they’ve actually made something that’s extremely creepy.”

Early conceptual designs for the robots were developed by storyboard and concept artist, Gabriel Schucan. “Jon and I sat with Gabriel,” Stay recalled. “We scribbled ideas, and he would develop them and send drawings back to me and Jon. That was still very early on in the writing of the script.”

Also at this early stage, the team created the “Robot Compendium”, a manual containing a piece of key art for each robot, drawn by Jack Dudman, along with a description of the machine’s capabilities. “The Robot Compendium helped us to sell in the script to investors,” remarked Stay. “So it was a really important document for us.”

During the story development process, visual effects constraints influenced what might be achievable, both practically and financially. “While we were still talking about the script, we started doing visual effects breakdowns in a very loose, back-of-an-envelope way,” said Eason. “Some scenes would turn out to be looking very expensive, and that was fed back into the writing process. For example, there was a scene in the woods with our heroes being chased by robots called Octobots. That was quite heavy-duty in terms of visual effects, and wasn’t particularly important in the story. So it was elbowed fairly early on.”

“Robot Overlords” concept art by Paul Catling.

The Overlords Become Real

Once Robot Overlords was greenlit, the initial robot concepts were handed over to designer and illustrator Paul Catling, known for his work on films including Guardians of the Galaxy, Captain America: The First Avenger and the Harry Potter series.

“Paul is incredibly productive,” Eason remarked. “He did lots and lots of pen-and-ink designs, just exploring how the robots might look. Early on, some of them were much more organic-looking, and skeletal, and frightening in a slightly Satanic kind of way. Ultimately, we decided the robots should be slightly crude-looking. We wanted to avoid comparisons with Transformers – Jon wanted our robots to look much more industrial. We also wanted them to be slightly reminiscent of WWII Nazi technology.”

Although the design of the Sentry was inspired directly by Wright’s dream, it nevertheless went through a lengthy development process. “One of the original tests we did for the Sentry was a lot more like Robby the Robot,” Wright revealed. “Then he developed into more of an Iron Giant character – we just pushed that design to make it slightly more peculiar. He’s slightly lop-sided, and made up of different geometric shapes. That was partly driven by the plot device of having them fold up into cubes, which was our way of showing that they’ve been deactivated.”

The Mediator robot was played by Craig Garner, whose appearance was enhanced in post-production to give the actor’s skin an unsettling plastic sheen. For some scenes, frames were removed to create staccato movements.

The Mediator, in contrast, was conceived from the beginning as a machine that looked like a human – but not too much like a human. “Right from the beginning, we wanted deliberately to put the Mediator in the uncanny valley, rather than try to avoid it,” Wright noted. “We gave the actor, Craig Garner, perfect false teeth, trimmed his hairline to a perfectly straight line, gelled his hair rock-solid, and tried to remove all of his imperfections and blemishes with make-up. Then that was enhanced in post-production with clean-up, skin glows, making his eyes uniform. Sometimes we’d take out every other frame to give him a kind of clockwork, staccato movement.”
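Wright’s every-other-frame trick is easy to picture in code. As a purely illustrative sketch (the actual work was done in the edit and grade, not with a script like this), dropping alternate frames and holding each survivor preserves the clip’s duration while producing the clockwork stutter:

```python
def staccato(frames, hold=2):
    """Drop intermediate frames, then hold each kept frame so the
    clip keeps its original duration -- a 'clockwork' step effect."""
    kept = frames[::hold]          # keep every `hold`-th frame
    out = []
    for frame in kept:
        out.extend([frame] * hold)  # repeat to restore duration
    return out[:len(frames)]        # trim if lengths don't divide evenly
```

With `hold=2` this is exactly “take out every other frame”: frames 0, 2, 4… each play twice, so movement advances in visible steps.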

“The bulk of the Mediator work was done in the grade by the colourist, Gareth Spensley, on his Baselight,” Eason elaborated. “However, the software we use at Nvizible for skin work is a suite of NUKE tools that we use as part of our ‘Nhance’ service. These allow us to selectively filter out features of a certain size – for example, wrinkles or spots – while retaining everything else, like image grain, shadows and regular skin texture.”
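Nhance is Nvizible’s proprietary NUKE toolset, so its internals aren’t public, but the scale-selective filtering Eason describes is classically achieved with frequency separation. Here is a minimal, hypothetical sketch (my own construction, not Nvizible’s code) using two Gaussian blurs to isolate a band of feature sizes – say, wrinkles – and attenuate it while leaving finer detail like grain, and broader shading, untouched:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_band(img, lo_sigma, hi_sigma, amount=1.0):
    """Attenuate features between two spatial scales in a 2D image,
    preserving detail finer than lo_sigma and structure broader
    than hi_sigma (a band-stop via frequency separation)."""
    fine = img - gaussian_filter(img, lo_sigma)  # grain-scale detail
    base = gaussian_filter(img, hi_sigma)        # broad shading
    band = img - fine - base                     # the mid-frequency band
    return base + fine + (1.0 - amount) * band   # amount=1 removes the band
```

With `amount=0.0` the three layers sum back to the original image exactly; dialling `amount` up removes only the chosen band of feature sizes.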

Robots at Nvizible

With the exception of the Mediator, Nvizible further developed the robot designs in their CG workspace. This process was facilitated by initial 3D work done by Paul Catling. “As Paul progresses his designs, he starts working in 3D,” explained Eason. “He’ll do a render, then go on top of it in Photoshop and paint lighting effects and so on to turn it into an illustration. But the underlying geometry is there. He even rigs it a little so he can move the arms and legs around to come up with certain key poses. So he ended up with some pretty well-realised CG models, which he then gave to us. We had to rebuild them, but they gave us a head-start.”

Nvizible’s robot models were built, rigged and animated primarily using Maya, with additional sculpting work done using ZBrush. “The Sentry ended up being a very complicated rig, with hundreds of moving parts,” Eason noted. “But it does play the biggest part in the film.”

Texturing was applied in MARI, using photographic reference chosen to support the story point that the robots have been operating in a hostile environment for many years, and are therefore both weathered and battle-worn. “There was no decoration on the robots – no painted symbols or information; it was all slabby, raw, metal, moving parts,” Eason noted. “We took photographs of similar unadorned, mechanical things, like old steam engines, big industrial lifts, big heavy things made out of cast iron. Part of the fun was making them nice and weathered – chipped and broken – which also helped to give them scale.”

Sentries await deployment in "Robot Overlords"

Ncam and the Air Drone

Many key scenes featuring the robots were shot using a real-time camera-tracking system provided by Nvizible’s sister company, Ncam. The Ncam system allows a camera operator to view pre-visualised CG assets through the viewfinder, adjusted to conform to the parameters of the lens and superimposed in the correct position over the live-action.

“Ncam works without any tracking markers. It lets you see a previs version of your visual effects assets through the camera, or at video village,” Eason explained. “Ncam combines the live video feed from the camera with a render from a workstation, all in real-time, showing your VFX asset in situ. It could be anything from a set extension to a creature.”

Ncam was used during the film’s opening sequence, in which a crazed man runs down a suburban street shouting defiance at his robot oppressors. An Air Drone flies down, warning the man to return to his home; when the man ignores the robot, it vaporises him.

“My view with visual effects is to say, ‘Go ahead and do everything practically that you can, and we’ll deal with it in post.’ Rain? No problem!” – Paddy Eason, Nvizible.

The sequence was shot on location in Bangor, on the outskirts of Belfast, Northern Ireland, using primarily hand-held cameras. Following Wright and Schucan’s storyboards, Nvizible created previs for the sequence, using survey data of the location gathered by members of their Belfast office.

During the night shoot, interactive lighting was coordinated by Fraser Taggart, director of photography, to simulate the glow from the Air Drone’s blue hover-jets, while practical rain simulated a torrential downpour. “Rain gives you a huge amount of visual texture, so it’s a lovely thing to have,” observed Eason. “Generally speaking, my view with visual effects is to say, ‘Go ahead and do everything practically that you can, and we’ll deal with it in post.’ Hand-held cameras? No problem! Rain? No problem! We’ll add CG rain, and we’ll make sure it’s composited behind your rain. But there were a couple of shots where the camera was looking up into the sky, where it would have been too much of a nightmare to have rain falling into the lens, so I did ask them to shoot dry versions to give us the option.”
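Eason’s point about compositing CG rain “behind” the practical rain comes down to layer order with the standard premultiplied “over” operator. A minimal sketch (illustrative only – the production compositing was done in NUKE):

```python
import numpy as np

def over(fg_rgb, fg_a, bg_rgb):
    """Standard premultiplied 'over' composite: foreground colour
    plus background attenuated by the foreground's alpha."""
    return fg_rgb + (1.0 - fg_a[..., None]) * bg_rgb

# Layer order keeps the photographed rain in front of the CG rain:
#   comp = over(practical_plate, practical_alpha,
#               over(cg_rain, cg_rain_alpha, background))
```

Because “over” is order-dependent, compositing the CG rain first and the practical plate last guarantees the real raindrops always read in front.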

Practical pyrotechnics were used for the moment when the man is blown up by the robot, somewhat in the manner of a human firework. “We shot the plate with the actor, a clean plate with no actor, and a third plate with the pyro going off,” Eason revealed. “We used a similar camera move each time, but there was no motion control – which is fine as long as everyone’s on the same page. The practical pyro gave us real interactive lighting, and a good clue of how the rain is lit up. We enhanced it with additionally-shot bluescreen pyro elements afterwards.”

For the shots featuring the Air Drone, Ncam was used extensively to aid the shooting of the live-action plates. “It was exciting to see the reaction of the camera operator,” Eason recalled. “Like a lot of people, he was a bit sceptical about Ncam – sometimes people view it as a bit of an encumbrance. But when he realised he was actually able to see the hovering Drone through his viewfinder, and walk around it and choose angles, he was really excited and impressed.”

Robot gun barrel

In one close-up shot, the camera executes a 180° move around the muzzle of the Air Drone’s gun, racking focus on the barrel. “I don’t think you’d have shot that plate without Ncam,” Eason asserted.

While the live-action plates were used for most scenes featuring the flying robot, Nvizible created some backgrounds using 360° HDRI panoramas shot on location by VFX assistant Erran Lake, using a DSLR camera. “Sometimes the plate done with the movie camera might be nice, but technically very difficult,” Eason explained. “So we would create our own version of that using my stitched stills, and put the rain in ourselves.”

Ncam was used for a scene in which a giant robot Sentry pursues Connor (Milo Parker) down a suburban street.

As director, Jon Wright appreciated the freedom afforded by Ncam, notably during a sequence in which a gigantic Sentry chases a boy down a suburban street. “You’re able to shoot something as if it was actually there, so it unlocks your shooting style,” he commented. “We were able to imbue those shots with a hand-held quality, and be quite cavalier about the way we shot. Normally when you shoot a visual effect like that, and there’s nothing there, you’re worried about not framing it up properly.”

“It was quite difficult to set up, positioning the robot and getting good tracks on the camera,” Wright continued, “but it meant we were able to do a running hand-held shot backwards with the boy, framing up the robot above him. The Sentries are massive, so they tend to go out of frame. If there’s nothing there, it makes the operator nervous and they tend to err on the side of caution and give you a slightly peculiar, lop-sided frame. If they can see the thing, they frame it much more naturally, and you get a much more interesting shot.”

“I imagine in years to come this will be fairly standard practice,” Wright concluded. “The days of tennis balls on sticks are numbered.”

Watch a video of the live Ncam feed as seen by the camera operator:

Robots in Post

By the time post-production began, Robot Overlords already existed in rough-cut form, since editor Matt Platts-Mills had been present on location throughout production, cutting each day’s rushes together into a rough assembly. “It was very helpful to have Matt with us, cutting as we went,” Eason observed. “By the time the shoot was done, we were in a good position to hit the ground running on certain key visual effects scenes.”

Since much of the film had been shot hand-held using anamorphic lenses, one of the first challenges for the visual effects team was tracking. Peanut FX handled all the film’s tracking requirements, delivering 3D layouts for roughly 115 shots, mostly with 3D camera and/or object tracks for set extensions and robot integration.

“We mainly used 3DEqualizer,” commented Amelie Guyot, matchmove supervisor at Peanut. “We find it great for solving anamorphic camera tracks when lens distortion grids and camera info are available. We were provided with a lidar scan model for scenes involving a castle, which we used to perfectly match the plates – 3D scans are always very helpful, especially with anamorphic shots. We also used Autodesk’s Matchmover to calibrate sets; it’s still one of our favourite tools for image-based 3D reconstruction.”

“Tracking was a concern early on,” Eason admitted, “but the guys at Peanut absolutely rocked it and did a brilliant job.”

Tracking for “Robot Overlords” was handled by Peanut FX, who worked meticulously to deal with director Wright’s extensive use of hand-held cameras, and some challenging sequences in which the robots interact directly with humans and their technology.

During post-production, Jon Wright made himself available to the Nvizible team on a regular basis. “I made a point of remaining on the film throughout the time that Nvizible were working on the visual effects, so I could go in and out of their offices at my leisure,” the director stated. “It saves a lot of work if you’re seeing updated shots regularly – you tend to have less waste. I don’t know how to run Nuke or anything, but I think I’m quite geeky, and have a good understanding of how effects work. I can tell if something’s going to be quick to do, or time-consuming. I think maybe some directors don’t have that.”

With regard to the animation of the robots, Wright paid particular attention to the way the movement of the metal invaders expressed their character – or rather, their lack of it. “There’s a tendency to anthropomorphise robots,” Wright observed. “Our robots are quite implacable, so it was important to strip out emotion.”

Even given this direction, Wright found that individual animators inevitably had their own take on how the robots should move. “After a bit I could guess quite accurately who’d animated what, because they bring their own personality to it,” he commented. “It got to a point where I would say, ‘I know this shot would be great for this animator,’ on the basis of what they’d done before.”

Lighting the CG robots brought its own set of challenges, due to the difficulty of making metallic objects look both highly reflective and convincingly three-dimensional. “It’s a bit of a double-edged sword,” Eason remarked. “If you’ve got good HDRIs for everything, you get good reflections, but because most of what you see with metal is based on reflections, it’s hard to make stuff that’s physically juicy, with a nice keylight and a nice fill light. You’re always fighting that chrome robot thing, where it all looks a bit floaty and unlit.”

Skyship concept art by Paul Catling.

Riding the Skyship

One of the most difficult visual effects sequences occurs at the film’s climax, when Sean finds himself riding through the skies on the front of the enormous robot Skyship.

“They were really hard shots, because they were extremely ambitious,” Wright admitted. “I’d imagine that even the likes of ILM working for J.J. Abrams would have found those shots tough to get right.”

Once again, Ncam was used to combine a previs version of the Skyship with the live-action as it was being shot on a small studio set. “The set had a slight bounce, which made things interesting,” Eason recalled. “We ended up replacing all the practical set with CG, although it was useful to have it there for reference, shadow catching and so on.”

The film’s ambitious climactic scenes, during which Sean (Callan McAuliffe) rides on the exterior of the massive Skyship, proved particularly challenging.

In post-production, the low-resolution Skyship model was replaced by its high-resolution counterpart. “The main challenges with the Skyship came in look-dev, working out how to make a big shiny collection of metal blocks look big and scary and cool,” Eason commented. “In reality, large steel things tend to look like concrete, so we had to cheat the physics of the renders quite a bit. We had a lot of fun with the Skyship engines – big jets of dirty orange and blue fire. Our FX lead, Wayde Duncan Smith, did a great job of simulating all that stuff in Houdini.”

Because the sequence featured multiple camera angles, with moves that were very specific to the animation of the various characters and craft, it was deemed unfeasible to shoot live-action background plates. Instead, generic aerial plates of landscapes were shot using an array of DSLR cameras.

“We flew the route in a chopper several times, shooting 160° panoramas using three Canon 5Ds, shooting stills at 3fps,” Eason revealed. “We fed these image sequences through a pipeline of undistortion, stitching, stabilising, optical flow and re-projection, to make beautiful background plates of moving land for all the flying Skyship shots. It was hard work, but our Nuke artist, Antoine Jannic, absolutely nailed it.”
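That pipeline was built in NUKE, and its stages are all well-established techniques. As a rough, hypothetical illustration of the first stage only, a one-term radial undistortion can be written as a resampling step (this is a generic textbook model, not Nvizible’s actual lens solve):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(img, k1):
    """Remove simple one-term radial lens distortion from a 2D image.
    Each undistorted pixel samples the distorted location
    r_d = r_u * (1 + k1 * r_u**2), with radii in normalised coords."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = max(cx, cy)                       # normalise to roughly [-1, 1]
    xu, yu = (xx - cx) / norm, (yy - cy) / norm
    scale = 1.0 + k1 * (xu * xu + yu * yu)   # radial distortion factor
    xs = xu * scale * norm + cx              # distorted sample positions
    ys = yu * scale * norm + cy
    return map_coordinates(img, [ys, xs], order=1, mode="nearest")
```

With `k1 = 0` the mapping is the identity; a non-zero `k1` straightens the barrel or pincushion bending so that the three cameras’ stills can be stitched and re-projected cleanly.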

Watch a video of a completed shot from the Skyship sequence:

Enter the Spitfire

Also featured during the film’s finale is that most British of icons – a Supermarine Spitfire. The old-school technology of the WWII fighter plane proves pivotal in the battle against the robots, whose jamming devices have grounded more sophisticated jet aircraft.

The production shot live-action footage of the historic plane on the ground using a mock-up provided by Gateguards UK. An authentic Spitfire was also made available for one day of the shoot by The Aircraft Restoration Company, Duxford. In addition, Nvizible built a CG Spitfire which they used for additional aerial shots.

Jon Wright (right) directs the action around the museum-piece Supermarine Spitfire, on location in the Isle of Man.

“We used the mock-up Spitfire in a remote, disused quarry location on the Isle of Man,” explained Eason. “Then we also had the real Spitfire fly out to us for the day. We positioned it on the airfield and dressed it similar to how the mock-up had been in the quarry. It was a huge privilege to have a real museum-piece there, and a dream for us in terms of creating our CG Spitfire for the flying shots, because we were able to get perfect reference, taking thousands of photographs, right down to every rivet.”

Eason was doubly delighted when he found himself booked in for a flight in a two-seat Spitfire. “The producer, Piers Tempest, knew I was a big fan of Spitfires, and he made a casual comment: ‘We’ll get you a flight in it.’ I didn’t believe it would ever happen, but a couple of months later it was arranged for me to go to the Imperial War Museum at Duxford. I got to fly around, and control the Spitfire, and everything. Ostensibly it was to record audio inside the plane – I was wired for sound – but also it was just a marvellous day out.”

Robot Reflections

Looking back at his experience of working on Robot Overlords, Mark Stay noted the benefits brought by the collaborative process: “I’ve been involved in the whole procedure in a way that many other writers aren’t usually. I was writing the novelisation at the same time, so I would email Paddy to ask questions like, ‘What colour are the engines?’ He would invite me over to Nvizible so I could sit in on VFX test screenings, and I would be there furiously scribbling all this stuff down for the novel. I’ve been really blessed with the access that I’ve had.”

“It was a delightful experience, because of our creative involvement from beginning to end,” Paddy Eason added. “I love the fact that it’s quite an unusual British film – very charming and quirky, starting at a very small personal scale and ending up quite epic and extraordinary.”

The film’s ambition is something that Jon Wright is proud of. However, he remains sanguine about the challenges involved in bringing such a film to the screen.

“It’s a bit of a shame that we aren’t really making these kinds of movies in Britain and Ireland, and that British cinemagoers almost prefer to go and see American films over British films. And we’ve got all our best filmmakers going across the water to do remakes,” Wright commented. “But the truth is, when you make a movie like this, you go up against massive Hollywood blockbusters, and it’s difficult to deliver in the way that they deliver. So we tried to focus on character, and give the film a different tone to a typical Hollywood movie at the moment – not dark and moody, but kind of light-hearted and optimistic. Our kids swear a lot and are playful, rather than being the earnest, soul-searching types you get in typical American YA movies.”

Echoing Wright’s thoughts, Eason recalled, “I remember one meeting we had early on, where we pitched the project to quite a well-known film producer. This producer said, ‘If you pull it off, I’ll eat my hat.’ So now we’re going to get a nice hat-shaped cake made for him!”

Having enjoyed limited theatrical release across the UK through Signature, Robot Overlords may also be gaining momentum as a TV series. “We have strong interest from two big broadcasters, in the US and the UK,” Wright revealed. “In 2015, television is probably the way to go with this sort of project. We could really explore the human politics of the situation – life under enemy occupation. Obviously, the occupation is going on across the entire planet, so there’s a million different directions you could expand the story into. And ten hours of television is probably more exciting than another 90-minute film. It could be very good – watch this space.”

Robot Overlords will soon be released on DVD. In May 2015, the production team will be hosting panels at MCM Comic Con Belfast and MCM Comic Con London.

Special thanks to Marek Steven, Alex Coxon, Piers Tempest and Phil Guest. “Robot Overlords” photographs and video copyright © 2015 and courtesy of Tempo Productions and Nvizible.

Inspiring MPC

What drives people to work in the visual effects industry? The glamour? The technology? All those ravening monsters and exploding spaceships? Or is it just another job? In an ongoing series of articles, we ask a wide range of VFX professionals the simple question: “Who or what inspired you to get into visual effects?”

Here are the responses from the staff at MPC.

Starting with Stop-Motion

Many of today’s visual effects professionals are suckers for a little old-fashioned stop-motion animation. Richard Stammers, production VFX supervisor at MPC, is no exception. “Growing up, I had a real love of Ray Harryhausen’s work like Jason and the Argonauts and Sinbad and the Eye of the Tiger,” he enthused. “But seeing Phil Tippett working behind the scenes on the AT-AT Walkers for the Hoth battle from The Empire Strikes Back was a turning point for me.”

For “The Empire Strikes Back”, Phil Tippett and Jon Berg employed the Lyon-Lamb animation system (right) to provide instant replay capability as they animated the AT-AT Walkers for the principal VistaVision cameras. Image copyright © by Industrial Light & Magic. All rights reserved.

Paul Chung, animation supervisor, is a fan of the old school too. “I came from that generation when VFX meant back-projection, stop-motion and optical,” he noted. “My inspiration was Ray Harryhausen and Disney films. That was all I knew.”

Harryhausen also inspired Dan Zelcs, lead rigger, who included the stop-motion legend in his round-up of early influences: “As a kid, my inspirations and interests ranged from Bugs Bunny, Star Wars and Ray Harryhausen’s stop-motion skeletons, to my own activities of re-creating film characters and spaceships in Lego, and playing computer games.”

Ray Harryhausen applies glycerine to the skin of the Kraken puppet used in “Clash of the Titans” to make it appear wet.

Digital Delights

Ask any group of VFX professionals which film inspired them to get into the business, and a good percentage of them will come up with a movie from the 1990s – that fast-moving decade during which the digital revolution was beginning to sweep through the entire visual effects industry.

“Jurassic Park just blew my mind!” stated Joan Panis, head of FX. “The Tyrannosaurus Rex chase scene was incredible. I re-watched the movie recently, and although it wouldn’t get a PG-13 nowadays, the CG still holds up pretty well. Kudos to ILM for that. When I discovered that most of the dinosaurs in the movie were computer-generated, my interest in CG grew exponentially and I started becoming obsessed with VFX.”

Dinosaurs were also responsible for chasing Rob Pieke, software lead, into a career in VFX. “I was 14 years old when Jurassic Park came out. I didn’t even know how to appreciate what I was seeing, but it triggered inside me a strong sense of ‘I know what I want to do when I grow up’ – especially since I was crazy about dinosaurs as a kid.”

A turning point for Ferran Domenech, animation supervisor, was the moment when, as a teenager, he left the cinema after seeing Jurassic Park. “I clearly remember telling my father that they had almost made you believe the dinosaurs were real,” he recalled. “I researched Jurassic Park in specialist magazines, and learned that they’d changed from a computer-controlled miniature system called go-motion to fully-rendered CG for the wide shots of the dinosaurs. I was blown away by what could be achieved with computers. This truly sparked my passion for 3D and VFX.”

Sophie Marfleet, lead envirocam artist and compositor, found the 1990s just as inspiring as her colleagues. “I’ve been obsessed with movies since I watched Star Wars and Indiana Jones as a kid,” she commented, “but I was inspired to work in film by watching movies like Terminator 2: Judgment Day, Jurassic Park and Fight Club. The visuals blew me away, and I knew I wanted to be a part of that.”

The developing potential of computer graphics continued to inspire wannabe VFX professionals right up to the end of the decade. Reminiscing about a certain mind-bending classic from 1999, Damien Fagnou, global head of VFX operations, said, “Since I was ten years old, I’ve been fascinated with computers and their ability to produce graphics of all kinds. My path was set in that direction when The Matrix came out. That film was really the trigger that made me think, ‘This is what I want to do in life: contribute to making amazing movie experiences like the one I’ve just experienced.’”

But on the subject of the digital revolution, it was Marco Carboni, crowd supervisor, who picked out what many experts regard as the watershed moment when computer-generated characters truly came of age. “I had a blast when, as a kid, I saw the stained-glass knight in Young Sherlock Holmes,” he commented.

Stained glass knight - "Young Sherlock Holmes". Image copyright © by Industrial Light & Magic. All rights reserved.

ILM modelshop crew member Jeff Mann was photographed in costume against a grid to provide reference footage for the computer animation. Standing on the right are Pixar artistic supervisor John Lasseter and visual effects supervisor Dennis Muren. A clay and glass maquette of the knight was digitised using Pixar’s Polhemus three-space digitiser, with the resulting geometry rendered in vector form on an Evans and Sutherland monitor. Image copyright © by Industrial Light & Magic. All rights reserved.

Believing Anything Can Fly

Of course, there are plenty of people at MPC whose memories stretch back further than the 1990s. Take Tony Micilotta, R&D lead, who remembers a time when superheroes had to save the world without the help of digital doubles. “Superman (1978) really made me believe that a man could fly!” Micilotta remarked. “As I grew older, it was pioneering technologies such as the Zoptic front-projection system used in Superman that inspired me to join the VFX industry, where I could develop new techniques to create imagery that had never been seen before.”

Meanwhile, Scott Eade, head of layout, has fond memories of the 1980s: “As a kid I grew up watching imagination-building films like Blade Runner, E.T.: The Extra-Terrestrial, Tron, Ghostbusters and Star Wars. So I’ve always been drawn towards the magic in visual effects.”

But for Matt Packham, 2D supervisor, one movie stands head and shoulders above the rest: “Which film inspired me to get into VFX? Simple – Stanley Kubrick’s 2001: A Space Odyssey. Seeing this for the first time in the late 1980s was a sublime experience. And when I learned what it took to make a movie like this in 1968, it continued to amaze me!”

Starchild - "2001: A Space Odyssey"

Inspired by a series of intra-uterine photographs, the Starchild seen in “2001: A Space Odyssey” was sculpted by Liz Moore and mechanised so its eyes could move. The Starchild was filmed through multiple layers of gauze, with immense levels of backlight. As a finishing touch, visual effects supervisor Douglas Trumbull airbrushed an enveloping cocoon on to a piece of glossy black paper, which was aligned to the model and filmed on an animation stand.

Realising the Dream

Inspiration is all very well, but how did the staff at MPC develop their youthful enthusiasm into actual careers? For Richard Stammers, the practical exploration of VFX techniques started early. “My school art teacher saw my enthusiasm and lent me his Bolex camera to film my first stop-motion project,” he revealed. “I used the articulated joints of my camera flash to push my camera, caterpillar-style, on to my prone tripod, which then stood up, walked and bowed to camera. I was hooked! This enthusiasm fuelled education decisions and career aspirations.”

For Paul Chung, it was a passion for art that ultimately drew him into the VFX business. “I grew up drawing a lot,” he remembered, “but I was also into filmmaking, so I ended up at film school in London. After that, I got into hand-drawn animation, combining my two interests together. Some 20 years later, I went to Dreamworks, and that was the beginning of my life in digital.”

PCjr photograph by Rik Myslewski, via Wikimedia Commons (own work – CC0)

Visual effects relies as much on science as it does on art, as demonstrated by Rob Pieke. “My entry into the industry was really fostered by two influences,” Pieke reflected. “The first was my parents, who both worked for IBM and taught me how to write computer graphics programs in BASIC on the PCjr in the 1980s. I originally aspired to be an animator, but programming and problem-solving is clearly in my genes, so an R&D role was the natural fit for me.”

Dan Zelcs also got his start during the early days of home computers. “I think my journey into visual effects really started at age ten,” he commented, “with me programming my Sinclair ZX Spectrum, using BASIC to animate a pixelated version of the Teenage Mutant Ninja Turtles – complete with the cartoon’s theme tune, executed in 8-bit beeps! This formative experience, entering lines of code to produce a living piece of animation, was the ‘2001 monolith’ moment of my career. It made me realise I could use mathematics to make art, and turn my imagination into reality.”

Zelcs went on to recall the dizzying leap from his ZX Spectrum to a computing machine with far more number-crunching power. “Later, I’d play with software like Deluxe Paint III on my Amiga 500,” he explained. “I would animate simple cut-out characters, and render zooming camera moves through Mandelbrot fractal sets. Then I found Sculpt-Animate 4D – free modelling software that came on the cover of a magazine – which enabled me to model and ray-trace simple objects. This experience influenced my choices of classes at school – mixing mathematics and art – and then the degree that I chose.”

While Catherine Mullan, head of animation, acknowledges the art/science debate, she prefers not to take sides. “I was always drawing as a kid,” she remarked. “I loved art and drama but also maths and science. I wanted to pursue a job that was a mix of all these things, but the usual options didn’t fit the bill. When applying for university, I stumbled across a computer animation course and was immediately excited. I spent the next few years discovering the delights of computer graphics, and was especially drawn to animation.”

On even the best-planned career path, however, there’s always room for a little blind chance to play its part, as in the case of Sophie Marfleet: “It was actually a conversation with an old compositor friend called Tim, who I bumped into on a year abroad in New Zealand, that convinced me to take the visual effects route.”

End Results

So, regardless of what brought them into the field of visual effects, are these members of the MPC team happy with the path they’ve chosen? It’s clear that Marco Carboni wouldn’t want to be anywhere else: “I’ve always loved VFX – after seeing the ‘crossing the sea’ scene from Prince of Egypt, I knew that I wanted to be part of that magic world.” And Scott Eade seems happy with the company he’s keeping: “I’ve had the opportunity to work with, and for, people who share the grand vision of making the unreal real. My first experience directly working as a visual effects artist was on James Cameron’s Avatar, and I’ve been lucky enough to continue to work on great films since.”

Catherine Mullan summed up this collective enthusiasm by concluding, “To this day I get a huge kick out of my job and my inspiration is continuously renewed by the amazing work happening around the world.”


Watch MPC’s 2015 film reel:

MPC is currently working on movies including Disney’s The Jungle Book, Batman V Superman: Dawn of Justice, Terminator: Genisys, Spectre, Goosebumps and The Martian. Thanks to all the staff from MPC who contributed to this article.

Special thanks to Jonny Vale.

Orphan Black and Twinning in the Movies

This article was first published in slightly different form on the Cinefex blog on 8 April 2014.

Orphan Black - Season 3

What’s the best visual effect of them all? Which camera trick brings everything together to make a perfect whole – conceptual elegance, technical expertise, editorial sleight of hand, dramatic performance? Which cinematic illusion wins the grand VFX prize? My answer may split opinion.

It’s the twinning effect.

I know. You’re scratching your head in puzzlement. How is creating twins more impressive than blowing up a planet? Does a pair of chatty clones really beat a ninety-foot robot grappling a multi-tentacled mutant from another dimension?

Yes. And yes. Let me tell you why. But first, let me explain what I’m talking about.

By “twinning”, I mean the process whereby a single actor plays two or more roles in the same film. For the performer, it’s a delicious challenge. For the visual effects artist, the challenge comes with the shots where both (or multiple) incarnations of said actor appear on screen at the same time.

"Orphan Black" clone strangling scene

One of the latest productions to use this time-honoured trick is the TV series Orphan Black, the second season of which begins its run on BBC America later this month. In the show, Tatiana Maslany plays a woman who encounters several cloned versions of herself and becomes caught up in a deadly conspiracy, in a remarkable performance that saw her nominated for a Golden Globe. Orphan Black’s visual effects are by Intelligent Creatures; according to visual effects producer Che Spencer, their mandate was “to push the effect and not settle for what was easy.”

Watch an Intelligent Creatures breakdown video of the extended “clone dance party” from the Orphan Black season 2 finale (including a surprise unaired ending):

We’ll hear more from Intelligent Creatures about Orphan Black in a moment, including a breakdown of one of Season One’s most daring multi-clone shots. Before then, let’s take a brief look at the history of the twinning effect.

Old-School Double Acts

A good early example of twinning is the 1944 Bing Crosby musical Here Come The Waves, in which Betty Hutton stars as identical twins Susan and Rosemary Allison. The film uses a fairly standard range of twinning tricks including a body double with her back to the camera, and judiciously-placed split screens.

"Here Come the Waves" trailer image

In many of the shots in Here Come The Waves, it’s easy to spot where the split line is (the binary Betties are generally positioned on opposite sides of the screen, with plenty of empty set gaping between them). Some shots – such as the one where both characters leave the stage after a dance number (at 1:45 in the clip below) – make effective use of a moving split, allowing the twins to occupy the same physical space, albeit after a small but convenient time interval.
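The mechanics of a hard split are simple enough to sketch in code: the composite takes every pixel left of a chosen column from one take, and every pixel right of it from the other – which is why those early shots kept the two Betties well apart. Here is a toy illustration in Python (the frame size and split position are invented for the example; a moving split would simply vary `split_col` from frame to frame):

```python
def split_screen(plate_a, plate_b, split_col):
    """Composite two plates along a vertical split line.

    plate_a and plate_b are frames represented as lists of rows
    (lists of pixel values). Columns left of split_col come from
    plate_a, the rest from plate_b -- the essence of the classic
    locked-off twinning shot.
    """
    assert len(plate_a) == len(plate_b)
    return [
        row_a[:split_col] + row_b[split_col:]
        for row_a, row_b in zip(plate_a, plate_b)
    ]

# Two 'performances' of the same 4x6 frame: one take marked A, one B.
take_a = [["A"] * 6 for _ in range(4)]
take_b = [["B"] * 6 for _ in range(4)]

frame = split_screen(take_a, take_b, split_col=3)
print(frame[0])  # ['A', 'A', 'A', 'B', 'B', 'B']
```

As long as neither performer crosses the split line, the join is invisible – which is exactly why the staging in these early films feels so symmetrical.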

For some journalists of the time, such trick photography was akin to witchcraft, as evidenced in a contemporary article from May 29, 1944 by Frederick C. Othman of Associated Press – here’s an extract:

This piece is going to be complicated; it involves two Betty Huttons and how can anybody expect you to understand what’s going on, when the writer doesn’t exactly understand himself? … The boys are making with the double talk about split screens and synchronous recordings. … If one Miss Hutton is a squillionth of an inch off her marks when she gets out of her chair, the other Miss Hutton is a blur. And, of course, vice versa. That’s because of the split screen (says Othman, who has only the vaguest idea of what he’s talking about).

Olivia de Havilland faces herself in "The Dark Mirror"

If Here Come The Waves exemplifies the early frivolous use of twinning techniques, The Dark Mirror, released two years later in 1946, is its shadowy counterpart.

In the film, Olivia de Havilland plays twins Terry and Ruth Collins, both suspected of murder and both possessing an alibi for the night the crime was committed. While this psychological melodrama uses similar techniques to Here Come The Waves, director Robert Siodmak exploits its darker themes with shots like the one at 1:20 in the clip below, in which moody lighting is used to conceal the use of the ever-reliable body double.

Before we run forward in time, let’s quickly wind the clock even further back to 1937 and take a look at The Prisoner of Zenda, in which Ronald Colman plays both the king of Ruritania and his English lookalike.

The Prisoner of Zenda contains an early example of twins not only appearing side by side, but also physically interacting, in a shot where the two Ronald Colmans shake hands. This quote from David O. Selznick’s Hollywood by Ronald Haver* makes the intricate matte work used to pull off the shot sound deceptively straightforward:

The camera shot through a plate of sheet glass that had been taped to cover the area of the double’s head and shoulders. After exposing the action, the film was rewound in the camera, the plate glass was retaped to cover everything except the area of the double’s head and shoulders, and Colman changed costumes and stood in. Colman’s head and shoulders were then photographed in perfect register with the double’s body.

Attack of the Clones

Throughout the 20th century, there was a regular flow of twinning films, most of which relied on these familiar visual effects techniques – perhaps most famously when a young Hayley Mills played identical twins in The Parent Trap (1961). Then, in 1988, came a matched pair of twinning films that upped the ante and doubled the stakes.

The first was Big Business, which starred Bette Midler and Lily Tomlin as two sets of identical twins. The second was David Cronenberg’s Dead Ringers, in which Jeremy Irons played twin gynaecologists Elliot and Beverly Mantle. Both films made a bold leap by using motion control to introduce camera moves into their split-screen shots.

Jeremy Irons doubles up in "Dead Ringers" (1988)

Luckily for us, when dissecting the revolutionary visual effects of Dead Ringers in Cinefex #36, Don Shay demonstrated a little more understanding of the twinning process than Here Come The Waves reporter Fred Othman did back in 1944:

The most difficult of the motion control setups was a reverse tracking shot of the twins walking towards camera. To compensate for normal arm and body sway, Film Effects of Toronto had to develop matting sequences that constantly shifted the split from side to side. And since diffused splits of varying widths were required – depending on background light levels – different splits were dissolved in and out as the scene progressed. From start to finish, the shot required four separate split-screen mattes – each with an average of four dissolves.

Once the Pandora’s Box of motion control twinning effects had been opened, there was no going back. From Back to the Future Part II through Multiplicity to Adaptation and beyond, filmmakers have experimented with ever-more elaborate ways of duplicating the talent. In The Social Network, Lola raised the bar higher than ever when they created the Winklevoss twins by mapping Armie Hammer’s face on to that of fellow actor Josh Pence. Read all about how they did it in this excellent article at FXGuide.

These recent refinements mean filmmakers can now do proper justice to that staple of science fiction: the clone story. In The City of Lost Children, Pitof/Duboi presented us with more copies of Dominique Pinon than we knew what to do with. More recently, Moon pitted Sam Rockwell against, er, Sam Rockwell, in a stunning variety of clone scenes that showcased not only Rockwell’s acting chops, but Cinesite’s invisible digital effects.

In planning Moon’s judiciously-used clone shots, director Duncan Jones studied both Dead Ringers and Spike Jonze’s Adaptation. “[Spike] told me that when you’re working through scenes, you need to choose which character really leads the scene, and shoot that one first,” Jones remarked in Estelle Shay’s article Moon Madness (Cinefex #118).

Sam Rockwell checks his counterpart's temperature in "Moon"

The same article has the following to say about the above Cinesite split-screen shot in which “Sam 1” feels the forehead of “Sam 2”:

In the hero pass, Rockwell as the ill Sam 1 performed to a stand-in serving as Sam 2, with a C-stand used to record the position of the double’s left shoulder. In the second pass, Rockwell performed as Sam 2, aligning his shoulder with the marker and using his body to occlude that of the double. A third pass allowed for the removal of extra lighting, cameras and floor markers, and the shadow cast by the C-stand and opposing action. In post, Cinesite attached the double’s arm to Rockwell’s Sam 2 through careful rotoscoping and warping of clothing.

Orphan Black

All this talk of clones brings us neatly back to Orphan Black and Intelligent Creatures. Early on, the show’s producers told visual effects supervisor Geoff Scott that the budget wouldn’t allow for motion-controlled camera moves, prompting Scott to explore other ways of taking away the curse of the locked-off twinning shot. In the end, however, motion control won the day, as described here by the Intelligent Creatures team:

Before production began we considered many different techniques, from simple handheld camera moves to repeatable slider rigs, but ultimately it came down to a full motion control system. In fact, we shot the scene from the pilot where Sarah meets Katja on two different motion control rigs before settling on what became the go-to rig for the series – the Super TechnoDolly. The first of its kind, the TechnoDolly is a robotic camera system, essentially a smart Technocrane. It allowed us to create movements of unlimited length and complexity, and more importantly, repeat those moves with incredible precision. We shot the entire scene with Tatiana playing Sarah alongside a stand-in actor to work out blocking and eyelines. Then we repeated the scene with Tatiana alone, following carefully placed eyeline markers. Finally, Tatiana changed over to Katja and we did the whole thing over again. The passes were later combined in compositing using Digital Fusion to create the seamless effect.

The TechnoDolly proved adaptable enough to give the director flexibility on set, and – crucially in a show where ADR needed to be kept to an absolute minimum – it was near-silent in operation.


Orphan Black uses every trick in the twinning book to help create Maslany’s various clone characters, from old-school over-the-shoulder shots to complex composites involving moving cameras and selected body parts from one actor stitched on to those of another.

With each episode the challenges grew. The one main request was that, once an episode, the clones would touch. Sometimes we had as many as three clones in the room all interacting with each other, delivering dialogue and making eye contact. We used the Super TechnoDolly for these really complex movements in order to maintain image integrity and repeatability. In the penultimate episode, we had one clone pour wine for two others, and another hug one in a deep embrace. As the episode continued we saw clones strangling each other, head-butting, and eventually one shooting another. In a single episode we had a season’s worth of visual effects.


The Intelligent Creatures team is adamant that the general lack of attention drawn to their work on Orphan Black is in fact a great compliment:

The truest testament to our skill is how little the audience notices it. If people can immerse themselves within the plot enough to forget that this shot was done with VFX, then our jobs are done. We used visual effects to help do what the show’s creators intended to do: tell a story. The rest might as well be magic.

Watch the Intelligent Creatures sizzle reel for their work on Orphan Black:

Two Are One

There’s one twinning technique I haven’t discussed here. That’s because it puts visual effects artists out of work. I’m talking about those rare occasions when the director needs to double up the lead actor … and that actor just happens to have a real twin.

The example that springs into my mind (and probably into the minds of most regular Cinefex readers) is Terminator 2: Judgment Day, in which the shape-shifting T-1000 makes a last-ditch attempt to fool John Connor by mimicking his mother’s physical form. Director James Cameron placed the two Sarah Connors on screen simultaneously not with visual effects, but by drafting in actress Linda Hamilton’s twin sister Leslie. (Cameron used the same trick with twins Don and Dan Stanton, who played Lewis the Guard and his deadly doppelganger respectively.)

Audiences who don’t realise that Hamilton and Stanton are twins undoubtedly assume they’re seeing a camera trick, which only underlines just how tough it is for any visual effects artist to take on the twinning challenge. Why is it so hard? Because the audience knows.

They know the famous actor they’re seeing doesn’t have a twin. They know it’s a trick. When presented with a twinning effect, the average Joe Schmoe in the second row will put down his popcorn, sit forward in his seat and try his damnedest to spot the join, even if ordinarily he has no interest in VFX whatsoever. Nowhere are the creators of visual effects placed under greater scrutiny than when they’re giving birth to twins.

And that’s why, of all the illusions a filmmaker might choose to put on screen, the twinning effect is undoubtedly in the running for my all-time number one.

Season 3 of Orphan Black is currently airing on BBC America:

*Published by Bonanza Books, 1987; quote sourced via The Ronald Colman Appreciation Society. Moon image copyright © 2009 Lunar Industries/Sony Pictures. Orphan Black images copyright © 2014 Intelligent Creatures/Temple Street Productions.

Ex Machina – VFX Q&A

"Ex Machina" - Cinefex VFX Q&A with Double Negative

Ever since Fritz Lang’s 1927 film Metropolis, the concept of a robot with artificial intelligence has held movie audiences in thrall. Now, as the science fiction dream of AI becomes ever-more plausible in the real world, so a new generation of filmmakers has begun to explore its tantalising possibilities.

The latest addition to this recent crop of AI movies – which includes Caradog James’s The Machine and Spike Jonze’s Her – is Ex Machina. Written and directed by Alex Garland, the film introduces young computer coder Caleb (Domhnall Gleeson) into an experiment designed to establish whether sexy and cerebral android Ava (Alicia Vikander) is truly self-aware.

Vikander’s on-set performance as Ava was meticulously preserved during the post-production process. While much of her body was replaced by a digital counterpart, Vikander’s face and hands were retained throughout. The result is a seamless blend of live-action and CG animation that remains convincing in a film characterised by long takes and intricate dialogue.

The production visual effects supervisor for Ex Machina was Double Negative’s Andrew Whitehurst, who was assigned to the project for around 16 months. Under Whitehurst’s supervision, Double Negative delivered over 300 robot shots, with an additional 250 VFX shots being provided by Milk VFX, Utopia and Web FX.

Watch the trailer for Ex Machina:

How did you get involved with Ex Machina?

Double Negative were approached by DNA Films and the writer/director, Alex Garland. We quickly worked out that we saw things very similarly, and that we would be able to work together.

How did you divide up the visual effects work among the various vendors?

The work on Ex Machina was divided amongst four facilities. Double Negative created the android, Ava. Milk VFX created Ava’s brain, “Ava vision” and did a number of monitor inserts and clean-up effects. Utopia created a CG mobile phone for Caleb and added extra CG buildings to the Norwegian location plates. Web FX provided a number of clean-up and monitor composites.

This is Alex Garland’s feature debut as director. How was he to work with?

An absolute pleasure. Alex is visually very driven, and has a great appreciation of art, comics, film, and games – all of which he brings to bear on his work. He’s a very collaborative director, and we had a great many design meetings in which the two of us did quick sketches with a pad of copier paper and a fistful of Sharpies, to work out design issues. It’s a very fast way of working; it also allowed us to make better decisions, because we could rule many things out before working up more finished designs.

Did you previs the film?

No, we didn’t do any previs. It was always very character-led, which meant that the blocking was something that only the actors – working with Alex and the director of photography, Rob Hardy – could do. We had to make sure we could work with anything they decided to shoot.

Alicia Vikander stars as Ava in Alex Garland’s “Ex Machina”, with visual effects by Double Negative, Milk VFX, Utopia and Web FX.

How did you go about designing and developing the character of Ava?

When Alex came to Double Negative, he’d already had some concept paintings done by Jock, who’s an amazing artist. As the film had to come in at a certain budget, we needed to design a character that could be realised for that amount of money.

Once we knew our practical constraints we began to work up painted concepts. We included O-rings and connection studs in the design. They gave us something to track and provided clean edges to roto around, so we could layer in CG behind the in-camera costume. We worked very hard to give Ava a plausible mechanical quality that also had a lot of feminine beauty: the audience had to believe that a human could fall in love with her.

What other sources did you draw visual inspiration from?

The only rule I set for anyone who was working on Ava’s design – including myself – was that we weren’t allowed to look at other robots, especially androids. Instead, I put together a collection of images of Formula One car suspensions, high-end road bicycles, lightweight aircraft airframes, human anatomy and procedural sculpture. These gave us inspiration for both form and materials.

I think that – possibly subconsciously – the design of Ava owes a lot to the French comic artist, Moebius. Both Alex and I are big fans of his work, and even though we didn’t explicitly refer to his images during production, when I look at Ava now I see a lot of Moebius’s influence there.


How did you translate the concept art into three dimensions?

We soon reached a point where it was clear that we’d pushed painted concepts as far as they could go, and that we had to start modelling in 3D. We began by building an arm, starting with simplifying human bone shapes and changing pivot points to more mechanical forms. Then we started adding musculature and cabling. Richard Durant and Alexis Lemonis did the majority of the modelling of Ava.

We worked hard to make sure that we designed something which could work practically, which looked like it had the right weight distribution, and which still had “form follows function” beauty. We continually removed pieces that seemed superfluous. The great industrial designer, Dieter Rams, has a motto: “Less, but better.” We constantly kept that in mind. In fact, when the design was 3D-printed for the laboratory set, it did all fit together beautifully. That was a proud moment!

Was it a complex model to rig for animation?

We started rigging Ava as soon as we began modelling, and the rig developed throughout the show – it was still being tweaked right up to the end. Mark Ardington and Fernanda Moreno rigged Ava, with Mark carefully building versions of the rig that could be used by the body track artists to duplicate Alicia’s movement. We also had a higher res version of the rig that could be swapped out at render time; that one had cables, muscles bulging, and all the secondary animation that gives Ava weight.

How did you capture Alicia Vikander’s on-set performance as Ava?

Almost every scene involving Ava features lengthy dialogue shots. So, from the beginning, it was clear that Alicia would need to be on set, and that her physicality would have to drive the performance and give the other actors something to respond to. That immediately ruled out performance capture in post as an approach, so we opted for body tracking.

The majority of the shots in the film are around 200 frames long, with one clocking in at 1600 frames. Tracking them was a huge challenge, as precision had to be maintained throughout. It really was a herculean effort by all involved.

Additionally, the film was shot using Xtal Express anamorphic lenses which, whilst beautiful, presented a number of challenges to the tracking artists. The lens distortion was often uneven, and changes in focus could radically change the lens geometry. Alex Maciera led the tracking team, and I think it’s no exaggeration to say that body tracking Ava was the hardest task on the show.

Describe the workflow for a typical shot in which Ava appears.

Our approach to most Ava shots was to allow Alex, Rob and the actors to shoot what they wanted on set. When a set-up was completed, we would step in and shoot HDR lighting reference with bracketed stills, and shoot clean plates for the shot with the main unit camera.

In post, this allowed us to rotoscope the parts of Alicia we wanted to keep: always the face and hands, often the shoulders and feet too, and sometimes the shorts. Then we used the clean plates to prep out the rest of Ava, giving us a plate to composite with.

Meanwhile, camera- and body-tracking gave us a mesh that could be rendered with the HDR lighting captured on set. We used PRMan for rendering.

Finally, the CG, the roto, and the clean plate were handed over to the compositing artists to final the shots. The 2D team – under the supervision of Paul Norris – not only graded the CG to perfectly match the plate, but often also did tracking tweaks and stabilisations. They also added the incredibly complex lens distortions, flares, and other optical effects.

It sounds like a lot of work for the roto department. Did you use greenscreens to help with the keying?

Well, the shoot was scheduled to last eight weeks, and we knew that we would be shooting between 15 and 25 set-ups every day. This pushed us to design a character that wouldn’t require heavy amounts of bluescreen or greenscreen on set; at the pace we were shooting at, there just would never have been time to light them properly. In fact, Ex Machina is the first show I’ve done where we didn’t use a single greenscreen.

It’s also worth pointing out that, because the sets had a lot of glass in them, we would often be tracking, prepping, rendering, and compositing several Avas at once. Some shots included as many as three reflections of Ava, in addition to the Ava in front of the camera.

Alicia Vikander as Ava in "Ex Machina"

What’s your favourite Ava shot?

I’m especially proud of the work that was done on the scene where Ava puts on clothes. On set, Alicia dressed herself over the top of the Ava costume. We not only had to prep in the background where Ava’s arms, legs and torso were, but we also had to paint in other details – the back side of the stockings, for example. Greg Shimp did great paint work on those shots.

The lighting in the scene was very subtle, and we ended up adding a lot of additional CG lights to give the look we wanted. Michael Ranaletta composited the shots, dealing with the shifting, shallow focus and subtle edge treatments around the garments to get everything to sit together perfectly. It’s such an intimate, delicate scene. It’s a treat when VFX is allowed to make something beautiful.

How do you feel about the film, now that the work is done?

Ex Machina is a really beautiful film. The imagery is – and I realise I’m biased – fantastic. I really believe that this is the result of every department working closely together, helping each other out, and making good practical decisions every step of the way. The collaboration with Alex, the art department, camera, make-up, costume and editorial was exceptionally deep on this show, and I think it shows in the finished result.

If you’re lucky enough to have great crews – as I did at Dneg, Milk, Utopia and Web – then you can make some very beautiful pictures. I’m very proud of all of them.

Special thanks to Sarah Harries and Karl Simon Gustafsson. “Ex Machina” photographs copyright © 2015 by Universal Pictures.

Vertical Cinema

Vertical Cinema, photograph by Sascha Osaka

Photograph by Sascha Osaka and courtesy of Sonic Acts.

Widescreen! Cinemascope! Panavision!

Since the early days of cinema, movie screens have been getting steadily wider. From the squat 4:3 aspect ratio of early 20th century silent movies, through the explosion of sprawling widescreen film formats that began in the 1950s, to today’s ever-expanding domestic TV screens, the trend is clear: bigger is better … but only if you stretch things in the horizontal dimension.

But what happens if you turn this thinking on its head?

Or rather, on its side?

That’s the question posed by Vertical Cinema, a Sonic Acts art project comprising ten specially commissioned films made by experimental filmmakers and audiovisual artists. Vertical Cinema presentations have been held since 2013 at locations across Europe and in the USA, with the films frequently being projected in churches. The movies are projected using a custom-built 35mm film projector in vertical Cinemascope. No landscape images here. In Vertical Cinema, everything is portrait.

Here’s what Vertical Cinema has to say about this unusual twist on traditional cinematic conventions:

For the Vertical Cinema project, we “abandoned” traditional cinema formats, opting instead for cinematic experiments that are designed for projection in a tall, narrow space. It is not an invitation to leave cinemas – which have been radically transformed over the past decade according to the diktat of the commercial film market – but a provocation to expand the image onto a new axis. This project re-thinks the actual projection space and returns it to the filmmakers. It proposes a future for filmmaking rather than a pessimistic debate over the alleged death of film.

With its mission to challenge established conventions, Vertical Cinema wears its experimental heart firmly on its sleeve. But what’s to stop someone making a full-blown narrative feature film in this unusual vertical format?

On the face of it, the challenges seem considerable. The entire movie industry is built around the landscape image. Even if you could get such a film made at a technical level, would the vertical format clip your storytelling wings? And would audiences actually want to see it?

To answer these questions and more, Cinefex spoke with six filmmakers and visual effects experts: Douglas Trumbull (filmmaker and VFX innovator), Tim Webber (creative director and VFX supervisor, Framestore), Rajat Roy (global technical supervisor, Prime Focus World), Paul Mowbray (head of NSC Creative), Marc Weigert (president and VFX supervisor, Method Studios) and Charles Rose (CG supervisor, Tippett Studio).


What technical hurdles would you have to jump in order to make a narrative feature film in vertical format?

Tim Webber – “If anything, shooting vertically is more of a practical irritation than anything challenging from a technical standpoint – monitors and user interfaces are designed to be viewed in landscape, for example. The bulk of industry cameras are bottom-heavy, with buttons on their sides, so as soon as you rotate them everything becomes trickier. But there are no truly complex problems to solve.”

Marc Weigert – “It’s not a problem mounting your cameras at a 90° angle on to your camera platform of choice. The main challenge – or limitation – would be lighting and set-building. Where do you hide your lights? Or the opposite: where do you put your netting to diffuse sunlight or bounce light? How tall will you have to build your sets?”

Vertical Cinema at the Kontraste Festival 2013, photograph by Markus Gradwohl

Rajat Roy – “In terms of technical challenges, there really isn’t anything that can’t be done in film. To achieve a vertical effect, you can shoot 6K and mask the image to use only the central portion of the frame, for example. However, there would be considerations if we were presenting stereo imagery in this medium, in that you are really accentuating the left and right borders – something we’re very conscious of when working in stereo. Objects breaking frame in shot can cause issues in stereo, and this would be exacerbated by the vertical aspect ratio. There would also need to be consideration given to the viewing angle, as the fall-off at the top of frame could cause issues, depending on the viewer’s proximity to the screen.”
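Roy’s “shoot 6K and mask the central portion” approach boils down to simple aspect-ratio arithmetic. The sketch below (not from the article; the 6144 × 3160 sensor dimensions and the 2.39:1 ratio are illustrative assumptions, not a specific camera spec) finds the largest centred crop of a landscape plate that yields a Cinemascope frame turned on its side:

```python
def vertical_crop(plate_w, plate_h, ratio=2.39):
    """Largest centred crop that is `ratio` times taller than it is wide.

    Returns (x0, y0, crop_w, crop_h) in pixels.
    """
    crop_h = plate_h                 # use the full sensor height
    crop_w = int(plate_h / ratio)    # width follows from the vertical ratio
    if crop_w > plate_w:             # plate too narrow: limit by width instead
        crop_w = plate_w
        crop_h = int(plate_w * ratio)
    x0 = (plate_w - crop_w) // 2     # centre the window horizontally
    y0 = (plate_h - crop_h) // 2
    return x0, y0, crop_w, crop_h

# An illustrative 6K plate: only a narrow central column survives the mask.
x0, y0, w, h = vertical_crop(6144, 3160)
print(x0, y0, w, h)  # → 2411 0 1322 3160
```

The striking consequence is how much of the sensor is thrown away: a 6K-wide plate keeps only about 1,300 horizontal pixels, which is why vertical framing effectively trades resolution for height unless the camera itself is rotated.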

Marc Weigert – “There would be problems with special effects too. Rain and snow rigs would be tough to hide. Interestingly, some of these problems are also prevalent in the 360° films now starting to appear for VR formats like the Oculus Rift, so we’ll probably have to come up with smart ways to overcome these challenges anyway in the near future.”

Charles Rose – “Technically, the challenges aren’t many, beyond the obvious. I think the biggest challenge will be finding narratives suited to the vertical format. Most of the vertical cinema films I’ve seen are very abstract, non-narrative.”

Tim Webber – “But the challenge doesn’t stop with making the film. Projecting vertically has its own issues, even down to the requirements from the room. That’s why vertical cinema is often shown in churches, where the tall architecture really complements the format.”

Marc Weigert – “Also, human eyes automatically create a horizontally-weighted aspect ratio. A vertical ratio – I would guess – would create an enormous strain on either your eyes or your neck muscles for a two-hour movie.”

Doug Trumbull – “Our company created a special venue attraction for the Luxor Pyramid Hotel, called Secrets of the Luxor Pyramid, directed by myself and Arish Fyzee. The third act was a vertical screen show called the Theater of Time. The movie was very straightforward to design and produce, with everything specifically conceived for the vertical format. There were no significant technical challenges that would not occur with any other format, and the visual opportunities were very exciting. The movie was a combination of live-action, miniatures, and computer graphics, shot in VistaVision at 48fps, digitally composited and rendered out at 6K resolution to film. It was projected on to a deeply curved vertical screen with an aspect ratio of 1.66:1 – the original “full frame” VistaVision format.”

Vertical format cinema photograph courtesy of Douglas Trumbull.

What about the artistic challenges?

Rajat Roy – “Vertical cinema is just a device – it’s affecting the visual language of the storytelling, but it’s not creating artistic challenges in itself. It’s not as if the artist is saying, ‘Oh no, I want to create this piece of art but now I have this vertical constriction!’ The format drives the creation.”

Doug Trumbull – “Working with the vertical format, I felt that storytelling was very straightforward, particularly when cutting between actors and away to their POVs. The vertical format encouraged direct cuts between actors, because over-shoulder shots were not necessary.”

Tim Webber – “The artistic and aesthetic differences are really where vertical cinema is interesting. Landscape filmmaking automatically lends itself to having more people in the frame at once. How people interact, and the relationships they have with one another, are a key narrative point to the majority of films we see today. With a portrait frame, however, it’s much more suited to having one person dominate the screen. As a result, the type of film we’d see with this technique would be different to the norm.”

Marc Weigert – “Action in most stories takes place horizontally (and no, I’m not just talking about those kind of movies!). Whether it’s a car chase, a walk-and-talk, a dinner in a restaurant, or a scene in a conference room, as soon as more than one person is present everything is naturally laid out horizontally. So you would actually have to ‘force’ a vertical framing. Even a mountain rarely lends itself to portrait framing – just leaf through any Ansel Adams book.”

Paul Mowbray – “The vertical cinema movement highlights the frame much more acutely than regular cinema. As soon as you move away from the widescreen format we’re so familiar with, many of the rules and conventions that cinema has evolved over the years need to be reinvented.”

What sort of things might we see emerging with this new, vertical visual language?

Tim Webber – “The filmmaking rules and tropes we use today were developed with a landscape frame, so how we understand composition and narrative would have to be adapted. The basic principles are the same, but the instincts trained into camera operators would have to be re-learned. For example, in vertical cinema the environment tends to have a far greater effect on the character. So, if the sky plays a big role in your film thematically or through the narrative, a vertical frame would be the perfect fit.”

Rajat Roy – “There’s obviously an emphasis on vertical scale. In the same way that you can present a huge panoramic vista in Cinemascope, a vertical image of a landscape with lots of sky can also be very dramatic, giving a sense of human scale to the view. Alternatively, the vertical frame could be used to build suspense: ‘What’s happening just out of frame?’ The constricted window stops the viewer from seeing everything they may want to see.”

Marc Weigert – “I would say that vertical cinema is not suited very well for narrative cinema. On the flip-side, just as the portrait format has advantages in certain areas of still photography (think high rises and stairwells), there are some benefits to vertical cinema. But my feeling is that some vertical tilting – as seen in films currently – would be replaced with lots of horizontal panning!”

Tim Webber – “A good pointer would be the Oscar-winning film, Ida. It isn’t a vertical film, but it was shot 4:3, with the characters frequently framed right at the bottom of the frame. The beautiful and original cinematography affected how the audience felt about the emotional and psychological state of the characters.”

Doug Trumbull – “With Theater of Time, we did everything possible to give the audience a feeling of flying and vertigo, using wide angle lenses, so that looking down at the ground was rich in detail, and looking up at the sky was very beautiful – all in one shot.”

Visual effects shots for the vertical format film “Theater of Time”, directed by Doug Trumbull and Arish Fyzee, were accomplished using a gantry crane system designed and built by Sorensen Design of Medford, Oregon, and miniature VistaVision cameras and heads, created in collaboration with Donald E. Trumbull. The miniature photography was under Kuper Motion Control, using Nikon 20mm lenses stopped down to f22, with several seconds exposure per frame.

So, what’s the future for vertical cinema?

Tim Webber – “Ultimately, it will always be a niche area of filmmaking. There are many good reasons why films have developed with wide screens. Not least because, with the horizontal setting of our two eyes, we see a widescreen view of the world in our everyday lives.”

Rajat Roy – “There’s a place for vertical cinema as an interesting alternative to conventional cinematic presentation, if it’s used positively. If you tried to create a narrative film that doesn’t use the device effectively, then it would be annoying – just like any poorly used device. But, in the right hands, I could see it being very effective. It delivers the same kind of effect as a tall, thin window in architecture – it’s a pleasing aesthetic.”

Paul Mowbray – “As the VR movement is about to explode, it seems quite bizarre to think that a movement with such a restrictive field of view would be able to gain popularity. A filmmaker’s goal is usually to immerse their audience in a story, so I believe an all-encompassing, 360° canvas is the logical evolution of cinema. Vertical cinema introduces a significant restriction to this goal of immersion. However, by doing this, it also introduces some interesting constraints which force the filmmaker to consider things in new ways. I can already think of a few interesting ideas that would lend themselves to this format. But it feels like more of a novelty than the future of cinema.”

Doug Trumbull – “Since much of our world is vertical – consider the printed page, for example – it’s actually very comfortable to frame shots in the vertical format, and very natural to watch. I believe there are still many opportunities to continue to explore the vertical format for special projects, theme parks, rides, and other projects that want to offer a unique movie experience.”

Tim Webber – “Overall, I think it’s an interesting development that opens up the process to an entirely different type of filmmaker. Personally, I’d like to make a film that uses various aspect ratios, and plays around with these conventions from scene to scene as another way to draw the viewer into the characters and story. It’s remarkable how changing aspect ratio doesn’t take you out of the movie, as seen when Christopher Nolan switches from Cinemascope to IMAX in some of his movies.”


Have you seen a Vertical Cinema presentation? What did you think? And do you like the idea of settling down to watch a two-hour feature film presented in this novel way? If so, what kind of movie might it be? Is this a credible art form, or just another tall story?

Special thanks to Annette Wolfsberger, Stephanie Bruning, Liam Thompson, Melissa Knight, Tony Bradley and Mark Stetson.