About Graham Edwards

I'm senior staff writer at Cinefex magazine. I also write novels. In a former life, I produced animated films for theme park rides and science centres. If you offer me a cold beer, I won't say no.

Blade Runner Returns

In the autumn of 1982, I seated myself in a darkened cinema and waited for Ridley Scott to transport me 37 years into the future. I was seventeen years old, a card-carrying movie geek, and almost beside myself with excitement. Would the film I’d been anticipating all year live up to my high expectations?

The film, of course, was Blade Runner, and it didn’t disappoint. Even before the titles had finished rolling, I’d been seduced by the haunting tones of Vangelis’s sultry score. When the opening shot faded up – a spectacular moving vista in which flying cars soared above a fiery, industrialised future Los Angeles – I gasped.

Come to think of it, most of Blade Runner’s future city shots made me gasp. Through the course of the film, I felt myself swept bodily into that glistening, neon-lit metropolis. I could feel its rainfall stroking my face. I could feel its smoke choking my lungs. I’d never felt so immersed in an imaginary world.

I embraced the people of the future, too. Rick Deckard was Harrison Ford like I’d never seen him before: no dashing, romantic hero this, but a cynical, downtrodden gumshoe. I fell head-over-heels in love with Sean Young as the immaculate Rachael, and with Daryl Hannah as the primal Pris. Most arresting of all was the leader of those renegade replicants, Roy Batty, played by Rutger Hauer in a performance that moved effortlessly from chilling to heartbreaking and back again.

Yet it was the ground-breaking visual effects work that spoke to me most clearly. Created by Douglas Trumbull, David Dryer, Richard Yuricich and the rest of the team at Entertainment Effects Group, they were simply breathtaking. What’s more, they blended seamlessly with the live-action that had been shot on the Warner Brothers backlot in Burbank. Was that real rain I was seeing, or some kind of animated effect? Where was the join between the full-scale set and the matte painting? The futuristic environment conjured by Blade Runner was so soaked in atmosphere, and so unbelievable in its complexity, that I fell for it hook, line and sinker. I’d never seen anything like it before.

Frankly, I don’t think I’ve seen anything like it since.

This miniature cityscape for "Blade Runner" was constructed on its side, so as to be aligned correctly for the camera

“In order to get aerial views of some of the cityscapes, the miniature structures were tilted sideways and aligned individually at varying angles so as to appear correct to the barrel distortion of the camera’s wide-angle lens. Numerous in-camera passes were required to balance external and practical lighting. Separate multi-pass film elements were also created for the various billboard and spinner insertions. Like most of the other miniature work, the cityscapes were filmed in smoke and augmented optically with rain.” Original caption: “2020 Foresight” by Don Shay, Cinefex 9, July 1982.

Now, in the spring of 2015 – just four years before Blade Runner’s predicted future is due to arrive, and just weeks after Alcon Entertainment announced that Ford would return in a sequel to be directed by Denis Villeneuve – I find myself anticipating the film all over again. As part of its Sci-Fi: Days of Fear and Wonder season, the British Film Institute is bringing Blade Runner back to cinemas across the UK.

In advance of the release, the BFI has prepared a brand new trailer. Here’s what director Ridley Scott had to say about it:

“The Final Cut is my definitive version of Blade Runner, and I’m thrilled that audiences will have the opportunity to enjoy it in the way I intended – on the big screen. This new trailer captures the essence of the film and I hope will inspire a new generation to see Blade Runner when it is re-released across the UK on 3 April.”

The version that’s being released theatrically is the 2007 digitally remastered Blade Runner: The Final Cut, which is different to the 1982 original in a number of crucial respects. For example, it lacks both the tacked-on happy ending and the controversial Deckard voiceover (regarded by many as clumsy and unnecessary). Equally controversial is the most notable addition: a Deckard dream sequence featuring a unicorn. The unicorn’s appearance suggests – via Deckard’s uneasy relationship with his detective colleague, Gaff – that our hero may be a replicant himself …

Blade Runner: The Final Cut also features myriad other changes, including tweaks to both edit and soundtrack, a dusting of new shots, and a number of “fixes” and upgraded visual effects, executed primarily by The Orphanage, supervised by Jon Rothbart, with additional shots supplied by Lola VFX.

I asked Stu Maschwitz, co-founder of The Orphanage, what it was like treading on the hallowed ground of Los Angeles, 2019:

I’m very proud of The Orphanage’s work on Blade Runner: The Final Cut. We all truly felt a sense of reverence, working to preserve a film that meant a lot to us, and everyone involved was completely committed to doing the work at the highest possible quality. It’s a touchy thing, trying to tastefully update a classic and beloved film, but The Final Cut is, in my opinion, a perfect example of how to do it right.

One of the first questions I asked when I found out we were doing the work was, “Are we going to paint out the Steadicam shadow in the final chase through the Bradbury building?” Being a huge fan of Blade Runner, that camera shadow was something I’d seen, and wondered about, a hundred times. The answer was yes, and it was an incredibly difficult shot, replacing an entire wall behind layers of shadow and aerial haze, tracking through the complex warp of an anamorphic lens.

We did all that work in Flame, on a 4K scan of an interpositive that was the highest quality original they could find. We were almost done when the production managed to locate the original negative. We scanned that at 4K and started the work completely over from scratch! That’s how committed to doing it right everyone was on the production.

A police spinner comes in to land in Ridley Scott's "Blade Runner"

Are all the changes in The Final Cut necessary? The question is moot. This version exists, so live with it. Personally, I like The Final Cut best out of all the versions of this timeless classic. But I’d be just as happy to watch the original when Blade Runner appears on cinema screens again next month. I’m just glad of the chance to submerge myself once more in that dark and dazzling world of future noir.

The reason for my enthusiasm is simple. When you leave a showing of Blade Runner, the only possible thing you can say is to echo the words of Roy Batty during the film’s closing scenes, as he sits on the rooftop beneath the tears of that endless, future rainstorm:

“I’ve seen things you people wouldn’t believe.”

What are your memories of Blade Runner? Were you there in 1982, or are you one of the millions who discovered this sci-fi classic later on home video, DVD or Blu-ray? Which version do you prefer?

And here’s the biggie … IS Rick Deckard a replicant?

L is for Lidar

In the VFX ABC, the letter “L” stands for “Lidar”.

Making movies has always been about data capture. When the Lumière brothers first pointed their primitive camera equipment at a steam locomotive in 1895 to record Arrivée d’un train en gare de La Ciotat, what were they doing if not capturing data? In the 1927 movie The Jazz Singer – the first full-length feature to use synchronised sound – when Al Jolson informed an eager crowd, “You ain’t heard nothin’ yet!”, what was the Warner Bros. microphone doing? You guessed it: capturing data.

Nowadays, you can’t cross a movie set without tripping over one of a dozen pieces of data capture equipment. Chances are you’ll even bump into someone with the job title of “data wrangler”, there to manage the gigabytes of information pouring out of the various pieces of digital recording equipment.

And in the dead of night, if you’re very lucky, you may even spy that most elusive of data capture specialists: the lidar operator.

Lidar has been around long enough to become commonplace. If you read behind-the-scenes articles about film production, you’ll probably know that lidar scanners are regularly used to make 3D digital models of sets or locations. The word has even become a verb, as in, “We lidared the castle exterior.” Like all the other forms of data capture, lidar is everywhere.

But what exactly is lidar? What does the word stand for, and how do those scanners work? And just how tough is it to scan a movie set when there’s a film crew swarming all over it?

To answer these questions and more, I spoke to Ron Bedard of Industrial Pixel, a Canadian company with incorporated offices in the USA, which offers lidar, cyberscanning, HDR and survey services to the motion picture and television industries.

Ron Bedard lidar scanning in Toronto for “Robocop” (2014)

What’s your background, Ron, and how did you get into the lidar business?

I was a commercial helicopter pilot for 17 years, as well as an avid photographer. During my aviation career, I became certified as an aircraft accident investigator – I studied at Kirtland Air Force Base in New Mexico. I also got certified as a professional photographer, and following that as a forensic photographer.

At my aircraft accident investigation company, we utilised scanning technology to document debris fields. We used little hand-held laser scanners to document aircraft parts, and sent the data back to the manufacturers to assess tolerances.

How did you make the leap then into motion pictures?

The transition wasn’t quite that abrupt. Local businesses started to find out that I had scanners, and we began to get calls, saying, “Hey, we make automotive parts, and we have this old 1967 piston head, and we want to start machining them. Can you scan this one part and reverse engineer it for us?” Or there were these guys who made bathtubs, who said, “We don’t want to use fibreglass any more.” So we scanned their tubs to create a profile for their CNC machine.

Carl Bigelow lidar scans a Moroccan market set

Let’s talk about lidar. What is it, and how does it work?

Lidar means light detection and ranging. It works by putting out a pulse, or photon, of light. The light hits whatever it hits – whether it’s an atmospheric phenomenon or a physical surface – and bounces back to the sensor, which then records the amount of time that it’s taken for that photon to return.
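The time-of-flight principle Bedard describes reduces to a one-line formula: the range is half the round-trip time multiplied by the speed of light. A minimal sketch (the 66.7-nanosecond figure is purely illustrative):

```python
# Lidar time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light in metres per second

def range_from_return_time(round_trip_seconds: float) -> float:
    """Convert a measured photon round-trip time into a range in metres."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds indicates a surface
# about 10 metres from the sensor.
print(range_from_return_time(66.7e-9))
```

The halving is the crucial detail: the photon travels to the surface and back, so the raw light-travel distance is twice the range.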

Does a lidar scanner incorporate GPS? Does it need to know where it is in space?

Only if your lidar sensor is physically moving, or if it is incorporated into the scanner system, because the lidar is always going to give you the XYZ information relative to the sensor. Most terrestrial-based lidar systems are predicated on the sensor being in a single location. If you’re moving that sensor, you have to attribute where that sensor is in three-dimensional space so you can compensate the XYZ values of each measurement point. That’s commonly used in airborne lidar systems.
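The compensation Bedard mentions is a rigid-body transform: each sensor-relative XYZ point is rotated and translated by the sensor's pose to land in world coordinates. A sketch with hypothetical pose values (a 90° yaw and a 5 m offset):

```python
import numpy as np

def sensor_to_world(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform Nx3 sensor-relative XYZ points into world coordinates,
    given the sensor's rotation matrix R and world position t."""
    return points @ R.T + t

# Illustrative pose: sensor yawed 90 degrees and placed 5 m along world X.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([5.0, 0.0, 0.0])

pts = np.array([[1.0, 0.0, 0.0]])  # one point 1 m ahead of the sensor
print(sensor_to_world(pts, R, t))  # lands at roughly [5, 1, 0] in world space
```

For a static terrestrial scan, R and t are fixed per scan position; for airborne lidar they change with every pulse, which is why those systems carry GPS and inertial units.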

What kind of data does the scanner output?

Every software suite does it a little differently, but they all start with a point cloud. We do offer a modelling service, but primarily what we end up providing our clients is an OBJ – a polygonal mesh created from the point cloud – as well as the raw point cloud file.

It sounds like a lot of data. How do you manage it all?

Our scanner captures over 900,000 points per second. And a large movie set may require over 100 scans. That generates a massive amount of data – too much for a lot of people to work with. So we provide our clients with the individual point clouds from each of the scans, as well as a merged point cloud that has been resurfaced into a polygonal mesh. So, instead of making the entire model super-high resolution, we create a nice, clean scene. Then, if they want some part at higher resolution, they let us know and we create it from the original raw point cloud. If they have the point cloud themselves, they just highlight a certain area and work from that.
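The "nice, clean scene" versus "raw point cloud" trade-off above is typically achieved by decimating the merged cloud. A common approach is a voxel grid: keep one averaged point per grid cell. A minimal sketch (the cloud and cell size are synthetic, not Industrial Pixel's pipeline):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce an Nx3 point cloud by averaging the points inside each voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, 3))
    for dim in range(3):  # centroid of each occupied voxel
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(0)
dense = rng.random((100_000, 3)) * 10.0          # synthetic 10 m cube of points
sparse = voxel_downsample(dense, voxel_size=0.5)  # one point per 0.5 m cell
print(len(dense), "->", len(sparse))
```

The client then works with the light, uniform cloud, and only requests the raw high-density data for the regions that need it.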

So you’re effectively giving them a long shot, together with a bunch of close-ups.


Is lidar affected by the weather?

Rain can create more noise, because anything that affects the quality of the light will affect the quality of the scan data. And wet surfaces have a layer of reflectivity on top. Then there are the effects of the weather on the technology itself. Our modern system has a laser beam that comes out of the sensor and hits a spinning mirror, bouncing the light off at 90°. So if you get a raindrop on that mirror, that can certainly affect where the photons are travelling to.

How do you get around that?

Well, here on the west coast, if you can’t scan in the rain, basically you’re not scanning from November until April! We’ve built a rain umbrella system for our scanners, so we can scan in the rain. We obviously can’t scan directly straight up, but we can point upwards at about a 60° or 70° angle, and all the way down to the ground.

Is cyberscanning an actor the same as lidar?

No, it’s completely different. You have to think of lidar as a sledgehammer – the point cloud generated is not of a high enough resolution to be able to capture all those subtle details of the human face. So when it comes to scanning people, there are other technologies out there, such as structured white light scanning or photogrammetry, which are better suited to the task.

Do you find actors are used to the process of being scanned now?

For the most part, I think they are. There’s still some caution, though. It’s not that the technology is new – it’s more about the ability to re-create somebody digitally. Some people have reservations about that, because they’re never sure how their likeness might be used in the future.

Do they worry about safety?

When laser-based systems first started being utilised on film, there was a lot more hesitation from a personal safety point of view. But the amount of ordinary white light that’s being emitted from our little hand-held scanners is less than a flashlight. I have had people say, “I can feel the scanner entering into me!” And I say, “No, you can’t!” So there is still a little bit of magic and mystery to it, but that’s only because people don’t know exactly what it’s doing.

A mailroom is lidar scanned to capture the conveyor system prior to creating digital set extensions

Tell us about photogrammetry.

With photogrammetry, you take enough photos of a subject that you have a lot of overlap. Then you use software to look for common points within each of the images – the software can tell where that pixel is in each image, and its relationship to each neighbouring pixel.

One of the challenges with photogrammetry is that there is no sense of scale. If you have one set of images of a full-scale building, and another of a miniature building, the software isn’t smart enough to figure out that one is smaller than the other. It just re-creates the three-dimensionality.

So you have to cross-refer that with survey data?

Yes. Or you try to place something in the images, like a strip or measuring tape, so that when you’re creating your photogrammetric model, you can say, “Hey, from this pixel to that pixel is one metre.” You can then attribute scale to the entire model. Lidar, on the other hand, is solely a measurement tool and accurately measures the scale.
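The "from this pixel to that pixel is one metre" step amounts to a single uniform scale factor applied to the whole reconstruction. A sketch, assuming a hypothetical model where a one-metre reference tape comes out 2.5 model units long:

```python
import numpy as np

def apply_scale(points, ref_a, ref_b, true_distance: float) -> np.ndarray:
    """Scale a point cloud so the distance from ref_a to ref_b
    equals true_distance (in metres)."""
    model_distance = np.linalg.norm(np.asarray(ref_b) - np.asarray(ref_a))
    return np.asarray(points) * (true_distance / model_distance)

# Arbitrary-unit reconstruction: the tape ends (first two points) are
# 2.5 model units apart, but we know they are really 1 m apart.
model = np.array([[0.0, 0.0, 0.0],
                  [2.5, 0.0, 0.0],
                  [5.0, 0.0, 0.0]])
scaled = apply_scale(model, model[0], model[1], true_distance=1.0)
print(scaled[2])  # the far point now sits 2 m from the origin
```

One known distance fixes the scale for the entire model, which is why a simple measuring tape in shot is enough; lidar never needs this step because its ranges are absolute measurements.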

When you’re working on a feature film, would you typically be hired by the production, or by an individual VFX company?

Every job is a little different. It usually works out to be about fifty-fifty.

Is there such a thing as a typical day on set?

No. Every day is a new day, with new challenges, new scenes, new sets, new people. That’s part of the beauty of the job: the variety. You’re not showing up to work Monday to Friday, 9 to 5, sitting in a cubicle and pushing paper.

Do you get a slot on the call sheet, or do you just scurry around trying not to get in people’s way?

If we’re doing lidar, nine times out of ten we’re there when nobody else is there. If we’re trying to create our digital double of the set with people running around, that creates noisier data and possible scan registration issues. So we do a lot of night work, when they’ve finished filming.

If we’re on location, scanning an outdoor scene downtown for example, usually the night-time is best anyway, for a couple of reasons. First, you’re going to get a lot less interference from people and traffic. Second, if there are lots of skyscrapers with glass facades, you can get a lot of noise in the scanning data as the sun is reflecting off the buildings.

You must be constantly up against the clock, having to get the scans done before the sets are struck.

Yes. A lot of times, we’ll actually be in there scanning while they’re breaking the set down! We just try to be one step ahead. We’re used to it – it’s just the nature of the business. There’s such a rapid turnaround now as far as data collection is concerned. You’ve just got to get in and get out.

So it’s all about mobility and fast response?

Exactly. One of the things that our customers really appreciate is our ability to be very portable. All of our systems – whether it’s cyberscanning or lidar – pack up into no more than three Pelican cases. And we can be on a plane, flying anywhere in the world.

Ron Bedard and Ian Galley lidar scan the 300-foot naval destroyer HMCS “Haida” in Hamilton, Ontario

Is it hard to keep up with scanning technology as it develops?

Oh, absolutely. We’re dogs chasing our tails. With today’s rapid advancements, if you can get three years out of a technology, maybe four, you’re lucky.

Is there any future or near-future piece of technology you’ve got your eye on?

I think photogrammetry is really making a comeback. It’s been used ever since cameras were invented, and from an aerial survey point of view since long before World War II. But it’s made a real resurgence of late, and that really has to do with the resolution of the sensors that are now available. Now that you’re talking high numbers of megapixels, you’re able to get much finer detail than you were in days past.

As these high-density sensors come down in price, and get incorporated into things like smartphones, I think we’ll see 3D photography – in combination with a sonar- or a laser-based system to get the scale – really hitting the market hard.

And what does the future hold for lidar?

I think flash lidar will become much more prevalent. Instead of a single pulse of light, flash lidar sends out thousands of photons at once. It can fire at a really rapid rate. They use flash lidar on spacecraft for docking. They use it on fighter jets for aerial refuelling. You’re starting to see low-cost flash lidar systems being incorporated into bumpers on vehicles for collision avoidance.

So what are the benefits of flash lidar for the film business?

When you’re trying to do motion tracking, instead of putting balls on people and using infra-red sensors, you can use flash lidar instead. It is much more versatile in long-range situations. You can create an environment with flash lidar firing at 24 frames per second, and capture anyone who walks within that environment. That’s something I know we’re going to see a lot more of in the future.

Alex Shvartzman uses a handheld structured light device to scan a horse

What’s the weirdest thing you’ve ever had to scan?

Everything’s weird. We’ve scanned horses. We’ve scanned dogs. The beauty of working in film is that one day we can be scanning a Roman villa, and that evening be scanning the set of some futuristic robot movie.

Animals are tricky because each one is different, and you never know how they’re going to react to the light source. We scanned around thirty horses for one particular job, and some of them were happy and docile, and some of them reacted as soon as the scanner started.

Another challenging question we were asked was, “Can you scan a boat that’s floating out in the open sea?” I thought about it and said, “Sure you can. You’ve just got to have the scanner move the same way the boat’s moving.” We built a custom rig so that the scanner was constantly moving with the boat, and we hung it out over the edge of the boat and scanned the whole hull.

Lidar providers are among the many unsung heroes of movies. Do you ever crave the limelight?

No. In the end, our job is to provide solutions for our customers. For us, that’s the reward. When they’re happy, we’re happy.

Cinefex 141 Cover Reveal

Cinefex 141 Cover

The wait is almost over. The brand new issue of Cinefex hits the newsstands next week. On the cover is the cybernetic star of Chappie, Neill Blomkamp’s action/thriller about a robot child prodigy kidnapped by criminals and raised within their dysfunctional family. Weta Workshop provided practical props and effects, along with special effects supervisor Max Poolman. Visual effects were created by Image Engine, Ollin VFX Studio and The Embassy VFX.

The other big movies featured in Cinefex 141 are The Hobbit: The Battle of the Five Armies, Jupiter Ascending, and Unbroken. So what are you waiting for? Order your copy now!

“ATROPA” – Q&A with Eli Sasich

"ATROPA" - science fiction short by Eli Sasich

Lone off-world detective Cole Freeman stumbles on a giant vessel adrift in the depths of space. It’s the ATROPA, the very ship he’s been pursuing, but something is wrong. He shouldn’t have caught up with it for another 98 days. Waking the vessel’s crew from cryosleep only deepens the mystery, and when the ATROPA collides with something unimaginably strange, confusion turns to disbelief … and the real adventure begins.

This cliffhanger note marks the end of ATROPA, a sci-fi short directed by Eli Sasich. The suspenseful climax is deliberate, because the film is in fact merely a proof-of-concept for a feature film currently being pitched to studios.

In this Q&A for Cinefex, Sasich discusses the making of ATROPA, his first film project since festival favourite HENRi, in which a robot discovers there may be more to life than mere artificial intelligence.

Where did the story concept for ATROPA originate?

The story really came out of a thought experiment: what would happen if you (literally) crashed into yourself in space? How might something like that be possible, and how would you deal with it? What would be the physical and emotional toll? It’s like a Twilight Zone episode – one of those classic sci-fi genre tropes – but the writer, Clay Tolbert, and I found a unique way into it.

Taking that as a jumping-off point, we then formed the story around our main characters, Cole and Moira. Strip everything else away, and it’s a love story: how two people rekindle a dying relationship under the most extreme circumstances – kind of like The Abyss. We have these big sweeping ideas about time and fate, but it’s the character story that really excites me.

You made the short film as a way of advancing the feature-length project. Why go to all that trouble?

As a director without a feature credit, I felt like I needed something to show I could effectively work with actors and small budgets. HENRi was like a master’s program for me – I learned a ton, and it took two years to complete – but it was a very different type of film. We only had a few actors, and I was dealing mostly with miniatures and effects. It was more precision craft and less spontaneous problem-solving. It’s all the same process, but the challenges were different. I really wanted to get back and shoot something more conventional again.

It’s risky – you have to make it look and feel like a professional feature film, but you have a fraction of the time and money. Being able to show an executive or financier what you are talking about is a huge advantage, as long as it lives up to expectations. Luckily, we had an awesome crew who stepped up to the challenge, and we were able to make something that portrayed the tone and mood I was going for.

"ATROPA" concept art

“ATROPA” concept art

Can you tell us anything more about the feature?

I can say that the film explores the ideas of fate and free will, actions and consequences. That’s one of the things I love about science fiction – the ability to explore big philosophical ideas in an organic way.

What stage are you at with the feature development?

We are actively pitching the feature right now – I’m really excited to say that we are in discussions with Pukeko Pictures to develop and produce it. Pukeko Pictures is a sister company to Weta Workshop; it was founded in 2008 by Sir Richard Taylor, Tania Rodger, and Martin Baynton. I couldn’t be more excited by the possibility of collaborating with such an amazingly talented and creative team.

What budget and timescale did you work to for the short film?

There was very little money for the short. We had three weeks of pre-production, we shot it in two days, and finished the entire film for right around $10,000. Most of that money came from HENRi sales, which I’m very thankful for.

During the production of the ATROPA short, did you apply any lessons you’d learned while making HENRi?

You learn from every project – sometimes the hard way. Keeping calm, trusting my intuition, and adapting to unexpected problems were important lessons from HENRi that will stick with me for the rest of my career. In a more technical realm, the post-production effects workflow was something I really got a good handle on with HENRi, and that helped immensely when it came time to finish ATROPA quickly.

"ATROPA" concept art

“ATROPA” concept art

Let’s talk about the design of the film. Who did the concept art?

We had eight or nine pieces of concept art done by artists from around the world. Ioan Dumitrescu worked mostly on the designs for the ATROPA and Cole’s ship, the Morinda. Mike Sebalj and Roger Adams did some really nice character and environment designs for us. We also had storyboards done for all our exterior space sequences – artist Jean Claude De La Ronde provided those.

Did you use any specific design cues for the two spaceships?

For the Morinda, I had a vague idea of the shape I was after. The articulated dual engines or nacelles were a concept I had for a different type of propulsion/steering mechanism. I thought it would be a unique and intuitive way to depict roll and pitch. That was really born out of the idea that there is no “up” or “down” in space. When the Morinda approaches the ATROPA, it’s not on the same plane, meaning Cole has to reorient his ship to the proper course. I really wanted to depict that, because it seems that every time I see an approach sequence in sci-fi, the two vessels are always perfectly aligned with each other.

The ATROPA was more difficult. We explored many different shapes and sizes – some were pretty out there in terms of design. Ultimately, it came down to finding something that fit within our world. We needed the ATROPA to quickly read as a large industrial ship to the audience. Obviously we took some design cues from the Sulaco from Aliens – but that bold, elongated shape seemed to read the best in the short amount of screen time.

You shot the film on the standing spaceship set at Laurel Canyon Stages in LA. Why did you choose that particular location?

I had visited the Laurel Canyon stages years earlier as a possible location for HENRi, long before we decided to use quarter-scale miniatures. I had always wanted to shoot there – it has that wonderfully gritty and grimy look that I love. It also perfectly evokes ‘70s and ‘80s sci-fi, which was the ideal world for ATROPA.

Did you adapt the Laurel Canyon set to give it your own personal stamp?

That set has been used in thousands of projects, so it’s pretty easy to spot once you know what you’re looking for. Since we couldn’t change the set in any way, other than some rearranging of props, it was really important to use lighting and shot composition to make the space our own. Our director of photography, Greg Cotten, did an amazing job of capturing the set in a different way than it’s usually shot.

"ATROPA" was shot on the standing spacecraft set at Laurel Canyon

“ATROPA” was shot on the standing spacecraft set at Laurel Canyon

For example, we made the conscious choice not to light from the ceiling grates, which creates amazing texture and patterns, but it’s also how everyone seems to light that set. We weren’t afraid of letting things fall off into darkness either – it created mystery and tension and hid some of the less camera-friendly elements of the space.

In terms of shot composition, we tried to keep things wide and cinematic wherever possible, and we motivated camera moves when we could, to keep things interesting. Specific original set pieces were also built and utilised to give us our own unique identity.

Tell us about some of the set pieces you brought in.

Our production designer, Alec Contestabile, designed and built the hologram board in Cole’s ship, the cryosleep pods, and the table in the mess hall, matching the look and feel of what already existed at Laurel Canyon. The hologram board was a wooden box with a glossy tabletop, and practical LED lights hidden in the inner lip. The LEDs helped sell the effect of the chessboard by providing interactive lighting on Cole’s face.

Production designer Alec Contestabile created a number of bespoke set pieces to enhance the Laurel Canyon set, including the holographic table in the cockpit of the “Morinda”

The cryosleep pods were made out of foam-core board, cardboard, and various pieces of junk – tubes and wiring – all attached to a wooden frame. They only weighed about 15 pounds, so one crewmember could move them. The table in the mess hall was fashioned by bolting two plastic pallets together and covering them with scrap pieces of Plexiglass.

We lit the table practically from the inside. Alec carefully placed six iPads on a shot-by-shot basis, which we used to loop tech graphics. It was nice to get some in-camera data screens, and it gave the actors something to look at and interact with. Alec worked wonders with the production design, all with essentially no money. He used junk and whatever odds and ends we could find lying around to create a completely believable world.

Eli Sasich and Anthony Bonaventura on the set of “ATROPA”

The performances are uniformly calm and measured. Did the cast get to read the whole feature script before working on the short, to help them build their characters?

It was important to me that everything seemed fairly routine to these characters, until the big reveal at the end – which is certainly not routine. Because the short had to set so many things up, it’s a heavily truncated version of the actual first act of the script. The dialogue had to get more information across, and things certainly happen faster. I did speak with each actor about the journey their character takes in the feature, but they never read the full script. The cast was fantastic, and they were great to work with. We had to shoot fast, down and dirty, and they were always prepared and willing to do so.

Visual effects for “ATROPA” were created by The Light Works

What was the workflow for the visual effects?

Tobias Richter and his team at The Light Works did all the exterior spaceship effects. After working with Jean Claude to storyboard the space sequences, we handed those boards off to Tobias and his team. They would return with low-resolution animatics of each shot, which I would then give notes on. The Light Works is located in Germany, so all of post was coordinated via email and Skype. It was a truly seamless process, and they did an amazing job for us.

For the few shots that included live action elements – like the pull-back from Cole’s ship – we coordinated ahead of time, providing photo reference and measurements. In terms of direction, I provided examples of various shots I liked. I would reference lighting and compositing elements from different films, and we would work off of those ideas. Interestingly enough, the finished shots wouldn’t feel right until we added imperfections – slight camera shake, lens distortion, grain and tasteful flares.

That pull-back shot from the Morinda is quite complex. How did you put it together?

It was a difficult shot for several reasons. First and foremost, we didn’t have the space to dolly back from the cockpit set, let alone make the turn around the side. So we shot the plate of Cole in the cockpit as a wide lock-off. Tobias and his team then projected that live action footage onto a 2D card within a low-poly CG cockpit set which they had modelled.

The pull-back was done in the computer, but since we didn’t have a perspective shift on our live-action footage, wrapping around the side of the ship posed many challenges. We ended up doing a hand-off to a CG double of Cole, using the cockpit window strut as a natural wipe once we reached about a 45-degree angle. The effect works fairly seamlessly, and was a really crafty way to fake a very complicated move.

How did you track the 3D chessboard graphics into the live-action plate?

Our VFX supervisor, Ryan Wieber, did the 3D chessboard and proximity alert display, as well as all of our compositing. He was able to track and create these completely believable shots without greenscreen – because there wasn’t enough room to light the screen – and usually without any tracking markers.

The chessboard hologram was an incredible effect. We had a set piece with built-in practical lighting, but the LEDs were visible in many shots, and quite distracting. Ryan ended up replacing the top of the chessboard in every shot, so the lights were covered. He then built the hologram projection in Adobe After Effects, utilising Element 3D for the chess pieces, together with layer upon layer of compositing tricks. The opening shot, where we pull out of the holographic chess piece, was a completely digital camera move up until the tilt-up to Cole.

For the cockpit shots, Ryan rotoscoped Cole and added in glass and stars. He has an incredible eye, and an amazing design sense – he also created all of our display graphics and screens. Ryan was instrumental in helping create a believable sci-fi world – I call him the magician!

The music has a grand, epic quality. Was that a deliberate creative choice?

I love film music – especially big orchestral scores. I think we’ve lost a bit of the art of film scoring today. Music has become filler noise, and the use of strong themes has strangely gone out of style. Our composer, Kevin Riepl – who also scored HENRi – feels the same way. Kevin and I share a passion for the same types of film scores, as well as the philosophy that themes should develop just like characters over the course of a story.

We certainly didn’t have the budget to record a live orchestra for ATROPA, but I really wanted that epic feel. Fortunately (and somewhat ironically), Kevin did the score for the videogame Aliens: Colonial Marines, which was recorded with a live orchestra. We had access to all the stems from that recording session, so he rearranged and remixed them, and added some electronic elements to create something new. The music is barely recognisable from the game, and it gave us the big, live, cinematic sound I was looking for. It’s another example of working within low-budget constraints and still finding ways to get what you want creatively. A bit of Aliens obviously snuck through, which is fine with me – it’s one of my favourite movies, and certainly an inspiration.

The film contains a reference to the Valley Forge, a cap that might have been worn by a member of the Nostromo’s crew … what other sci-fi in-jokes did you put in there?

Good eyes catching those Easter eggs! I like adding little in-jokes wherever possible. It’s a fun way to both acknowledge the projects that inspired you, and add little hidden elements for yourself and your friends. Most will never be seen, but there are a few more. For example, the hand-held case-file used by Cole displays “VL-426”, which is a reference to the planet “LV-426” from Alien and Aliens. In fact, all the crew ID numbers on the case-file bio pages are the original ID numbers of the Nostromo crew from Alien.

You mentioned Valley Forge – Cole’s last name, Freeman, also comes from Silent Running, in reference to the main character, Freeman Lowell, played by Bruce Dern. The logo for the ATROPA is actually the same logo for the Pythagoras ship from HENRi, only turned upside down with different colours. The sound of the Morinda’s engines was inspired by the speeder bikes from Return of the Jedi. Our sound designer, Michael Ault, created his own take on that classic sound by pitch-shifting elephant trumpets.

Holographic chess board – “ATROPA”

As well as opening the film, the chess game also appears during the end credits. Is that significant?

Chess is a battle of wits and a game of strategy; I liked the symbolism. I also wanted to foreshadow the ending in a subtle visual way: the chessboard is an exact mirror image of itself with the two sets of opposing pieces. The idea that Cole will be squaring off against himself becomes literal by the end of the short.

How do you feel about the short, now that you’re pitching the feature?

ATROPA was made possible by a very passionate and hardworking crew. The end result has definitely opened doors for us, and the response to the release online was overwhelming. We hope we can make the feature – the story goes to some really thought-provoking and unexpected places.

Do you have any other projects in development?

I have a few other projects at varying stages. I’m working with another writer on a really fun action/adventure film, which follows the oddball friendship of a couple of historical figures. It has the tone of Ghostbusters and Sherlock Holmes, with some steampunk design sense thrown in for good measure. It would be an absolute blast. I’ve also written a smaller indie film that deals with another historical figure, and a little-known fact about his death. It’s a passion-project for me, and something I’ve been kicking around for years. I’m interested in strong character stories, regardless of genre, time period, or setting.

ATROPA photographs and video copyright © Corridor Productions 2015.

Inspiring ILM

What drives people to work in the visual effects industry? The glamour? The technology? All those ravening monsters and exploding spaceships? Or is it just another job? In an ongoing series of articles, we ask a wide range of VFX professionals the simple question: “Who or what inspired you to get into visual effects?”

Here are the responses from the staff at Industrial Light & Magic.

Prehistoric Encounters

When asked what inspired him to get into visual effects, Michael DiComo, CG technology supervisor (ILM, San Francisco), was in no doubt, answering, “Simple: Jurassic Park. Seeing the effects in that movie made me realise that THAT was what I wanted to do for a living. The great thing is that, not only did I get hired by ILM only three years after Jurassic Park was released, but I also got to work on The Lost World: Jurassic Park as my second film project ever.”

Beverley Joy Ang, production engineer (ILM, Singapore), was also inspired by dinosaurs, but of a more cuddly kind.

“I remember stumbling upon this program called CorelMove during grade school,” Ang recalled. “I didn’t know how to do animations back then, but the CorelMove library had this pre-animated purple dinosaur that you could import into a scene. I had a lot of fun changing the backgrounds, adding shrubs and trees, and moving the dinosaur around. I think it’s that little purple dinosaur that kick-started my love for computer graphics.”

The prehistory of cinema has also had its influence on today’s VFX professionals.

The Wonderful World of Disney

“Colouring characters in all day – who wouldn’t love that job?” Betsy Mueller, ILM

“Growing up, my family and I used to watch The Wonderful World of Disney movies on Sunday evenings,” commented Betsy Mueller, lighting technical director (ILM, Vancouver).

“One episode was introduced with an old black-and-white clip in which Walt Disney explained what some of the traditional animation departments did. The Ink and Paint department fascinated me! Colouring characters in all day – who wouldn’t love that job? That was when I became hooked on the magic of making movies.”

Mad About the ’80s

Many people currently working in visual effects have a soft spot for the films of the 1980s, that golden age of fantasy and adventure movies that marks the high point of optical and photochemical effects.

One such person is Danielle O’Hare, senior manager of technical training (ILM, San Francisco), who remarked, “The reason I started working in VFX was because of movies like Indiana Jones, Back to the Future, and E.T.: The Extra-Terrestrial.”

Craig Hammack, visual effects supervisor (ILM, San Francisco), shared O’Hare’s love for this era, and also added a favourite from 1962 into the mix: “I loved the escapism and thrill of Star Trek II: The Wrath of Khan, and the absolute beauty of Lawrence of Arabia.”


“Gremlins” – actress Phoebe Cates serves her unusual clientele, while Chris Walas’s crew of puppeteers hunker down out of shot.

Kate Lee, layout artist (ILM, Vancouver), recalled a formative ‘80s moment with a little extra bite. “My earliest memory of a VFX movie scene was Gremlins – launching Mrs. Deagle out of the window in her wheelchair. I was too young to understand that the gremlins weren’t real, and so terrified that I couldn’t go anywhere on my own after dark for a while!”

You can’t ask a group of VFX professionals what inspires them without stumbling over at least one person who loves Star Wars.

“For me, it’s an insatiable appetite for watching movies, a love of the filmmaking process from conception to projection,” stated Daniel Cavey, production manager (ILM, San Francisco). “And of course my first movie theatre memory: my Mom taking me to see The Empire Strikes Back!”

Milestone Movies

Jurassic Park isn’t the only game-changing film that’s proved inspirational to the staff at ILM. Recalling a summer afternoon in 1992 in South Africa, Mike Jutan, animation/creature R&D engineer (ILM, San Francisco), said, “I was a 10-year-old, enthusiastically eating a snack and watching Terminator 2: Judgment Day. Somehow, I’d convinced my parents it was okay to watch it: ‘Don’t worry Mom, this version was edited for TV!’ The moment where the T-1000 melts up from the checkerboard floor cemented my life goals in a matter of seconds. I loved computers, I loved movies, and with this – the single most awesome visual effect of all time – I loved ILM.

“From there, my career and education goals revolved around combining math, movies and computer science. Fifteen years later, an ILM recruiter called and asked if I’d ever ‘considered working at ILM’. Not attempting to hold back any glee, I laughed, ‘Yes – only since I was 10 years old!’”


“The moment where the T-1000 melts up from the checkerboard floor cemented my life goals in a matter of seconds” – Mike Jutan, ILM

Colette Mullenhoff, R&D engineer (ILM, San Francisco), was also seduced by James Cameron’s metallic assassin from the future: “I always had my sights set on computer graphics for entertainment, but watching the T-1000 liquid metal cyborg in Terminator 2: Judgment Day sealed the deal.”

Another seminal movie for many VFX professionals is The Matrix. “When The Matrix came out, it inspired my high school class to re-create the 360 degree bullet-time scene,” explained Kate Lee, layout artist (ILM, Vancouver). “But we could only improvise with one person holding a camcorder in a car that was spinning around another person, who was leaning back and ever-so-slowly throwing his arms in the air. Clearly there was much to be learned about the technical side of the amazing images!”

Outside the VFX Box

It isn’t always a love of visual effects that draws people into the industry. Craig Hammack might be a Star Trek fan, but he’s also inspired by architecture. “I have a love for the kind of experience that can be evoked by the light and form of architectural spaces. Architects like Louis Kahn, Le Corbusier and Carlo Scarpa capture my imagination through design. While searching for a way to create experiences myself – without the need for understanding civil engineering codes and electrical schematics – I discovered computer graphics, and then visual effects.”

For Johan Thorngren, CG supervisor (ILM, San Francisco), it was all triggered by a passion for military aircraft. “I grew up near a military airbase and loved watching the jets from afar,” he remembered. “I built the kit-models of military airplanes that were available at the time. Due to a random chance in my previous career, I got hold of a copy of 3DSMax and picked up model-building as a hobby again – this time digitally. This led me to explore rigging and the FX aspects related to the models, as well as propelling me into the rendering and shading side of things. So I very quickly started to get more interested in making synthetic things look real.”

An aptitude for computer science can help lead to a career in visual effects, but on its own it may not always be enough – as the experience of Wajid Raza, technical director (ILM, San Francisco), proves. “In the late ’90s, when I was in middle school, my parents got me a Pentium computer,” Wajid recalled. “I was instantly hooked on the graphics packages – including an early version of Adobe Photoshop. A few years later, still fascinated with computers, I enrolled in an undergraduate computer science program.

“But during my second year of college, I started to get bored – it was not as creatively satisfying as I had hoped it to be. That was when Peter Jackson’s The Lord of The Rings came out. I was floored with the whole experience – the CG characters and epic battle scenes. I spent the next two years finding out everything about VFX, and eventually landed my dream job at Industrial Light & Magic.”


“The Lord of the Rings” – “I was floored with the whole experience – the CG characters and epic battle scenes” – Wajid Raza, ILM

However, the ultimate outside-the-box story comes from Jon Alexander, compositing supervisor (ILM, San Francisco), who puts it all down to, well, a higher power.

“I bounced around four universities, studying different sorts of engineering, but my heart was not in it,” commented Alexander. “My parents suggested I enrol at Mary Manse, the small Catholic college where my Mom was on the faculty, because tuition would be free. But I wanted to go to a film school. My Dad said, ‘Your Mom’s praying that you just get a degree.’ I snottily replied, ‘I’m praying to go to film school.’ But I enrolled anyway.

“A month or so later, the Ursuline nuns who ran the school said that, after much prayer and because of the financial situation, it was God’s will that the school was closing after 50 years. My Dad called me up and said, ‘Okay, I guess it’s God’s will you go to film school, but you’ve really pissed off a bunch of nuns!’”

The Final Effect

Whatever inspired these ILM-ers to get into the business in the first place, it’s clear that, years later, their passion remains strong.

“To this day, whenever we get a new Jurassic movie in house, I get all jazzed up by the artwork, dinosaur maquettes, motion and lighting tests,” remarked Michael DiComo. “It makes me want to roll up my sleeves and be a shot-lighter again.”

“In 2003, ILM called me up and asked if I wanted to come and work on Star Wars: Revenge of the Sith,” recalled Johan Thorngren. “I was a bit hesitant, until I heard that ILM had a department that allowed people to work in many different areas at once. Now I get to jump into many different aspects of the work, and I find it really satisfying being part of a team responsible for a given shot or sequence of shots.”

Danielle O’Hare concluded, “The reason I’ve stayed at ILM is because of the incredibly talented and generous population of artists, producers, and engineers. These are people at the top of their game, who are more than happy to share what they know with their peers. It makes my job as a training manager very easy, and a whole lot of fun.”

For some, however, working in VFX poses one problem that can be insurmountable. As Kate Lee quipped: “The biggest challenge is to get my parents to understand what I do for living.”

Industrial Light & Magic was founded by George Lucas in 1975. Since then, ILM has created visual effects for more than 250 feature films, notably the movie franchises Transformers, Iron Man, Harry Potter, Indiana Jones, Jurassic Park, Pirates of the Caribbean and, of course, Star Wars. ILM has offices in San Francisco, Singapore, Vancouver and London. Thanks to all the staff from ILM who contributed to this article.

Special thanks to Greg Grusby. “Terminator 2: Judgment Day” photograph copyright © 1991 by Carolco Pictures, Inc. “Gremlins” photograph copyright © 1984 by Warner Bros., Inc. “The Lord of the Rings: The Fellowship of the Ring” photograph copyright © 2001 by New Line Cinema.

“Interstellar” Wins VFX Oscar

Cinefex 140 "Interstellar" cover

Christopher Nolan’s space epic Interstellar has won the Academy Award for Best Visual Effects at the 87th Academy Awards, in a glittering ceremony held at the Dolby Theatre, Hollywood & Highland Center on February 22, 2015.

The award was collected by Paul Franklin, Andrew Lockley, Ian Hunter and Scott Fisher in recognition of the ground-breaking visual effects images created by Double Negative, with on-set special effects orchestrated by Scott Fisher, and other practical effects – notably a suite of large-scale spacecraft miniatures – by New Deal Studios.

Complete list of nominees:

  • Interstellar – Paul Franklin, Andrew Lockley, Ian Hunter and Scott Fisher
  • Guardians of the Galaxy – Stephane Ceretti, Nicolas Aithadi, Jonathan Fawkner and Paul Corbould
  • X-Men: Days of Future Past – Richard Stammers, Lou Pecora, Tim Crosbie and Cameron Waldbauer
  • Captain America: The Winter Soldier – Dan DeLeeuw, Russell Earl, Bryan Grill and Dan Sudick
  • Dawn of the Planet of the Apes – Joe Letteri, Dan Lemmon, Daniel Barrett and Erik Winquist

You can read the full story behind Interstellar’s award-winning visual effects in Cinefex 140. Oh, and while you’re at it, why not catch up on the rest of the nominated films in our previous two issues – links below.

K is for Kinematics

In the VFX ABC, the letter “K” stands for “Kinematics”.

If you’ve ever animated a creature using a piece of 3D software like Autodesk’s Maya or 3DSMax, you’ll know all about kinematics.

If you haven’t, here’s a quick primer …

Imagine you’re going to animate a troll. Just a regular troll: twelve feet tall with massive hands and a bad attitude. Let’s call him Tarquin.

The scene you’re going to animate requires Tarquin to reach out his hand and throttle a nearby dwarf. The dwarf’s name, by the way, is Doug.

Forward Kinematics

One way to perform the task is by using forward kinematics.

This is very tricky. Forward kinematics requires you to animate Tarquin’s arm starting from the shoulder and working your way out. In other words, you move his upper arm a bit, then adjust his forearm, then proceed to his hand, and finally manipulate those fat troll fingers.

The reason this is tricky is that what you really want is to make sure Tarquin’s fingers connect with Doug’s neck in exactly the right place, at exactly the right time. That’s a tough call when those fingers are always the last appendage on the list of things you move.
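To see why the fingers come last, here’s a minimal sketch of forward kinematics in Python. Tarquin’s arm is reduced to a hypothetical 2D chain of three segments (upper arm, forearm, hand); the lengths and angles are invented for illustration. Each joint’s rotation accumulates down the chain, and the fingertip position simply falls out at the end:

```python
import math

def forward_kinematics(segments):
    """Walk the chain from the shoulder outwards: each joint's rotation
    is added to the running angle, and each bone's length is projected
    along that angle to find where the end of the chain lands."""
    x, y, angle = 0.0, 0.0, 0.0
    for length, joint_angle in segments:
        angle += joint_angle              # rotations accumulate down the chain
        x += length * math.cos(angle)     # project this bone along the chain
        y += length * math.sin(angle)
    return x, y

# Upper arm, forearm, hand: (length, angle relative to the previous bone).
arm = [(4.0, 0.0), (3.0, math.radians(45)), (1.0, math.radians(45))]
print(forward_kinematics(arm))  # the fingertip lands at roughly (6.1213, 3.1213)
```

Move any joint near the shoulder and every value downstream changes – which is exactly why hitting Doug’s neck by adjusting angles one at a time is such fiddly work.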

Inverse Kinematics

If all that sounds like too much hard work, you might prefer to fall back on inverse kinematics.

This is much more satisfactory. With inverse kinematics, you get to focus entirely on Tarquin’s hand, grabbing it with your cursor and moving it from its starting position to, you guessed it, the waiting neck of the poor, doomed Doug.

As for the rest of Tarquin’s arm, you simply rely on the software knowing how all the joints are interconnected, and trust it to move the entire limb accordingly. It’s like moving the hand of a jointed puppet and letting the laws of physics do the rest.
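Under the hood, the software is doing that geometry for you. Here’s a sketch of the classic two-bone analytic solution in Python – given where you drop Tarquin’s hand, the law of cosines recovers the shoulder and elbow angles. (A production rig works in 3D and handles joint limits and preferred bend directions; the bone lengths and target here are made up for illustration.)

```python
import math

def two_bone_ik(tx, ty, l1, l2):
    """Given a hand target (tx, ty) and two bone lengths, solve for the
    shoulder and elbow angles that place the hand on the target."""
    d = math.hypot(tx, ty)
    # Clamp the target into the reachable range of the arm.
    d = max(abs(l1 - l2), min(d, l1 + l2))
    # The elbow bend follows from the law of cosines on the triangle
    # formed by the two bones and the shoulder-to-target line.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Aim the shoulder at the target, then correct for the bent elbow.
    shoulder = math.atan2(ty, tx) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def hand_position(shoulder, elbow, l1, l2):
    """Forward check: where the hand actually ends up for these angles."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

Drop the hand on a target and the solver hands back both joint angles at once – the animator never touches the elbow directly.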

Of course, it’s not quite as simple as that. If you want to get any kind of expression into the movement (and, being an animator, that’s precisely your aim) you have to go in and make adjustments to the gross movement that’s been determined by the software. You know, all those small accelerations and delays, not to mention the unexpected muscle twitches caused by Tarquin’s predilection for strong ale.

If you have a friendly rigging team, they might set up certain secondary movements to happen automatically. But it’s still up to you to coax a performance out of your troll.

If you want to learn more about kinematics, you’ll need to dig deep into a software tutorial, like the one in this Maya training video:

However, before you get lost down a rabbit hole filled with IK handles and pole vectors, let’s take a moment to consider the long and illustrious history of the word kinematics. Along the way, we might learn what it actually means.

Kinetographs and Kinetoscopes

One of the earliest devices developed for the presentation of moving pictures was the Kinetoscope. Both it and its counterpart, the Kinetograph, were created in the US at the Edison Lab during the 1890s.

Edison’s Kinetoscope, one of the earliest motion picture viewers, was developed during the final decade of the 19th century.

The Kinetoscope presented moving images to its viewers by causing a strip of perforated celluloid film to pass in front of a light source at high speed, with each successive image on the film being isolated by a moving shutter. If that sounds like a movie projector to you, you’re nearly right.

Although the Kinetoscope contained all the essential components of a typical film projector, it was actually a peephole device, and thus could be viewed by only one person at a time.

As for the Kinetograph, that was the camera used to create the celluloid images in the first place.

By the turn of the century, a number of other inventors had jumped on the motion picture bandwagon, including Louis and Auguste Lumière, whose Cinématographe machine was capable of displaying projected moving images to a large audience.

Kinematics in the Kinema

The root of all these words is the Greek word “kinema”, which means “motion”. As with many Greek words, during its journey across Europe and beyond, the “k” has been transformed into a “c”.

Which is why nobody goes to the kinema any more.

However, you can still find the word “kinema”, in all its derived forms, if you look hard enough. The technical and craft organisation that is the British Kinematograph, Sound and Television Society (BKSTS) – originally formed in 1931 as the British Kinematograph Society – is still going strong.

Then there’s the American Society of Cinematographers (ASC), a non-profit organisation dedicated to the art of filmmaking. Its magazine, American Cinematographer, was first published in 1920. It’s still going strong too.

Finally we have kinematics, that esoteric aspect of the modern art of animation, the mastery of which demands the application of both artistic sensibilities and technical smarts.

And the name of which contains a pleasing echo of the long history of the motion picture craft.

“The Crossing” – VFX Q&A

John Woo's "The Crossing" features environmental visual effects shots by Tippett Studio

The Crossing is a two-part epic disaster movie directed by John Woo (Red Cliff, Mission: Impossible II, Face/Off), set during the Chinese Civil War, and climaxing with the sinking of the luxury liner Taiping in 1949. In the first film, multiple plotlines track the interweaving fortunes of a number of characters through romance and war, setting them up to converge with catastrophe in the closing instalment. Part 1 was released in China in December 2014, with Part 2 due for release in May 2015.

Tippett Studio contributed a number of visual effects shots to The Crossing: Part 1, most of them involving large-scale environments. Key sequences include a re-creation of the bustling city of Shanghai, scenes of the Taiping at sea, and a dramatic battlefield flypast.

In this exclusive VFX Q&A, Tippett Studio’s environment art director, Kent Matheson, reveals the painstaking process of bringing to life the historic events revisited by Woo in the film that’s been described by some as “the Chinese Titanic”.

How did you get involved with The Crossing?

Tippett Studio had worked with John Woo on Red Cliff, so we were on his radar. The connection for The Crossing was made by his post-production supervisor in Beijing, Andy Chen. We had an initial kick-off with John before he was consumed with all the other facets of the film – after this we worked through Andy, who would show our work to John and get his notes back to us.

How many shots did you deliver, and who was on the team?

We delivered ten shots during a two-and-a-half month period. It was a very tight schedule in which we had to research, develop, and execute the shots somewhat simultaneously.

We had a crew of about 20 on the project, divided into separate teams for each environment. The visual effects supervisor was Chris Morley, with Ken Kokka and Yimi Tong producing. On the big Shanghai shot, we had Ross Nakamura as the lead compositor, and Chris Paizis was our head layout guy and scripts wizard. Ben Von Zastrow was our lead technical artist, and Darin Hilton and Pamela Saad contributed immensely.

Let’s talk about the big environment shots. What did they comprise?

There’s a shot that lifts up from the street level of Shanghai’s slum area and travels completely across the city towards the wealthier area of the Bund. It’s kind of a signature image for the film. It describes the contrast between the different levels of the culture – the extreme disparity between the wealthy and the poor areas.

We also created a series of shots of the ship, Taiping, sailing on the open ocean, using plates that were filmed in a studio backlot. In one shot, the camera follows several seagulls as they traverse the length of the ship, passing over the crew and passengers on the various decks before reaching the lead character standing near the prow of the ship. The shot involved water simulations, atmospherics, a full CG ship blended into the plate, and a whole lot of fancy comp work.

In another shot, we explode out from near the barrel of a large cannon to travel over an enormous battlefield full of advancing soldiers and tanks. This began as an aerial plate showing only a few soldiers and a single tank; we filled the land with tanks and CG soldiers, and filled the sky with exploding shells.

Watch Tippett Studio’s video breakdown of the Shanghai establishing shot:

How did you go about researching 1940s Shanghai for the big establishing shot of the city?

Shanghai at the time was an amazing mix of cultures and styles, with aspects of the old world and new mixed together, so we found photos that really captured it well. Hooray for the amazing resource of what people share on the internet!

We collected everything we could – not just the obvious buildings on the Bund, but also old signage, dock photos, posters, people, cars, boats, bicycles, streetlamps, flags and clothing. We wanted to have a very solid idea of each district of the city, so that all the parts we created would properly support the final look and be accurate to the place and time.

Aerial views gave us a sense of the textures and densities of the different regions of the city, while street-level images showed us what we needed to properly detail it. In some cases, we were able to use the reference to directly inform the shot creation. For example, we projected an archival image of the Wusong River on to our CG river plane, and dressed in the boats and supplies in the positions shown on the picture. Chris Morley loved the natural chaos of the photograph, and didn’t want to lose it. You can’t make that stuff up!

How did you begin laying out your CG model of the city?

One of the things we had found in our research was original maps of Shanghai from 1940, so we based our initial layout work on these. We located the key areas needed for our shot, and created a very simple representation of the city, laying in the rivers and bridges in the correct places and using blocks for city buildings and simple models gathered from SketchUp for the landmarks.

We also blocked in the camera move at this stage. We explored various paths, ending up with a move that pulls up and over the secondary river in Shanghai, glides over the commercial district toward the Bund, and finally drops into an upper window of the Peace Hotel.

What was the next stage, once the layout was approved?

We used Esri CityEngine to lay out the streets. We built only what would be seen in the shot, matching to the maps but also fudging a bit where we wanted to create dramatic focuses. This became the base on which we placed the buildings and details.

Shanghai CG model created by Tippett Studio for "The Crossing"

How did you create the various assets needed to bring the layout to life?

We had open categories for assets such as “boats”, “cars”, “trucks”, “props” and so on. We divided the buildings into different types – “residential”, “commercial”, “industrial” – and had different styles within those, from older wood structures to more modern ones. These were set up in different configurations and groupings, from single buildings to whole neighbourhood blocks. At first we used a fairly random scattershot approach, but we became more focused as the categories filled up.

In all, we ended up with around 375 separate assets feeding into the shot. All of these were created as fully 3D models, viewable and renderable from all angles. But we were careful to control the level of detail – we built and laid everything out very much with the mentality of a matte painter. And the requirements of the shot meant that none of the assets needed to be built to what we would usually consider a “hero” level of detail.

What software did you use to build the models?

Many of the models began life in SketchUp, but all were cleaned up, textured and shaded in Maya. Every building and prop was set up with the same mapping inputs, and many shared shader settings – this allowed us to automate and regulate many of the processing tasks.

With so many objects to bring together, we decided to radically simplify the UV and texture process. Therefore, almost all of the objects were baked down to a single UV set. We decided on four basic shader inputs, and then used a single map to drive each of these. This let us speed up not only the creation process, but also the rendering times and memory usage further along the line.

With the models constructed, how did you begin assembling the shot?

We were handed the live-action plates of the head and tail shots – the “bookends” – and given a verbal description of the shot, along with a rough idea of the timing John Woo had in mind.

We broke up the city into different sections and subsections, each of which could be worked on and published separately. Anyone on the team could jump in and work on a specific area as needed, so we were able to “multi-thread” the layout and population effort. We had a few hitches from time to time – cars driving over people or people walking into buildings – but all in all it worked out well.

We added and animated cars and people into the streets as one of the very last stages. The people were realised via very simple models, pre-animated with various walk and talk cycles before being placed into the scene. Additional life was added with flags and other minor animated pieces.
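A common trick when scattering pre-animated cycles like this is to give each agent a random time offset so the crowd does not move in lockstep. The short Python sketch below shows the idea; the function and parameters are hypothetical and purely illustrative.

```python
# Hypothetical sketch of scattering simple crowd agents along a street,
# each with a random offset into its pre-animated walk cycle so the
# motion reads as varied rather than synchronised. Illustrative only.
import random

def scatter_pedestrians(count, street_length, cycle_frames=32, seed=7):
    """Place `count` agents at random positions, each starting its
    walk cycle at a different frame."""
    rng = random.Random(seed)  # fixed seed keeps the layout repeatable
    agents = []
    for i in range(count):
        agents.append({
            "id": i,
            "position": rng.uniform(0.0, street_length),
            "cycle_offset": rng.randrange(cycle_frames),
        })
    return agents

crowd = scatter_pedestrians(200, street_length=400.0)
```

Seeding the random generator keeps the scatter repeatable between publishes, which matters when several artists are working on adjoining city sections.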

Shanghai CG model created by Tippett Studio for "The Crossing"

Describe the process of rendering out the scene.

All the separate city sections were collected into a master scene file, and the final rendering was done using V-Ray. We made some touch-ups using Glyph’s Mattepainting Toolkit, but most of the scene’s look and detail was handled via the raw CG assets. We also relied heavily on lighting and atmosphere.

The renders were split into various passes and assembled using Nuke. Early on in the process, we had identified several key frames that best represented the major elements of the shot; using these, we’d made well-defined key art representing what Chris Morley wanted to see. That was what we presented to John Woo.
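The layered assembly that Nuke performs on render passes rests on the standard premultiplied “over” operation. As a minimal, self-contained illustration (not Tippett’s actual comp script), here is that operation in plain Python:

```python
# A minimal sketch of the premultiplied "over" operation that underlies
# layered compositing of render passes. Illustrative only; in production
# this happens per-pixel inside Nuke's Merge nodes.

def over(fg, bg):
    """Composite a premultiplied foreground pixel over a background pixel.

    Each pixel is (r, g, b, a) with premultiplied colour:
        out = fg + bg * (1 - fg.alpha)
    """
    a = fg[3]
    return tuple(f + b * (1.0 - a) for f, b in zip(fg, bg))

# Stacking passes back-to-front: base city render, then haze, then glow.
pixel = (0.1, 0.1, 0.2, 1.0)            # opaque background render
for layer in [(0.05, 0.05, 0.05, 0.3),  # atmosphere/haze pass
              (0.2, 0.1, 0.0, 0.5)]:    # light-glow pass
    pixel = over(layer, pixel)
```

Because “over” is associative, the passes can be regrouped and rebalanced without re-rendering, which is exactly what makes key-art-driven comp adjustments practical.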

We used the key art throughout the process as a reference for lighting and detailing, but it was particularly useful in the comp stage as the rendered layers were assembled. What really helped was having a strong vision of what the shot was going to look like and sticking to it. Chris’s background is in compositing, so he had a strong feeling for how he wanted the elements to come together.

Were any significant changes made to the shot as your work progressed?

We could have adjusted quite easily to camera changes, since we’d built our assets to be used wherever we needed them. But it wasn’t necessary.

However, John Woo liked what was happening in the shot so much that he asked us to extend its duration by about 15 percent. We loved this, because the slower camera let us see more of our work!

Tippett Studio is strongly associated with character and creature work. Will you be doing more of these big environments in the future?

We are known for our rich history of animated creature work, but we’ve been doing environment work going back to The Matrix Revolutions, Constantine and Starship Troopers. Outside the studio, however, that side of our work has always been viewed as secondary to the creatures.

During the last five years, we’ve really amped up our push to take on more environment work, and improved our pipeline accordingly. We have a strong artistic vision and processes in place that allow us to bring large shots together quickly and flexibly, and we’re definitely looking to build on that more and more. The Shanghai flyover shot was a fantastic opportunity to prove that Tippett can quickly pull off crafted, large-scale environments without a creature in sight!

Currently we have a really exciting project in-house that involves flying over various incredibly dramatic landscapes, cloudscapes and cityscapes. All in all, it’s about five or six minutes of footage. It’s really fun stuff!

Did you draw any lessons from your work on The Crossing?

It was a whirlwind of an effort, and a lot of work came together very quickly. Some things we would do differently, of course, mostly in the area of what tech we would choose. The render tech we used on The Crossing was V-Ray, and while it worked out incredibly well for us, we’re currently looking into a program called Clarisse, which functions a lot like Katana but with many more bells and whistles for environment work. It has everyone very excited.

Overall, I think everyone was really pleased with how smoothly the team worked together. We have experienced people in place who respect each other’s involvement, and I think we made all the right choices for the time and effort.

"The Crossing" poster

Special thanks to Niketa Roman. Images and video courtesy of Tippett Studio.

“Interstellar” Scientific Papers Published

Interstellar Black Hole - Double Negative

Lots of films employ scientific advisors. But rarely do the filmmakers end up advising the scientists.

However, that’s exactly what happened today, 13 February 2015, when the scientific paper “Gravitational Lensing by Spinning Black Holes in Astrophysics, and in the Movie Interstellar”, co-authored by Professor Kip Thorne and Double Negative’s Oliver James, Eugénie von Tunzelmann and Paul Franklin, was published in the Institute of Physics Publishing’s journal “Classical and Quantum Gravity”.

The paper describes the innovative computer code used to generate the film’s images of a wormhole and black hole, together with their backdrop of stars and nebulae. Using the code, the Double Negative team discovered that, when a camera is close to a rapidly-spinning black hole, spatial peculiarities known as caustics create multiple images of both individual stars and of the thin, bright plane of the galaxy in which the black hole resides. This is the first time the effects of caustics have been computed for a camera near a black hole.

In order to effectively visualise the caustic effects without unwanted flickering, the Double Negative team decided to abandon the standard rendering approach of using a single light ray for each pixel. Co-author of the study and chief scientist at Double Negative, Oliver James, explained:

“To get rid of the flickering and produce realistically smooth pictures for the movie, we changed our code in a manner that has never been done before. Instead of tracing the paths of individual light rays using Einstein’s equations—one per pixel—we traced the distorted paths and shapes of light beams.”

Co-author of the study, Kip Thorne, added:

“This new approach to making images will be of great value to astrophysicists like me. We, too, need smooth images.”

Oliver James continued:

“Once our code – Double Negative Gravitational Renderer (DNGR) – was mature and creating the images you see in the movie Interstellar, we realised we had a tool that could easily be adapted for scientific research.”

A second, complementary paper by the same authors, “Visualizing Interstellar’s Wormhole”, is available on both the arXiv and Double Negative websites, and will be published soon in the American Journal of Physics. It describes the creation of the Interstellar wormhole, and highlights the variety of study opportunities offered by the film for general relativity students.

“Life After Pi” Wins Documentary Award

Life After Pi Main Title

Life After Pi has won the award for Best Documentary Short Film at the 30th Annual Santa Barbara International Film Festival. Directed by Scott Leberecht and produced by Christina Lee Storm, the film chronicles the collapse of the visual effects studio Rhythm & Hues in 2013. In a bitter twist of irony, even as news of the collapse was breaking, Rhythm & Hues won the Academy Award for its VFX work on Ang Lee’s Life of Pi.

Here’s what Cinefex founder, Don Shay, had to say about the film:

“Frank, but not inflammatory, Life After Pi takes us behind the scenes of the ironically-timed demise of Rhythm & Hues, and puts a very human face on an industry-wide tragedy that finds award-winning visual effects companies struggling to survive and talented effects artists leading migrant lives in search of gainful employment – all in crucial support of movies that make billions at the box office.”

Speaking about the documentary’s award win, Leberecht said:

“As a twenty-year veteran of the visual effects industry, I am very happy that our film is being acknowledged and seen by so many. I carry a deep sense of gratitude for the brave individuals who participated in the film and went on record. For a brief moment, we broke through the culture of fear. My hope is that the conversation continues.”

Storm added:

“It’s a bittersweet win. We’re proud to have earned this recognition, but we’re also mindful of the human impact the events had upon the people featured in the movie, our co-workers and friends. We hope the film continues to shine a light and contribute to a better business model for all concerned in the film industry.”

Storm and Leberecht are currently developing a feature length documentary exploring the industry’s transition from optical to digital techniques in movie special effects. You can learn more about this – and Life After Pi – at the Hollywood Ending website.