Launching the Cinefex Classic Collection

Cinefex Classic Collection for iPad

Stop your grinnin’ and drop your linen! Yes, it’s true: the entire Cinefex back catalogue is now available to download for iPad!

Every issue of the quarterly visual effects magazine – starting with issue 1 from 1980 – has been digitised, with searchable text and the original images restored and, where possible, enhanced. The issues are presented both in their original magazine layout and in a tablet-friendly scrolling view. Produced by New Scribbler Press, in collaboration with Cinefex, the Cinefex Classic Collection is the essential resource for visual effects professionals and fans alike.

If you’ve any doubt as to the mammoth scale of the effort it’s taken to bring every Cinefex ever published to the iPad, check out New Scribbler’s stats on exactly what the collection comprises:

  • 126 issues
  • 8,970 magazine pages
  • 31,141 photos/ads
  • 45.7 million characters

Phew! With all that visual effects history at your fingertips, you may never leave your couch again.

Those people who helped fund the Cinefex Classic Collection project via Kickstarter should already have received a code to release their rewards. If you aren’t one of the original backers, don’t worry: you can download the app for free and start your collection now. Each classic issue is priced at just $4.99, and the option to download the complete collection in one purchase is coming shortly!

Here’s what Cinefex publisher Don Shay had to say about the launch:

I tend to be an old-school paper-and-ink kind of guy, but I’m in awe of how the Cinefex Classic Collection has rejuvenated our back-issue catalogue, with restored photos looking better than they ever did, and with a host of digital features – the most awesome being an intelligent search function that can scan the entire collection and put you on the page you want within seconds. With nearly 9,000 pages of articles, that’s an invaluable tool for researchers or effects artists – or even those of us on the Cinefex staff. Our hats are off to New Scribbler for delivering on their vision – in spades!

The CG Elephant in the Old School Room

We’ve been hanging out together for a while now. I’ve said some things. You’ve answered back. We’ve drunk coffee, occasionally downed a few glasses of beer. I think we can call ourselves friends. But, if this relationship’s going to move forward, there’s a question we’ve got to get past.

In short, there’s an elephant in the room.

The question’s a simple one. Nevertheless I hesitate to ask it because it frequently sets people off ranting. But fortune favours the bold, so here goes:

“Which are better? Modern CG visual effects, or the old school practical variety?”

I’m going to throw this hot potato high in the air by offering you my own personal opinion. To stop myself rambling on, I’m limiting myself to just one sentence:

When it comes to visual effects, I want it all, which means I want to be amazed by spectacle, entertained by storytelling and misdirected by cleverness, and quite frankly I don’t care whether you achieve that by wielding gaffer tape and piano wire or juggling geometry and pushing pixels, just show me something that looks freakin’ fantastic, and at the same time respects the truth that visual effects as a discipline has a long, long history that informs every single decision a VFX artist makes even today – especially today – and which reflects my own experience as someone who once made short animated films for theme parks using an ancient iteration of 3DS Max, creating digital models and camera setups that I knew – just knew – were influenced by all the old school techniques I’d ever read about in the books and magazines and (of course) in Cinefex, so much so that on one occasion when I had to rework a shot of a space station to incorporate an astronaut waving through a window (don’t you just love those clients who think you can add an entire animated human being into a shot in the blink of an eye?) – a change so last-minute that my best option was to replicate the camera move on a stock figure driven by a motion-captured waving gesture from an equally stock library and use a hasty travelling matte to comp the result into my already-rendered exterior, going in frame by frame with Photoshop to do a little extra tweaking – on that occasion what I had going through my head was not a breakdown of the digital world I was manipulating, but a powerful sense of how this humble shot held echoes of a thousand similar shots created over countless years by practitioners infinitely more skilled than me, whether it be tiny wooden people planted physically on the deck of the Venture in the 1933 King Kong, or live action stage footage of Luke, Leia and the droids projected into the window of the medical frigate for the final spectacular pullback shot of The Empire Strikes Back, or a digital First Officer Murdoch trotting nonchalantly over the deck of James Cameron’s miniature Titanic, and with all that in mind, and while I’m as susceptible to nostalgia as the next man when faced with a glorious Albert Whitlock matte painting or a particularly crafty photochemical comp, and go all gooey at the thought of all those romantic old school artisans toiling with real materials in the real world (much as I might go gooey at the sight of a magnificent and equally romantic tall-masted clipper ship sailing majestically over the ocean), I respectfully suggest that modern CG visual effects are absolutely as inspiring as their old school counterparts, because I firmly believe that by standing on the shoulders of giants you can see one hell of a long way, and I would also add that the behind-the-scenes stories – the VFX creation myths, if you like – are as endlessly fascinating as they ever were, because what drives them is not the gaffer tape, is not the pixels, is neither hardware nor software but wetware, by which I mean the human beings whose artistry, ingenuity and honest sweat continue to solve problems most of us can never dream of solving, and to deliver to our screens the most astonishing visions, the most compelling stories, the most dazzling, glorious moving pictures – in short, visual effects is all about the people, and that, in all this craft’s long and honourable history, is something that hasn’t changed, and never will.

Anyway, that’s what I think. Now it’s your turn to throw your hat in the ring. Just tell me what you think in the comments box below. But please, like me, keep your answers brief. I wouldn’t want anyone to start ranting.

Phil Tippett Talks “Mad God”

Ask any visual effects fan who Phil Tippett is, and they’ll probably answer: “He’s the guy who did all the animation in the original Star Wars trilogy.” Or you might get: “Didn’t he do Dragonslayer and RoboCop?” They might tell you he’s the Oscar-winning stop motion expert who entered the digital age when he became dinosaur supervisor on Jurassic Park. They may also know he’s the founder of acclaimed visual effects company Tippett Studio, based in Berkeley, California.

What they may not know is that, having served up top-drawer visual effects to Hollywood for nearly forty years – including recent work on Cloverfield, the Twilight saga and Ted – Phil Tippett has just released a film of his own. It’s called Mad God, it’s Kickstarter-funded, and it chronicles the journey of a character known only as The Assassin through a bizarre underworld populated by macabre creatures and unholy monsters.

Here’s what Phil Tippett has to say about the movie:

The final form of Mad God isn’t the film itself, but the memory after you watch it. It’s bringing you to that moment just after waking up from a dream, frozen, exploring fragments of your feral mind before they fade back into the shadows. That’s the moment. Mad God is just a way to get you there.

In an exclusive interview for Cinefex, Phil spoke to me about Mad God, independent filmmaking and the craft of stop motion animation …

Phil Tippett animates a shot for "Mad God"

For people unfamiliar with Mad God, could you describe what the film is, and its genesis?

Around the time I was doing the RoboCop movies, Jurassic Park and Starship Troopers, I spent the better part of ten years going around pitching projects. I developed maybe ten different things: lots of key art and scripts. After a period of rejection, Ed Neumeier – who wrote Starship Troopers and RoboCop – told me all my ideas were “art-damaged” … meaning, I guess, they were just too weird. I took that to heart and just stopped.

Mad God - original storyboard

Original storyboard for Phil Tippett’s “Mad God”

One of the projects was Mad God. I’d shot about six minutes worth of film on 35mm, but the project had got too big, and around that time the digital revolution hit, so I had to completely re-gear how I thought about things. Then, about three or four years ago, some of the guys at my studio – including Chris Morley and Randy Link – saw me archiving all this ancient Mad God material. They were really excited and relaunched the project.

It started where I would do a few setups, and then go off and shoot a movie for my day job. There was a great deal of interest from people wanting to volunteer to help, so I put together this team that was really skilled and talented, with good eyes and good thoughts.

Since then we’ve been gaining more and more momentum – albeit glacial, this being stop motion work. I did this Kickstarter thing, so we got some funds that paid for the stage rental and lunches for the crew, and we’ve just wrapped up Chapter 1. I’ve got four segments planned out, and each is going to be about twelve minutes long.

A tortured creature from "Mad God"

On the Kickstarter video, Chris Morley describes making Mad God as a “therapeutic process”. Could you explain that?

It’s about making things. As opposed to computer graphics, if you work with objects there’s a “zone” that you get into where the objects start talking to you. You’re not telling them what to do; they’re telling you what to do. It’s a different kind of creative process.

Mad God is the antithesis of my day job, where there’s a lot of filmmaking rules. I thought of it like a painting that I would work on for a long period of time, maybe shoot something over here, and something over there, and not really know exactly how they were going to link. I studied my dreams. I read a lot of Jung. I wanted to use the unconscious to drive the thing. And that takes more time. You have to let things cook for a while. I wanted to do something without the trappings of story and plot. There’s a narrative, but it’s not a story per se.

Tom Gibbons animates Mad God

Tom Gibbons animates a stop motion shot for “Mad God”

The lead animators that I’m working with – Chuck Duke and Tom Gibbons – are really great, experienced, professional stop motion animators. They’re totally in love with the process. They work for me in the day job now as computer graphics animators, and then we do Mad God at the weekend. Gibby and Chuck and I do most of the animation, and we’ve got a team of maybe six to eight people coming in also that I can task.

A miniature set from "Mad God"

Could you talk us through how a typical stop motion shot from Mad God is set up? How much is it planned out? What’s going through your head during those long hours when you’re animating?

Well, the direction I give to most people is: “Don’t think!” I look at animators like they’re actors. It’s all about performance – this different, weird kind of performance.

You’ll get into a shot and not really know where you’re going for the first twenty frames. Then, all of a sudden, it starts to make sense, and you just keep going. The shots can double or triple in length, but they still have a good momentum – they fulfil everything the narrative needs but they might build and build and build on it, and you don’t even know it until you’re actually in there doing it.

Animating The Assassin for "Mad God"

I think that will surprise some people. I think there’s a perception that animation is meticulously planned to the nth degree.

It depends on your skill level. At a certain level it’s like playing a musical instrument, where you’ve intuited all of this stuff so much over your life that you can just do it without a great deal of thought. Because the process is so slow, you’re working yourself into a meditative state. Certainly, the concentration level is really high. That’s just what it takes.

(Mad God’s immersive and atmospheric soundtrack features an original score by Dan Wool and sound design by Richard Beggs.)

Dan Wool came on board when I had only six minutes of the original material. He created a score that we were able to break up and move around while we were creating the film. The main theme is a very beautiful, fragile theme. Dan works in other more ambient, industrial noise, but it’s still scored as music.

I also got really lucky to work with one of Dan’s collaborators, Richard Beggs – one of the great sound designers. When I showed it to Richard for the first time, he said, “You know, not much is happening in this, but there’s a lot going on.” And I said, “That’s the movie I’m trying to make!”

The Assassin from "Mad God"

Can you tell us about the different characters in Mad God?

We call the main character The Assassin. He leads us through the film. The way I think about it – and I don’t preach this to anybody – all of the characters in Mad God are members of a ghost or spirit world, disconnected entities that don’t have any kind of solid background. Backgrounds are insinuated but they’re never specifically delineated.

For The Assassin, do you have a number of puppets in different scales?

I work in what I call a “Ray Harryhausen scale”, so the puppets are about the size of a small cat. It’s something you can easily get your hands around and manipulate. So, for the main characters, there’s usually just a one-off.

The Shitmen from "Mad God"

The Shitmen

In Chapter 2, there’s a section with these zombie-esque guys that we call the Shitmen. They’re built in different scales, because I’m doing some forced perspective stuff. I’ve got lead heroes with really good ball and socket joints, and then probably sixty others that are much more simplified, with wire armatures, that are foam-cast.

The outer surfaces of the Shitmen puppets are taken from my vacuum cleaner at home – a lot of dust and cat hair. I found that when I glued all of that on to these foam rubber things, it looked really great. It’s generally the kind of thing you stay away from when doing stop motion work, because the surface is very unstable. Every time you touch the thing, the surface shifts and changes. In a way it was like the 1933 King Kong.

Where the fur ripples.

Right. It has that same feeling, where the surface is agitated all the time.

As well as animated characters, the camera is very animated too. (The cameras used for Mad God were Canon EOS Rebel T2is with Nikon prime lenses.) There are sweeping shots where the camera moves round a big miniature. What sort of rig do you use?

I’ve got a lot of equipment left over from the photographic era, stuff from the RoboCop movies. I have lathe beds that I can attach cameras to that allow for tracking shots. I’m intentionally staying away from motion control. I like to leave myself open to changing things at any given moment. I don’t want the preordained camera moves to influence the performances; I want the performances to drive the camera moves. So I can always at any point go in and slow things down, or speed things up, or bring things to a stop.

The ammonite shot from "Mad God"

There are a number of shots showing The Assassin’s diving bell descending on a rope through all kinds of bizarre environments.

For example, there’s one shot where the diving bell drops down in front of a gigantic ammonite, with the light from the capsule playing dramatically up the ammonite. Are these in-camera shots or composites?

Miniature diving bell from "Mad God"

The ammonite that I have in my collection is probably not more than four by six inches. Matt Jacobs animated a light going down and illuminating the ammonite. We took that, and used it as a background plate, and projected that up like Ray Harryhausen would have done (the background plate was displayed on a large TV monitor). Then we built a tiny diving bell – not much more than an inch tall. Chris Morley animated it going down on invisible wires, timing it to the lighting effect that Matt had shot.

(While Mad God is around 90% stop motion, there are a number of live action inserts. These inserts were shot using a pixilation technique originally developed for a scene in Chapter 3 …)

The Charnel Hospital from "Mad God"

The Charnel Hospital

Twenty years ago, cinematographer Pete Kozachik and I did this big shot for Chapter 3 in what we call the Charnel Hospital: ten floors of hospital rooms with operating tables where victims have been disassembled. The camera booms down and moves in, and we dissolve to a live action set.

My first intention was to do straight live action, but it looked crappy. It felt like a sitcom or something. So I came up with this pixilation effect that worked out really well.

In the scene, Niketa Roman plays a nurse, and our lead compositor Satish Ratakonda plays a doctor. I would have them rehearse a move until the performance beats were drilled in. Then, when we shot, I would have them do the same pantomime, but backwards, and as slowly as they possibly could. And there was no way they could do it – no possible way! So there would be these little screw ups and inconsistencies. I would take that footage, reverse it and speed it up by about 800%, so I would get a shimmering, kinetic texture.

(This technique was subsequently developed for use in Chapter 1, and proved an efficient way of creating inserts such as a shot of The Assassin opening up his jacket and looking at a map, or close ups of feet and hands. The pixilation effect helped to integrate the live action with the artificial stop motion world.)

There’s a certain amount of digital finessing, like the flak during the opening sequence.

Yeah, there’s quite a bit of compositing work, including the flak. I try and keep as many setups as I can stop motion and in camera. There’s a wide master shot where The Assassin walks into this big junkyard. In the foreground is a big pool of green slime, and in the background there are fires burning. I would sweeten shots like that by comping in the lake and the fires.

Junkyard composite shot from "Mad God"

The Junkyard composite shot

You said you had six minutes of film going back twenty years or so. How much of that has made it into the final cut?

I’m using just about everything. We take the Ray Harryhausen philosophy, which means 99.9% of the time we get everything in the first take. What I tell everybody is: “You can’t do anything wrong. Whatever you do is going to be right.”

You’ve described your improvisational approach to Mad God. In a visual effects shot for a theatrical feature, if you’ve got a three-second cut where you know the dinosaur’s got to get from point A to point B, is the process of animation inevitably more mechanical, or do you still get “in the zone”?

All of that is very heavily driven by the budget. Everything has a dollar attached to it – every single frame. So everything is boarded out, and from the boards you create your budget, and the budget drives the schedule. So you pretty much know what you’re doing all the time. Mad God is more in the tradition of the puppet film. The world is totally artificial. That gives you a great deal more freedom to do pretty much whatever you want to do.

We’ve seen some successful stop motion features in recent years, and it seems to be a relatively healthy craft. Do you feel optimistic about the future of stop motion?

There’s a grass roots interest that keeps it alive. Theatrical features with stop motion appear to be dwindling, unless you’ve got a billionaire behind you. I don’t think the last couple of films did that well at the box office. Studios don’t like to do it any more, it’s too complicated! But I don’t have that problem. When I don’t have a schedule, I can let things cook and simmer.

A homicidal creature from "Mad God"

The influence for Mad God was totally out of Vladislav Starevich and Jiří Trnka – I like that eastern European darkness. I’m not such a big fan of the Tim Burton/Pixar approach. They’re driven by the economic needs to reach a big audience. In this country, everybody thinks that animated features have to be PG, and you have to start off with a little kid that’s got some huge thing that he has to deal with and ends up saving the world. To me that’s just a big yawn. There’s a great deal of skill and craftsmanship that goes into it, but it’s driven by the presumption that that’s the kind of thing that sells.

The Kickstarter crowdfunding model has worked very well for you. There’s clearly an enthusiastic audience out there.

If you go to a studio and try to get money for something, it’s like pulling teeth. What I found with Kickstarter was that it’s more like a magazine culture. People that are interested in this kind of thing are very altruistic. They want to see something. They want to participate. They want to give you money! And it’s like, wow! A bunch of nice people want to give me money to make this thing!

So much so that you’re able to look ahead to another three films on top of this one.

Right. I’ve broken Mad God into four films in total. The way I’ve articulated the chapters is pretty much driven by the Kickstarter model. I’m trying to get the production time down so that I can create each ten to twelve minute section within a year.

A doomed creature from "Mad God"

Actually, there may be a little confusion about how I’ve approached this. In some of the initial trailers, I have elements from Chapters 1, 2 and 3 mixed in. And so some people that have seen Chapter 1 are saying, “Wait a minute, I was expecting more stuff.”

When I get these four movements done, I’ve got ideas about how to go back in to any given point and open it up with other material. I might tell the back story of The Assassin. I haven’t decided if I’m going to do that or not – I’ll probably want to think about it for a few more years.

The ability to evolve it over time goes back to what you said about it being like a painting.

Yeah. And it’s about how you engage yourself in a creative process. I look for ways of making things that get you back to being a child: working in a milieu of play where you don’t have all of the answers. The answers find you. As a consequence, you get surprised.

Creativity as a voyage of discovery, rather than a preconceived notion of what the end result will be.

Definitely. The day job I would describe as “architectonic”. If you have any business being in the commercial theatrical film world, you had better dang well know that the foundation that you’re pouring is going to support the twelve-storey building, because you don’t want to get up on the fourth floor and realise that you’ve got to jackhammer everything back down. And there’s just huge sums of money involved. When you don’t have that burden, it allows for a very different kind of process.

Phil Tippett at work on "Mad God"

Finally, talking of theatrical features, the visual effects community is excited at the news that you’ve got the job of Dinosaur Supervisor on Jurassic World. I think people need to know if you’re going to be able to keep the dinosaurs under control this time, Phil, or are more people going to die?

You know, I don’t know what to say about that. I mean, everybody makes mistakes, don’t they? I make no promises.

Special thanks to Niketa Roman. All images copyright © Tippett Studio 2014. Used with permission.

The Trouble With Movie Stars

Night sky image by ESO/Yuri Beletsky

Hollywood stars are simply too big for their boots.

To clarify, I’m not talking about those charming actor-types known affectionately as the talent. I’m talking about actual stars. You know, those sparkling points of light that form the perfect backdrop for a swooping spaceship, or twinkle delicately over a farm at night, shortly before a gigantic killer robot squashes the barn.

Putting stars on the screen has always been a tricky business. Once upon a time, shooting a night scene meant stopping down, taping a blue filter over the lens and hoping in vain that the audience might actually buy the whole concept of day-for-night.

For me, the best old-school night skies appeared in Close Encounters of the Third Kind. Remember those fabulous Doug Trumbull starscapes suspended over Greg Jein’s miniature landscapes? For the first time in movie history, we saw stars that really looked like stars.

Reflecting on some of the sci-fi movies I’ve seen recently, I was struck by just how realistic the stars looked (well, someone’s got to think about these things). Whether you’re tweaking your noise nodes in Nuke, plugging in particle systems with Maya, or just spraying yourself a nebula in good old Photoshop, the tools are available to help you make starfields that are indistinguishable from the real thing.

But how realistic are they? To find out, let’s go stargazing …

First, let’s choose ourselves an empty meadow, far away from city lights. Follow me to the middle of the field (mind that cowpat). Now we wait half an hour for our night vision to kick in. Finally we look up … and see spread above us the most gorgeous filigree of cosmic light. I mean, just look at it. It’s incredible, don’t you think? In particular, can you see how tiny each individual star is? I mean really tiny?

So here’s my question: if you were a visual effects supervisor charged with replicating that awesome view, how could you possibly make the stars small enough?

Time to crunch some numbers.

Most of the stars we can see from our meadow just look like points of light to our human eyes. In other words, their angular diameters are imperceptibly small – far too small for the eye to measure. Luckily for us, however, there’s one star up there that appears bigger than the rest. It’s called Betelgeuse and, according to data from the Hubble Space Telescope, it has an angular diameter of around 0.125 arcseconds. One arcsecond is equivalent to 1/3600th of a degree. Or, if you prefer, pretty darned small.

Brr! Cold out here, isn’t it? Let’s go somewhere a little warmer. I vote for a movie theatre.

Everyone sitting comfortably? Right, let’s project a scene showing the same night sky we were just looking at. The scene’s been shot digitally at 4K resolution, which means we’ve got 4,096 pixels spanning the screen from left to right (for this exercise, I’m going to ignore the vertical). The scene’s been shot with a lens giving us a “normal” field of view of 55 degrees. That’s equivalent to a dizzying 198,000 arcseconds.

Next we need to work out the ratio of pixels to arcseconds. To do this, we divide 4,096 by 198,000. That gives us an answer of just over 0.02. In other words, a single arcsecond spans just one fiftieth of a pixel.

The apparent diameter of Betelgeuse is one eighth of an arcsecond. So, in order to visualise the star “realistically” on screen, it’s got to be just one four hundredth of a pixel wide.
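If you’d rather let a computer do the arithmetic, here’s the whole back-of-the-envelope calculation as a few lines of Python – just a sketch, using the figures assumed above (4K horizontal resolution, a 55-degree field of view, and Hubble’s 0.125-arcsecond estimate for Betelgeuse):

```python
# Back-of-the-envelope check: how many pixels wide is Betelgeuse
# on a 4K image with a 55-degree horizontal field of view?
horizontal_pixels = 4096        # 4K horizontal resolution
field_of_view_deg = 55          # "normal" lens, horizontal field of view
betelgeuse_arcsec = 0.125       # Hubble's angular-diameter estimate

field_of_view_arcsec = field_of_view_deg * 3600                   # 198,000 arcseconds
pixels_per_arcsec = horizontal_pixels / field_of_view_arcsec      # ~0.02

star_width_pixels = betelgeuse_arcsec * pixels_per_arcsec
print(f"Pixels per arcsecond: {pixels_per_arcsec:.4f}")           # ~0.0207
print(f"Betelgeuse on screen: {star_width_pixels:.5f} pixels")    # ~0.00259
print(f"...or roughly 1/{1 / star_width_pixels:.0f} of a pixel")  # ~1/387
```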

Correct me if I’m wrong but, by definition, one pixel represents the smallest single point that can be resolved on a movie screen. And Betelgeuse – the star with one of the biggest apparent diameters in our night sky – is four hundred times smaller than that!

There are only two possible conclusions we can draw from this bombshell. The first is that, with current digital technology, it’s literally impossible to project an image of a star at a size that’s genuinely representative of what the human eye would see.

The second possibility is that I’m really terrible at maths.

Frankly, I’m quite prepared to believe the latter. I’m also prepared to take flak on all the things I’ve failed to account for, such as atmospheric haze, which produces a glow around a star that greatly affects its apparent size. Or the fact that no film director is going to show you half an hour of black footage just so your pupils can dilate wide enough for him to show off his ultra-realistic stars.

As for all you optical and mathematical wizards out there, I can hear you furiously thumbing your calculator buttons already. So I’ll leave you with an open invitation to tear my argument apart and beat me over the head with its remains until … well, until I see stars.

Night sky image by ESO/Yuri Beletsky

C is for Composite

In the VFX ABC, the letter “C” stands for “Composite”.

The humble composite is the backbone of all visual effects. If you doubt me, check out the Oxford English Dictionary, which defines a composite as “anything made up of different parts or elements”. If that doesn’t describe almost every visual effects shot ever created, I don’t know what does.

To trace the development of the composite shot, we need to wind the clock a long way back. Even before moving pictures began to, uh, move, still photographers took great delight in bamboozling people with camera tricks. They used double exposures to create vaporous ghosts. Forced perspective illusions made large things appear small, and vice versa. With their clever painted backdrops and miniature sets, they transported ordinary people to extraordinary locations.

In short, the camera has always lied.

Cottingley Fairies

The Cottingley Fairies

Sometimes, these early composite images really did seem like magic. In 1917, two English girls took a series of photographs in which they appeared in the same frame as a troupe of pint-sized fairy folk – pictures convincing enough to take in the eminent author Sir Arthur Conan Doyle. It wasn’t until the early 1980s that the photographers – by then old ladies – confessed their pixie playmates had been nothing more than cardboard cut-outs.

Early filmmakers borrowed still-frame techniques and used them to create a host of early moving composites. But, while the simpler tricks translated well to the motion picture medium, more complex illusions proved difficult. It’s one thing taking individual photographs, cutting them up and patching them together, but how do you make a complex collage when the pictures are whipping past at 24 frames per second?

One of the earliest answers to that question was the matte. A matte is simply a mask – a means of blanking off part of a photographic frame during an initial exposure, allowing a later exposure to fill in the missing piece of the puzzle.

Matte shot from Elizabeth and Essex

Matte shot from “Elizabeth and Essex” showing masked area – American Cinematographer, January 1940

Finished composite from "Elizabeth and Essex"

Finished composite from “Elizabeth and Essex” – American Cinematographer, January 1940

To understand mattes, let’s imagine a typical early composite shot of an actor walking up to the door of a gigantic castle. First, the director shoots his actor, in a minimal set, through a sheet of glass. The upper part of the frame – where the castle will appear – is masked out on the glass with black paint. The exposed film is then stored in its undeveloped – or latent – state, while an artist paints the rest of the castle, also on glass. The camera is lined up with the castle painting, and the undeveloped film is wound back and exposed a second time, this time with a black mask protecting the part of the frame where the actor is walking.

Slightly more tricky than the latent image matte is the bi-pack matte. Here, the live action of the actor is shot without any masking, developed, and then loaded into the same camera as a reel of fresh, undeveloped film. This camera is set up in front of a glass painting of the castle, in which the live-action area has been left clear of paint. The fresh film is then exposed twice. The first time, the painted castle is unlit and a white light is shone through the clear glass area, contact-printing the live action directly on to the fresh stock behind. Both strips of film are rewound and a second exposure is made, this time with the castle painting illuminated and a black cloth behind the glass preventing any further exposure through the live-action portion of the frame.
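Stripped of the photochemistry, both the latent image and bi-pack approaches boil down to the same simple arithmetic: each exposure contributes only where its mask allows, and the two masks are exact complements. Here’s a minimal digital sketch of that idea in Python with NumPy – the array names and the hard horizontal split are illustrative assumptions, not a recipe taken from any particular film:

```python
import numpy as np

# A static matte as simple arithmetic: each element contributes only
# where its mask is white (1), and the two masks are complementary -
# the digital equivalent of the black paint on glass and the black
# card used in the latent image and bi-pack methods.

def static_matte_composite(live_action, painting, matte):
    """live_action, painting: float images in [0, 1], shape (h, w, 3).
    matte: float mask in [0, 1], shape (h, w); 1 = keep live action."""
    counter_matte = 1.0 - matte
    return (live_action * matte[..., None] +
            painting * counter_matte[..., None])

# Illustrative stand-in plates: live action kept in the lower half
# of frame, the castle painting filling in the upper half.
h, w = 1080, 1920
matte = np.zeros((h, w), dtype=np.float32)
matte[h // 2:, :] = 1.0
live_action = np.random.rand(h, w, 3).astype(np.float32)
painting = np.random.rand(h, w, 3).astype(np.float32)
frame = static_matte_composite(live_action, painting, matte)
```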

Both the latent and bi-pack methods work well with static shots like the one I’ve described, in which the masked area is fixed and the actor stays well away from the blend line between the two elements. But what happens when you want your actor to pass in front of the painted castle?

For that, you need a travelling matte.

Early travelling mattes were created using a variant on the bi-pack method, invented by Frank D. Williams in 1916 and known, not surprisingly, as the Williams Process. Here’s how it works. First, you shoot your actor against a blue screen. By carefully printing this footage on to high-contrast film, you can generate a black silhouette of the actor moving against a pure white background – a holdout matte. Reverse-printing the holdout matte then creates a corresponding cover matte – a white silhouette against black.

Composite shot from King Kong

Composite shot from “King Kong” (1933) with live action matted over a miniature background using the Dunning Process. Semi-transparent foreground figures betray the limitations of the technique.

To create your final Williams composite, first load your previously-shot castle background bi-packed with the holdout matte. Next, use a white light to print the combined footage on to a third, unexposed piece of film (the holdout matte allows everything to print except the moving silhouette of the actor). Finally, rewind the film and load up your actor footage, bi-packed with the cover matte, to print the actor’s image neatly into the black hole left in the castle footage. (The Dunning Process, developed by C. Dodge Dunning in 1925 and used to great effect in King Kong in 1933, works in a similar way, except it uses yellow light to illuminate the actor, improving the separation from the blue screen.)
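In digital terms, the Williams recipe amounts to pulling a hard matte from the blue screen on every frame and then performing the same two-pass combine as before. A rough NumPy sketch follows – the blueness test and threshold are simplifications assumed purely for illustration, and a far cry from the subtleties of real keying:

```python
import numpy as np

# Rough digital analogue of a Williams-style travelling matte.
# actor_plate: the actor shot against a blue screen.
# background_plate: the previously shot background (e.g. the castle).
# Both are float arrays in [0, 1] with shape (height, width, 3).

def travelling_matte_composite(actor_plate, background_plate, threshold=0.3):
    # How "blue" is each pixel relative to its red and green channels?
    blueness = actor_plate[..., 2] - np.maximum(actor_plate[..., 0],
                                                actor_plate[..., 1])
    # Holdout matte: 1 over the blue screen, 0 over the actor's silhouette.
    holdout = (blueness > threshold).astype(np.float32)
    cover = 1.0 - holdout                  # the reverse-printed cover matte

    # Pass one: print the background everywhere the holdout matte allows.
    # Pass two: print the actor into the hole left behind.
    return (background_plate * holdout[..., None] +
            actor_plate * cover[..., None])
```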

These techniques worked well enough throughout the black and white era, but the advent of colour film threw a spanner in the works. Luckily, by the 1930s, it had become possible to synchronise movie cameras with the latest, high-illumination projectors, heralding the era of rear projection.

Using rear projection, you can shoot your castle background in advance – and in colour if you choose – and then project it on to a large screen. Your actor can prance around in front of this projected image to his heart’s content, allowing you to capture his every antic in-camera. You can even do clever things like having your actor walk on a treadmill while the background pans behind him, to create a moving camera composite. On the downside, you’ll have to deal with a hotspot in the centre of the screen and fall-off at the edges, not to mention a nasty increase in grain and contrast in the reprojected footage.

A typical Dynamation setup - image available at http://www.rayharryhausen.com

A typical Dynamation setup – original image available at http://www.rayharryhausen.com

One way or another, reprojecting film lies at the heart of almost all subsequent compositing developments during the photochemical age. That’s just as well, because if I’m to wrap up this whistlestop tour before bedtime I’m going to have to speed things up a little. Suffice it to say that the next sixty years saw the refinement of both rear and front projection techniques, and the development of the optical printer (which is really just a highly engineered way of rephotographing previously shot material), not to mention any number of proprietary processes ranging from Dynamation to Zoptic, along with curiosities like Introvision, which brought old-school theatrical effects into the mix in the form of the beam splitter (a half-silvered mirror positioned at 45° in front of the camera).

Composite shot from Return of the Jedi

Composite shot from “Return of the Jedi” (1983). Inset shows a reconstruction of the holdout and cover mattes used to create the shot. Screen image copyright © Lucasfilm Ltd.

In 1977, Star Wars combined the almost defunct blue screen with a computerised motion-controlled camera and a precisely machined optical printer, turning the craft of compositing into a laser-accurate art form. At the same time, people like Doug Trumbull were keeping the spirit of the early pioneers alive by pushing latent image matte techniques to the limit in films such as Close Encounters of the Third Kind and Blade Runner. To get clued up on how a typical composite was put together at ILM back in the 1980s, you can’t do better than this BBC Horizon documentary from the period, which breaks down classic shots from Return of the Jedi and Indiana Jones and the Temple of Doom.

Then the world went digital, and everything changed.

Or did it?

The modern compositor has access to a vast range of computer software options. But the techniques, and the mind-set behind them, aren’t really anything new. In a simple After Effects setup, for example, you might build a shot using flat layers stacked in a virtual 3D space. That’s little different to what Disney artists were doing with a multiplane camera back in the 1930s. Nuke may take things to a whole new level, but the single most significant advantage any digital tool has over its photochemical counterpart is that you can duplicate elements without any loss of quality. Everything else is down to the skill and imagination of the user.

What the digital tools do give you, however, is improved workflow and an extraordinary level of finesse. In the photochemical days, the best you could hope for was to hide the join by smearing a little Vaseline on the lens of the optical printer. Now, compositors wield lens flares and chromatic aberrations with casual abandon, feathering edges and flashing in atmospheric haze in order to blend hundreds, if not thousands, of elements into a seamless whole.

Composite shot from Rise of the Planet of the Apes

Deep composite shot from “Rise of the Planet of the Apes” (2011). Image copyright 20th Century Fox.

The new kid on the block is deep compositing, in which every pixel rendered for a visual effects element contains information not only about colour and opacity, but also depth. This crucial z-plane data enables compositors to layer up separate elements without having to worry about those pesky holdout mattes (yes, that term is still in common use, even after all these years); the depth information contained within each element determines which should appear in front of the next. For a crash course in deep compositing, check out this video from The Foundry in which Robin Hollander of Weta Digital talks about the Golden Gate Bridge sequence from Rise of the Planet of the Apes.
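To make the idea concrete, here’s a toy sketch of what happens inside a single pixel of a deep composite: every contributing element supplies a list of depth-tagged samples, and the compositor simply sorts them by depth and applies the familiar “over” operation front to back. The data layout here is a deliberate simplification I’ve assumed for illustration – real deep images store considerably more per sample:

```python
# Toy illustration of deep-sample merging for a single pixel.
# Each element contributes (depth, rgb, alpha) samples; sorting by
# depth replaces the holdout matte - nearer samples automatically
# attenuate whatever lies behind them.

def deep_merge_pixel(samples):
    """samples: list of (depth, (r, g, b), alpha) tuples from any elements."""
    out_rgb = [0.0, 0.0, 0.0]
    out_alpha = 0.0
    for depth, rgb, alpha in sorted(samples, key=lambda s: s[0]):
        remaining = 1.0 - out_alpha      # how much the nearer samples let through
        for c in range(3):
            out_rgb[c] += rgb[c] * alpha * remaining
        out_alpha += alpha * remaining
    return tuple(out_rgb), out_alpha

# Hypothetical example: a semi-transparent fur sample in front of
# an opaque bridge sample, with no matte needed to combine them.
colour, alpha = deep_merge_pixel([
    (10.0, (0.6, 0.3, 0.2), 1.0),   # bridge sample (farther)
    (4.0,  (0.1, 0.1, 0.1), 0.7),   # fur sample (nearer, semi-transparent)
])
```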

If the history of compositing were a shot in a movie, it would be a helluva complex one, packed tight with elements all fighting for their place in an integrated whole. This potted history has been necessarily brief, so feel free to wade in and tell me about all the pieces I’ve missed, like the smoke and mirrors of the antiquated Schüfftan Process, or the meticulous work of the rotoscope artist which, to this day, conjures travelling mattes from shots where you’d swear the edges are nowhere to be seen.

Compositing is one giant jigsaw puzzle. Where does your piece fit in?

Happy New Year!

Happy New Year! We have tons of great stuff planned for the Cinefex blog in 2014, starting tomorrow with the next entry in our dictionary of visual effects – the VFX ABC. We’ve reached the letter “C” which, as everyone knows, stands for … uh, hold on … it’s on the tip of my tongue … oh darn, let’s just hope that by the time you come back tomorrow I’ll have remembered.

So what else can you expect from the blog this year? Well, we’ll have the usual mix of columns and commentary, interviews and industry reports, fireworks and fun, as we train the Cinefex lens on the magic realm of visual effects – past, present and future.

The blog is just one part of the whole Cinefex experience, which has at its core the classic quarterly magazine (available in print, online and iPad editions) and also includes our popular Facebook page, with its regular daily updates. Oh, and don’t forget to follow us on Twitter.

Now, what is it that “C” stands for again? Anyone?