The humble composite is the backbone of all visual effects. If you doubt me, check out the Oxford English Dictionary, which defines a composite as “anything made up of different parts or elements”. If that doesn’t describe almost every visual effects shot ever created, I don’t know what does.
To trace the development of the composite shot, we need to wind the clock a long way back. Even before moving pictures began to, uh, move, still photographers took great delight in bamboozling people with camera tricks. They used double exposures to create vaporous ghosts. Forced perspective illusions made large things appear small, and vice versa. With their clever painted backdrops and miniature sets, they transported ordinary people to extraordinary locations.
In short, the camera has always lied.
Sometimes, these early composite images really did seem like magic. In 1917, the eminent author Sir Arthur Conan Doyle was taken in by a series of photographs made by two English girls, in which the youngsters appeared in the same frame as a troupe of pint-sized fairy folk. It wasn’t until 1980 that the photographers – now old ladies – confessed their pixie playmates had been nothing more than cardboard cut-outs.
Early filmmakers borrowed still-frame techniques and used them to create a host of early moving composites. But, while the simpler tricks translated well to the motion picture medium, more complex illusions proved difficult. It’s one thing taking individual photographs, cutting them up and patching them together, but how do you make a complex collage when the pictures are whipping past at 24 frames per second?
One of the earliest answers to that question was the matte. A matte is simply a mask – a means of blanking off part of a photographic frame during an initial exposure, allowing a later exposure to fill in the missing piece of the puzzle.
To understand mattes, let’s imagine a typical early composite shot of an actor walking up to the door of a gigantic castle. First, the director shoots his actor, in a minimal set, through a sheet of glass. The upper part of the frame – where the castle will appear – is masked out on the glass with black paint. The exposed film is then stored in its undeveloped – or latent – state, while an artist paints the rest of the castle, also on glass. The camera is lined up with the castle painting, and the undeveloped film is wound back and exposed a second time, this time with a black mask protecting the part of the frame where the actor is walking.
Slightly more tricky than the latent image matte is the bi-pack matte. Here, the live action of the actor is shot without any masking, developed, and then loaded into the same camera as a reel of fresh, undeveloped film. This camera is set up in front of a glass painting of the castle, in which the live-action area has been left clear of paint. The fresh film is then exposed twice. The first time, the painted castle is unlit and a white light is shone through the clear glass area, contact-printing the live action directly on to the fresh stock behind. Both strips of film are rewound and a second exposure is made, this time with the castle painting illuminated and a black cloth behind the glass preventing any further exposure through the live-action portion of the frame.
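Strip away the glass and paint, and both the latent image and bi-pack processes come down to the same arithmetic: each pixel of the final frame is taken from exactly one of the two exposures, with the matte deciding which. Here’s a minimal digital sketch of that idea using NumPy – the image arrays and frame dimensions are hypothetical stand-ins, not footage from any real process.

```python
import numpy as np

# Hypothetical stand-ins for the two exposures
H, W = 480, 640
live_action = np.random.rand(H, W, 3)      # first pass: the actor on the minimal set
castle_painting = np.random.rand(H, W, 3)  # second pass: the glass painting

# The matte plays the role of the black paint on the glass: 1.0 where the
# painting is allowed to expose, 0.0 where the live action shows through.
matte = np.zeros((H, W, 1))
matte[: H // 2] = 1.0  # upper half of frame reserved for the castle

# Each pixel comes from exactly one exposure, just like the two camera passes
composite = matte * castle_painting + (1.0 - matte) * live_action
```

The hard edge of the matte is exactly why the actor had to stay clear of the blend line: any pixel can only belong to one element or the other.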
Both the latent and bi-pack methods work well with static shots like the one I’ve described, in which the masked area is fixed and the actor stays well away from the blend line between the two elements. But what happens when you want your actor to pass in front of the painted castle?
For that, you need a travelling matte.
Early travelling mattes were created using a variant on the bi-pack method, invented by Frank D. Williams in 1916 and known, not surprisingly, as the Williams Process. Here’s how it works. First, you shoot your actor against a blue screen. By carefully printing this footage on to high-contrast film, you can generate a black silhouette of the actor moving against a pure white background – a holdout matte. Reverse-printing the holdout matte then creates a corresponding cover matte – a white silhouette against black.
To create your final Williams composite, first load your previously-shot castle background bi-packed with the holdout matte. Next, use a white light to print the combined footage on to a third, unexposed piece of film (the holdout matte allows everything to print except the moving silhouette of the actor). Finally, rewind the film and load up your actor footage, bi-packed with the cover matte, to print the actor’s image neatly into the black hole left in the castle footage. (The Dunning Process, developed by C. Dodge Dunning in 1925 and used to great effect in King Kong in 1933, works in a similar way, except it uses yellow light to illuminate the actor, improving the separation from the blue screen.)
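In modern terms, the Williams Process amounts to pulling a high-contrast key from the blue screen, inverting it, and doing two masked “prints” on to the same frame. The sketch below re-creates those steps digitally – the function names, the blue-dominance threshold and the toy four-pixel frames are all illustrative assumptions, not any particular keyer’s API.

```python
import numpy as np

def williams_matte(frame, blue_threshold=0.3):
    """Derive a holdout matte (1.0 = background, 0.0 = actor silhouette)
    from blue-screen dominance, like the high-contrast print stage."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # Pixels where blue clearly dominates print as white; the actor
    # prints as a black silhouette, as on the high-contrast film.
    holdout = (b - np.maximum(r, g) > blue_threshold).astype(float)[..., None]
    cover = 1.0 - holdout  # the "reverse print"
    return holdout, cover

def williams_composite(actor_frame, background, holdout, cover):
    # First pass: print the background through the holdout matte,
    # leaving a black hole where the actor's silhouette falls.
    first_pass = background * holdout
    # Second pass: print the actor through the cover matte into that hole.
    return first_pass + actor_frame * cover

# Demo: a red "actor" patch on a pure blue screen, over a grey background
frame = np.zeros((4, 4, 3))
frame[..., 2] = 1.0                   # blue screen
frame[1:3, 1:3] = [1.0, 0.0, 0.0]     # the actor
background = np.full((4, 4, 3), 0.5)  # previously shot plate
holdout, cover = williams_matte(frame)
result = williams_composite(frame, background, holdout, cover)
```

Note that the matte here is strictly black or white, exactly as on high-contrast stock – which is why early travelling-matte shots often betrayed themselves with hard, crawling edges.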
These techniques worked well enough throughout the black and white era, but the advent of colour film threw a spanner in the works. Luckily, by the 1930s, it had become possible to synchronise movie cameras with the latest, high-illumination projectors, heralding the era of rear projection.
Using rear projection, you can shoot your castle background in advance – and in colour if you choose – and then project it on to a large screen. Your actor can prance around in front of this projected image to his heart’s content, allowing you to capture his every antic in-camera. You can even do clever things like having your actor walk on a treadmill while the background pans behind him, to create a moving camera composite. On the downside, you’ll have to deal with a hotspot in the centre of the screen and fall-off at the edges, not to mention a nasty increase in grain and contrast in the reprojected footage.
One way or another, reprojecting film lies at the heart of almost all subsequent compositing developments during the photochemical age. That’s just as well, because if I’m to wrap up this whistlestop tour before bedtime I’m going to have to speed things up a little. Suffice it to say that the next sixty years saw the refinement of both rear and front projection techniques, and the development of the optical printer (which is really just a highly engineered way of rephotographing previously shot material), not to mention any number of proprietary processes ranging from Dynamation to Zoptic, along with curiosities like Introvision, which brought old-school theatrical effects into the mix in the form of the beam splitter (a half-silvered mirror positioned at 45° in front of the camera).
In 1977, Star Wars combined the almost defunct blue screen with a computerised motion-controlled camera and a precisely machined optical printer, turning the craft of compositing into a laser-accurate art form. At the same time, people like Doug Trumbull were keeping the spirit of the early pioneers alive by pushing latent image matte techniques to the limit in films such as Close Encounters of the Third Kind and Blade Runner. To get clued up on how a typical composite was put together at ILM back in the 1980s, you can’t do better than this BBC Horizon documentary from the period, which breaks down classic shots from Return of the Jedi and Indiana Jones and the Temple of Doom.
Then the world went digital, and everything changed.
Or did it?
The modern compositor has access to a vast range of computer software options. But the techniques, and the mindset behind them, aren’t really anything new. In a simple After Effects setup, for example, you might build a shot using flat layers stacked in a virtual 3D space. That’s little different to what Disney artists were doing with a multiplane camera back in the 1930s. Nuke may take things to a whole new level, but the single most significant advantage any digital tool has over its photochemical counterpart is that you can duplicate elements without any loss of quality. Everything else is down to the skill and imagination of the user.
What the digital tools do give you, however, is improved workflow and an extraordinary level of finesse. In the photochemical days, the best you could hope for was to hide the join by smearing a little Vaseline on the lens of the optical printer. Now, compositors wield lens flares and chromatic aberrations with casual abandon, feathering edges and flashing in atmospheric haze in order to blend hundreds, if not thousands, of elements into a seamless whole.
The new kid on the block is deep compositing, in which every pixel rendered for a visual effects element contains information not only about colour and opacity, but also depth. This crucial z-plane data enables compositors to layer up separate elements without having to worry about those pesky holdout mattes (yes, that term is still in common use, even after all these years); the depth information contained within each element determines which should appear in front of the next. For a crash course in deep compositing, check out this video from The Foundry in which Robin Hollander of Weta Digital talks about the Golden Gate Bridge sequence from Rise of the Planet of the Apes.
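The logic of a deep merge can be sketched in a few lines: each pixel carries a list of samples rather than a single flattened colour, and sorting those samples by depth before accumulating them front-to-back does the job the holdout matte used to do. This is a toy illustration of the principle – the data layout and values are made up, and real deep formats and merge nodes are considerably more sophisticated.

```python
def deep_merge(samples):
    """Merge one pixel's deep samples, each a (depth, (r, g, b), alpha) tuple.
    Nearer samples occlude farther ones; no holdout mattes required."""
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # how much light still passes through to deeper samples
    for depth, colour, alpha in sorted(samples, key=lambda s: s[0]):
        for i in range(3):
            out[i] += transmittance * alpha * colour[i]
        transmittance *= (1.0 - alpha)
    return out, 1.0 - transmittance  # final colour and accumulated alpha

# Two elements rendered separately, meeting in one pixel (hypothetical values):
pixel = [
    (10.0, (1.0, 0.0, 0.0), 0.5),  # near, semi-transparent red element
    (50.0, (0.0, 0.0, 1.0), 1.0),  # distant, opaque blue element
]
colour, alpha = deep_merge(pixel)
# colour = [0.5, 0.0, 0.5], alpha = 1.0
```

Because the depth data rides along with every sample, the two elements can be rendered by different departments, at different times, and still slot together correctly at the merge.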
If the history of compositing were a shot in a movie, it would be a helluva complex one, packed tight with elements all fighting for their place in an integrated whole. This potted history has been necessarily brief, so feel free to wade in and tell me about all the pieces I’ve missed, like the smoke and mirrors of the antiquated Schüfftan Process, or the meticulous work of the rotoscope artist which, to this day, conjures travelling mattes from shots where you’d swear the edges are nowhere to be seen.
Compositing is one giant jigsaw puzzle. Where does your piece fit in?