G is for Greenscreen

In the VFX ABC, the letter “G” stands for “Greenscreen”.

You’re standing on a film set. What do you see? Cameras? Lights? A craft service table laden with muffins? A hundred people standing around waiting for something to happen?

Look hard, and you may also see something else: a piece of visual effects technology so commonplace that the eye just skitters over it, barely even registering it’s there – strangely appropriate, because the object’s sole function is to appear completely invisible to the camera.

I’m talking, of course, about the humble greenscreen.

Everyone knows what a greenscreen does. When you point a camera at it, the flat primary colour creates a blank space into which those clever visual effects artists can put anything they like. The greenscreen is a blank canvas ready and waiting to be painted with a spectacular Himalayan panorama, a brooding alien cityscape, a speeding freeway … whatever the backdrop, green is queen.

But can its reign continue? To find out, I asked a panel of VFX professionals whether they thought greenscreens would still be around in ten years’ time. Before they offer their thoughts on the future of greenscreen, however, let’s take a moment to consider its past.

Before and after greenscreen composite from “Avengers Assemble” by ILM

Greenscreen Past

The history of greenscreen is really the history of compositing, which the Cinefex VFX ABC explored in C is for Composite. Still, it never hurts to refresh the memory.

A fundamental discipline of visual effects is the combining of one image with another in a sort of kinetic collage. Typically, this involves cutting the moving image of an actor out of one shot and pasting it into the background of another. To do this effectively, you need a foolproof way of making a moving mask that precisely matches the actor’s constantly-changing silhouette. This mask is known as a travelling matte.
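In digital compositing, the travelling matte survives as a per-frame alpha mask. Here’s a minimal numpy sketch of the idea (illustrative only — the function name and toy data are mine, not any studio’s pipeline):

```python
import numpy as np

def composite(foreground, background, matte):
    """Place a foreground over a background using a travelling matte.

    foreground, background: float arrays of shape (H, W, 3), values in [0, 1]
    matte: float array of shape (H, W), 1.0 where the actor is, 0.0 elsewhere
    """
    alpha = matte[..., np.newaxis]          # broadcast the mask over RGB
    return foreground * alpha + background * (1.0 - alpha)

# Toy example: a 2x2 frame where the matte covers the left column.
fg = np.ones((2, 2, 3))                     # pure white "actor"
bg = np.zeros((2, 2, 3))                    # black "background"
matte = np.array([[1.0, 0.0], [1.0, 0.0]])
out = composite(fg, bg, matte)
print(out[:, 0])  # the left column comes entirely from the foreground
```

Run this per frame with a matte that tracks the actor’s silhouette and you have, in essence, a travelling matte.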

Ever since the early days of cinema, filmmakers have experimented with different ways of creating travelling mattes. One of the earliest solutions is still in use today: filming an actor in front of a coloured screen.

“The Thief of Bagdad” features some of the earliest Technicolor bluescreen composites

Developed in the 1930s, the Dunning Process used a blue screen, and required the actors to be illuminated with yellow light. Coloured filters were used to separate foreground from background, but the process only worked in black and white. The arrival of colour film led to more complicated systems of filters and optical printers being used to isolate the actors against the bright blue screens.

Why blue? Because the cool colour of the screen was at the opposite end of the spectrum to the warm skin tones of the actors standing in front of it; the contrast made it easier to create a good matte. You just had to make sure the wardrobe department didn’t dress your leading lady in a bright blue evening gown, or else she’d disappear before your eyes.

Preparing a bluescreen shot for “Ghostbusters”, in which the Stay-Puft Marshmallow Man was composited into live-action plates shot in New York

In the ‘60s and ‘70s, Disney had great success with yellow screens lit by sodium vapour lights, used in films such as Mary Poppins. But for the most part the colour of choice remained blue. Once digital techniques came on the scene, however, blue began giving way to green.

So why the colour shift? One reason is that many digital cameras are configured using a Bayer Pattern, in which there are twice as many green sensors as either red or blue; these cameras are naturally more sensitive to the green end of the spectrum. And greenscreens often perform better outdoors, in environments where a traditional bluescreen might blend with the sky.
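The sensor-count claim is easy to see in the standard 2x2 Bayer tile. A quick sketch (this assumes the common RGGB layout; actual camera mosaics vary):

```python
import numpy as np

# One 2x2 tile of an RGGB Bayer mosaic: each photosite samples
# only one colour channel, and green appears twice per tile.
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Tile a small 4x4 sensor and count photosites per channel.
sensor = np.tile(tile, (2, 2))
counts = {c: int(np.sum(sensor == c)) for c in ("R", "G", "B")}
print(counts)  # green sites outnumber red and blue two to one
```

With twice as many green samples, the camera resolves finer detail in green — which is exactly where you want the cleanest edges when pulling a key.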

In many situations, however, the bluescreen is still the filmmaker’s best option – it just depends on the demands of the individual shot.

"White House Down" bluescreen composite

This composite shot by Crazy Horse Effects from “White House Down” proves the traditional bluescreen is still alive and kicking

Greenscreen Present

In the old days, lighting a bluescreen was a big deal. Because the optical department was reliant on delicate photochemical processes, it was vital that the blue colour captured in the original photography was as flat and clean as possible. For that reason, most bluescreen shots were set up on the soundstage, under carefully controlled conditions.

The effectiveness of modern colour separation tools – and the trend towards smaller set builds augmented by digital extensions – has led to a more relaxed approach. You’ll find greenscreens of all shapes and sizes on many location shoots, filling in the gaps between buildings or blocking off the ends of streets. Entire sets might be built and covered in greenscreen material, allowing actors to clamber over blocky toytown structures which will be replaced in post-production by entire digital environments.

Smaller greenscreens are used within the sets, or even on the bodies of the actors. Wondering what to display on that bank of monitors in the spaceship’s control room? No problem – just set the screens to green and drop in the funky graphics later. Need to alter the anatomy of your lead actor’s head? Easy – just give him a greenscreen bald cap and get VFX to track in the tentacles.

With a greenscreen, you really can do anything.

In fact, greenscreens have become so familiar that even Joe Public – who’s more interested in popcorn than post-production – understands broadly what they do. Granted, the only key he knows is the one that fits his front door, and he might wonder why the rotoscope department is always griping that there might as well not be a greenscreen there at all – “Gee whiz, the thing doesn’t run to the edge of the set, and it’s not even lit properly, I mean, these things aren’t magic carpets, you know!” Nevertheless, the greenscreen has become a universal shorthand for “visual effects go here”. If there’s a single image that symbolises the visual effects industry for the outside world, the greenscreen is it.

It’s an icon for people within the industry too. Take a look at all the VFX professionals you follow on social media. How many of their online avatars are bright green squares? Quite a few, right?

The Go Green movement rose up over a year ago with an agenda to raise awareness of inequalities within the visual effects industry – in particular the effect of nationally-granted subsidies across an international marketplace. The movement is still going strong, and the symbolic power of the greenscreen remains at the heart of its campaign.

There’s just no escaping it: the greenscreen is a dominant force in visual effects. In fact, it’s hard to imagine what filmmaking would be like without it.

Greenscreen Future

Let’s fast-forward ten years to a movie set of the near future. Look – there’s the camera. Mind your head on the lights. Hmm, looks like we could do with a fresh batch of muffins on the craft service table.

Now, let’s look for the greenscreens. Ten years on, are they still around? If not, what new technology has come along to replace them?

Here’s what our panel of visual effects experts had to say:

Visual effects technology continues to progress and develop at a high rate. Even now our teams have had to become adept at working around lack of green screen when time constraints/filming schedule prohibit its use. Having said that, I think in ten years time, greenscreen or an equivalent will still be needed when actors are in frame. I can see a time when greenscreen could be replaced with live feeds that can still be keyed off, but have the massive advantage of providing actors with on-set feedback. It would be an interesting development that would be beneficial both for us and for the wider production. – Jeff Clifford, Head of R&D, Double Negative

We’ll probably be using more sophisticated systems for real-time keying on location in order to visualize complex visual effects shots, but the reality is that green (or blue) screens are still very useful, and will likely continue to be for the foreseeable future. We are still coming up with better ways to light actors on green screen to make the integration better. But there are techniques that will likely revolutionize this, ie real-time rendering and the motion capture of performances. I can imagine a not-too-distant future in which we can create 100% photo-real characters, captured in real-time and rendered on a 100% digital environment. – Aladino Debert, Creative Director and VFX Supervisor, Advertising & Games, Digital Domain

I’m pretty sure that in 10 years we won’t be using color difference matting with green or blue screens any more. Future VFX youngsters will feel about this technique much the way we feel about using miniatures today. Cameras which capture depth data are already available. When the resolution of these channels increases, we’ll place set extensions and digital creatures not just behind the plate, but within it. This will complete the deep compositing idea. Meanwhile, I guess, VFX artists will continue spending their time on rotoscoping plates, where it was not possible or too expensive to setup a green screen. – Sven Martin, VFX Supervisor, Pixomondo

Yes, I believe we will still be using greenscreens. Manual rotoscoping is an art form in itself, but even the best roto artist will never match the precision of a greenscreen key. It’s impossible to determine the exact colour and opacity of a hair at a given pixel using even the best rotoscoping system, and to be consistent and accurate over the entire image and a whole sequence of images. Other software solutions which have attempted to extract foregrounds from their backing have been promising, but thus far have proven to be either temporally inconsistent, or simply less precise than a greenscreen. Rear projection has recently been tried again with stunning success in Oblivion. With improvements in projectors (increased dynamic range) I can see this idea being used more often. It does have its disadvantages though; you need to know in advance exactly what you want in the background. Rather than an end to greenscreen use, I hope we will see a hybrid solution: the continued development of the technology and an amalgamation of ideas targeting the same problem. A more intelligent keyer might consider not only colour, but depth, focus, disparity and other image factors to compute whether a pixel is solid foreground, solid background, spill, or transparent foreground. But it seems like it will be a long time before there is a set of circumstances in which a greenscreen would not be at least part of the solution. – Charlie Tait, Head of Compositing, Weta Digital

We will definitely still be using green and blue screens in 10 years time. Technology and techniques are improving, but some classes of problem just require them, and will for the foreseeable future. – Ken McGaugh, VFX Supervisor, Double Negative

I anticipate still using greenscreen insofar as there will be a need to extract live performance from unwanted background. It will be more electronically procedural, with less burden on set-up and lighting to specifications. I think on-set needs will be more forgiving. – Joe Bauer, VFX Supervisor, HBO’s Game of Thrones

Yes, we will still be using greenscreens. There will be advances in technology that will simplify the process, but I don’t think enough of an advance to automate the cutting of mattes. I also don’t believe all advances in technology will be accessible to every filmmaker. However, I do feel this is where 3D stereo technology will come in handy, with further exploration of depth maps. This is probably the area that will bring about the eventual elimination of green screen. – Lon Molnar, Owner & VFX Supervisor, Intelligent Creatures

I would love to see a day when we could do “deep” filming: somehow map out depth and use this to help automate our composites. This is years away from being a reality. Often there are times we choose to not use a blue or green screen, opting to rotoscope instead, but blue and green screens are here for the next 10 years and beyond. – Geoff Scott, VFX Supervisor, Intelligent Creatures

Cameraman Don Dow attends to the miniature sail barge while assistant Patrick McArdle prepares the Vistarama motion control camera, in this bluescreen set-up from “Return of the Jedi”


Well, the consensus seems to be that greenscreens – and blue – aren’t going anywhere anytime soon. Still, given that I can already download an app to my smartphone that will scan an object, isolate it from its background and derive 3D geometry from the data, the dream of “deep filming” may be closer than we think.

Until it becomes a reality, however, the greenscreen seems likely to dominate as the VFX background of choice, and thus will continue to be what it’s always been: the original field of dreams.

Avengers Assemble photographs © 2012 by Marvel Entertainment. Ghostbusters photograph copyright © 1984 by Columbia Pictures Industries Inc. Prometheus photographs copyright © 2012 by Twentieth Century Fox. White House Down photographs © 2013 by Columbia Pictures. Return of the Jedi photograph copyright © 1983 by Lucasfilm Ltd.

C is for Composite

The VFX ABC - "C" is for "Composite"In the VFX ABC, the letter “C” stands for “Composite”.

The humble composite is the backbone of all visual effects. If you doubt me, check out the Oxford English Dictionary, which defines a composite as “anything made up of different parts or elements”. If that doesn’t describe almost every visual effects shot ever created, I don’t know what does.

To trace the development of the composite shot, we need to wind the clock a long way back. Even before moving pictures began to, uh, move, still photographers took great delight in bamboozling people with camera tricks. They used double exposures to create vaporous ghosts. Forced perspective illusions made large things appear small, and vice versa. With their clever painted backdrops and miniature sets, they transported ordinary people to extraordinary locations.

In short, the camera has always lied.

The Cottingley Fairies

Sometimes, these early composite images really did seem like magic. In 1917, the eminent author Sir Arthur Conan Doyle was taken in by a series of photographs taken by two English girls, in which the youngsters appeared in the same frame as a troupe of pint-sized fairy folk. It wasn’t until 1980 that the photographers – now old ladies – confessed their pixie playmates had been nothing more than cardboard cut-outs.

Early filmmakers borrowed still-frame techniques and used them to create a host of early moving composites. But, while the simpler tricks translated well to the motion picture medium, more complex illusions proved difficult. It’s one thing taking individual photographs, cutting them up and patching them together, but how do you make a complex collage when the pictures are whipping past at 24 frames per second?

One of the earliest answers to that question was mattes. A matte is simply a mask – a means of blanking off part of a photographic frame during an initial exposure, allowing a later exposure to fill in the missing piece of the puzzle.

Matte shot from “Elizabeth and Essex” showing masked area – American Cinematographer, January 1940

Finished composite from “Elizabeth and Essex” – American Cinematographer, January 1940

To understand mattes, let’s imagine a typical early composite shot of an actor walking up to the door of a gigantic castle. First, the director shoots his actor, in a minimal set, through a sheet of glass. The upper part of the frame – where the castle will appear – is masked out on the glass with black paint. The exposed film is then stored in its undeveloped – or latent – state, while an artist paints the rest of the castle, also on glass. The camera is lined up with the castle painting, and the undeveloped film is wound back and exposed a second time, this time with a black mask protecting the part of the frame where the actor is walking.

Slightly more tricky than the latent image matte is the bi-pack matte. Here, the live action of the actor is shot without any masking, developed, and then loaded into the same camera as a reel of fresh, undeveloped film. This camera is set up in front of a glass painting of the castle, in which the live-action area has been left clear of paint. The fresh film is then exposed twice. The first time, the painted castle is unlit and a white light is shone through the clear glass area, contact-printing the live action directly on to the fresh stock behind. Both strips of film are rewound and a second exposure is made, this time with the castle painting illuminated and a black cloth behind the glass preventing any further exposure through the live-action portion of the frame.
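Strip away the glass and the film magazines, and the latent image matte boils down to two additive exposures through complementary masks. A toy numpy restaging (the arrays here stand in for black paint on glass and light on film — the real process was photochemical, not code):

```python
import numpy as np

H, W = 4, 4
live_action = np.full((H, W), 0.8)   # stand-in for the actor plate
painting    = np.full((H, W), 0.3)   # stand-in for the castle painting

# Complementary masks: the painted black patch on the glass and its inverse.
mask = np.zeros((H, W))
mask[:2, :] = 1.0                    # upper half reserved for the castle

# Each "exposure" adds light only where its mask is clear.
frame = live_action * (1.0 - mask)   # first pass: actor, castle area blocked
frame += painting * mask             # second pass: castle fills the gap
```

Because the two masks never overlap, the second exposure lands only on film that the first exposure left dark — which is exactly why the blend line had to stay fixed.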

Both the latent and bi-pack methods work well with static shots like the one I’ve described, in which the masked area is fixed and the actor stays well away from the blend line between the two elements. But what happens when you want your actor to pass in front of the painted castle?

For that, you need a travelling matte.

Early travelling mattes were created using a variant on the bi-pack method, invented by Frank D. Williams in 1916 and known, not surprisingly, as the Williams Process. Here’s how it works. First, you shoot your actor against a blue screen. By carefully printing this footage on to high-contrast film, you can generate a black silhouette of the actor moving against a pure white background – a holdout matte. Reverse-printing the holdout matte then creates a corresponding cover matte – a white silhouette against black.

Composite shot from “King Kong” (1933) with live action matted over a miniature background using the Dunning Process. Semi-transparent foreground figures betray the limitations of the technique.

To create your final Williams composite, first load your previously-shot castle background bi-packed with the holdout matte. Next, use a white light to print the combined footage on to a third, unexposed piece of film (the holdout matte allows everything to print except the moving silhouette of the actor). Finally, rewind the film and load up your actor footage, bi-packed with the cover matte, to print the actor’s image neatly into the black hole left in the castle footage. (The Dunning Process, developed by C. Dodge Dunning in 1925 and used to great effect in King Kong in 1933, works in a similar way, except it uses yellow light to illuminate the actor, improving the separation from the blue screen.)
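For the digitally-minded, those printing passes translate into a few lines of numpy. This is a loose restaging, not the optical process itself — the threshold value and function name are my own assumptions:

```python
import numpy as np

def williams_composite(actor_rgb, background_rgb, blue_threshold=0.7):
    """Toy digital restaging of the Williams Process.

    actor_rgb: (H, W, 3) plate of the actor against a pure blue screen
    background_rgb: (H, W, 3) previously shot background plate
    Returns (composite, holdout, cover), all floats in [0, 1].
    """
    blue = actor_rgb[..., 2]
    not_blue = actor_rgb[..., :2].max(axis=-1)
    # Holdout matte: white (1.0) for bluescreen, black silhouette for actor.
    holdout = np.where((blue > blue_threshold) & (blue > not_blue), 1.0, 0.0)
    cover = 1.0 - holdout                    # the reverse print
    # First "printing" pass: background everywhere except the silhouette...
    out = background_rgb * holdout[..., None]
    # ...second pass: the actor fills the hole left behind.
    out += actor_rgb * cover[..., None]
    return out, holdout, cover
```

The hard binary threshold is also why those King Kong foregrounds went semi-transparent: any pixel the matte got wrong either printed twice or not at all.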

These techniques worked well enough throughout the black and white era, but the advent of colour film threw a spanner in the works. Luckily, by the 1930s, it had become possible to synchronise movie cameras with the latest, high-illumination projectors, heralding the era of rear projection.

Using rear projection, you can shoot your castle background in advance – and in colour if you choose – and then project it on to a large screen. Your actor can prance around in front of this projected image to his heart’s content, allowing you to capture his every antic in-camera. You can even do clever things like having your actor walk on a treadmill while the background pans behind him, to create a moving camera composite. On the downside, you’ll have to deal with a hotspot in the centre of the screen and fall-off at the edges, not to mention a nasty increase in grain and contrast in the reprojected footage.

A typical Dynamation setup – original image available at http://www.rayharryhausen.com

One way or another, reprojecting film lies at the heart of almost all subsequent compositing developments during the photochemical age. That’s just as well, because if I’m to wrap up this whistlestop tour before bedtime I’m going to have to speed things up a little. Suffice it to say that the next sixty years saw the refinement of both rear and front projection techniques, and the development of the optical printer (which is really just a highly engineered way of rephotographing previously shot material), not to mention any number of proprietary processes ranging from Dynamation to Zoptic, along with curiosities like Introvision, which brought old-school theatrical effects into the mix in the form of the beam splitter (a half-silvered mirror positioned at 45° in front of the camera).

Composite shot from “Return of the Jedi” (1983). Inset shows a reconstruction of the holdout and cover mattes used to create the shot. Screen image copyright © Lucasfilm Ltd.

In 1977, Star Wars combined the almost defunct blue screen with a computerised motion-controlled camera and a precisely machined optical printer, turning the craft of compositing into a laser-accurate art form. At the same time, people like Doug Trumbull were keeping the spirit of the early pioneers alive by pushing latent image matte techniques to the limit in films such as Close Encounters of the Third Kind and Blade Runner. To get clued up on how a typical composite was put together at ILM back in the 1980s, you can’t do better than this BBC Horizon documentary from the period, which breaks down classic shots from Return of the Jedi and Indiana Jones and the Temple of Doom.

Then the world went digital, and everything changed.

Or did it?

The modern compositor has access to a vast range of computer software options. But the techniques, and the mind-set behind them, aren’t really anything new. In a simple After Effects setup, for example, you might build a shot using flat layers stacked in a virtual 3D space. That’s little different to what Disney artists were doing with a multiplane camera back in the 1930s. Nuke may take things to a whole new level, but the single most significant advantage any digital tool has over its photochemical counterpart is that you can duplicate elements without any loss of quality. Everything else is down to the skill and imagination of the user.

What the digital tools do give you, however, is improved workflow and an extraordinary level of finesse. In the photochemical days, the best you could hope for was to hide the join by smearing a little Vaseline on the lens of the optical printer. Now, compositors wield lens flares and chromatic aberrations with casual abandon, feathering edges and flashing in atmospheric haze in order to blend hundreds, if not thousands, of elements into a seamless whole.

Deep composite shot from “Rise of the Planet of the Apes” (2011). Image copyright 20th Century Fox.

The new kid on the block is deep compositing, in which every pixel rendered for a visual effects element contains information not only about colour and opacity, but also depth. This crucial z-plane data enables compositors to layer up separate elements without having to worry about those pesky holdout mattes (yes, that term is still in common use, even after all these years); the depth information contained within each element determines which should appear in front of the next. For a crash course in deep compositing, check out this video from The Foundry in which Robin Hollander of Weta Digital talks about the Golden Gate Bridge sequence from Rise of the Planet of the Apes.
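Per pixel, a deep merge amounts to sorting samples by depth and compositing them back to front with the standard “over” operator. A simplified model (real deep images store many samples per pixel, as in OpenEXR 2.0; the function name here is mine):

```python
import numpy as np

def deep_merge(samples):
    """Merge the deep samples for a single pixel.

    samples: list of (depth, rgb, alpha) tuples for one pixel.
    Returns (rgb, alpha): each sample composited "over" the ones
    behind it, so no holdout mattes are ever needed.
    """
    rgb_out = np.zeros(3)
    alpha_out = 0.0
    # Sort far to near, then lay each sample over the result so far.
    for depth, rgb, alpha in sorted(samples, key=lambda s: -s[0]):
        rgb_out = np.asarray(rgb) * alpha + rgb_out * (1.0 - alpha)
        alpha_out = alpha + alpha_out * (1.0 - alpha)
    return rgb_out, alpha_out

# A red ape at depth 5 in front of a grey bridge girder at depth 20:
pixel = [(20.0, (0.5, 0.5, 0.5), 1.0), (5.0, (1.0, 0.0, 0.0), 1.0)]
rgb, a = deep_merge(pixel)
print(rgb)  # the nearer, opaque red sample wins: [1. 0. 0.]
```

Because ordering comes from the depth data rather than from hand-drawn mattes, elements rendered months apart can still interleave correctly.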

If the history of compositing were a shot in a movie, it would be a helluva complex one, packed tight with elements all fighting for their place in an integrated whole. This potted history has been necessarily brief, so feel free to wade in and tell me about all the pieces I’ve missed, like the smoke and mirrors of the antiquated Schüfftan Process, or the meticulous work of the rotoscope artist which, to this day, conjures travelling mattes from shots where you’d swear the edges are nowhere to be seen.

Compositing is one giant jigsaw puzzle. Where does your piece fit in?