Here’s a Happy Thanksgiving treat – a sneak preview of the front cover of Cinefex 136! On the presses right now, our latest issue features breathtaking coverage of Gravity, Thor: The Dark World, Rush and Carrie. Sorry, the issue won’t be out until December 16, but in the meantime we hope you enjoy this peek at our awesome Gravity cover, created by Framestore.
Bullet time famously hit the big screen in The Matrix. It was the technique behind the movie’s iconic slow motion shots, like the famous one where the camera spins around Keanu Reeves and Hugo Weaving as they grapple while floating in mid-air.
This dramatic and stylised effect was created using an array of over one hundred still cameras mounted on a curved rail surrounding the actors, both of whom were suspended on wires. Firing all the cameras at once froze a single instant, while firing them in rapid, fractionally staggered sequence produced extreme slow motion; either way, each frame in the resulting sequence was taken from a slightly different viewpoint. The sequence was then composited over a separately created background.
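The arithmetic behind the rig is simple enough to sketch in a few lines of code. This is a hypothetical illustration of the trigger-timing idea only, not the actual control logic of the Matrix rig: fire every camera at the same instant and time freezes; spread the triggers across a fraction of a second and you get extreme slow motion combined with a moving viewpoint.

```python
# Hypothetical sketch of bullet-time trigger timing (an illustration,
# not the real rig's controller). With N still cameras along a rail:
#   - firing all at t = 0 freezes the moment completely;
#   - staggering the triggers spreads a sliver of real time across the
#     camera sweep, giving slow motion plus apparent camera movement.

def trigger_times(num_cameras, real_duration_s):
    """Return the firing time (in seconds) for each camera on the rail.

    real_duration_s is how much real-world action the sweep spans:
    0.0 -> frozen time; larger values -> faster apparent motion.
    """
    if num_cameras < 2 or real_duration_s == 0.0:
        return [0.0] * num_cameras
    step = real_duration_s / (num_cameras - 1)
    return [i * step for i in range(num_cameras)]

# 120 cameras sweeping across half a second of real action, played back
# at 24 fps: half a second of movement becomes a 5-second shot.
times = trigger_times(120, 0.5)
playback_seconds = 120 / 24
print(round(times[-1], 6), playback_seconds)
```

Played back one frame per camera, the sweep itself becomes the "camera move" — which is why no physical camera operator could ever replicate it.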
Bullet time is the epitome of the “magic camera” move – that moment in a film where the director breaks the laws of physics by putting the camera through a ballet routine even Nijinsky would be hard-pushed to replicate. In The Matrix, the bullet time moves are justified by the central story point that Neo and his nemesis, Agent Smith, are capable of, well, breaking the laws of physics. So the extreme camera moves – as well as looking awesome – are fair.
Whether or not a camera move is fair has always been open to debate. Take the 1941 classic Citizen Kane. You won’t see Orson Welles dodging bullets in ultra slo-mo, but you will find a wealth of magic camera moments. At the start of the “El Rancho” scene, for instance, the camera makes an impossible flight over a (miniature) rooftop, passing right through the middle of a neon sign before dropping (by way of a nifty lap dissolve) through the solid glass of a rain-spattered window to enter the nightclub below. Is that move motivated, or is it just Welles showing off? You tell me.
Broadly speaking, the rules of motivation (as much as there are any rules) say that if you’re going to move the camera, you’d better have a damn good reason. That reason might be straightforward: pan left to follow your hero down the street. It might be dramatic: go hand-held for gritty running action when said hero takes to his heels. It could be psychological: the instant Chief Brody spots the giant shark chomping down on poor little Alex Kintner … unleash the dolly zoom! Bottom line is: if the camera move takes you out of the movie, you made the wrong choice.
For a long time, the rules were based on what a real camera can or can’t do. That’s because, until recently, all films were shot with real cameras. However, visual and special effects techniques have now reached the point where the camera – or its virtual counterpart – can go anywhere and do anything (watch the first seventeen minutes of Alfonso Cuarón’s Gravity and tell me it ain’t so).
Now, camera movement is clearly an integral part of the filmmaking process. Which leads us to an interesting question, namely: “To what degree do developments in visual effects influence the fundamental language of cinema?”
Well, I’d argue that visual effects have always advanced both the art and craft of filmmaking. As long as there have been directors asking, “Can we do this?” there have been effects artists answering, “Let me try something.” I’d also argue that The Matrix – in particular, bullet time – marks a turning point, a key moment in time where a camera move was employed that not only advanced the discipline and enhanced the story, but also captured the audience’s imagination.
That last part is the crucial bit. Classy camera moves might impress the film buffs, but most audiences don’t even notice them. When the Wachowski Brothers hurled it on to the big screen in The Matrix, bullet time proved itself as a visual effect that could sit centre-stage without disrupting the flow of the movie. Motivated, cool and clever, bullet time still stands as my ultimate magic camera move.
So now it’s over to you. Do you have a favourite magic camera moment?
President Obama is due to visit the Glendale campus of DreamWorks Animation tomorrow – November 26 – where he is expected to deliver a speech hailing filmmaking as an American success story. Visual effects artists, frustrated by the current state of the industry, are planning to stage a protest at the event. Protestors will wear green T-shirts, the “greenscreen” colour representing what a film looks like without visual effects.
A key issue for the protestors is subsidies. In recent years, economic incentives have seen not only artists but whole facilities moving around the world in search of favourable conditions in which to work. Currently, companies in Canada and the United Kingdom, for example, are enjoying a boom period while their counterparts in California are struggling to compete – or indeed survive. For some visual effects artists, a nomadic existence has become a way of life.
We at Cinefex believe in fair play. That’s why our magazine articles never pass judgement on a film, but simply report on the hard work that went into putting it on the screen. In that spirit, we wish our friends in the visual effects industry worldwide every success in their continued mission to create a level playing field. If everyone competes by the same rules, we all get to enjoy a better game.
Spectacular science fiction movies. They’re always big studio productions, right? As for the visual effects, they’re always provided by outside vendors – companies hired in by the producers to make movie magic. That’s how it’s always done.
Ender’s Game is different. Despite having a budget widely reported to be $100 million, it is in fact an independent film, co-produced by Oddlot Entertainment, Summit/Lionsgate and Digital Domain 3.0. Famous as a leading visual effects facility, Digital Domain is not known for film production. However, with Ender’s Game, the company known familiarly as “DD” has broken the mould by acting not as a vendor on the show, but as a partner.
I spoke recently to Daniel Seah, CEO of Digital Domain, and Matthew Butler, DD’s overall VFX supervisor on Ender’s Game. I was particularly interested to learn how DD’s unique role on the film impacted on the creative process …
Daniel, as I understand it, Digital Domain has an equity stake in Ender’s Game, and made a further contribution in the form of visual effects services.
DS: Yes. Digital Domain’s contribution was both financial and creative, which is why it was such an interesting partnership. With a VFX company as an equity partner and co-producer, the production was able to take advantage of our ability to visually develop the film in the earliest stages to support foreign pre-sales and to produce VFX at cost.
Do you view this as a one-off experiment for Digital Domain? Or is it part of an on-going strategy to explore new business models? And how important is it to future plans that Ender’s Game is a success?
DS: Ender’s Game was an investment in Digital Domain’s future. It allowed the company to expand its role, to participate in a meaningful way, and to start building a track record in co-production. Handled strategically and in a balanced way, we see co-production as an important part of Digital Domain’s more diverse business strategy moving forward.
Matthew, you were overall VFX supervisor on Ender’s Game. What difference did it make to you and your team knowing you were working on an in-house project? For example, did you enjoy a more creative role? Did you feel more invested in the project as a whole? Or was the pressure on to “get it right”?
MB: All of the above! Working on a feature that we were part-owner of allowed us to take a more active creative role, much earlier in the process than usual. Part of the reason for that is that Gavin Hood is wonderfully collaborative. He was the screenwriter as well as the director. Very early on, he and I went over to DD’s local watering hole here in Venice, and he opened up his laptop with the script on it and we started talking and designing right then and there. It was exciting for me, and it’s a great way to get the most for your money. If you start world-building during the script stage, it also helps to save time. It lets everyone involved know the cost of things as you go – what will be painful and what will not.
How about budget management? Did you get more – or less – bang for your buck working this way?
MB: Definitely more. Being a co-producer means that you have skin in the game, so it’s risky, but it also protects you from the system that’s been in place for years. Typically you assess the level of work the best you can, create a budget, plan, and follow through that with the producers involved. When things change – and they inevitably do – as a service company you put your hand up and say, “Hang on.” Then the money people get together and you start the change order process. As a producer, it’s now a closed system. You hold hands with your partners and say, “We’re going to make this movie for this much money and time. These are our assets.” It works well if everyone plays ball. But it was also tricky. You need to be fluid and let things change. It’s an artistic environment. But there are ramifications to that. We got it done, and in the end we were much more efficient and ended up with more on the screen.
Did the financial risk to Digital Domain influence your readiness to take risks creatively? Were you still able to put resources into R&D, experimenting with new techniques and so on?
MB: Every new project means new challenges, new approaches and breaking records in one way or another – that’s true for every visual effects company on every film, and Ender’s Game was no different for us. We had huge simulations – in one shot 27 billion polygons – and we had brilliant technologists figuring out how to render those scenes without causing a blackout in the studio. Our role in co-financing the movie didn’t change that in any way. You still have to solve problems. It did help us to make choices – like, for instance, while we did create fully synthetic characters for the entire cast in the zero-G battle room sequences, we limited that to simple performances – no dialogue. Having CG actors delivering lines was not a way we wanted to go. Shooting as much as we could live, then using CG and developing new tools where it was necessary to achieve correct zero-G physics was our approach. Again, having that mindset of “these are our assets” and sticking to our plans made that successful.
The production of Ender’s Game spanned a difficult period for Digital Domain, involving bankruptcy protection and ownership changes. Daniel, was it difficult to keep the project on track when you took over as CEO?
DS: When I started as CEO in August, the VFX for Ender’s Game were nearly finished, so I don’t believe that management change disrupted the work at all. VFX Supervisor Matthew Butler and the team were laser-focused. I have so much respect for the commitment and abilities of these artists – they went through a lot and delivered amazing work.
Matthew, how easy was it for you and your team to maintain focus on the project through this difficult period?
MB: Honestly, when we’re on a film, crews just put the blinders on and focus. We did that with Ender’s Game, and the whole studio was supportive of keeping teams focused on their shows during that period. We are all really proud that it didn’t impact the quality or schedule of our project.
The visual effects industry as a whole is experiencing turbulent times. Do VFX companies need to explore new revenue streams in order to survive? And, perhaps, to become empowered?
DS: I can’t speak for any companies other than Digital Domain, but we do believe that a diverse business strategy is important, while still understanding and focusing on what is core to the company – in our case, visual effects for feature films and commercials. Approaching co-productions strategically is a path that we believe will benefit our company as we move forward.
MB: I think every company needs to figure out what’s right for its business. Having a strategy that includes diverse lines of business is something Digital Domain believes will be successful for us. A balance of feature film VFX, commercials and games work, co-productions and digital human projects is what Digital Domain has outlined as our path.
Is the “VFX company as producer” model scalable, or is it only big players like Digital Domain who can afford to take the risk?
DS: We at Digital Domain are fortunate in that our parent company is very supportive of our selective involvement in co-productions. Again, I can’t speak for other companies, but having strong financial backing for a co-production strategy is an important consideration.
Now that Ender’s Game is playing in theatres, does Digital Domain’s production role give you more of a sense of ownership? Is this your movie?
MB: Absolutely! We are following the numbers as closely as anyone. And the way we’re reading reviews is different too – we’re just as interested in how the story and actors are received as we are about how the visual effects are judged.
Daniel and Matthew – thank you both for talking about Ender’s Game.
Ender’s Game features a total of 941 visual effects shots, of which Digital Domain delivered just over 700. The remaining shots were delivered by Vectorsoul and Post23 (Mind Game), Method Studios, The Embassy, Comen VFX, G Creative Productions Inc (motion graphics) and Goldtooth Creative Agency Inc (motion graphics).
All images used with permission. Motion Picture Artwork ™ & © Summit Entertainment, LLC. All rights reserved.
Here’s how the human eye works. Light enters a hole in the front, passes through a lens and is focused on a light-sensitive surface at the back – the retina. A camera works in much the same way, but instead of a retina it uses either a charge-coupled device or a strip of film.
Unlike a camera, however, the human eye has a problem. There’s a place on the retina where the optic nerve is connected (the bit that carries all the information to the brain). Where there’s an optic nerve, there’s no room for light sensors. The result is something that all sighted people have, but which few of us are consciously aware of: a blind spot.
But that’s okay. If we don’t notice our blind spot, it must be pretty small, right?
Wrong. Although the size of the human blind spot varies from person to person, there’s evidence to suggest it can be up to 70 times the apparent size of the full moon as it appears in the night sky. If you find that hard to visualise, try this bombshell: you and I are both blind across an area of our vision roughly the size of a DVD held at arm’s length.
I don’t know about you, but I find that mildly freaky.
Luckily for us, vision doesn’t happen in the eye. It happens in the mind. The blind spot might mean the brain gets a picture with a hole in it, but it turns out the brain is very good at filling in the missing detail.
Visual effects is all about filling in missing detail too. Like patching into a scene something that was never there in the first place – say, a ravening monster or a crashing aircraft. Possibly both. It might involve extending an existing background to hide the rig used to hold an actor off the ground. In other words, visual effects performs exactly the same sleight of hand as the human brain: it fools you into seeing something that isn’t really there.
Put another way, we all carry in our heads our very own personal visual effects supervisor, an unsung hero who works constantly to bring inadequate stage footage up to the standard required for theatrical release.
So how exactly does the brain perform its magic? One of the tricks it uses is to copy and paste information from the left eye to the right (and vice versa). Then there’s edge interpolation, whereby the brain assumes that any line passing through the blind spot does so unbroken. Also, the human eye is in constant motion. By scanning a scene, it ensures the sighted part of the retina gets to see everything at least some of the time.
The way moviegoers move their eyes is of particular interest to filmmakers, especially as eye tracking technology becomes more and more able to deliver meaningful data. In the future, eye tracking may be used commonly in test screenings to determine where the attention of audience members is directed on a shot-by-shot basis. Even before that stage, eye tracking has applications in the editing suite, such as quantifying how comfortable an assembly is to view. And let’s not forget product placement. Where’s the most effective place to put a can of soda in this shot? The eye tracking data will tell you.
Eye tracking may not yet be a standard weapon in the filmmaker’s arsenal, but advertisers are already using the product placement trick in a variety of media. In today’s movie theatre, it’s just the audience watching the screen. Soon, the screen may be watching them back.
Hmm. Maybe there’s a way to profit from all this. By my crude reckoning, blind spots could obscure up to 5% of the average moviegoer’s field of vision. So why not just leave 5% of the screen blank? Plus, if the eyes of the audience are wandering all over the place, why not dial down the level of detail in the parts of the frame you know they’ll be ignoring? Imagine the benefits of saying to your VFX supe, “Don’t bother feeding this part of the scene into your deep compositing pipeline. Those poor schmucks in the audience won’t see it anyway.” Just imagine! Shorter rendering times! Cost savings! If you can’t see the advantages you must be blind!
But then, in a sense, we’re all blind, aren’t we?
To find out, let’s wind the clock back to 1933 and the release of “King Kong”, a film that inspired not only the audiences of the day, but an entire generation of movie fans and professionals alike.
“King Kong” is packed to the rafters with innovative visual effects, including stop-motion animation, miniature sets, glass paintings, travelling mattes and full-scale practical creatures. The methods by which these effects were achieved are now well-known – indeed, they’re practically part of VFX lore.
But what was it like going behind the scenes back in 1933? Was there any “making of” information available back then? Was anyone even interested? To find out, I’ve delved into some of the magazines from the period.
One of the earliest references I found to the visual effects of “King Kong” was in the 6th Feb 1933 edition of “The Film Daily”, where a small paragraph in the “A Little from ‘Lots'” section announces, “Willis H. O’Brien is completing his work as chief technician on ‘King Kong’, for which RKO has high hopes.”
The following month, in a multi-page promotional feature timed to coincide with the movie’s release, “The Hollywood Reporter” also mentions O’Brien, along with artists Mario Larrinaga and Byron Crabbe. The text includes a rousing but wholly uninformative: “‘King Kong’ is the most sensational exhibition of camera tricks in the history of motion pictures.”
“King Kong” received plenty of attention in the press following its release, even featuring in such celebrity-obsessed monthlies as “Photoplay”. The April 10th 1933 edition contains a typically cheesy publicity still. The original caption reads: “She’s no woodland nymph, nor is he a satyr. Fay Wray, heroine, and Merian Cooper, producer … are measuring the hand print this monster ape leaves.”
Hmm. We’re not getting a lot of detail about how they put the big ape on the screen. Time to dig a little deeper. How about a contemporary review of the film?
In the May 1933 edition of “Motion Picture”, an enthusiastic reporter says, “How many kinds of trickery were used … we do not know, but after the first glimpse of King Kong … one’s imagination becomes adjusted to any slight jerkiness. You will never know the extent of what the movies can do till you’ve seen this tale.”
While it’s interesting to note that, even in those early days, one reviewer at least found stop-motion animation a little “jerky”, it’s still not much help. Perhaps “The Film Daily” can do better. In the 31st May 1933 edition, in a column entitled “Features Influenced by Cartoons”, Hugh Harman of Harman-Ising Melodies is quoted as saying, “Animated cartoons, by their increasing cleverness and popularity, are having a stimulating effect on features, influencing more imagination and novelty. ‘King Kong’ is an example of the feature possibilities suggested by the cartoons.”
“King Kong” a cartoon? It’s another interesting nugget shedding light on the attitudes of the age. But it’s not the VFX gold we’re panning for. Where the heck is the motherlode?
Wait a second. What’s this? In May 1933, “Movie Classic” ran an article entitled: “King Kong – How Did They Make It?” Now that’s more like it!
The article begins in cautious fashion, advising, “Under ordinary circumstances … it is not our desire to strip the films of their glamour. If ‘King Kong’ were other than … an obvious excursion into fantasy, we would not attempt to reveal the ‘inside story’ of its production.” The text goes on to reveal that the various creatures’ “limbs, heads and necks moved on tiny ball bearings,” and says “Kong himself, was constructed upon the skeleton of an ape, with each measurement greatly enlarged.” It also claims, “There were many dozens of Kongs (seventy-four, to be precise), all exactly alike, but of different sizes.”
Eventually we get an actual breakdown, describing the shot in which Ann Darrow shelters in a tree while Kong battles the T-Rex in the background: “All that was photographed … was the girl’s white figure perched among the branches. The background was a solid black velvet curtain. Then it was the job of the composite technicians to strip in the action of the fight – which, incidentally had been shot in miniature more than eight months previously.”
There’s even a decent description of the stop-motion process: “For each frame, O’Brien moved portions of the ape’s jaw a fraction of an inch and after photographing the position, moved the jaw again.” However, when interviewee Merian Cooper is pressed for more detail about a later shot where Kong removes Darrow’s clothing, Kong’s creator clams up. “I can’t tell you how this was done,” he says, “for the secret is not mine to divulge. It belongs to Willis O’Brien and his splendid technical crew.”
The article concludes, “There are many details about the production of ‘King Kong’ that are not available at present for publication … For whenever you ask Merian C. Cooper or his associates a question that trespasses on their secret processes, they invariably reply, ‘It was all done with mirrors.'”
Okay, so it’s hardly exhaustive. What’s also clear is that Cooper was keen to maintain the mystique of the movies. Indeed, there’s evidence to suggest he actively spread misinformation about O’Brien’s visual effects. How else do you explain the final article I uncovered in my journey through the Hollywood archives?
The article in question appeared in a 1933 edition of “Screen Book”. It’s a double-page spread filled with diagrams and descriptions purporting to explain exactly “How King Kong Was Filmed”.
My favourite picture demonstrates “How 50-foot Ape is shown climbing Empire State Building.” A detailed illustration shows a man in a Kong costume crawling on hands and knees along a model building facade laid flat on the studio floor, with the camera angle cheated to create the illusion of a vertical ascent.
It looks plausible enough, and no doubt convinced the magazine’s original readers. However, as Robert Ripley might say, the choice is yours whether to Believe It Or Not. I know what I think of the drawing. How about you – are you convinced?
If you want to explore how the visual effects of “King Kong” were really done, be thankful you live in the 21st century. Good places to start your research are Don Shay’s biographical article “Willis O’Brien – Creator of the Impossible” in Cinefex #7 and Goldner & Turner’s “The Making of King Kong”. If you want to follow me down the rabbit hole of archived movie periodicals, I recommend you start your adventure with a visit to the excellent Media History Digital Library.
Framestore have brought Audrey Hepburn back to life.
The iconic film star was resurrected in “Chauffeur”, a TV commercial for Galaxy chocolate, directed by Rattling Stick’s Daniel Kleinman, which aired in the UK earlier this year. To perform the miracle, Framestore created a 3D model of Hepburn and used it to replace the head of a lookalike actress. The CG Hepburn was built using film and photographic reference, and had in excess of 70 separate muscle movements. The work is stunning at every level – modelling, tracking, rendering, comping – and undoubtedly represents what Framestore VFX Supervisor William Bartlett describes as “the edge of what’s possible”.
As we all know, what’s barely possible today will become commonplace tomorrow. The convincing digital recreation of a human being on screen may still be difficult, but visual effects artists are steadily building a bridge across the “uncanny valley” – that hard-to-define region in which an animated character looks almost perfect, yet still creepily unreal.
The bridge is nearly complete.
With that in mind, here’s a short list of films that might be showing at your local movie theatre just a few years from now …
“Indiana Jones and the Resurrection Engine”
The new Indiana Jones adventure sees everyone’s favourite archaeologist caught up in a race against time to stop the Nazis getting their hands on a mysterious ancient artefact (again). The story takes place in 1937, just a year after the events of “Raiders of the Lost Ark”, and – wonder of wonders – Indy looks just as young and fit as he ever did. How has this been achieved? Did they find a younger actor to play the iconic role? No. Did one sip from the Holy Grail really give Harrison Ford that much of a boost? No again. The septuagenarian Ford acted out his scenes on a motion capture stage, after which his performance was used to drive a digital model of his own younger self. Thanks to a patented Arthritic-2-Athletic physical enhancement algorithm, Ford was even able to perform all his own stunts!
“Forrest Gump” – starring James Stewart
People have long been comparing Tom Hanks to Jimmy Stewart. Now’s your chance to see the two actors go head to head. Innovative digital techniques have enabled visual effects wizards to replace Hanks frame by frame with an accurate digital double of Stewart. Every aspect of Hanks’s performance has been mapped to a library of facial expressions collated from Stewart’s entire film catalogue. The Blu-ray edition will include a special “Tom/Jim” menu control, enabling you to switch instantly from one actor to the other. Casting a role is now like a box of chocolates: just take your pick!
“It’s a Wonderful Life” – starring Tom Hanks
After her Academy Award-winning turn as the UK’s first female Prime Minister in “The Iron Lady”, Meryl Streep is taking on another monumental historical role. In Steven Spielberg’s latest biopic, she’ll be playing America’s first President, George Washington. Streep’s performance will be motion-captured and mapped on to a 100% accurate CG model of Washington recreated using paintings and engravings from the period. Forget prosthetics. Forget gender. Simply applaud the performance of a great actress translated on to the authentic features of a man who died over 200 years ago! (Also featuring John Goodman as Martha Washington.)
Far-fetched? A couple of years ago I might have said “yes”. Not now. The only question remaining is the one posed by Jeff Goldblum’s character Ian Malcolm in “Jurassic Park”. When challenging John Hammond’s successful resurrection of extinct dinosaur species, Malcolm says:
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
Thankfully, digital doubles of deceased historical figures are unlikely to tear down the fences and go on the rampage. And, unlike a certain carnivorous plant, Framestore’s very own “Audrey 2” probably won’t develop a taste for human flesh.
All the same … should we?
- “Chauffeur” Galaxy advert (Framestore website)
- The making of “Chauffeur” (requires Facebook login)
- Rattling Stick
Note: Due to international copyright restrictions, video in the above links may play only in certain territories.
It’s nice to begin with an easy one. Everyone knows what animation is. It’s (1) drawing a picture (2) putting it under a camera (3) exposing a single frame of film (4) drawing a slightly different picture (5) putting it under the camera (6) repeating until (a) your fingers bleed or (b) your eyes fall out.
Except that’s not everything …
Sometimes animation means posing puppets and moving them incrementally in the time-honoured tradition of stop-motion. If you want to include replacement animation, it means swapping fractionally different sculptures one after another to create the illusion of fluid change.
Except it’s not like that any more …
The modern animation toolkit contains IK handles, blend trees and all that lovely data from the motion capture volume. Animation is no longer about a series of discrete poses but the infinitely editable trace of an object as it moves along a three-dimensional path. In fact, the whole concept of the individual movie frame might soon be a thing of the past if frameless rendering ever takes hold.
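To make that last point concrete, here is a toy sketch – my own illustration, not code from any production animation package – of the shift the paragraph describes. Instead of storing one pose per frame, the modern approach stores a handful of sparse keys on a curve and samples that curve at any time you like, which is exactly why the individual frame stops being fundamental:

```python
# A toy animation curve: sparse (time, value) keys, sampled continuously.
# Not production code -- just an illustration of keys-on-a-curve versus
# one-pose-per-frame animation.

def sample(keys, t):
    """Linearly interpolate a value from (time, value) keys at time t."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            blend = (t - t0) / (t1 - t0)
            return v0 + blend * (v1 - v0)

# Two keys are enough to render at 24 fps, 48 fps, or any rate at all.
keys = [(0.0, 0.0), (1.0, 10.0)]
print(sample(keys, 0.5))   # 5.0
```

Production packages use smoother spline interpolation rather than straight lines, but the principle is the same: the animator edits the curve, and the frames are just samples taken from it.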
Okay. So there are lots of different animation techniques out there. Some new, some old. Fundamentally, however, animation is concerned with just one thing, isn’t it?
Except it isn’t …
To explain: the word “animation” comes from the Latin “anima”, meaning “breath” or “spirit”. So “animate” means “give life to”. An animator’s objective isn’t just to move things from one side of the screen to the other. It’s to breathe life into them. When you get right down to it, all animators are actors.
Except they’re not …
You might need an animator who can act if you want a couple of grumpy trolls to start a fist fight. But animation isn’t just about character work. What about all those tireless effects animators labouring to generate fireballs, fountains and all kinds of bad weather? Effects animation doesn’t require personality. It’s just a bunch of dumb objects obeying the laws of physics, right?
Except it’s not …
Everything has character. The thing you’re animating might be a thinking, breathing creature with complex motivations and a very large axe, or it might be the spectacular plume of lava thrown up after yet another inconvenient meteor has struck a distressingly active volcano improbably laced with high explosives. It doesn’t matter. Both have what really lies at the heart of all animation: soul.
Heart. Soul. Breathing life. Surely that’s something we can all agree on, isn’t it?
Except it isn’t …
What about that motion capture volume we mentioned earlier? When it comes to mo-cap, it isn’t the animator imbuing the character with spirit, it’s the actor. You know, the poor sap wearing the dot-festooned leotard. All the animator has to do is make a few tweaks to their digitally recorded performance.
Rats. I really thought this was going to be easy. Every time you think you’ve got a grip on animation, it slips right through your fingers. The only way to resolve this is to go back to the beginning, to the first full-length cel-animated feature: Snow White.
According to an article in the January 1938 edition of Popular Science Monthly, the Disney artists who worked on Snow White created “more than 1,500,000 individual pen-and-ink drawings and water color paintings”. The article goes on to say, “Since this cartoon required an average of twenty-two individual painted cels for each foot of completed picture, 166,352 finished paintings were exposed to the camera.”
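Those 1938 figures hold up to a quick sanity check. On 35mm sound film there are 16 frames to the foot, projected at 24 frames per second (the 22 cels per foot exceeds 16 frames per foot because most frames stacked more than one painted cel layer):

```python
# Sanity-checking the 1938 Popular Science figures: 166,352 painted
# cels at 22 cels per foot of finished film, on 35mm stock
# (16 frames per foot, projected at 24 frames per second).

cels = 166_352
cels_per_foot = 22
frames_per_foot = 16
fps = 24

feet = cels / cels_per_foot                 # ~7,561 feet of film
minutes = feet * frames_per_foot / fps / 60
print(round(minutes))                       # prints 84
```

Eighty-four minutes – pleasingly close to Snow White’s 83-minute running time, so the magazine’s numbers are internally consistent.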
What it boils down to is that animation is a discipline that demands painstaking craftsmanship and one heck of a lot of patience. And you can take that one to the bank.
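Incidentally, those Popular Science figures hold up to a quick back-of-envelope check. Assuming standard 35mm sound film (16 frames to the foot, projected at 24 frames per second), the quoted cel count works out to roughly the running time of Snow White:

```python
# Sanity check on Popular Science's 1938 Snow White figures.
# Assumption: standard 35mm sound film, 16 frames per foot, 24 fps.
paintings = 166_352   # finished cels photographed, per the article
cels_per_foot = 22    # average painted cels per foot of completed picture
frames_per_foot = 16  # 35mm film: 16 frames to the foot
fps = 24              # sound-film projection speed

feet = paintings / cels_per_foot             # feet of completed picture
minutes = feet * frames_per_foot / fps / 60  # running time in minutes

print(round(feet), round(minutes))  # → 7561 84
```

About 84 minutes — a whisker over Snow White’s actual 83-minute running time, which suggests the magazine’s sums were honest.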
Alfonso Cuarón’s Gravity runs about eight minutes longer than Disney’s Snow White. I imagine both films sent a similar number of animators to the medic with bleeding fingers and throbbing eyeballs. But is this year’s hyper-real space drama as much an animated feature as its venerable cartoon predecessor? Or is it something completely different? Perhaps, even, something entirely new?
“A” is for “Animation”. As for what “animation” really means … do you know?
It wasn’t always called visual effects. Back in the early days, the techniques used by film pioneer Georges Méliès to create his cinematic fantasies were so closely tied to their theatrical roots that they were inevitably referred to as “tricks” or “illusions”. When audiences saw them, they gasped. These weren’t visual effects. They were magic.
According to the Academy of Motion Picture Arts and Sciences, the term “special effects” first appeared in 1926, in the credits of What Price Glory? Mind you, it took a while to catch on: in 1933, Willis O’Brien’s screen credit on King Kong was the rather prosaic “Chief Technician”.
Once the term became generally accepted, the Academy started dishing out Oscars for Special Effects to the top practitioners in the field. But here’s a thing: the original award was a portmanteau affair covering both visual and sound effects. Winners of this combined award included Mighty Joe Young in 1949, 20,000 Leagues Under the Sea in 1954 and Ben-Hur in 1959.
This state of affairs lasted all the way up to 1962, at which point sound and vision finally parted company. For the next ten years, worthy artists received golden statuettes in a whole new category: “Special Visual Effects”. Meanwhile, on the shop floor, there was a growing distinction between visual effects (generally created in post-production) and special effects (practical gags that happen in front of the camera).
Confused? Hang in there.
In 1972, with the big studios shutting down their in-house effects shops, the Academy’s Special Visual Effects category was scrapped. For five years, the discipline was paid lip service via catch-all Special Achievement Awards. Then, in 1977, Star Wars exploded onto our screens, rebooting the whole industry and winning the very first Oscar for what had finally become plain old “Visual Effects”.
For better or worse, the term “visual effects” is the one that’s stuck to this day. But is it any good? Are there any visual effects supervisors out there who would rather have fewer syllables in their job title? Granted, you can shorten the descriptor to VFX, but acronyms are for multinational corporations, not dedicated creative innovators.
So what’s the alternative? If modern effects are all about creating physically plausible models of the real world, why not call the whole business “simulation” instead? Hmm, then all the people working in the business would be simulators. Sounds like something you’d ride at the fair.
I have a soft spot for an expression I stumbled over (where else?) in an old issue of Cinefex. In 1916, German film pioneer Paul Wegener envisioned “a new pictorial fantasy world” made possible by advanced imaging techniques. The term he coined for this vision was “optical lyric”.
Will the Academy ever give out an Award for Optical Lyrics? Probably not. But at least it’s a name that evokes the sense of wonder of those early days. It makes you think of Méliès. Just by saying it aloud, you could dust a movie set with magic.
In an industry that’s experienced more than its share of sea changes in recent years, maybe it’s time to open the debate about what label it carries. Is there anything in a name? I think so.
What should visual effects really be called?
According to Kim Libreri of Lucasfilm, the answer to that question is “yes.” As reported by The Inquirer, Libreri recently delivered a presentation to BAFTA’s Technology Strategy Board in which he outlined a possible future in which visual effects are added during a film’s production shoot. Motion capture technology, combined with high-speed game-engine rendering, will permit a director to generate, on set and in real-time, exactly what will appear on the screen in the movie theatre.
Will it happen? Almost certainly. In fact, it’s happening already. Take Roland Emmerich’s White House Down, which made innovative use of the new Ncam system. Mounted under the main camera, Ncam’s scanner first builds a 3D model of its surroundings. Backgrounds and assets created in previs can then be added on the fly, delivering a precomposed image straight to the director’s monitor. Speaking in Cinefex 134, Emmerich said, “This kind of system is the future. Ncam is now my favourite tool.”
In the case of White House Down, the previs elements were ultimately replaced in post-production. As systems like Ncam become more sophisticated, the quality of the material captured on set will only improve. Sooner or later, it will be indistinguishable from anything that could be achieved in post-production.
So will we soon be conducting a post-mortem for post?
I don’t think so. I think directors are going to love the new process – why wouldn’t they? What they see is what they’ll get. But post-production is where films are built. That doesn’t just mean visual effects – what about the fundamental craft of editing? What about sound design and scoring? Like any creative work, films evolve in the making. If you ask me, post isn’t going away any time soon.
But I do think things are going to change. When this trend really does take hold, it’s going to shift the workload for visual effects facilities from post to pre-production. That means they’ll need longer lead times – easy to say, not so easy to implement. It may also mean visual effects become more closely integrated into the production process, and that has to be a good thing. Wouldn’t every VFX supervisor rather be a creative collaborator than a bolt-on extra?
Here’s a final thought for all those die-hards grumbling that this is just another nail in the coffin of old-school effects. Back in the earliest days of cinema, a matte artist would set up a big sheet of glass in front of the camera and paint the background right before the director’s eyes. Hanging miniatures – the original set extensions – were modelled in the workshop and mounted on rigs just inches from the lens. In both cases, when the director looked through his viewfinder, he saw the finished shot.
Once upon a time, everything was done in camera.
In the future, it looks like it will be again.