They say there’s nothing new under the sun. It isn’t true, of course – just ask a crowd of visual effects professionals how their tools and methods have changed recently and they’ll soon tell you how fast things are developing in the field.
So that’s precisely what we did. In one of our regular straw polls, we asked a range of people from the technical and creative sides of the industry this simple question:
What’s New in Visual Effects?
To find out what our contributors had to say, read on …
The Rise of the Machines
Visual effects have contributed to countless sci-fi films about the rise of the machines, from The Terminator to The Matrix and beyond. Now it seems life is imitating art, as automation begins to make its presence felt in the world of movie magic.
“In recent years, machine learning algorithms have already been surprising visual effects practitioners with the quality of the image results. The barriers have been practicalities of scale and control. The algo guesses correctly remarkably well if you can build a huge dataset and train it cleverly, but the sacrifice is artist control. These are black boxes, churning out the trained result, and they can’t be reasoned with. I think the forthcoming generation of algos will focus not on jumping directly to a result, but rather on building control assets along the way to enable artists to choreograph the performance. Development we’re seeing supports this as the way forward – producing an early version very quickly, with tweakable controls if it’s not exactly what you wanted off the bat. Early results are exciting.”
Rajat Roy, global technical supervisor, Prime Focus World
“We’re currently doing a lot of research on how to use machine learning to accelerate processes that take a long time to calculate, such as simulations, rendering and so on. We’re still experimenting, and have not yet applied it in production, but our goal is to get to a stage where it can be applicable to our pipeline. I think machine learning will truly revolutionize the way we work in the future.”
Mathieu Leclaire, head of R&D, Hybride Technologies
One clever algorithm that’s caught the eye of artists recently is the Smart Vector toolset in The Foundry’s Nuke. Artists work on just a single frame – painting out a rig, say, or adding a texture to a moving object – and the Smart Vector process uses per-pixel motion vectors to propagate that work automatically through the rest of the sequence.
“One of the best tools that appeared these last few months is the new Smart Vector feature in Nuke 10. It has saved us an incredible amount of time and resources in production and allowed us to work on dozens of tricky shots without having to rely extensively on matchmove geometry or camera tracking.”
Bruno Leveque, environment TD, Image Engine
“In the latest version of Nuke 10, Smart Vectors can be generated which analyze the pixel movement in the shot. This has meant the matte painting team has been able to pick up shots which would normally go to effects – blood and wound work on Logan, for example. Not only that, but for these shots we also don’t require a camera, as everything is frame-related. No distortion, no camera, no match move. No fuss!”
Conrad Allan, matte painter, Image Engine
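Conceptually, the propagation step chains frame-to-frame motion vectors to carry single-frame paint through a shot. The sketch below is a deliberately minimal illustration of that idea, not Nuke’s actual implementation (which uses higher-quality vectors and filtered sampling); the function names are ours.

```python
import numpy as np

def warp(image, vectors):
    """Warp a 2D image by a per-pixel displacement field of shape (H, W, 2).
    Nearest-neighbour sampling keeps the sketch short."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys - vectors[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - vectors[..., 0]).round().astype(int), 0, w - 1)
    return image[src_y, src_x]

def propagate(paint, vector_fields):
    """Carry paint done on frame 0 through the sequence by chaining
    per-frame motion vectors, Smart Vector-style."""
    frames = [paint]
    for v in vector_fields:
        frames.append(warp(frames[-1], v))
    return frames
```

In practice the vectors would come from an optical-flow analysis of the plate; the key point is that the artist touches one frame and the vectors do the rest.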
Visual effects has always relied on accurate keying – the separation of a foreground element from a background. A recent paper by Yağız Aksoy, Tunç Ozan Aydın, Marc Pollefeys and Aljoša Smolić of ETH Zürich and Disney Research Zürich describes a ‘color unmixing’ algorithm that addresses what the authors describe as ‘difficulties dealing with image regions where the colors of multiple objects mix, either due to motion blur, intricate object boundaries or color spill from greenscreen.’
“Keying is one of the oldest tricks in the business and not much has changed since the early days, so this paper from Disney may be a game-changer. Allowing high quality mattes to be extracted with very fast turnaround and virtually no artist time is an important milestone in our industry. No matter how good the CG you put in your shots, if it’s ruined by edge issues it will take the viewer out of the story.”
Lucien Fostier, compositing TD, Image Engine
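At its simplest, unmixing treats each pixel as a linear blend of layer colors and solves for the mixing weight. The toy sketch below assumes a single foreground and background color; the paper fits richer per-layer color models and handles spill, but the core idea of recovering alpha from color statistics is the same.

```python
import numpy as np

def unmix_alpha(pixels, fg_color, bg_color):
    """Toy color unmixing: model each pixel as
    pixel = alpha * fg + (1 - alpha) * bg
    and solve for alpha by projecting onto the fg-bg color axis."""
    fg = np.asarray(fg_color, dtype=float)
    bg = np.asarray(bg_color, dtype=float)
    axis = fg - bg
    alpha = (np.asarray(pixels, dtype=float) - bg) @ axis / (axis @ axis)
    return np.clip(alpha, 0.0, 1.0)
```

A pixel that is half red subject, half green screen comes out with alpha 0.5 – exactly the soft matte value a keyer wants in a motion-blurred edge.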
Heads in the Cloud
Nowadays we can store almost anything in the cloud – and frequently do, often without even realizing it. The cloud storage challenges for visual effects companies – not least huge file sizes and the need for high security – are considerable. But one by one they are being overcome.
“Expansion into the cloud looks to grow this year, with more visual effects providers adopting a global presence and the security implications having undergone further discussion with the major film studios. Initially the demand will be for extra on-demand computing resource, but the appeal of cloud storage is hard to ignore. This will inspire a lot of investment and brainstorming into pipeline designs or redesigns that model the cost of data transfers in sophisticated ways.”
Rob Pieke, head of software, MPC
Off-site capabilities open up new possibilities for everyone involved in visual effects, not least smaller companies eager to make the most of the current boom in television drama, and the associated requirement for high quality visual effects.
“For the past couple of years there has been much talk of mainstream VR, 6K, on-demand offsite resources – I’m not sure where stereo went – but technology aside, the growth of quality television drama has had a massive impact on the visual effects industry, and on the industry as a whole. The ambition and scale of shows for HBO, Netflix, Amazon and others has led to some interesting co-productions, and channels in the UK are upping their game – it’s very busy out there. On these larger projects, the availability of on-demand rendering and storage can really help the smaller companies compete with the bigger houses.”
Rob Harvey, owner/creative director, Lola Post Production Ltd
Once upon a time, an animated CG character was little more than a lump of virtual clay hanging off a few digital bones. These days, characters boast complex internal structures, with effects simulations driving nested layers of CG flesh to flex and jiggle just like real anatomy. Ziva Dynamics’ Ziva VFX is a plugin for Autodesk Maya that takes things a stage further.
“Most character tools for visual effects focus on what the outside of an object looks like. Ziva uses FEM (Finite Element Method) which accounts for the various layers of bone, fat, skin, and muscle inside a character. Until recently, tools that included a complete interior simulation were in the realm of automobile and aerospace companies. That technology is now finding its way into the fast-paced visual effects pipeline.”
Michael Levine, senior creature effects TD, Image Engine
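The FEM pattern is always the same: discretize the object into elements, assemble a global stiffness matrix, and solve for displacements. The 1D bar below is a deliberately tiny illustration of that assemble-and-solve step (character solvers like Ziva work on tetrahedral meshes with nonlinear soft-tissue materials, but the skeleton of the computation matches).

```python
import numpy as np

def bar_displacements(n_elems, length, stiffness, tip_force):
    """1D linear FEM: a bar fixed at one end, pulled at the other.
    Assembles per-element stiffness matrices into a global K and
    solves K u = f for nodal displacements u."""
    n_nodes = n_elems + 1
    k_e = stiffness * n_elems / length  # stiffness of each element
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elems):
        # each element couples its two end nodes
        K[e:e + 2, e:e + 2] += k_e * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_nodes)
    f[-1] = tip_force
    # clamp node 0 (the fixed end) and solve for the free nodes
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u
```

The same machinery, scaled up to millions of tetrahedra with muscle activation and contact terms, is what lets a volumetric solver make flesh slide and jiggle rather than just deforming a surface skin.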
Changing the Game
The line between the film and gaming industries continues to blur, with games beginning to achieve near-cinematic levels of fidelity, and film adaptations of popular games attracting enormous budgets and top-drawer filmmakers. As a discipline that readily spans both areas of entertainment, visual effects is eager to keep its feet in both camps.
“The convergence of games and visual effects has been slowly taking place for many years, but this year it feels like it’s reached a turning point, with the visual effects industry reaching for a lot of technology previously only used by games. This has largely been driven by the huge growth of demand for ‘game-like’ VR experiences – often to complement major film releases – but it has also highlighted the rich authoring tools that many visual effects productions could benefit from.”
Rob Pieke, head of software, MPC
Gaming engines traditionally rely on powerful GPUs – graphics processing units – to render fast-moving imagery in real time. Now, the speed benefit of GPU hardware is making its presence felt in the world of cinema visual effects.
“GPU accelerated rendering is finally production ready. We have finished numerous projects over the last year and all of them utilized the GPU for final image rendering. Until now, we have found that GPU renderers had a hardware render signature or feel to them. Recent advances now offer a solution that rivals software quality renderers while maintaining the hardware speed advantage. We are seeing speed increases of at least three or four times, which allows for more iterations, in turn giving us better images, happier clients and happier artists. The speed, combined with new in-house techniques, allows us to avoid the need to compromise on scene complexity. This freedom convinced the studio to fully integrate GPU throughout its entire pipeline for the upcoming feature film Colossal, from assets to simulation to final comp. We also recently completed a sequence for Journey To The West 2 which required a large creature simulation – previously, we would have been concerned that it would be impossible due to the traditional hardware limitations of previous GPU renderers, but issues like texture size limits are not a concern any more.”
Will Garrett, VFX supervisor, Intelligent Creatures
Casting the Net Wide
As many companies are discovering, visual effects has applications beyond just film and gaming. From virtual reality to art installations to theme parks and beyond, artists are constantly finding new ways to explore the boundaries of the business.
“Artes Mundi” image courtesy of the artist, Bedwyr Williams, Limoncello Gallery, and Bait Studio.
“Using visual effects outside of the norm has been something interesting for us in the past twelve months. Alongside our TV, film and advertising work we’ve taken on some contemporary arts projects which show how visual effects can be used in any visual medium. We worked on content for Cardiff Contemporary Visual Arts Festival, creating a fake meteor landing for Mark James Studio, which went viral. We also created a 4K, 20-minute matte painting for artist Bedwyr Williams as part of the Artes Mundi prize exhibition. It’s been good to look sideways at other types of content that VFX can play a key role in.”
Pete Rogers, visual effects producer, Bait Studio
“What makes the Artes Mundi project different to the typical type of visual effects work is that it was made to be viewed and experienced rather than just being part of a larger narrative. This kind of work is something that more and more visual effects studios are taking on with the recent developments in VR technology. It’s about creating an immersive environment where the viewer has more control.”
Llyr Williams, lead visual effects artist, Bait Studio
“Dream of Anhui” image courtesy of Tippett Studio.
“In the past few years we’ve seen a boom in new forms of media available for public consumption. In 2016, we worked on a huge 6K theme park ride called Dream of Anhui, completely in CG and rendered in Clarisse, that we directed and produced from start to finish. The finished renders had over 1,000 assets and some shots contained well over a trillion polygons, which really shows how things are changing to make incredible things possible. We even worked with the motion control company to sync our digital cameras to the movement of the seats.”
Niketa Roman, PR manager, Tippett Studio
“As we’ve branched into large scale environments – specifically for high resolution theme park rides – we’ve had to invest extensively in our production pipeline. Not only are we creating orders of magnitude more assets, we’re having to render more and more of them in a single pass at higher resolutions. A combination of new asset management systems, and the introduction of Clarisse has given us a workflow that scales above what would have been possible even a couple of years ago. On the capture front, we’re now using drones and photogrammetry to do large surveys of outdoor spaces – this has given us accuracy and information about locations which traditional photo scouts lacked. In terms of visualization, we aren’t just making images for flat screens. From 180 degree domes to bespoke horseshoe shaped screens, we use VR headsets to preview content and make editorial decisions.”
Alex Hessler, CG supervisor, Tippett Studio
Moore’s Law – the observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years – has recently been called into question. However, one thing remains certain – the speed and power of computers will continue to grow for the foreseeable future. While this steady advance can only benefit professionals working in the visual effects industry, one truth remains the same – a computer is only as powerful as the human being who uses it.
“For me, instead of using the word ‘new’ I prefer to summarize everything in power and speed – those are the keys. When I started with 3D software back in 1995 or 1996, to render a 320-pixel image with just a few geos was a nightmare. Nowadays I can create entire worlds, scatter millions of objects, render huge images with photorealistic quality, do photogrammetry with my phone, sculpt billions of polygons. So we have speed and power, but for sure there’s something that is not new – the magic behind this.”
Pablo Del Molino Izquierdo, matte painter, Image Engine
Please Can We Have …?
Visual effects technology and techniques may keep progressing, but there’s always room for more innovation. What better way to end this roundup than with a request for what may turn out to be next year’s “what’s new in visual effects?”
“There’s one thing that does need inventing – a harness for wire work that doesn’t make the actor resemble a bluebottle in a spider’s web. Invisible wires? Opposing magnets? A low-powered jet pack? Come on, guys – put those VR headsets down and do something useful. Oh, and a self-matting camera please. It’s the 21st century and we’re still drawing around things!”
Rob Harvey, owner/creative director, Lola Post Production Ltd
Thanks to all the visual effects professionals and companies who contributed to this article.
Special thanks to Jake Basford, Anne Tremblay, Sepi Motamedi, Che Spencer, Jonny Vale, Tony Bradley, Niketa Roman and Alexandra Coxon. “The Matrix” photograph copyright © by Warner Brothers and Village Roadshow Pictures.