A veteran of films including Avatar, District 9, and all three films in The Hobbit trilogy, Matt Aitken gave a presentation about his role as Weta Digital visual effects supervisor on Avengers: Infinity War at VIEW Conference 2018. Cinefex caught up with Matt at the event and quizzed him not only about the creation of Marvel’s tortured bad guy Thanos, but also the evolution of digital characters at Weta Digital over the years.
CINEFEX: Matt, we spoke to you earlier this year for our article on Avengers: Infinity War. Just give us a quick recap on the work that Weta Digital did for the film.
MATT AITKEN: We did everything on planet Titan, where Thanos goes to get the Time Stone from Doctor Strange. Along with Digital Domain, we created Thanos, this digital character who is really the protagonist of the movie. For Marvel, I think this was a bit of a leap, having a lead character who was entirely digital. We were all very aware that if Thanos didn’t work, then the film was going to fail.
CINEFEX: Thanos was played by Josh Brolin, of course.
MATT AITKEN: That’s right, and Josh took to the digital performance space like a duck to water. He did this fantastic reference performance for Thanos, and really seemed to enjoy it. Because, you know, Thanos is a complex character. He’s not just a roaring, screaming baddie – he’s actually motivated by what he thinks are good intentions. We really needed to be able to get into his head for the storytelling to work.
CINEFEX: Weta Digital has this great heritage of doing digital characters. Did you break any new ground with Thanos?
MATT AITKEN: We did. In the past we’ve had concerns about these digital characters being physiologically different from the actors that are playing them – necessarily so. Caesar is different from Andy Serkis, the BFG is different from Mark Rylance, and Thanos is different from Josh Brolin. So, when we’re trying to make sure that we’ve captured all those nuances of performance, we’re kind of comparing apples to oranges.
We solved this for Infinity War by creating an intermediary step in the form of a digital facsimile of Josh. We called it the “actor puppet.” It was like a digi-double, but it was very geared towards facial performance, and had all the same range of motion as the Thanos digital puppet. We did a facial solve on the actor puppet, interpreting the tracking data from Josh’s actual face, then we iterated in that space until we were happy that we’d captured the full intention of his performance. So, at that point, we were in an apples-to-apples comparison space.
Once we’d done all that work, and were happy with it, then it was reasonably straightforward to transfer that motion to Thanos. We would take the animation curves, the timing and extent for each of the muscles on the face, and just apply it to the Thanos puppet, which we’d carefully calibrated to the Josh actor puppet. We feel this really helped us to capture all the subtleties of Josh’s performance.
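The retargeting step Aitken describes can be pictured as a per-channel transfer: each facial-muscle animation curve solved on the actor puppet keeps its timing, while its extent is scaled through a calibration that maps the actor puppet's range of motion onto the character puppet's. A toy sketch of that idea (the channel names and calibration values here are invented for illustration, not Weta Digital's actual pipeline):

```python
# Each channel is a list of (frame, value) keys solved on the actor puppet.
actor_curves = {
    "brow_raise": [(0, 0.0), (12, 0.8), (24, 0.2)],
    "jaw_open":   [(0, 0.1), (12, 0.6), (24, 0.0)],
}

# Hypothetical calibration: how strongly each actor-puppet channel drives
# the matching Thanos channel, tuned so the ranges of motion correspond.
calibration = {"brow_raise": 1.15, "jaw_open": 0.9}

def retarget(actor_curves, calibration):
    """Transfer actor-puppet animation curves to the character puppet,
    preserving timing and scaling extent per calibrated channel."""
    return {
        channel: [(frame, value * calibration[channel])
                  for frame, value in keys]
        for channel, keys in actor_curves.items()
    }

thanos_curves = retarget(actor_curves, calibration)
```

The point of the intermediary actor puppet is that this mapping only has to be calibrated once; after that, any solved performance transfers channel by channel.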
CINEFEX: In that second stage, going from digital Josh to digital Thanos, did you inject a further level of finessing and performance?
MATT AITKEN: Yeah. We would get to the point where we were happy that we’d captured everything that Josh was doing in the technical sense, but then we would always have a keyframe animator – a craftsperson, if you like – sit down and do an extra pass to polish the performance. That’s something that we’ve done at Weta Digital since the very early days of Gollum and Kong because, as good as the pipeline is, the technology can knock the edges off a performance. I think that’s something that we will always cherish, that polishing pass. It’s part of our secret sauce.
Watch a breakdown reel showcasing Weta Digital’s work on Avengers: Infinity War:
CINEFEX: You mentioned Gollum and Kong. Weta Digital has this wonderful lineage of digital characters stretching from The Lord of the Rings through to Caesar in the Apes films, and now Thanos. Can you pick out a few of the key steps that you’ve made along the way?
MATT AITKEN: Kong was a key moment because, for the first time, we used facial motion capture. Some people may not realize that Gollum’s facial performance was entirely keyframe animated, but with Kong we stuck dots on Andy Serkis’ face, tracked them, and did a facial solve. The solve used a procedural approach to analyze what Andy’s face was doing, break it down into individual muscle components, and then apply that to Kong’s face.
Now, for Kong’s facial performance, Andy was restricted to a cube maybe three feet on each side. If he moved out of that space we would lose the track, so we used it mainly for the big drama beats. Avatar was the next big leap forward because, for the first time, we had head-mounted cameras filming dots painted onto the actors’ faces. That meant they could roam freely throughout the performance capture space.
CINEFEX: Alongside those things that have changed, is there anything that hasn’t changed?
MATT AITKEN: Well, the thing that anchors all our digital performance work is that it’s always based on a human performance. That’s because, going all the way back to the time of Greek theater, actors are the people that we go to for performances. They are the specialists in that particular task. Why should we change that?
There’s another thing that hasn’t changed since the beginning – and this emerged through the process of working out how to do Gollum’s facial performance. Back then, we started by looking at taking the movement of Andy’s face and dragging a digital Gollum face around the same way, but that really quickly gave the appearance of somebody wearing a rubber Gollum mask. So we discarded that. We also looked at the same approach that we use for our body work, with muscles under the skin that fire, and bulge, and drag the skin around. But that was too crude for the facial performance – it didn’t give us the microscopic level of sculptural control that we needed.
So, the approach that we settled on for Gollum – and which has been the same ever since – was to take a sculptural approach. We have facial modelers who craft the individual component shapes of the performance to a very fine level of detail – a brow raise, a lip curl, an eyelid open or close, a cheek raise. We sculpt a set of 108 shapes for each facial performance, in a way that gives a full range of motion, but always on character. Then the facial animators create a performance from that by dialing those shapes in and out in a very complex way.
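The sculptural approach Aitken describes is, at its core, a blend-shape system: each sculpted shape stores per-vertex offsets from the neutral face, and the animator's dialed-in weights combine those offsets. A minimal sketch of the principle (the shape names and two-vertex "face" are illustrative only, not Weta Digital's rig):

```python
import numpy as np

def blend_face(neutral, shape_deltas, weights):
    """Combine sculpted facial shapes into one posed face.

    neutral      -- (V, 3) array of vertex positions for the neutral face
    shape_deltas -- dict mapping shape name -> (V, 3) per-vertex offsets
    weights      -- dict mapping shape name -> dialed-in weight (0.0 to 1.0)
    """
    face = neutral.copy()
    for name, weight in weights.items():
        face += weight * shape_deltas[name]
    return face

# Toy example: a 2-vertex "face" with a brow-raise and a lip-curl shape.
neutral = np.zeros((2, 3))
deltas = {
    "brow_raise": np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]),
    "lip_curl":   np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]]),
}
posed = blend_face(neutral, deltas, {"brow_raise": 0.5, "lip_curl": 1.0})
```

Because every shape is hand-sculpted rather than simulated, each combination stays "on character" in exactly the way Aitken describes – the rig can only ever produce faces a modeler deliberately authored.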
CINEFEX: Do you use that across the board? Take an extreme shot where, say, a character is thrown against a wall and their whole face distorts. Do you add some kind of dynamic simulation into the face rig, or is all that also controlled sculpturally?
MATT AITKEN: You mean like when somebody gets a massive punch to the head – which is the kind of shot that we’re often involved with! It would certainly be tempting to just add dynamics to it, but no, we want to have control over the shape of the face even at that moment, because we feel it’s so important to maintain character at all times. So we’ll sculpt those shapes as well.
CINEFEX: You’ve also made steady advances in the final physical appearance of these digital characters.
MATT AITKEN: Again, that started with Gollum, where we used subsurface light scattering, which makes the skin look very natural and not plasticky. Gollum wouldn’t have been nearly as successful if we hadn’t had access to that technology. Then there’s model weight – the amount of detail in the underlying geometry. Kong’s face had more geometric detail than there was in the whole of Gollum’s body. That’s been a constant progression, from Kong to Neytiri, Neytiri to Caesar, and that’s just about the tools getting better and the computers getting more powerful.
CINEFEX: How about lighting and rendering?
MATT AITKEN: We’re getting a lot better at hair and fur. We render now with our path-trace renderer, Manuka, which is able to capture a global illumination model, so everything feels much more photographic and natural. We also have our PhysLight lighting pipeline where we encapsulate all the physical characteristics of light, light transport, cameras – we’re talking absolute values for light rather than relative stops, for example.
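PhysLight's internals aren't public, but the "absolute values rather than relative stops" idea can be illustrated with basic photometry: a light of known luminous flux yields a computable illuminance at a given distance, with no reference exposure needed. A sketch under that assumption (the 1600-lumen lamp is a hypothetical example, not a PhysLight API):

```python
import math

def illuminance_lux(luminous_flux_lm, distance_m):
    """Illuminance (lux) at a given distance from an ideal isotropic
    point light of known luminous flux, via the inverse-square law.

    Luminous intensity I = flux / (4*pi) candela; illuminance E = I / d^2.
    """
    intensity_cd = luminous_flux_lm / (4.0 * math.pi)
    return intensity_cd / (distance_m ** 2)

# A hypothetical 1600-lumen practical lamp measured 2 m away:
lamp_lux = illuminance_lux(1600.0, 2.0)
```

Working in absolute units like this means a digital light can be matched to a real on-set fixture by its spec sheet, rather than by eye against a relative exposure.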
CINEFEX: We know you can’t divulge the recipe for Weta Digital’s secret sauce. But are there any ingredients that you’d like to see added to it?
MATT AITKEN: Oh, I just feel like there’s always more we can do. It’s like we don’t ever complete a project – we just run out of time and they snatch it from us! For me, the first time I see one of these films is like another dailies session, only I’m not able to give notes any more! It’s only on the second viewing that I can watch it as an audience member. But, it’s a very exciting space to be working in – the performance space, virtual production, working with actors on the set for what is ultimately going to be a digital performance – it’s just great fun. I just hope to be able to keep doing that.
Save the date for next year’s VIEW Conference, scheduled for 21-25 October, 2019.
“Avengers: Infinity War” image copyright © 2018 by MARVEL.