Guy Williams, visual effects supervisor at Weta Digital, was at VIEW Conference 2019 presenting the company’s groundbreaking work on Gemini Man, for which artists created an uncannily realistic digital version of Will Smith as the 23-year-old ‘Junior.’ Cinefex spoke with Guy before his presentation to discuss some of the extraordinary changes he’s seen since starting his career in the early 1990s.
CINEFEX – What was the first project you worked on as a visual effects artist?
GUY WILLIAMS – My first job was at Boss Film, when I was fresh out of college. We were doing the Budweiser Super Bowl commercial. They had made the decision to go all-CG with the bottles for the first time, and Boss was hired to do 40-50 shots of bottles playing football. I was at Boss for just over a year, then I went to Warner Digital for two years, then bounced around a few other places in Los Angeles. Back then, I knew just enough about computer graphics to get in trouble!
CINEFEX – Boss Film was set up by Richard Edlund after he left ILM. Was this around the time they were transitioning from photochemical work into the digital realm?
GUY WILLIAMS – Right. ILM got into computer graphics first, but in the early ‘90s a lot of companies moved rapidly into that realm. You had two different kinds of companies crop up – the ILM spin-offs like Boss, and new teams like Digital Productions, The Secret Lab, Rhythm & Hues. Boss started in photochemical and then made the transition to CG – they had one of the biggest CG departments at the time. Boss doubled down really hard on digital, but couldn’t figure out how to make a business model out of it and ended up collapsing under the expenditure. But they did a lot of really big projects.
CINEFEX – What was it like at Boss back then?
GUY WILLIAMS – On my first day, I was expecting to walk into this beautifully polished operation, with 35 talented people and five new hires. I got there and there were two people with experience and 38 new hires!
CINEFEX – Why was that?
GUY WILLIAMS – Because there weren’t even 100 people in the industry worldwide at that point. Staffing your company with 40 people would have drained every other company and even closed a bunch of them down. So they had to draw people into the industry. That was the first big swelling of computer graphics in visual effects.
CINEFEX – What was your specific role?
GUY WILLIAMS – You didn’t have 10-15 different departments in a company back then. You just had CG artists. Everybody got on the same team and solved all the problems. As time went on, Boss made the decision to split that into two – you had people doing the 3D side of things – modeling, animating, lighting – and then you handed off to the second team which did the paint and compositing.
CINEFEX – Things are certainly different today. The tools you were using must have been very different, too.
GUY WILLIAMS – There were no real compositing packages back then. We did all our compositing with command line tools. If you wanted to put image ‘A’ over image ‘B’ and output image ‘C,’ you would do all that just by writing a line of code. If you wanted to blur it, colour correct it, add a drop shadow, you ended up with these big, complex scripts that ran each composite individually, one line of code at a time.
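As a rough illustration of the kind of one-operation-per-line scripting Williams describes, here is a minimal Python sketch of the classic ‘A over B’ composite. The pixel format and function names are our own invention for clarity, not Boss Film’s actual tools.

```python
# A hypothetical sketch of a single "A over B -> C" compositing step,
# the kind of operation early artists chained together one command at a time.
# Pixels are (r, g, b, a) tuples, premultiplied, with components in 0.0-1.0.

def over(a_px, b_px):
    """Porter-Duff 'over': premultiplied foreground A onto background B."""
    ar, ag, ab_, aa = a_px
    br, bg, bb, ba = b_px
    k = 1.0 - aa
    return (ar + br * k, ag + bg * k, ab_ + bb * k, aa + ba * k)

def composite(image_a, image_b):
    # One pass over the whole frame, pixel by pixel.
    return [over(a, b) for a, b in zip(image_a, image_b)]

# A half-transparent red pixel over an opaque blue background:
fg = [(0.5, 0.0, 0.0, 0.5)]   # premultiplied red at 50% alpha
bg = [(0.0, 0.0, 1.0, 1.0)]   # opaque blue
print(composite(fg, bg))      # -> [(0.5, 0.0, 0.5, 1.0)]
```

A blur, a colour correct, or a drop shadow would each have been another script line like this, run one after the other.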
CINEFEX – It sounds painfully slow.
GUY WILLIAMS – Oh, it was hugely inefficient. The early systems would read in the images one pixel at a time, do the math on the pixel, render the pixel back out. Blurs were really hard to do, because your program had to read enough pixels in to do the blur. The larger the blur, the more pixels you had to read in. That really slowed things down.
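The cost Williams describes can be seen in a toy one-dimensional box blur: every output pixel needs 2 × radius + 1 input reads, so wider blurs mean proportionally more I/O per pixel. This is an illustrative sketch, not a reconstruction of any period tool.

```python
# A toy 1-D box blur showing why larger blurs were slow: each output pixel
# averages a (2*radius + 1)-wide window, so the number of reads per pixel
# grows with the blur size.

def box_blur(pixels, radius):
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

print(box_blur([0, 0, 9, 0, 0], 1))   # -> [0.0, 3.0, 3.0, 3.0, 0.0]
```

Doubling the radius doubles the window nearly exactly, which is why big blurs were so punishing on systems that streamed pixels in one at a time.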
CINEFEX – So you were artists and coders, both at the same time.
GUY WILLIAMS – If you wanted to do something but there was no tool to do it, you’d write it yourself. That’s how the industry worked in the beginning. We were a bunch of special forces kids who would solve problems as they cropped up.
CINEFEX – Can you give us a specific example from your own experience?
GUY WILLIAMS – We were doing a Kellogg’s cereal commercial. The milk pitcher was shaped like a little cow and got accidentally stuck into the cereal. It was a full CG milk pitcher, but the animation tools didn’t allow for overlapping lattices in the deformation, which meant we couldn’t have facial animation and bend the neck. I wrote a tool that merged the models together so we could do the facial animation in one file, do the neck animation in another file, merge the result together and get a final render.
CINEFEX – Could you have foreseen the speed of change since those early days?
GUY WILLIAMS – I was too young to think that smart! It’s not because the future was clouded to me – I just didn’t think to look in that direction. I don’t think I would have predicted that we would have gotten departmentalised so much, because back then it wasn’t necessary. People actually saw departments as a restriction, because the financial impetus wasn’t there and there was a morale hit. In hindsight, we should have known it was coming, because the projects were always going to get bigger.
CINEFEX – That’s something that hasn’t changed – projects getting bigger year by year.
GUY WILLIAMS – You know, we used to have this joke that every frame would take an hour to render. That first Budweiser commercial I did, by the time you added up all the passes – one hour. The cereal commercial – one hour. By the time I was working on the first Lord of the Rings movie – maybe two hours. After that it really ramped up. For King Kong you’re talking six hours. Now, on Gemini Man, you’re looking at 400 hours, and we had some renders that took well over 1,000 hours per frame. That’s parallelised across a lot of processors, of course.
CINEFEX – So, even though computers are way faster than they were in the ‘90s, the stuff you’re asking them to calculate is way more complicated.
GUY WILLIAMS – Here’s an example of that from Gemini Man – one of my favourite things about what we did. In the past, we would have painted a texture map for the face, then taken out some of the red so by the time we added the subsurface back in, with the blood flow, it would end up at the right colour. On this show, we painted separate maps for the two layers of melanin in the skin – eumelanin and pheomelanin. The maps work together so when you look at his face from the front, you see yellow in a certain place, and when you look from the side you see yellow in a different place. That’s because you’re seeing past the darker melanin layers into the paler skin colours that lie beneath. That’s one reason each frame takes so long to render, but it’s that attention to realism that really makes the difference. If you want to believe this thing is 100 percent living and breathing, you have to treat it as if it really is living and breathing.
CINEFEX – So you’re simulating more accurately the way light passes through the various skin layers.
GUY WILLIAMS – And I’ve only scratched the surface on that. On the subject of light transmission, we don’t do RGB math on our rendering any more. We don’t talk in terms of red, green and blue and how they make any colour possible. Our math is all done in waveforms, frequencies of light. That’s important, because it means we can feed in the correct absorption terms for pheomelanin and eumelanin. So, when you shine a certain colour of light on the skin, it gives you the correct result.
This all dates back to Avatar, by the way, where we had saturated blue creatures carrying around orange torches. If you do the RGB math, orange times blue equals zero, so you see hardly anything – well, if the orange light is bright enough, you might get a little bit of brown. But that’s not right. If you take a blue ball and put an orange light next to it, you’ll still see blue. That’s because it’s about absorptions of frequencies of light. We do all that stuff in our renderer now.
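The difference Williams is pointing at can be sketched numerically. In three-channel math, multiplying an ‘orange’ light by a ‘blue’ surface leaves almost nothing; per-wavelength math with broad, realistic curves leaves visible energy. All curves and numbers below are made up purely for illustration, not Weta’s actual spectral data.

```python
# RGB shading vs per-wavelength shading, a rough sketch of Williams' point.
# Three-channel multiply: "orange light * blue surface" is nearly zero.

def shade_rgb(light, surface):
    return [l * s for l, s in zip(light, surface)]

orange_light = [1.0, 0.5, 0.0]
blue_ball    = [0.0, 0.1, 1.0]
print(shade_rgb(orange_light, blue_ball))   # -> [0.0, 0.05, 0.0]

# Per-wavelength version: sample 400-700 nm in 50 nm steps.
# Real emitters and reflectance curves are broad, not perfect band-passes,
# so some energy survives the multiplication at every wavelength.
wavelengths = range(400, 701, 50)

def orange_spd(nm):        # broad orange emitter peaking near 600 nm (invented)
    return max(0.0, 1.0 - abs(nm - 600) / 200.0)

def blue_reflectance(nm):  # blue surface with a small broadband floor (invented)
    return max(0.05, 1.0 - abs(nm - 450) / 150.0)

reflected = [orange_spd(nm) * blue_reflectance(nm) for nm in wavelengths]
print(sum(reflected) > 0.5)   # -> True: the ball still returns visible light
```

The spectral result stays recognisably blue because the surface reflects some energy at every sampled wavelength, which is exactly what the RGB shortcut throws away.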
CINEFEX – Each new step takes the craft closer to scientific accuracy. But some things are still approximations. With a typical creature rig, for example, the animator moves the bones, and the bones drive the muscle simulation. But that’s backwards – in the real world it’s the muscles that drive the bones.
GUY WILLIAMS – Exactly. When I raise my arm, I do it by firing my bicep. When I reach out with my finger to touch something, a dozen or more different muscles fire to get it to land right where I want it to. That’s because of years and years of hand-eye coordination training. But imagine animating like that – it would be incredibly hard to do. I don’t know if we’re ever going to go there, but we’ll go much closer to it. We’ll still animate the bone, but then we’ll pass it through a filter that goes back, turns the bones off, and fires what it thinks the muscles would be doing. That’s definitely one for the future.
CINEFEX – What about other kinds of simulations, like water and cloth?
GUY WILLIAMS – I think we’re going to see more coupled sims in the future. Right now, we cheat coupling by doing one sim and using it to affect another sim. But the truth is they should affect each other equally, constantly.
CINEFEX – Can you elaborate on that?
GUY WILLIAMS – Here’s an example. When I started in the industry, one of the first things you learned to simulate was a flag fluttering in the wind. The depressing thing is that the technology we use today to do that exact same thing is very similar to what it was back then. All you’re doing is putting forces on the flag and using the inertia of the flag to make it look like the flag is billowing. There’s no accounting for the density of air.
CINEFEX – Which affects the movement of the flag?
GUY WILLIAMS – You ever see a sheet hanging on a line? The breeze picks up and the sheet swells like a parachute, then collapses as the wind passes through it. You cannot do that with a single cloth simulation – you have to couple it. You need to do a fluid sim for the air that responds to the sheet at the same time as the cloth sim responds to the air movement. All that’s part of the next round of cool tricks we’re coming up with. In a way it’s nothing glamorous – except it is, because finally cloth will really start to look amazing.
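The one-way versus two-way coupling Williams describes can be caricatured with a single velocity for the ‘sheet’ and one for the ‘air.’ The physics here is entirely made up; the point is only that when the sheet is allowed to drag on the air, the gust loses energy and dies away, which one-way coupling can never produce.

```python
# Toy one-way vs two-way coupling, under invented physics: the air pushes
# the sheet; with two-way coupling the sheet also drags on the air, so the
# gust loses energy to the cloth instead of blowing forever.

def step(sheet_v, air_v, coupled, dt=0.1, push=1.0, drag=0.5):
    force = push * (air_v - sheet_v)    # air accelerates the sheet
    sheet_v += force * dt
    if coupled:
        air_v -= drag * force * dt      # sheet decelerates the air
    return sheet_v, air_v

for coupled in (False, True):
    sheet_v, air_v = 0.0, 1.0           # a gust hits a still sheet
    for _ in range(200):
        sheet_v, air_v = step(sheet_v, air_v, coupled)
    print(coupled, round(sheet_v, 3), round(air_v, 3))
# -> False 1.0 1.0     (one-way: the air never slows down)
# -> True 0.667 0.667  (two-way: gust and sheet share the momentum)
```

A production version couples a full fluid solve to a full cloth solve the same way, with each feeding forces back into the other every substep.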
That’s what’s beautiful about our industry today – all this stuff is ongoing. Ten years from now, there’ll be a whole bunch of new things that we’re figuring out. We are still growing.