The Trouble With Movie Stars

by Graham Edwards

Night sky image by ESO/Yuri Beletsky

Hollywood stars are simply too big for their boots.

To clarify, I’m not talking about those charming actor-types known affectionately as the talent. I’m talking about actual stars. You know, those sparkling points of light that form the perfect backdrop for a swooping spaceship, or twinkle delicately over a farm at night, shortly before a gigantic killer robot squashes the barn.

Putting stars on the screen has always been a tricky business. Once upon a time, a night shoot meant stopping down, taping a blue filter over the lens and hoping in vain that the audience might actually buy the whole concept of day-for-night.

For me, the best old-school night skies appeared in Close Encounters of the Third Kind. Remember those fabulous Doug Trumbull starscapes suspended over Greg Jein’s miniature landscapes? For the first time in movie history, we saw stars that really looked like stars.

Reflecting on some of the sci-fi movies I’ve seen recently, it struck me just how realistic the stars looked (well, someone’s got to think about these things). Whether you’re tweaking your noise nodes in Nuke, plugging in particle systems in Maya, or just spraying yourself a nebula in good old Photoshop, the tools are available to help you make starfields that are indistinguishable from the real thing.

But how realistic are they? To find out, let’s go stargazing.

First, let’s choose ourselves an empty meadow, far away from city lights. Follow me to the middle of the field (mind that cowpat). Now we wait half an hour for our night vision to kick in. Finally we look up … and see spread above us the most gorgeous filigree of cosmic light. I mean, just look at it. It’s incredible, don’t you think? In particular, can you see how tiny each individual star is? I mean really tiny?

So here’s my question: if you were a visual effects supervisor charged with replicating that awesome view, how could you possibly make the stars small enough?

Time to crunch some numbers.

Most of the stars we can see from our meadow just look like points of light to our human eyes. In other words, they have an imperceptibly small angular diameter, which makes them impossible to measure by eye. Luckily for us, however, there’s one star up there that’s bigger than the rest. It’s called Betelgeuse and, according to data from the Hubble Space Telescope, it has an angular diameter of around 0.125 arcseconds. One arcsecond is equivalent to 1/3,600th of a degree. Or, if you prefer, pretty darned small.

Brr! Cold out here, isn’t it? Let’s go somewhere a little warmer. I vote for a movie theatre.

Everyone sitting comfortably? Right, let’s project a scene showing the same night sky we were just looking at. The scene’s been shot digitally at 4K resolution, which means we’ve got 4,096 pixels spanning the screen horizontally (for this exercise, I’m going to ignore the vertical). The scene’s been shot with a lens giving us a “normal” field of view of 55 degrees. That’s equivalent to a dizzying 198,000 arcseconds.

Next we need to work out the ratio of pixels to arcseconds. To do this, we divide 4,096 by 198,000, which gives us 0.0207, or near enough 0.02. In other words, a single arcsecond spans just one fiftieth of a pixel.

The apparent diameter of Betelgeuse is one eighth of an arcsecond. So, in order to visualise the star “realistically” on screen, it’s got to be an eighth of a fiftieth of a pixel wide: just one four hundredth of a pixel.
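If you’d like to sanity-check my sums, here’s the whole calculation as a few lines of Python (just a sketch, using the figures quoted above):

    # How many pixels wide is Betelgeuse on a 4K screen
    # with a 55-degree horizontal field of view?
    H_PIXELS = 4096                        # horizontal 4K resolution
    FOV_ARCSEC = 55 * 3600                 # 198,000 arcseconds
    PX_PER_ARCSEC = H_PIXELS / FOV_ARCSEC  # ~0.0207, near enough 1/50

    BETELGEUSE_ARCSEC = 0.125              # Hubble's angular diameter
    size_px = BETELGEUSE_ARCSEC * PX_PER_ARCSEC

    print(f"Betelgeuse spans {size_px:.5f} pixels")           # 0.00259
    print(f"...about 1/{1 / size_px:.0f} of a pixel across")  # ~1/387

(The exact figure comes out at about 1/387 of a pixel; I’m rounding to a nice dramatic four hundred.)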

Correct me if I’m wrong but, by definition, one pixel represents the smallest single point that can be resolved on a movie screen. And Betelgeuse – the biggest-looking star in the night sky – is four hundred times smaller than that!

There are only two possible conclusions we can draw from this bombshell. The first is that, with current digital technology, it’s literally impossible to project an image of a star at a size that’s genuinely representative of what the human eye would see.

The second possibility is that I’m really terrible at maths.

Frankly, I’m quite prepared to believe the latter. I’m also prepared to take flak on all the things I’ve failed to account for, such as atmospheric haze, which produces a glow around a star that greatly affects its apparent size. Or the fact that no film director is going to show you half an hour of black footage just so your pupils can dilate wide enough for him to show off his ultra-realistic stars.

As for all you optical and mathematical wizards out there, I can hear you furiously thumbing your calculator buttons already. So I’ll leave you with an open invitation to tear my argument apart and beat me over the head with its remains until … well, until I see stars.


6 thoughts on “The Trouble With Movie Stars”

  1. But you are missing the brightness of the star. And the brightness of a pixel is the average brightness of the area covered by it: the star and the black void of space.

    • Sure, brightness is a factor. But if we’re creating our starfield digitally, then however accurate the original dataset is about each star’s physical characteristics, the final image, as you point out, shows only an average brightness. The rendering process effectively blurs the data and makes the stars too big, just so they can be seen.
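      To put a toy number on it (my own made-up, normalised figures, not real photometry): a star one four hundredth of a pixel wide covers only one 160,000th of that pixel’s area, so area-averaging all but erases it.

          # Toy numbers only: how much of its pixel does a
          # sub-pixel star actually light up?
          star_width_px = 1 / 400        # Betelgeuse, from the article
          coverage = star_width_px ** 2  # fraction of the pixel's area
          brightness = 1.0               # the star itself, normalised

          pixel_value = brightness * coverage
          print(pixel_value)             # 6.25e-06 -- effectively black

      Hence the cheat: every renderer inflates its stars to at least a pixel, simply so they survive the averaging.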

  2. Here is an interesting article about the measured size of stars in photographic plates vs. their magnitudes, I think it will answer your question.

    http://adsabs.harvard.edu/full/1942MNRAS.102..242E

    In a nutshell, the amount that light spreads in photographic emulsion (and, by extension, in a sensor, and also in your retina) is proportional to magnitude and varies by wavelength. This results in star images much larger than you’d expect if your estimate were based simply on reproducing, realistically, the light from the stars in real life.

    • Thanks, Nick. The link to the article appears to be broken but, if I understand your explanation correctly, the physics of the viewing apparatus itself affects the perceived size of the star. But then everything comes back to the observer, doesn’t it? Nothing’s absolute. I wonder how the spreading effect of a CCD compares to that of photographic emulsion, or the human retina.
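      Out of curiosity, here’s a toy model of the effect you describe (my own invented numbers, nothing from the paper): blur every star with the same Gaussian point spread function, and the brighter the star, the further from its centre the blur stays above the threshold of visibility.

          import math

          SIGMA = 1.0       # PSF width in pixels (assumed)
          THRESHOLD = 0.01  # faintest level the medium registers (assumed)

          def apparent_radius(magnitude):
              """Radius (px) at which the blurred star falls to THRESHOLD."""
              peak = 10 ** (-0.4 * magnitude)  # relative flux from magnitude
              if peak <= THRESHOLD:
                  return 0.0                   # too faint to register at all
              return SIGMA * math.sqrt(2 * math.log(peak / THRESHOLD))

          for m in range(5):
              print(f"magnitude {m}: radius {apparent_radius(m):.2f} px")
          # magnitude 0: 3.03 px ... magnitude 4: 1.36 px

      Identical optics, identical emulsion, yet the brighter star draws the bigger blob.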

  3. There can be a fair amount of spread on a CCD, proportional to the number of photons per pixel. The charge the photon introduces leaks out to neighboring pixels, and even through the substrate. Manufacturers do all kinds of things to minimize spreading, like putting so-called “charge drains” under the sensor, but the resolution of a CCD is often less than the number of pixels it has because of charge spreading.

    Astro-people spend time working out curves for each sensor so they can judge the magnitude of a star according to how far the image of the star has leaked out into neighboring pixels. As far as I can tell, film and CCD are quite similar in this regard.

    If you point an inexpensive video camera towards a very bright light, you will sometimes see a really bright vertical line appear in your image. That’s the worst case for spreading, where the charge is excessive, and it chain reacts all the way across the sensor!

    The human retina is a different case entirely – although there is spreading for a number of reasons, both optical and electrical, the retina is backed by an adaptive image-enhancement process before the signal even hits the optic nerve. There is local gain adaptation in addition to edge recognition and enhancement, and motion detection as well. That’s why we see afterimages and so on: the retina has adapted to what it is seeing, and when you look away it has to re-adapt.

    It makes you realize that photorealism is kind of an odd goal for effects, and can give you more of an appreciation for an artistic interpretation of an image – are you trying to achieve what a camera with a given technology achieves, or are you trying to guide the viewer’s sensory apparatus into seeing what you want them to see? I think that’s the bottom line for all the unrealistic skies we’ve viewed that nonetheless make us feel like we are seeing stars.

    • Thanks for that fantastic breakdown, Nick. Fascinating stuff!

      I agree with your closing comment about photorealism. Many people judge the quality of a visual effect by how “realistic” it looks, whereas the truth is that everything we see on a movie screen, whether “real” or not, is limited by the physics of the camera, and directed by the artistic intentions of the director and DP.
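      Your description of charge leaking into neighbouring pixels got me curious, so here’s a toy one-dimensional “blooming” simulation (invented numbers throughout): any charge above a pixel’s full-well capacity spills into its neighbours, and one hugely over-exposed pixel ends up smearing into a streak.

          CAPACITY = 100.0  # full-well capacity per pixel (assumed)
          row = [0.0] * 11
          row[5] = 600.0    # one massively over-exposed pixel

          for _ in range(200):  # let the excess charge settle
              spill = [0.0] * len(row)
              for i, charge in enumerate(row):
                  excess = charge - CAPACITY
                  if excess > 0:
                      row[i] = CAPACITY  # keep what fits...
                      if i > 0:
                          spill[i - 1] += excess / 2  # ...spill half left
                      if i < len(row) - 1:
                          spill[i + 1] += excess / 2  # ...and half right
              row = [r + s for r, s in zip(row, spill)]

          print([round(r) for r in row])
          # [0, 0, 50, 100, 100, 100, 100, 100, 50, 0, 0]

      One point source, six pixels’ worth of charge, and a streak to show for it.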
