Capturing on Digital vs Film — What’s the Difference?
This is another question that came to me via a film course lecturer. I’m going to try to give an accurate answer without getting TOO technical; however, a good understanding needs a little tech, so bear with me:
In terms of how digital film is recorded, it’s actually rather similar to celluloid in many ways. A digital film or ‘clip’ is essentially made up of a series of digital still photographs, just like celluloid. Each still is called a ‘frame’, and films usually run at 24, 25 or 30 frames per second, depending on your region. Typically 24 is used for cinema releases, 25 for PAL (European) broadcast, and 30 for NTSC (American) TV.
For HD, every frame equates to a 2-megapixel image; for 4K it’s roughly 8MP, and for 8K roughly 33MP (varying slightly with sensor dimensions). How these frames are captured by the camera and turned into what we’d call “footage” is where things differ from celluloid.
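If you want to sanity-check those numbers, multiplying the frame dimensions out gives the megapixel counts. A quick Python sketch, using the standard 16:9 broadcast/UHD resolutions (DCI cinema sizes, e.g. 4096×2160, differ slightly):

```python
# Rough per-frame pixel counts for common video resolutions.
resolutions = {
    "HD (1080p)": (1920, 1080),
    "UHD 4K": (3840, 2160),
    "UHD 8K": (7680, 4320),
}

for name, (w, h) in resolutions.items():
    megapixels = w * h / 1_000_000
    print(f"{name}: {megapixels:.1f} MP per frame")
# HD (1080p): 2.1 MP per frame
# UHD 4K: 8.3 MP per frame
# UHD 8K: 33.2 MP per frame
```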
In celluloid cameras, as you’re probably aware, the film itself has layers sensitive to red, green and blue light stacked within the celluloid, each carrying a silver halide emulsion that reacts when hit by photons to leave an impression of an exposed colour image. It’s the chemistry of the film stock which determines the saturation, hue and luminance of an image.
As it stands in 2018, digital is pretty different in this sense. Back when digital camera sensors were first created, processor technology wasn’t fast enough to be able to record a red, green and blue layer of sensor information simultaneously, and even if it could, we weren’t good enough at compressing this information—the size of the footage would’ve been enormous and impossible to edit.
Consequently, Bayer pattern sensors were created. Pretty much all digital cameras, including high-end cinema cameras, use this technology. A Bayer pattern sensor basically relies on clever maths to work. Rather than having an entire red, green and blue sensor stack, there is just one sensor layer, which has red, green and blue filtered pixels within it. Because humans are more receptive to green than any other colour, Bayer sensors have two green pixels for every one red and one blue pixel. The result is summed up in the image below:
When light hits a digital camera’s sensor, each pixel’s colour filter lets through only red, green or blue light, so each pixel records just one colour. This signal is converted from analogue to digital almost instantly. Each frame captured turns into a digital mosaic made up of green, blue and red pixels. The problem here is that because of the way the sensor’s pixels are arranged, there will be gaps between colours, and more green than anything else! For example, if you were filming a pure red car, only a quarter of your sensor would capture any information, and that wouldn’t look good at all. (probably something like the image below, but even worse)
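To make the mosaic idea concrete, here’s a tiny Python sketch of a standard RGGB Bayer layout (the exact tiling varies by manufacturer, so treat this as illustrative). Notice that the red-filtered sites are one in four, which is why a pure red subject only registers on a quarter of the photosites:

```python
# Sketch of an RGGB Bayer layout: each 2x2 block of photosites has
# one red, two green and one blue filter.
def bayer_pattern(rows, cols):
    """Return the colour filter letter at each photosite (RGGB tiling)."""
    tile = [["R", "G"],
            ["G", "B"]]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

sensor = bayer_pattern(4, 4)
for row in sensor:
    print(" ".join(row))

# Pure red light only excites the red-filtered sites:
flat = [cfa for row in sensor for cfa in row]
red_fraction = flat.count("R") / len(flat)
print(f"Red photosites: {red_fraction:.0%} of the sensor")  # 25%
```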
This is where the clever maths comes in. Once the camera has captured the sensor image and it’s been turned into this digital mosaic, most* of the time it gets sent off instantly to the camera’s in-built processor. The processor then performs something called debayering or demosaicing. It basically uses a really smart algorithm to ‘make up’ and fill in the gaps based on pixel proximity. For example, if you’ve filmed a red car, and 9 red pixels have lit up in a tight group, chances are the green and blue pixels in the same area were supposed to be red too. It isn’t perfect, but it does save space and time.
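As a toy illustration of the ‘fill in the gaps’ idea, here’s a minimal Python sketch that estimates a missing colour at a photosite by averaging its nearest neighbours of that colour. Real in-camera demosaicing algorithms are far smarter (edge-aware, colour-correlated), so this is just the principle:

```python
# Minimal demosaicing sketch: at a photosite with no red filter,
# estimate red by averaging the adjacent red-filtered neighbours.
def estimate_channel(mosaic, pattern, row, col, channel):
    """Average the values of `channel` photosites adjacent to (row, col)."""
    if pattern[row][col] == channel:
        return mosaic[row][col]  # this site measured the channel directly
    samples = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0]):
                if pattern[r][c] == channel:
                    samples.append(mosaic[r][c])
    return sum(samples) / len(samples)

# A 2x2 RGGB block lit by a red subject: the red site reads high,
# the green and blue sites read low.
pattern = [["R", "G"], ["G", "B"]]
mosaic  = [[200, 10], [10, 5]]
# The green site at (0, 1) borrows red from its neighbour at (0, 0):
print(estimate_channel(mosaic, pattern, 0, 1, "R"))  # 200.0
```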
A digital camera processor will perform this process 24, 25 or 30 times every second! Each camera manufacturer has its own preferred encoder/decoder for performing this process, which is referred to as its codec. There are many out there, and they all largely do the same thing, albeit slightly differently. Their main function is to turn the information they’re given into a complete, smooth image. They also use clever compression algorithms to make the files smaller. Once this has happened, you’re left with a collection of stills, just like celluloid! A big ol’ pile of loose, single images, consecutively numbered. Not ideal for playback on a computer…
To transform this into a video file that your media player or editor can make sense of and play back, the camera’s processor will use something called a wrapper, also known as a container. A .MOV, .MP4 (or God forbid, an .AVI or .WMV) are all examples of wrappers. They pretty much just package up the pile of decoded frames into a complete video file, so that when you open it in a media player, the media player thinks “Oh cool, this is a video file that has 25 frames per second. I know how to play this, here ya go, mate…”.
*Some fancy high-end cameras capture RAW, whereby they literally just save the digital mosaic data straight to the memory card without going through an in-built codec. You then plop this data into your high-end editing computer and let it use the codec there. This means capturing massive files in camera, but it does give you more control of your images afterwards. Desktop computers are also usually more powerful than camera processors, so they do a better, more thorough job of debayering.
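Some rough arithmetic shows why those RAW files get so massive: even though the mosaic stores only one value per photosite, saving it uncompressed dwarfs a typical delivery codec’s bitrate. A back-of-envelope Python sketch, assuming 16 bits per value for simplicity (real cameras use 10/12/14-bit values, often with light compression, and the 50 Mb/s codec figure is purely illustrative):

```python
# One second of HD at 25 fps, back-of-envelope.
width, height, fps = 1920, 1080, 25
bytes_per_value = 2  # assuming 16-bit values for simplicity

# RAW keeps one sensor value per photosite; debayered RGB keeps three.
raw_frame = width * height * bytes_per_value
rgb_frame = width * height * 3 * bytes_per_value

print(f"Uncompressed RAW mosaic: {raw_frame * fps / 1e6:.0f} MB/s")  # 104 MB/s
print(f"Uncompressed RGB frames: {rgb_frame * fps / 1e6:.0f} MB/s")  # 311 MB/s
# An illustrative 50 Mb/s compressed codec, for comparison:
print(f"50 Mb/s codec: {50 / 8:.2f} MB/s")  # 6.25 MB/s
```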
The advantages of digital are mostly cost related; it’s cheaper to shoot digitally on the whole. Disadvantages are that film cameras tend to give a more organic image, and digital Bayer pattern sensors can get things wrong and make images difficult to properly manipulate in the edit.
Wow, this got technical pretty fast. The sad part is that this is kind of the tip of the iceberg. There’s easily a TED Talk to be done on this…
TL;DR: No, it records very similarly to celluloid, it just uses a codec and a wrapper to package up the stills digitally into footage.
If you have any questions about this, or indeed about any other aspect of filmmaking, do get in touch!