Imagine you are one of the great auteurs of the cinema. Maybe Ingmar Bergman, or perhaps Martin Scorsese. You pour all your energies into creating what you believe is your masterpiece, a visual feast, a triumph of the cinematic art.
Then someone explains to you that audiences probably will not see what you see in your preview theatre. They are going to be distracted by unpleasant picture artefacts that will ruin the mood and destroy your carefully balanced creation.
You would probably be quite unhappy. “This cannot be!” you would cry. “Something must be done!” you would growl. Rude words would ensue.
Now imagine you are the head of production in a post house, finishing a major drama, or perhaps an art documentary. The director chose to shoot on a “film look” digital camera at 24 frames a second. In your viewing theatre, after the final grade, it looks absolutely stunning.
Then you have to explain to the director that, by the time it gets to the viewer’s screen at home, it could be ruined.
But this is the reality because, a decade and a half into the 21st century, we are still perpetuating a technical fudge that was invented in the 1920s to get around a limitation that no longer even exists.
Interlace is Evil
From the dawn of television, we knew that we needed at least 50 pictures a second to create the impression of smooth movement. But given the primitive electronics back then, it simply was not technically possible to deliver 50 pictures a second.
So we created a workaround called interlacing. This delivers 60 half pictures a second (50 in Europe and other parts of the world), by interlacing first the odd-numbered lines on the screen, then the even-numbered lines. You only get 30 (or 25) pictures a second because that is all the electronics of almost a century ago could manage. But the eye is fooled into thinking that it is getting 60 (or 50).
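The splitting of a frame into fields can be sketched in a few lines of Python. This is purely illustrative (scan lines stand in for real video data, and the function name is my own), but it shows the core trick: each full picture is carved into an odd field and an even field, which are then transmitted one after the other.

```python
def split_into_fields(frame):
    """Split a frame (a list of scan lines) into its two interlaced fields:
    the odd-numbered lines first, then the even-numbered lines (1-based)."""
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

# A toy six-line frame:
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]
odd, even = split_into_fields(frame)
# odd  -> ['line1', 'line3', 'line5']
# even -> ['line2', 'line4', 'line6']
```

Transmit fields at 60 (or 50) per second and the eye fuses them back into what feels like full pictures, even though each field carries only half the vertical detail.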
Today we do not need to do that. We can create 60 full pictures a second. We have the bandwidth to deliver 60 full pictures a second. The flat-panel screens in everyone’s homes display 60 full pictures a second.
Yet interlacing still exists, in all the broadcast systems currently in use. I cannot state this too clearly or too often: interlacing is evil and should be destroyed.
Let me explain why. Film – and now digital cinematography – is generally shot at 24 frames a second. Again that was a technical compromise: it was as fast as celluloid could be pulled through the camera without being ripped to pieces. But we really cannot argue with 120 years of Hollywood history.
So the source is 24 frames a second; our output is (currently) 30 interlaced frames a second. This is bridged by a familiar technique: the 2:3 cadence (old-timers might know it as the 2:3 pulldown, from telecine days).
Take four “film” frames – let’s call them A B C D. Frame A is shown as two fields (half pictures) of video, frame B is shown for three fields of video, frame C for two fields of video, and frame D as three fields. You alternate two and three, in other words. And from four frames of 24fps original you get five frames (10 fields) of 30fps video. It works and it looks fine.
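The cadence above is simple enough to sketch in Python. The frame labels and the function name are mine, but the field counts follow the text exactly: frames are held alternately for two and three fields, so four source frames yield ten fields, i.e. five interlaced frames.

```python
from itertools import cycle

def pulldown_2_3(frames):
    """Expand 24fps frames into a 2:3 field sequence for 30fps interlaced video."""
    fields = []
    for frame, repeat in zip(frames, cycle([2, 3])):
        fields.extend([frame] * repeat)  # hold each frame for 2 or 3 fields
    return fields

fields = pulldown_2_3(["A", "B", "C", "D"])
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
# 10 fields = 5 interlaced frames, from 4 original film frames
```

Note that the third and fifth output frames each mix fields from two different source frames; that is exactly why careless editing of the sequence, described next, causes visible damage.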
Until someone downstream edits it, without respecting the sequence. Then it starts to look really bad. And if the broadcaster then puts a credit roll on top, or some translucent graphics in a trailer, then the video motion will make it hard to watch. The director will certainly have a breakdown.
What can we do? Ultimately, the end game is to convince everyone that interlacing is evil and rid the world of this pestilential practice.
Until then, you have to hope that the broadcaster will have some sort of intelligent device that will minimise the effects and repair the cadence where practical. We have some solutions at AmberFin. If you want to read more on the topic, check out our Mixed Cadence white paper.
Without it, we are condemned to a world of broken-hearted directors.
I hope you found this blog post interesting and helpful. If so, why not sign-up to receive notifications of new blog posts as they are published?