The recent release of the movie Anchorman 2 marks a major turning point in our industry. Not because it is an infantile follow-up to a not very funny original, but because it is the last movie that Paramount will ever release on film:
Yes, that’s right. The first of the Hollywood majors has shut down its celluloid delivery for good. From now on, it is all digits. So all the problems we used to have with film-originated material have gone away in the digital era, right?
24 frames per second is here to stay
When the first moving pictures were shown, back in the century before last, audiences loved them. That enthusiasm has never gone away: we still like to go to the movies.
Once the technology settled down, the world agreed on 24 frames a second. That was about as fast as the mechanical systems could move the relatively fragile film. It gave a reasonable sense of movement, and did not flicker too much if you flashed each frame twice on the screen.
But it had its limitations, the most obvious of which was that fast camera movements were out: they looked uncomfortably jerky. So directors and cinematographers learnt that slow pans and zooms worked, and over a century that became embedded in the language of the movies.
Now that we no longer have to worry about shuttling celluloid forward, we could shoot at any frame rate we want, and indeed some people are choosing to use higher frame rates. But for a lot of people, the slightly dreamy feel of slow camera movement is part of movie magic, and they want to retain that visual mood in their television drama or shampoo commercial or whatever.
That is why a lot of content destined for television and online platforms is shot, using digital cameras, at 24 frames a second. But for television in many parts of the world we need 30 frames a second (yes, I know it is 29.97, but that is a whole other argument).
This was solved for film in the 1950s with the 2:3 cadence, scanning one film frame for two fields and the next for three, and so on. Four film frames became five television frames, and hey presto 24 fps was converted to 30 fps.
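The 2:3 cadence is easy to sketch. The following Python snippet (an illustrative model, not a broadcast implementation; the field labels are just letters standing in for picture content) shows how each group of four film frames A, B, C, D becomes five interlaced video frames, two of which mix fields from different film frames:

```python
# A minimal sketch of 2:3 pulldown: each group of four 24 fps film
# frames (A, B, C, D) is spread across five 30 fps interlaced video
# frames by repeating fields in a 2-3-2-3 pattern.

def pulldown_23(film_frames):
    """Map 24 fps frames to 30 fps (top_field, bottom_field) pairs."""
    video = []
    # Process whole groups of four film frames only.
    for i in range(0, len(film_frames) - len(film_frames) % 4, 4):
        a, b, c, d = film_frames[i:i + 4]
        video += [
            (a, a),  # video frame 1: both fields from A
            (b, b),  # video frame 2: both fields from B
            (b, c),  # video frame 3: mixed - a third field of B, first of C
            (c, d),  # video frame 4: mixed - second field of C, first of D
            (d, d),  # video frame 5: both fields from D
        ]
    return video

print(pulldown_23(["A", "B", "C", "D"]))
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

The two mixed frames are harmless while the cadence is intact, because a downstream deinterlacer or inverse-telecine stage can pair the fields back up again.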
It was not perfect. If you are old enough to remember Westerns on television, you will remember that the wheels on the wagon always seemed to be going jerkily backwards. But most of the time it was quite good enough.
Today we use the same 2:3 cadence with digital pictures. We might do it inside our editing software because we may want to mix frame rates on our timeline, or we might do it through a standalone box on output, but it is the same process that we have known and understood for 60 years.
The problems only come downstream, when the content needs to be edited for some reason. A broadcaster might need to change the commercial break pattern, or add different credits, or make a content edit.
The right way to do it is to take the impeccable content as delivered by the producer and strip out the 2:3 cadence, edit it at 24 fps again, then output it to the target frame rate. Does that always happen under the pressure of broadcast deadlines? What do you think?
The problem is that, if you do not cut on a cadence boundary – remember that four “film” frames have become five “television” frames, so you have to take each five as a group – then awful things will happen when the content finally gets through the interlaced broadcast system and onto your progressive display. At best it will be uncomfortable to watch. At worst it will be a travesty of what the director visualized, with coffee-room conversations of “Did you see those jerks on television last night?”
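You can see the damage in a toy model. This sketch (a naive inverse telecine, simplified for illustration and certainly not how a professional converter works) rebuilds the film frames from pulled-down video, assuming the stream starts on a cadence boundary. Feed it the intact stream and the original frames come back; cut the stream off a five-frame boundary first and frames go missing or duplicate, which is exactly the jerkiness the viewer sees:

```python
# Field pairs for film frames A..H after 2:3 pulldown. Each tuple is
# one interlaced video frame: (top_field, bottom_field).
video = [("A", "A"), ("B", "B"), ("B", "C"), ("C", "D"), ("D", "D"),
         ("E", "E"), ("F", "F"), ("F", "G"), ("G", "H"), ("H", "H")]

def inverse_telecine(frames):
    """Naively rebuild 24 fps frames from 2:3 pulled-down video.

    Assumes the stream starts on a cadence boundary: in each group of
    five video frames, the film frames A, B, C, D appear as the top
    fields of video frames 1, 2, 4 and 5.
    """
    film = []
    for i in range(0, len(frames) - len(frames) % 5, 5):
        f1, f2, f3, f4, f5 = frames[i:i + 5]
        film += [f1[0], f2[0], f4[0], f5[0]]
    return film

print(inverse_telecine(video))      # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
print(inverse_telecine(video[3:]))  # cut mid-group: ['C', 'D', 'F', 'F']
```

In the mid-group cut, frame E disappears entirely and frame F is shown twice: the cadence the recovery logic relied on no longer lines up with the edit.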
In an ideal world, your content will never need to be touched downstream so you need never worry about this process. In the real world, you might want to look at our white paper on adaptive file-based standards conversion of mixed cadence material. It is more readable than it sounds, and it could keep your programs looking beautiful, wherever they are shown and regardless of what has been done to them along the way.