Do you have a strategy to cope with mixed cadence content?

Frame Rate Conversion: Doing It in Software

In a previous blog post – “Playing the numbers game with mixed cadence materials” – I outlined some of the problems we still face when handling material shot at 24fps. Physical rolls of film may be all but history, but people love the film look, so we still have to wrestle with the 2:3 cadence – the bodge we invented to get from 24fps to 30fps without making everyone run around and talk as if they were breathing helium.
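As a reminder of how that bodge works, here is a minimal sketch of the 2:3 pattern: each group of four film frames is spread across ten video fields, alternately held for two and three fields, turning 24 frames into 30.

```python
# A sketch of 2:3 pulldown: film frames are alternately held for 2 and
# 3 video fields, so 4 film frames become 10 fields = 5 interlaced frames.
def pulldown_2_3(frames):
    """Expand a list of 24fps frames into a 2:3-cadence field sequence."""
    fields = []
    for i, frame in enumerate(frames):
        repeats = 2 if i % 2 == 0 else 3   # A:2, B:3, C:2, D:3, ...
        fields.extend([frame] * repeats)
    return fields

fields = pulldown_2_3(["A", "B", "C", "D"])
print(fields)  # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

The interruptions mentioned above happen when this regular pattern is broken, for example by an edit made after the pulldown was applied.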

And I promised I would return to the subject, and talk about what we can do to get the best possible quality out of media that was shot at a different frame rate to the display rate. In order to figure out the best technique, first we need to know a little about the realm of possible solutions. Here are some common techniques:

Do nothing – if there are no interruptions to the 2:3 cadence, then smile sweetly and watch the pictures not getting degraded any further.

Change the metadata – if you tell the receiving device the content is at a new frame rate, it will play the video at that speed. You can get away with it when converting 24fps to 25fps, with a little resampling of the audio, but it really does not work for the big speed-up from 24fps to 30fps.
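The arithmetic shows why: a metadata-only retime plays everything faster by the ratio of the frame rates, and unless you resample the audio the pitch rises by the same ratio. This little sketch works out both figures.

```python
import math

def speedup(source_fps, target_fps):
    """Speed ratio and pitch shift (semitones) of a metadata-only retime."""
    ratio = target_fps / source_fps
    semitones = 12 * math.log2(ratio)  # pitch shift if audio is NOT resampled
    return ratio, semitones

for target in (25, 30):
    ratio, st = speedup(24, target)
    print(f"24 -> {target}: {100 * (ratio - 1):.1f}% faster, +{st:.2f} semitones")
# 24 -> 25: 4.2% faster, +0.71 semitones  (tolerable once audio is resampled)
# 24 -> 30: 25.0% faster, +3.86 semitones (the helium effect)
```

A 4% speed-up passes largely unnoticed; a 25% speed-up turns a feature film into a farce, which is why 2:3 pulldown exists at all.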

Drop or repeat – you can increase the frame rate by repeating some frames, or decrease it by dropping some. After all, this is sort of what we do with the 2:3 pulldown. But frames repeated in a regular pattern are blindingly obvious, and your audience definitely will notice. Use it only sparingly and intelligently, to repair problems.
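A minimal sketch of the idea: for each output instant, pick the nearest source frame, which naturally repeats or drops frames depending on the direction of the conversion.

```python
# Retime by nearest-frame selection: frames repeat going up in rate,
# and get dropped going down. No new pictures are synthesised.
def drop_repeat(frames, source_fps, target_fps):
    """Pick the nearest source frame for each output frame instant."""
    duration = len(frames) / source_fps
    n_out = round(duration * target_fps)
    return [frames[min(int(i * source_fps / target_fps), len(frames) - 1)]
            for i in range(n_out)]

# 24fps -> 30fps: one frame in every four is shown twice.
print(drop_repeat(list("ABCDEFGH"), 24, 30))
# ['A', 'A', 'B', 'C', 'D', 'E', 'E', 'F', 'G', 'H']
```

That once-in-four repeat is exactly the regular stutter the eye picks up on, which is why this belongs in the repair toolbox rather than the default path.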

Linear interpolation – basically, blend two pictures together to make a new one. But the better the original pictures, the worse the artefacts: fast motion and sharp edges become really smeary due to the blending process – this can be very visible.
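The smearing is easy to see in miniature. In this sketch each output frame is a weighted average of its two neighbours; a bright pixel that has moved between frames comes out as two half-brightness ghosts rather than a pixel in motion.

```python
# Linear interpolation of two frames (here, flat lists of pixel values):
# the blend superimposes an object at two positions instead of moving it.
def blend_frames(frame_a, frame_b, t):
    """Weighted blend of two frames, 0 <= t <= 1."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame_a = [0, 255, 0, 0]   # bright pixel on the left...
frame_b = [0, 0, 0, 255]   # ...has moved to the right
print(blend_frames(frame_a, frame_b, 0.5))
# [0.0, 127.5, 0.0, 127.5] -- two ghosts, not one moving pixel
```

Scale that up to a sharp edge sweeping across the screen and you get the smeary double image described above.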

Motion compensated frame rate conversion – this is the same sort of process, but this time every pixel in the scene is measured and its motion vector predicted. The new pixels are a linear blend of the source pixels projected along the motion vectors. Smart processing can predict with a degree of confidence whether a pixel belongs to a moving object or is just noise. It works really well with a lot of image types, but some repetitive images – car wheels, or pans across regular structures like brickwork – are very difficult to process and produce very disturbing artefacts.
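A toy illustration of the first step, reduced to a single scanline: estimate a motion vector by block matching, so that pixels can then be projected along it rather than blended in place. Real converters do this per block or per pixel in two dimensions with far more sophistication; this just shows the principle, and also why repetitive textures break it, since many shifts match equally well.

```python
# Toy 1-D block matching: find the shift that best aligns a block of the
# previous frame with the current frame (sum-of-absolute-differences).
def best_motion(prev, curr, block_start, block_len, search):
    """Return the shift in [-search, +search] with the lowest match error."""
    block = prev[block_start:block_start + block_len]
    best_v, best_err = 0, float("inf")
    for v in range(-search, search + 1):
        s = block_start + v
        if s < 0 or s + block_len > len(curr):
            continue  # candidate window falls off the frame edge
        err = sum(abs(a - b) for a, b in zip(block, curr[s:s + block_len]))
        if err < best_err:
            best_v, best_err = v, err
    return best_v

prev = [0, 0, 255, 255, 0, 0, 0, 0]
curr = [0, 0, 0, 0, 255, 255, 0, 0]   # the object has moved 2 pixels right
print(best_motion(prev, curr, 2, 2, 3))  # 2
```

Feed this a periodic pattern like brickwork and several shifts tie for the lowest error, so the chosen vector is essentially arbitrary – which is where the disturbing artefacts come from.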

Black box magic – clever inventors regularly come up with new approaches and better algorithms. The chances are, though, that like motion compensated conversion, the current gold standard, any new approach will work well with one sort of image and not with another.

So which do you choose? I suggest the smart answer is that you do not choose one mechanism for a whole clip, but choose the best algorithm for a short sequence within the clip.

If you go out and buy a hardware box – even if it has the best possible processing – then your only option is to have that process either in the circuit or bypassed. And as I started out by saying, quite often we can get away without doing anything.

But if you do it in software, then that software can make the decision of which technique to use, on a scene by scene basis or even a frame by frame basis if necessary. That is the flexibility of software: you can choose how much you use. If the next scene needs something different, then switch to it. How do you switch? You’ll have to read the white paper to find out – we think it’s quite simple and therefore quite clever – but you should decide.
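To make the idea concrete, here is a hypothetical per-scene dispatcher – my own sketch, not the iCR implementation – that picks a technique from simple analysis flags produced by some upstream scene detector, falling back to the safest option where motion estimation is likely to fail.

```python
# A hypothetical per-scene dispatcher (illustrative only): choose the
# conversion technique from analysis flags attached to each scene.
def choose_technique(scene):
    """scene: dict of boolean flags from a hypothetical upstream analyser."""
    if scene.get("clean_2_3_cadence"):
        return "do_nothing"          # cadence intact: pass it through
    if scene.get("repetitive_texture"):
        return "linear_blend"        # brickwork, wheels: vectors unreliable
    if scene.get("fast_motion"):
        return "motion_compensated"  # worth the cost where it shines
    return "drop_repeat"             # quiet scenes: cheapest repair

print(choose_technique({"repetitive_texture": True}))  # linear_blend
print(choose_technique({"clean_2_3_cadence": True}))   # do_nothing
```

The point is not these particular rules but the shape of the solution: in software the decision is just a branch, taken as often as the content demands.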

My conclusion, then, is that the perfect world where interlace no longer exists and cadence is a thing of the past has still not arrived. Until that glorious future, we still have 24fps origination in a 29.97i transmission chain and have to be flexible about how we handle it. The good news, though, is that if you make it part of the transcode and quality control process on a software platform like iCR, you can adapt the processing to optimize the output scene-by-scene.

If you want to find out more about the problems and my solution, we have a white paper.

I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?

White paper: Adaptive File-based Standards Conversion of Mixed Cadence Material

