Do you have a strategy to cope with mixed cadence content?

Why Captioning Need Not Be Scary

So here’s the problem. Closed captions – or subtitles, outside North America – are now pretty much mandatory on television broadcasts, and even where the law does not require them, your audience still expects them.

Today consumers expect to get their “television content” online and on handheld devices as well as from broadcast sources. Not unreasonably, many of them will expect to find captions on these services just as they would on broadcast television. So we really ought to be sure that captions are created as part of the production, and delivered whatever the platform may be.

But… today I count 15 different subtitle input types and 23 different delivery types across platforms. That is 345 possible combinations, which means a lot of transcoding. Add to that the need to rework captions for different screen resolutions and you could end up with thousands of different paths. Which is why people tend to run away screaming from the problem.
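To make the arithmetic concrete, here is a minimal sketch in Python. The format names are illustrative stand-ins rather than the actual lists of 15 and 23, but the multiplication works the same way:

```python
from itertools import product

# Hypothetical stand-ins for the real lists of caption formats.
input_formats = ["SRT", "EBU STL", "SCC", "CAP", "PAC"]               # 15 in practice
delivery_formats = ["TTML", "WebVTT", "DFXP", "CEA-608", "SMPTE-TT"]  # 23 in practice
resolutions = ["SD", "HD", "UHD"]

# Every (input, delivery) pair is a potential conversion path...
paths = list(product(input_formats, delivery_formats))
print(len(paths))  # 5 x 5 = 25 here; 15 x 23 = 345 in practice

# ...and reworking captions per resolution multiplies the count again.
paths_by_resolution = list(product(input_formats, delivery_formats, resolutions))
print(len(paths_by_resolution))  # 75 here; well over a thousand in practice
```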

Hold on, though. If we stop and think about it, we have already bravely fought our way through a challenge of this scale. We deal regularly in content that is created in standard definition, high definition or something else, at 24, 25 and 29.97 frames a second, and wrapped in QuickTime, MXF or something nasty. We have to deliver to multiple broadcast formats and the vast number of online streams.

It was tough, certainly, but we did it. We created workflows which looked at what was coming in, processed the file accordingly, and adjusted the metadata so that the files made sense on the way out. Clever workflows can do all of this using a single platform like Dalet AmberFin.
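As a rough illustration of that inspect-process-deliver pattern, here is a minimal sketch. The function names and fields are assumptions made for this example only; they are not the Dalet AmberFin API:

```python
# A minimal sketch of the inspect / process / deliver pattern described above.

def inspect(source: dict) -> dict:
    """Work out what has come in: wrapper, frame rate, caption format."""
    # A real workflow would probe the file itself; here the source dict
    # already carries the facts we need.
    return {
        "wrapper": source.get("wrapper", "unknown"),
        "frame_rate": source.get("frame_rate", 25.0),
        "caption_format": source.get("caption_format", "none"),
    }

def process(info: dict, target: dict) -> dict:
    """Decide which conversions this particular file needs."""
    steps = []
    if info["wrapper"] != target["wrapper"]:
        steps.append(f"rewrap {info['wrapper']} to {target['wrapper']}")
    if info["frame_rate"] != target["frame_rate"]:
        steps.append(f"retime captions {info['frame_rate']} to {target['frame_rate']} fps")
    if info["caption_format"] != target["caption_format"]:
        steps.append(f"transcode captions {info['caption_format']} to {target['caption_format']}")
    return {"steps": steps, "target": target}

def deliver(plan: dict) -> dict:
    """Adjust the outgoing metadata so the file makes sense on the way out."""
    return {"metadata": plan["target"], "applied": plan["steps"]}

# Example: one incoming file, one delivery specification.
incoming = {"wrapper": "MXF", "frame_rate": 29.97, "caption_format": "SCC"}
spec = {"wrapper": "MP4", "frame_rate": 25.0, "caption_format": "WebVTT"}
print(deliver(process(inspect(incoming), spec)))
```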

If we can do it for audio and video – which are complicated – then why should it be hard to do it for captions, which, after all, are just little bits of text to be displayed at pre-defined times? The answer is that it need not be hard: it really is that simple, once you get the workflow right.
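To show just how simple the underlying data is, here is a minimal sketch of a caption as timed text, with a hypothetical helper that renders its timing at different frame rates (drop-frame handling ignored for brevity):

```python
from dataclasses import dataclass

@dataclass
class Caption:
    """A caption really is just text plus a pre-defined display window."""
    start: float  # seconds from programme start
    end: float    # seconds from programme start
    text: str

def to_timecode(seconds: float, fps: float) -> str:
    """Render seconds as HH:MM:SS:FF (simplified: no drop-frame handling)."""
    whole = int(seconds)
    frames = int((seconds - whole) * fps)
    return f"{whole // 3600:02d}:{(whole // 60) % 60:02d}:{whole % 60:02d}:{frames:02d}"

cap = Caption(start=14.5, end=16.0, text="Hello, world.")
print(to_timecode(cap.start, 25.0))   # 00:00:14:12
print(to_timecode(cap.start, 29.97))  # 00:00:14:14 (same text, different frame count)
```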

That same understanding of how captioning works is also important in the other critical area: editing the content. You will need to provide different versions of the same piece of content for different platforms: for compliance and censorship; for different commercial break patterns; to comply with content licensing and intellectual property rights; and to fit different running times.

Once you have edited the content, you want to deliver it as quickly as possible. So sending it off to a specialist caption house to get a new set of subtitles is not an option. It’s far better to automatically edit the caption file as you edit the content.
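As a sketch of what “automatically edit the caption file” can mean in the simplest case, here is a hypothetical conform step that drops captions falling inside a removed section and shifts the rest back. A real conform would also trim captions that overlap the cut and handle multiple cuts:

```python
from dataclasses import dataclass, replace

@dataclass
class Caption:
    start: float  # seconds
    end: float    # seconds
    text: str

def conform_captions(captions: list[Caption], cut_in: float, cut_out: float) -> list[Caption]:
    """Re-time captions after the section [cut_in, cut_out) is removed,
    so they still line up with the edited content."""
    removed = cut_out - cut_in
    conformed = []
    for cap in captions:
        if cap.end <= cut_in:
            # Entirely before the cut: unchanged.
            conformed.append(cap)
        elif cap.start >= cut_out:
            # Entirely after the cut: shift back by the removed duration.
            conformed.append(replace(cap, start=cap.start - removed, end=cap.end - removed))
        # Captions that overlap the cut are dropped in this simple sketch.
    return conformed

caps = [Caption(1.0, 3.0, "Before the break"),
        Caption(10.0, 12.0, "Inside the cut"),
        Caption(20.0, 22.0, "After the break")]
print(conform_captions(caps, 8.0, 15.0))
# Keeps the first caption, drops the one inside the cut,
# and shifts the last one to run from 13.0 to 15.0.
```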

We have had captions for 40 years now, and along the way we have developed workflows to make them work. Today, though, we have to adopt a file-factory approach or we will be swamped in data. With the advanced captioning options from Dalet AmberFin, you can move at your own pace to an integrated approach, guaranteed to be free of scary surprises. Creating a captioned clip should take no more effort, time or money than creating a non-captioned clip.

If you want to find out more about how to improve your captioning workflow, click here to download our white paper instantly. You can also check out the latest Dalet AmberFin datasheet, which details how the Dalet media processing platform handles integrated closed captioning and subtitling workflows, processing captioning data seamlessly as part of media operations.

I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
