"Funny Swedish subtitle, when the subtitles broke down" - CC Image courtesy of Robert Nyman on Flickr

According to the UK’s Office of Communications (Ofcom), almost 80% of people who have used subtitles or captioning have no hearing impairment. Wow, that’s a big number. What does that mean for your audience, and why is captioning and subtitling technology still so impenetrable?

I am sure that if you think hard enough, you’ll remember when you last turned on captions or subtitles. It might have been because someone else in the house was playing music or on the phone. It might have been to keep the noise level down late at night, or it might have been because you have hearing issues. Whatever the reason, captions make content more accessible and help stop viewers from channel hopping. I have written at length (here and here) about the mandatory need for captions in many territories, but today I’m going to argue that captions actually contribute to the success of a title rather than simply being a box ticked for compliance.

We all know that when a title goes international, there are many territories where the original soundtrack with captioning is the preferred option, but there are benefits in the native territory too. The Ofcom report is one of the better studies on the subject and reminds us that the number of words per minute (wpm) we can read is lower than the number of words per minute we can hear and comprehend. That leads to a significant challenge: how do we reduce the word count? Quite simply, the creation of good, understandable captions is a skill and, for now, only a human can do it.
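To make that reading-speed gap concrete, here is a back-of-the-envelope sketch in Python. The wpm figures are illustrative assumptions for the arithmetic, not numbers taken from the Ofcom report:

```python
# Illustrative only: the rates below are assumptions, not Ofcom figures.
SPEECH_WPM = 220   # assumed conversational speech rate
READING_WPM = 160  # assumed comfortable subtitle reading rate

def max_caption_words(duration_seconds: float, reading_wpm: int = READING_WPM) -> int:
    """How many words a viewer can comfortably read while a caption is on screen."""
    return int(duration_seconds * reading_wpm / 60)

# Four seconds of dialogue at speech rate carries roughly 14 words...
spoken_words = int(4 * SPEECH_WPM / 60)
# ...but a caption shown for the same four seconds can only hold about 10.
readable_words = max_caption_words(4)

print(spoken_words, readable_words)  # 14 vs 10 -> the captioner must trim roughly 30%
```

Under these assumed rates, roughly a third of the spoken words have to go, and deciding which third to cut without losing meaning is exactly the skill a good captioner brings.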

If you haven’t seen Rhett & Link’s auto-captioning experiment, I recommend getting some good coffee, putting on some headphones and relaxing in front of YouTube for a little under five minutes of grammatically well-formed gibberish. It shows why a human is needed to author captions, but what about all the downstream processes? A white paper that is now a little old, but still accurate, has re-emerged on the Dalet Academy site. It details some of the history of how we got to where we are today with captions and subtitles. It’s always been a bit messy, with little incentive to clean up the workflows… until now.

Captions and subtitles are now becoming a significant cost. Not because of the human who does all the valuable creative work, but because of the number of special cases in the workflow that a lack of coordination and standardisation brings. Fortunately, a lot of good work is going on in the W3C to bring some order to the chaos of captions. TTML – the Timed Text Markup Language v1.1 – covers pretty much all of today’s applications that use Western alphabets. It makes a great mezzanine format for getting into and out of the currently incompatible media representations of captions and subtitles. Tooling from companies like Dalet now allows content owners and aggregators to separate the embedding of captions from the processing, editing and manipulation of those captions. This may sound trivial, but when you are trying to ship a title to 500 different destinations, many of which vary only in their caption and metadata delivery requirements, it is a big deal.
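To make “mezzanine format” concrete, here is a minimal sketch, using only the Python standard library, that serialises a couple of caption cues into a bare-bones TTML document. The cue data and helper name are hypothetical, and real delivery profiles layer styling, regions and timing rules on top of this skeleton:

```python
# Minimal sketch: plain cue data in, a tiny TTML document out.
# The cues and helper are illustrative; real delivery specs demand much more.
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"
XML_NS = "http://www.w3.org/XML/1998/namespace"

def cues_to_ttml(cues, lang="en"):
    """Serialise (begin, end, text) cues into a bare-bones TTML document."""
    ET.register_namespace("", TTML_NS)
    tt = ET.Element(f"{{{TTML_NS}}}tt", {f"{{{XML_NS}}}lang": lang})
    div = ET.SubElement(ET.SubElement(tt, f"{{{TTML_NS}}}body"), f"{{{TTML_NS}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TTML_NS}}}p", {"begin": begin, "end": end})
        p.text = text
    return ET.tostring(tt, encoding="unicode", xml_declaration=True)

cues = [
    ("00:00:01.000", "00:00:03.500", "Captions help everyone,"),
    ("00:00:03.600", "00:00:06.000", "not just viewers with hearing loss."),
]
print(cues_to_ttml(cues))
```

The point of the exercise is the shape of the workflow: once cues live in a single timed-text representation like this, converting in and out of the various embedded delivery formats becomes a mechanical step rather than a per-destination special case.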

We’re not yet at the stage where TTML solves all the world’s problems, but anyone putting together a new workflow and choosing a legacy proprietary format is either very courageous or entirely confident that the company which owns that format will still be in business next year. Captions and subtitles are made with tricky technology. If you’d like to see a Bruce’s Shorts webinar on captioning technologies, drop us a line at academy@dalet.com and give us some hints about what you need to know.

’Til next time.

Bruce
