First, and most obviously, they make television accessible to people whose hearing is impaired and who would otherwise find it impossible to enjoy television. These viewers are otherwise perfectly normal members of the community: they shop with your advertisers and want to pay your cable subscription or license fee to watch your programming. So of course you want to encourage them.
Second, in many parts of the world literacy levels are lower than they should be. All the evidence shows that reading words as you hear them is an enormous aid to confident reading. Countries like India have recognized this: they provide captions on a huge amount of content and have seen real returns in growing literacy.
Whether you are helping the hearing impaired or those with poor literacy skills, you are boosting democracy: you are helping the whole population be better informed and better able to take their place in society. That, surely, is a good thing to do.
There are also times when those with keen hearing and good literacy will want to use captions. Long nights, sitting up with a fussy baby. Trying to catch the news in a crowded railway station. Checking the on-screen action in a sports bar.
Regardless of the reasons, captions are a good thing: they keep audiences tuned in to your channel.
So adding captions should not be a nuisance. However, anybody who has ever had to deal with the mind-boggling minutiae of caption transformations and synchronization with an audio-visual cut and splice job will know that in fact captions can be a nuisance. For many of us, it is now a federally mandated nuisance.
The trick is to manage the captions sensibly. At AmberFin, we believe you should handle the caption file in exactly the same way as the audio and video files.
The modern, file-based broadcaster transcodes the audio and video on ingest to a house format. Why not treat the caption files in exactly the same way, and store them in the same wrapper as the audio and video?
If you have to deliver captions in different formats – and increasingly we are expected to provide captions for online content as well as broadcast – then transcode them again at the point of delivery. Just like the audio and video.
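As a toy illustration of what a delivery-side caption transcode can look like, here is a minimal sketch converting SRT to WebVTT for online delivery. The formats named are real, but this is a simplified assumption of a much richer process: a production converter would also handle styling, positioning, character encodings, and malformed input.

```python
import re

def srt_to_webvtt(srt_text: str) -> str:
    """Convert SRT caption text to WebVTT (toy sketch only)."""
    lines = []
    for line in srt_text.splitlines():
        # WebVTT uses '.' rather than ',' as the millisecond separator
        lines.append(re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", line))
    # A WebVTT file must begin with a 'WEBVTT' header line
    return "WEBVTT\n\n" + "\n".join(lines) + "\n"

srt = """1
00:00:01,000 --> 00:00:03,500
Hello, world.
"""
print(srt_to_webvtt(srt))
```

The point is not the ten lines of Python but the architecture: the house-format caption file is the single source of truth, and each delivery format is derived from it at the output stage.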
In between, if the video gets edited so does the audio. And so does the caption file. All it needs is a bit of intelligent processing to make sure that the written and spoken words still match, and there is still enough time to read each caption.
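To make that "intelligent processing" concrete, here is a minimal sketch of the kind of check such a tool might run: shift the caption cues to follow an edit, then flag any caption that no longer leaves enough time to read. The function name, the data shape, and the threshold are all illustrative assumptions, not AmberFin's actual algorithm; reading-speed guidelines vary by broadcaster, with figures around 15–20 characters per second commonly quoted.

```python
def shift_and_check(cues, offset_s, max_cps=17.0):
    """cues: list of (start_s, end_s, text) tuples.
    Shifts every cue by offset_s seconds, then returns the shifted
    cues plus the indices of any cue whose reading speed exceeds
    max_cps characters per second (an assumed threshold)."""
    shifted, too_fast = [], []
    for i, (start, end, text) in enumerate(cues):
        start, end = start + offset_s, end + offset_s
        shifted.append((start, end, text))
        duration = end - start
        # Flag cues that are zero-length or too dense to read
        if duration <= 0 or len(text) / duration > max_cps:
            too_fast.append(i)
    return shifted, too_fast

cues = [
    (1.0, 4.0, "Hello there."),
    (4.0, 4.5, "This caption has far too much text."),
]
shifted, flagged = shift_and_check(cues, offset_s=-0.5)
# The second cue is flagged: 35 characters in half a second
```

A real retimer would do more than flag problems (splitting long captions, stretching cue durations, re-aligning to shot changes), but the principle is the same: treat the caption file as first-class media that moves through the edit alongside the audio and video.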
Of course, in the USA, there is a simplicity to the caption workflows that has come about from a predominantly mono-lingual approach to the captioning requirements. In Europe and the rest of the world, we have a lot of different languages with a lot of different character sets, all using equipment from comparatively small, local vendors. The result is that there isn’t really a single contribution format that allows the encapsulation of all the different character sets in a way that can be simply played out by a dumb playout server.
We recently looked at the problem on behalf of a customer. Using standards, we can make the contribution package easy for the teletext workflow. We can equally make it easy for the Unicode subtitles in the distribution multiplex. What cannot be done today is to build a single standards-based package that simultaneously makes life easy for the teletext path AND the DVB bitmap subtitles in the multiplex AND the Timed Text subtitles online AND for the video in the playout server AND for the audio in the playout server.