When Cicero said, “the good of the people is the chief law”, he was probably not thinking about closed captioning, given that he said it about 2,100 years ago. But whether or not they are mandated by law, I think we can all agree that closed captions – or subtitles – are very much “for the good of the people”.
First, and most obviously, they make television accessible to viewers with impaired hearing who would otherwise struggle to follow what is happening.
Second, even if you can hear well there may be times when you do not want to listen. You could be watching in silence to help get a baby to sleep, or you could be in an airport or train station watching news or sports.
Third, it helps literacy if you can see the words as they are spoken, as Brij Kothari, who pioneered Same Language Subtitling on TV for mass literacy in India, demonstrated.
For these reasons and more, responsible broadcasters aim to caption most if not all of their programs. And in the USA at least, it is now the law. Indeed, under the 21st Century Communications and Video Accessibility Act, not only must you put closed captions on broadcast content, but if that same material is made available online, it must have captions there too.
I am writing this on 26 September 2013: the full effect of the law comes into effect in four days’ time.
So on the upside we can all agree that captions are a good idea, on air and online. On the downside, it adds another layer to the repurposing factory we need to build to get our content onto every platform. The last time I counted there were 15 different input caption formats and 23 delivery formats, so the potential conversion matrix is huge.
But we already have the answer. In most file-based environments we handle video and audio by accepting whatever comes in, transcoding it to a house format, then doing everything with the house format right up to the point of delivery, when we transcode it to whatever needs to go out.
The wrappers that carry the audio and video components can also carry other data in the payload, so why not include the captions? Just like the pictures and sound, they transcode to the house format on the way in and to the destination format on the way out.
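To make the idea concrete, here is a minimal sketch of normalising one common input format (SubRip/SRT) into a hypothetical frame-based house representation. The `Caption` class, the 25 fps house rate, and the function names are all illustrative assumptions, not any particular vendor's API:

```python
# Sketch: convert one SRT cue into a hypothetical frame-based house format.
# FPS and the Caption class are illustrative assumptions, not a real API.
import re
from dataclasses import dataclass

FPS = 25  # assumed house frame rate


@dataclass
class Caption:
    start_frame: int
    end_frame: int
    text: str


def srt_time_to_frames(t: str) -> int:
    """Convert an SRT timestamp 'HH:MM:SS,mmm' into a frame count."""
    h, m, rest = t.split(":")
    s, ms = rest.split(",")
    total_ms = ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)
    return round(total_ms * FPS / 1000)


def parse_srt_cue(block: str) -> Caption:
    """Parse a single SRT cue block into the house representation."""
    lines = block.strip().splitlines()
    start, end = re.split(r"\s*-->\s*", lines[1])
    return Caption(srt_time_to_frames(start), srt_time_to_frames(end),
                   " ".join(lines[2:]))


cue = "1\n00:00:01,000 --> 00:00:03,500\nHello, world."
print(parse_srt_cue(cue))
```

A real system would do this for each of the 15-odd input formats on ingest, and run the reverse mapping for each delivery format on the way out.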
Along the way, if you edit the content, you edit the caption file at the same time. A bit of intelligence in the system can check as you go that the caption file is still valid: if the edit means audiences have to read a complex sentence in 17 frames, then it will flag up a warning. We can provide that sort of intelligence.
The result is that you handle captions throughout the packaging process, and deliver them to as many different formats as you need, without any extra work. You would not think of separating audio from video in this process, so why should captions be handled any differently?
It means we can follow Cicero’s wise words by doing something for the good of the people, and along the way staying within the law.