Did you know that most of the world does not distinguish between the terms “captions” and “subtitles”? Except, that is, in the United States and Canada, where these terms do carry different meanings:
In North America, “subtitles” are designed to help viewers who can hear but cannot understand the language or accent, or for whom the speech is not entirely clear; subtitles transcribe only dialogue and some on-screen text.
“Captions,” on the other hand, are designed for viewers who are deaf or hard of hearing. They describe all significant audio content: spoken dialogue, non-speech information such as the identity of speakers and, occasionally, their manner of speaking, along with any significant music or sound effects, using words or symbols.
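To make the distinction concrete, here is a sketch in WebVTT, one common text format for carrying this kind of data (the timings, speaker names, and dialogue are invented for illustration):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
<v Anna>We need to leave before sunrise.

00:00:04.500 --> 00:00:06.000
[door slams]

00:00:06.500 --> 00:00:09.000
<v Tom>(whispering) Then we go now.
```

A North American–style “subtitle” track would typically carry only the dialogue text, while a “caption” track, as above, also identifies the speakers with voice tags, notes manner of speaking, and describes sound effects such as the slamming door.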
The United Kingdom, Ireland and many other countries use the term “subtitles,” and there is often a single “subtitle” stream that serves the hard of hearing, deaf and foreign-language communities alike. This is largely because in many parts of the world several languages are spoken, and content is often created for use across international boundaries. In that case, putting enough text on the screen for a foreign-language speaker to understand, and putting enough text on the screen for somebody who is hard of hearing to understand, amount to much the same thing.
Open or closed?
Captioning also comes in different flavors. “Open captioning” typically describes text that is “burned into the video” and is thus on-screen and visible to all viewers, whereas “closed captioning” typically describes text that is carried as data and put on the screen by the display or decoder at the viewer’s discretion. Closed captioning also differs slightly between TV, DVD and cinema.
So whether you say “tomahto” or “tomayto,” when it comes to handling captions and subtitles in your file-based workflows, the challenges (and solutions) are exactly the same!