If you read my last post on the evils of interlace, you’ll know how strongly I feel about the topic. Just like maggot therapy went out of fashion with the advent of antibiotics, interlace is on its way out with the arrival of HEVC.
Hold on a second, I hear you say: if interlace is so bad, why was it ever invented, and what the heck am I going to do with all this interlaced content I still have?
To answer that question, we must once again return to the dawn of television history. The whole chain, from camera to home receiver, depended upon valves and diodes: devices that could barely keep up with any sort of video bandwidth. So interlaced scanning was invented to give the illusion of vertical detail within the constraints of the electronics of the time.
Interlace is a simple compression technique (if you are feeling generous) or a fudge (if you are not). The basic trade-off is how many vertical lines of television resolution you can achieve for a given frame rate in a given signal bandwidth. Interlace is a compromise that transmits 60 (or 50) pictures a second at half the vertical resolution: all the odd lines in the first picture, then all the even lines in the second. Compared with sending full frames at the same rate, you only need half the bandwidth; for the same bandwidth, you get twice the apparent vertical resolution; and you fool the eye into seeing smooth movement at an effective 60 (or 50) frames per second.
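To make the mechanics concrete, here is a minimal sketch of that odd/even split, a Python illustration of my own rather than anything from the broadcast chain; the frame size and the NumPy usage are assumptions made purely for the example.

```python
import numpy as np

# A hypothetical 1080-line progressive luma frame (pixel values are arbitrary).
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

# Interlace: send only the odd lines in one picture, the even lines in the next.
top_field = frame[0::2, :]      # lines 0, 2, 4, ... (540 lines)
bottom_field = frame[1::2, :]   # lines 1, 3, 5, ... (540 lines)

# Each field carries half the lines, hence half the bandwidth per picture.
assert top_field.shape[0] == frame.shape[0] // 2

# A naive "weave" de-interlace simply slots the two fields back together.
# This is only clean when nothing moved between the two fields.
woven = np.empty_like(frame)
woven[0::2, :] = top_field
woven[1::2, :] = bottom_field
assert np.array_equal(woven, frame)
```

In real footage the two fields are captured a fiftieth (or sixtieth) of a second apart, so anything that moves between them produces the combing artifacts that a de-interlacer later has to clean up.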
And just like maggot therapy, we really don’t need to do this any more because we have better options. Modern solid-state electronics and digital processing are more than capable of supporting true 50 fps, or indeed 120 fps, or a more international 150 fps. With proper progressive scanning. Without compromise.
With each passing year, it gets harder to correctly display interlaced pictures. You just can’t buy CRTs any more. Plasma, LCD, OLED and the rest are all progressively scanned screens. To show an interlaced television stream on them, these devices have to convert it internally to progressive scanning. Broadcasters and creative producers have no control over how that conversion works inside the television or set-top box, or over what artifacts it will add to the image.
Many producers take great care over the look of their content. They may choose to shoot in a “film look” mode, at 25 fps. They may choose to shoot 50p to capture fast movement. What they have in common is that they want viewers to see what they created. Converting everything to an interlaced scan just for the hop from the broadcaster to the home means that whether that vision survives depends entirely upon the de-interlacer in the receiver, a chip likely chosen by the manufacturer solely on the basis of cost.
One of the great advances in the HEVC specification is that it does not support interlace. There are simply no interlace coding tools: everything in the specification assumes progressive scanning. Ding dong, the witch is dead!
However, if your content was created in an interlaced format or restored from an archive in an interlaced format, don’t give up! Passing it through a professional de-interlacer such as the Dalet AmberFin iCR will at least ensure a clean progressive signal, free from artifacts, going into the HEVC encoder. The iCR is also the perfect host for a software-based HEVC codec: it integrates high-quality de-interlacing within a generic transcode platform and tightly couples transcoding to a variety of media QC tools to check quality before delivery.
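For readers who want to experiment with the same idea using freely available tools, here is a rough sketch of the de-interlace-then-encode step built from ffmpeg’s yadif de-interlacer and the open-source libx265 HEVC encoder. This is an illustration only, not the iCR pipeline or its codec, and the file names are hypothetical.

```python
import subprocess

# De-interlace, then encode to HEVC, with open-source tools (requires an
# ffmpeg build that includes libx265). A stand-in illustration only,
# not the Dalet AmberFin iCR.
subprocess.run([
    "ffmpeg",
    "-i", "interlaced_source.mxf",           # hypothetical interlaced input
    "-vf", "yadif=mode=1:deint=interlaced",  # one frame per field -> 50p/60p
    "-c:v", "libx265",                       # software HEVC encoder
    "-crf", "22",                            # quality-based rate control
    "progressive_output.mp4",
], check=True)
```

The point is the ordering: the content is made cleanly progressive once, under the producer’s control, rather than leaving the job to whatever de-interlacer happens to sit inside each viewer’s television.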
With 30 years in the industry, Bruce looks after Media Technology for Dalet. An engineer who designed antennas, ASICs, software, algorithms, systems and standards, Bruce is best known for being @MrMXF and you can get his book on Amazon.