
May 22, 2015
LEAN Mean Versioning Machine
How to deliver more, improve quality and reduce costs by optimising media processing.

Between UHDTVs, smartphones, tablets and a plethora of other screens, devices and services through which to consume media, delivering content has become an uphill battle. Consumers increasingly demand a wider variety of content across progressively diverse delivery media, putting growing pressure on content owners and broadcasters to re-version, repackage and repurpose media. However, through optimal implementation of open technologies and IT best practice, broadcasters and content owners can not only respond to this demand but also add greater flexibility, efficiency and quality to their workflows and outputs.
 
Media is transcoded at a number of touch points in the production and distribution process, potentially degrading the source quality with each iteration. The problem is that content is, on average, encoded and decoded more times than most of the codecs commonly used by broadcasters today were designed to withstand. From origination to its eventual destination, content may now be transcoded as many as twenty times.
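As a toy illustration of that generational loss – using JPEG stills rather than broadcast codecs, with made-up settings – the sketch below re-encodes the same picture over and over; the quality lost at each hop never comes back:

```python
# Toy illustration only: JPEG stills stand in for broadcast codecs, and the
# quality settings are made up. The point is that each lossy hop loses
# information that no later hop can restore.
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
source = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in "camera original"
current = Image.fromarray(source)

for generation in range(1, 11):
    buf = io.BytesIO()
    quality = 90 if generation % 2 else 75              # real chains rarely share settings between hops
    current.save(buf, format="JPEG", quality=quality)   # lossy encode
    buf.seek(0)
    current = Image.open(buf).convert("RGB")             # decode, ready for the next hop
    print(f"generation {generation}: PSNR vs original = "
          f"{psnr(source, np.asarray(current)):.2f} dB")
```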
 
These statistics reflect the complexity of the broadcast business today. Companies who shoot or produce content aren’t necessarily those who will aggregate it, and those who aggregate content are not always the same as those who create the various accompanying media assets (trailers, promos, etc.). At every step, the file will be encoded, decoded and re-encoded several times. Content destined for overseas distribution or incoming from foreign producers/broadcasters may have to undergo yet more transcode steps in preparation for final delivery.
 
The fact is, media takes a bit of a beating between acquisition and its various outputs, resulting in a significant impact on the technical and subjective quality of the media that the end user eventually sees. But media processing is also CPU (or GPU) intensive, so the brute-force alternative of simply throwing more, higher-quality processing at every step is expensive in terms of infrastructure.
 
To improve quality while reducing cost, we need to consider how to minimize the number of times media is processed and ensure that the media processing that has to be done is of the highest quality.
 
For example, creating packages and versions is far more efficient when you have a clear, standardized view of where all the “raw” components of the packages are and can “virtually” assemble and store the versions and packages as metadata, leaving the source media in its original state. In this case, we only re-encode the file at the point of delivery – employing LEAN or “just-in-time” methodology in media workflows.
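As a minimal sketch of that idea (using hypothetical structures, not any particular MAM's data model), a version or package can live purely as metadata that references untouched source media, with the single re-encode deferred to the point of delivery:

```python
# A minimal sketch of "virtual assembly" (hypothetical structures): a version
# is just metadata pointing at untouched source media, and is only rendered
# and transcoded at delivery time.
from dataclasses import dataclass

@dataclass
class SourceClip:
    media_id: str      # reference to the untouched source file in storage
    tc_in: str         # SMPTE timecode in, e.g. "00:01:00:00"
    tc_out: str

@dataclass
class VirtualVersion:
    title: str
    segments: list[SourceClip]          # the "edit" lives entirely in metadata

def render_at_delivery(version: VirtualVersion, delivery_profile: str) -> str:
    """Only at this point is any media actually decoded and re-encoded."""
    plan = " + ".join(f"{s.media_id}[{s.tc_in}-{s.tc_out}]" for s in version.segments)
    return f"transcode ({plan}) -> {delivery_profile}"

uk_version = VirtualVersion("Ep101 UK TX", [
    SourceClip("MASTER_EP101", "00:00:30:00", "00:22:15:00"),
    SourceClip("UK_RATING_CARD", "00:00:00:00", "00:00:05:00"),
])
print(render_at_delivery(uk_version, "AS-11 DPP"))   # just-in-time, single encode
```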
 
This also serves to insulate operators from the complexities of media manipulation and processing, leaving them confident that those automated actions “just happen” and ensuring that all their interactions with media are about making creative choices and applying human judgment to business processes.

Knowing where media came from – tracking the structural and genealogical media metadata – is also critical in automating media processing (speaking of which, attend our next webinar on BPM!) and is a key part of a MAM-driven workflow. With new resolutions, frame rates and codecs constantly emerging – and an increase in crowd-sourced content driving the number and variety of devices used for acquisition – strong media awareness and understanding ensure that the “right” or, more honestly (since any processing will degrade content), “least-worst” media-processing path can be chosen.
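A hedged sketch of what that “least-worst” choice might look like in practice (entirely hypothetical rules, not any real product's logic): prefer operations such as a simple rewrap that avoid adding another lossy generation:

```python
# Hypothetical "least-worst" path selection: prefer operations that avoid
# another lossy encode generation wherever the source metadata allows it.
def choose_processing_path(source: dict, target: dict) -> list[str]:
    """Pick the cheapest-in-quality route from source metadata to a target spec."""
    path = []
    if source["codec"] == target["codec"] and source["resolution"] == target["resolution"]:
        path.append("rewrap only")            # no decode/encode, zero generational loss
    else:
        if source["resolution"] != target["resolution"]:
            path.append(f"scale {source['resolution']} -> {target['resolution']}")
        path.append(f"transcode {source['codec']} -> {target['codec']}")  # one lossy generation
    return path

source_meta = {"codec": "AVC-Intra 100", "resolution": "1920x1080"}
print(choose_processing_path(source_meta, {"codec": "AVC-Intra 100", "resolution": "1920x1080"}))
print(choose_processing_path(source_meta, {"codec": "XDCAM HD 50", "resolution": "1920x1080"}))
```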
 
Overall, when it comes to delivering the highest image quality, the explosion in acquisition formats makes good asset management more important than ever, as it allows content owners to transparently manage that additional complexity.
YOU MAY ALSO LIKE...
5 reasons why media delivery standards might be good for your business
Like me, I am sure that you have been to a restaurant in a group where everyone orders from the set menu EXCEPT for that one person who orders the exotic, freshly prepared fugu, which requires an extra 30 minutes of preparation from a licensed fugu chef so that the customers don't die eating it. Restaurant etiquette means that our main courses are served at the same time, forcing everyone to spend a long time hungry, waiting for the special case. And if you split the bill equally, the special case is subsidised by the people wanting the set meal. Does this model relate to the media industry? Is there a cost for being special? How can we reduce that cost? What gets done with the cost savings? How can you help? Fortunately, those 5 questions lead into 5 reasons why delivery standards might be a good idea.

1. The set meal is more efficient than the a la carte

I must confess that when I write this blog while hungry there will be a lot of food analogies. I'm quite simple really. In the "set meal" case, you can see how it's easier for the kitchen to make a large volume of the most common meal and to deliver it more quickly and accurately than a large number of individual cases. In the file delivery world, the same is true. By restricting the number of choices to a common subset that meets a general business need, it is a lot easier to test the implementations by multiple vendors and to ensure that interoperability is maximised for minimum cost. In a world where every customer can choose a different mix of codecs, audio layouts, subtitle and caption formats, you quickly end up with an untestable mess. In that chaotic world, you will also get a lot of rejects. It always surprises me how few companies have any way of measuring the cost of those rejects, even though they are known to cause pain in the workflow. A standardised, business-oriented delivery specification should help to reduce all of these problems (there is a small illustrative sketch of this idea at the end of this post).

2. Is there a cost for being special?

I often hear the statement, "It's only an internal format - we don't need to use a standard". The justification is often that the company can react more quickly and cheaply. Unfortunately, every decision has a lifespan. These short-term special decisions often start with a single vendor implementing the special internal format. Time passes, then a second vendor implements it, then a third. Ultimately, the cost of custom-engineering the special internal format is paid 3 or 4 times with different vendors. Finally, the original equipment will reach end of life and the whole archive will have to be migrated. This is often the most costly part of the life cycle, as the obsolete special internal format is carefully converted into something new and hopefully more interchangeable. Is there a cost for being special? Oh yes, and it is often paid over and over again.

3. How can we reduce costs?

The usual way to reduce costs is to increase automation and to increase "lights out" operation. In the file delivery world, this means automation of transcode AND metadata handling AND QC AND workflow. At Dalet and AmberFin, all these skills are well understood and mastered. The cost savings come about when the number of variables in the system is reduced and the reliability increases. Limiting the choices of metadata, QC metrics, transcode options and workflow branches increases the likelihood of success.
Learning from the experiences of the Digital Production Partnership in the UK, it seems that tailoring a specific set of QC tests to a standardised delivery specification with standardised metadata will increase efficiency and reduce costs. The Joint Task Force on File Formats and Media Interoperability is building on the UK's experience to create an American standard that will continue to deliver these savings.

4. What gets done with the cost savings?

The nice thing about the open standards approach is that the savings are shared between the vendors who make the software (they don't have to spend as much money testing special formats) and the owners of that software (who spend less time and effort on-boarding, interoperability testing and regression testing when they upgrade software versions).

5. How can you help?

The easiest way is to add your user requirements to the Joint Task Force on File Formats and Media Interoperability list. These user requirements will be used to prioritise the standardisation work and help deliver a technical solution to a commercial problem. For an overview of some of the thinking behind the technology, you could check out my NAB2014 video on the subject, or the presentation given by Clyde Smith of Fox. Until next time.
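To make point 1 a little more concrete, here is a minimal sketch (hypothetical field names, values and costs, not any published specification) of validating deliveries against a deliberately small "set meal" of options and tallying the rejects so their cost can actually be measured:

```python
# Minimal sketch: validate incoming files against a constrained delivery spec
# and tally reject reasons, so the cost of rejects can actually be measured.
from collections import Counter

DELIVERY_SPEC = {                      # the "set meal": a deliberately small menu
    "container": {"MXF OP1a"},
    "video_codec": {"AVC-Intra 100"},
    "audio_layout": {"16ch PCM 48kHz 24-bit"},
    "subtitle_format": {"EBU-TT"},
}

def validate(file_metadata: dict) -> list[str]:
    """Return the reasons a file fails the delivery spec (empty list = pass)."""
    return [
        f"{field}: got {file_metadata.get(field)!r}, expected one of {sorted(allowed)}"
        for field, allowed in DELIVERY_SPEC.items()
        if file_metadata.get(field) not in allowed
    ]

deliveries = [
    {"container": "MXF OP1a", "video_codec": "AVC-Intra 100",
     "audio_layout": "16ch PCM 48kHz 24-bit", "subtitle_format": "EBU-TT"},
    {"container": "QuickTime", "video_codec": "ProRes 422",
     "audio_layout": "2ch PCM 48kHz 16-bit", "subtitle_format": "SRT"},
]

reject_reasons = Counter()
for meta in deliveries:
    for reason in validate(meta):
        reject_reasons[reason.split(":")[0]] += 1

COST_PER_REJECT = 150.0   # hypothetical re-work cost per failed delivery
rejected = sum(1 for meta in deliveries if validate(meta))
print(f"{rejected} rejects, estimated re-work cost {rejected * COST_PER_REJECT:.2f}")
print("most common failure fields:", reject_reasons.most_common())
```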
The Upper Hand in Stadium Sports
A journey to building a Dalet sports stadium

As a Canadian, I was brought up to love hockey, but as a technologist and trainer, there’s nothing better than walking around a stadium, meeting and interacting with all of the different characters involved in a sporting event and witnessing first-hand how coaches, players, fans, directors and video editors, to name a few, can benefit from a good MAM (Media Asset Management) system.

Gone are the days of coaches rummaging through disorganized closets to find tapes of old game footage; in a Dalet Sports Factory stadium, all twenty or more camera feeds are directly ingested into the Dalet Galaxy MAM platform, stamped and synchronized by timecode. Associating multiple video streams from different camera angles, the play-by-play action is logged by professional sports loggers while generating official statistics for the national league. Using Dalet Sports Logger with customizable sports-specific action buttons, a facility can generate metadata-rich content for every sporting event, allowing coaches or players to easily search and preview any play, from any camera angle, that has been recorded, or is currently recording, into the system.

Fans in the stadium can’t help but notice the enormous screens, high above the action, broadcasting instant replays of the game just seconds after they see it happen in real time. Managed by in-house studio directors using Dalet On-Air Playout tools, highlights of the game are automatically created using the key information entered by the sports loggers, which enables the directors to easily select the clips they wish to display on those big screens to excite fans.

Fast search and browse tools give video editors the ability to find key plays based on teams, players, games or any other relevant information. They can then quickly create highlight reels of all the amazing moments of the season using the Dalet OneCut video editor or other well-known non-linear editing systems, such as Adobe Premiere®, Avid® or Final Cut Pro®. All the key plays logged by sports loggers appear on the video editing timeline as markers for any video recorded of the event, making a video editor’s job simple and efficient while editing multiple camera angles.

In any case where the stadium or team has rights to the game footage, Dalet’s integration with YouTube, Twitter and other social media platforms means all highlights, interviews and behind-the-scenes footage will automatically be saved and displayed in the highest quality and correct format, along with all of the descriptive metadata entered by the sports loggers, using the Dalet AmberFin transcode platform. Whenever someone watches a highlight on the web or adds a comment, this data is fed back to the Dalet system, allowing players to have a closer interaction with their fans.

With all these benefits gained from a well-crafted MAM solution, one might suggest that sports bodies need to redefine the rules regarding the use of technology, as a good MAM gives an unfair advantage to the teams that have one over those that don’t.
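For the technically curious, here is a hedged sketch of the underlying idea (a hypothetical data model, not the Dalet Sports Logger API): timecode-stamped events logged against multiple synchronized camera feeds, searchable by player, team or action:

```python
# Hypothetical data model for timecode-stamped sports logging across
# synchronized camera feeds; every match doubles as a timeline marker.
from dataclasses import dataclass, field

@dataclass
class LoggedPlay:
    timecode_in: str          # e.g. "01:23:45:10" (SMPTE HH:MM:SS:FF)
    timecode_out: str
    action: str               # e.g. "goal", "penalty", "save"
    player: str
    team: str
    cameras: list[str] = field(default_factory=list)  # feeds covering the play

log: list[LoggedPlay] = [
    LoggedPlay("01:12:03:05", "01:12:18:00", "goal", "Player 29", "Home",
               cameras=["CAM-01", "CAM-04", "CAM-12"]),
    LoggedPlay("01:47:55:12", "01:48:10:00", "save", "Player 31", "Away",
               cameras=["CAM-02", "CAM-07"]),
]

def find_plays(action: str | None = None, player: str | None = None) -> list[LoggedPlay]:
    """Filter the log the way an editor or coach might search for key plays."""
    return [p for p in log
            if (action is None or p.action == action)
            and (player is None or p.player == player)]

for play in find_plays(action="goal"):
    print(f"{play.action} by {play.player} at {play.timecode_in} on {play.cameras}")
```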
Taking MXF Interoperability to the next level
Next week, in a corner of the Bayerischer Rundfunk campus in Munich, Germany, likely without much fanfare, something fairly monumental will take place – the IRT MXF PlugFest. Now in its ninth year, this event brings together vendors in the media and entertainment industry to facilitate MXF interoperability tests. Following each event, the IRT (Institut für Rundfunktechnik) publishes a report on the levels of overall interoperability, standards compliance, decoder robustness, and the common errors and interoperability issues – you can download the previous reports here. All of the previous eight reports make interesting reading (particularly if read in order), but none has been more greatly anticipated than the report due from this ninth PlugFest.

What then, you may ask, makes this year’s event so special that we would dedicate a whole blog post to a relatively small, vendor-only event in Bavaria? The UK DPP (Digital Production Partnership) has been closely watched by a number of industry organizations and groups, particularly with regards to the file specification it has published, based on AMWA AS-11, for the delivery and interchange of media files. This specification aims to end the headache of media file interoperability at the point of delivery for broadcasters and media facilities across the UK and Ireland.

While the issue of file compatibility is not unique to the UK, unique challenges in the German-speaking media community have dictated a slightly different approach to the creation of a standardized interchange format. The ARD group, the Association of Public Broadcasting Corporations in the Federal Republic of Germany, is made up of 10 member broadcasters, covering regional, national and international distribution, who have the capability to exchange media at almost any point in any workflow, including news, production and archive. In July this year, together with ZDF (in English: the Second German Television), and with support from other German-language public and private broadcasters, the ARD published two new MXF-based media file-format “profiles.”

At this point, you would be forgiven for asking, “Do we really need another specification/standard?” In fact, the two profiles, named HDF01 and HDF02, are not too dissimilar to the AMWA Application Specifications AS-10 and AS-11. What makes the ARD-ZDF MXF profiles different is that they describe not only what the output of the MXF encoder should look like, but also the tolerances and behavior of MXF decoders. For example, MXF files compliant with the profiles shall not have any ancillary data tracks (commonly used for the carriage of subtitles or transitory audio and aspect ratio metadata), but to ensure interoperability, decoders are required to be tolerant of ancillary data tracks that may be present.

Specifying not only encoder but also decoder behavior will have a massive benefit to interoperability, particularly when deploying and testing systems. Many of the properties specified in the profiles are low-level elements that frequently cause interoperability problems requiring lengthy discussions between multiple vendors, users and integrators to resolve.
Constrained encoding profiles ensure that “problematic” files can quickly be analyzed and “non-compliant” elements identified, but without specifying additional decoder requirements, applying these constraints can introduce as many challenges as they remove, with little or no consideration for legacy assets or flexibility to find quick, short-term resolutions to interoperability issues in a workflow.

Dalet is proud to have been one of the very first vendors to have a product certified by AMWA for the creation of UK DPP delivery specification compliant files, and is equally pleased to be going into the first IRT MXF PlugFest since the publication of the HDF01 and HDF02 ARD-ZDF MXF profiles as one of the first few vendors to fully support the new profiles. The event next week will set the baseline for a new era in media file interoperability and, while reading the historic MXF PlugFest reports is interesting, I personally cannot wait to see what I expect to be the biggest change yet between the report for next week’s ninth event and 2015’s 10th event.
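To illustrate the principle of constraining the encoder while demanding tolerance from the decoder (a hypothetical track model, not the actual HDF01/HDF02 text):

```python
# Hedged illustration of "strict writer, tolerant reader": the encoder side is
# constrained never to write ancillary data tracks, while the decoder side
# must still cope with files that contain them.
from dataclasses import dataclass

@dataclass
class Track:
    kind: str   # "video", "audio" or "ancillary"
    label: str

def write_profile_compliant(tracks: list[Track]) -> list[Track]:
    """Encoder constraint: refuse to emit ancillary data tracks."""
    if any(t.kind == "ancillary" for t in tracks):
        raise ValueError("profile forbids ancillary data tracks in the output")
    return tracks

def read_tolerant(tracks: list[Track]) -> list[Track]:
    """Decoder tolerance: accept files that break the rule, skipping the extras."""
    return [t for t in tracks if t.kind != "ancillary"]

legacy_file = [Track("video", "V1"), Track("audio", "A1"),
               Track("ancillary", "subtitles")]            # a non-compliant legacy asset
print([t.label for t in read_tolerant(legacy_file)])       # decoder still copes: ['V1', 'A1']
```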
Forecast is looking Cloudy
“Hey Bruce, can I run your software in the cloud?”, said one of our customers while I was travelling the other week. “Sure”, I replied, “what do you mean by cloud?”. “Erm, well, you know, the cloud. Like the Amazon thing”. “Why would you want to do that?”, I asked. “Because, well – it will be cheaper, it’s the cloud, isn’t it!”

The conversation continued with an exchange of Amazon price lists and AmberFin price lists, and an estimate of conversion volumes and growth rates. We didn’t stop there. We then realized we needed to take into account input bandwidth to the cloud and output bandwidth to retrieve the file, amortisation rates that apply to owned and operated hardware, set-up costs and a whole host of other operational figures.

The result was a little surprising. Over the lifespan of the transcoder, it’s still cheaper to own a transcode farm than to rent one that sits in the cloud. Moreover, the cloud solution ended up with the cloud provider earning a lot of money and the poor transcoder vendor earning a lot less. I must confess that sounded like bad news to me, but it also sounded like bad news for my customer. It would mean that the poor transcode vendor had less cash to invest in new codecs, new workflows, new wrappers, QC functionality, metadata mark-up and all those other elements of the workflow puzzle that my customers rely on to turn the humble transcoder into an enterprise-class revenue generator.

We then reflected on the future. It seemed inevitable that in a decade or so, transcoding will be a service that you subscribe to in some sort of cloud. The BIG question is how we get there from here. We decided that the first step was to adjust the customer's workflow so that the transcode function was less of a desktop application and more of an enterprise service within their facility. We then went for a beer.

I can’t offer you a beer today, but I can offer you our enterprise white paper that might help you see how you can industrialise your file-based workflows. Simply click on this link to download. Also, why not subscribe to my free short video training series called "Bruce's Shorts"? Subscribers receive a topical weekly short video directly to their in-box, as well as invitations to exclusive webinars. I am planning a webinar on enterprise workflows very soon, so make sure you don't miss out and subscribe today!
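For anyone wanting to repeat the exercise, here is a back-of-envelope sketch of that comparison with entirely hypothetical numbers – plug in your own price lists, volumes and bandwidth costs:

```python
# Back-of-envelope comparison of owning a transcode farm versus renting cloud
# capacity. All figures are hypothetical placeholders, not real price lists.
YEARS = 5                                   # lifespan of the transcode farm
HOURS_PER_YEAR = 20_000                     # transcode hours needed per year
GB_PER_HOUR = 30                            # average data moved per transcode hour

# On-premise: capital cost amortised over the lifespan, plus running costs.
farm_capex = 250_000.0                      # hardware + software licences
farm_opex_per_year = 40_000.0               # power, cooling, support contracts
on_prem_total = farm_capex + farm_opex_per_year * YEARS

# Cloud: pay per hour, plus transfer out of the provider to retrieve the files.
cloud_rate_per_hour = 2.50                  # rented compute + transcode service fee
egress_per_gb = 0.09                        # retrieving the output files
cloud_total = YEARS * HOURS_PER_YEAR * (
    cloud_rate_per_hour + GB_PER_HOUR * egress_per_gb)

print(f"on-premise over {YEARS} years: {on_prem_total:,.0f}")
print(f"cloud over {YEARS} years:      {cloud_total:,.0f}")
```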
Gearing up for Broadcast Asia
Next week will see a first for AmberFin when we exhibit in our own right at Broadcast Asia. This is a dynamic, fast-growing regional market for our products and services – one that we have prioritised for a number of years. I will travel to Singapore for the exhibition and I’m excited at the prospect of what promises to be a strong event for AmberFin. In all markets, your success is strongly influenced by the strength of the relationships you create. Relationships with customers are key, for sure. But so are the relationships with other companies and organisations that will help establish your reputation in that market.

Danmon Asia

At Broadcast Asia, we will hit the ground running by announcing a marketing agreement covering Vietnam, Laos, Cambodia and Myanmar with Danmon Asia. The specialist reseller will introduce broadcasters in these territories to AmberFin and its software-based products, focusing on those that are already working in a file-based environment and looking to develop better QC capabilities within their workflows. For these markets we need the right channel partner to help establish ourselves. Danmon Asia is the right company for us because it has an outstanding track record in the design and commissioning of customer-focussed file-based workflows. Thanks to our successful partnership in Scandinavia, Danmon already understands our company, our products and our customer-driven ethos.

Could DPP help Asian broadcasters?
At Broadcast Asia, we will showcase a number of AmberFin products and solutions that are designed to increase media quality and operator efficiency in file-based workflows across the widest range of enterprise operations. We will demonstrate new, competitively priced file-based media ingest, transcode and playback products that enable the digitization and transformation of new and archived media content. These new product introductions reflect AmberFin’s strategic marketing initiative aimed at addressing the broadest range of media organizations with enterprise-level solutions designed to meet their specific business needs.

One increasingly successful initiative in the UK is the Digital Production Partnership, where numerous broadcasters and facilities have come together to create a number of common standards in file-based technology to help the transfer of material between organisations. It will be interesting to discuss this initiative with visitors to Broadcast Asia and see whether a similar approach in their region would be beneficial to our industry. If you would like to know more about the DPP initiative, download our DPP white paper by clicking on the button below.
Digital Production Partnership (DPP) Helps Broadcast Service Providers
As I mentioned in an earlier blog post, back in February we organised a highly successful webinar together with our UK channel partner, ATG Broadcast. The webinar focussed on the advantages and challenges of DPP adoption here in the UK and looked at it from various different perspectives. If you are a broadcast service provider, you will quickly appreciate the increased certainty that DPP provides around the areas of file delivery and playback. During the webinar, we heard from Peter Darlington, who works with Red Bee Media. He pointed out that there are a number of very real and tangible benefits to broadcast service providers from the adoption of DPP, including MXF option constraints, common codec usage and even common audio track layout.

Consistent format for service provision
Another major consideration is that DPP provides a consistent format for UK broadcaster service provision. Peter stated that this is key since it encourages adoption within both the service provider and vendor communities. This approach also simplifies decision making and reduces effort for broadcasters whilst maintaining workflow quality and efficiency. In his webinar presentation, Peter pointed out that it is important to encourage the vendor community to adopt DPP. This one action will significantly increase third-party interoperability within workflows. Then, instead of broadcasters asking if all their media files are compatible with all their production tools, they can focus their attention on other issues that will increase workflow efficiency.

Is it all plain sailing with MXF?
Despite these clear benefits, there remain a number of significant challenges before service providers can easily adopt DPP. For example, there are multiple client systems and workflows to audit and upgrade, whilst client broadcasters each have their own approaches, timescales and priorities. Aligned with this is the thorny issue of vendor and supply chain compliance. All of this adds to the cost of introduction in terms of finance and support, making for a complex picture today.

Groundswell is growing
Despite this, Peter reported that DPP adoption is already underway with some Red Bee Media clients. Edge support is already in place and end-to-end support is in development at their facility. As Peter concluded, today we are faced with the classic chicken-and-egg scenario between vendors, broadcasters, producers and service providers. However, we need to move forward – there are hurdles to overcome, but the prize is out there.
"NAB 2013 - I’m all for progress, it’s change I object to"
Attributed to both Mark Twain and Will Rogers, it’s one of my favourite reflections on the media technology industry and particularly apt today, the second day of NAB. I’ve already had conversations over the last couple of days with engineers and technologists who are excited about the latest gadgets and gizmos that they are expecting to find on the show floor. However, when I quiz them about the usage of those self-same gadgets, they often describe to me the usage of the new gizmo in a traditional workflow for traditional TV.

“Yeah, but look at what cloud can offer,” says the guy who has to look after the internal storage of a particular facility. What he sees (and it was a he) is that a particularly horrible job of balancing the facility’s storage requirements in terms of volume, throughput and resilience is now someone else’s problem, and he can get on with the bits of his job that he likes. The same conversation with a woman who had more business-oriented aspirations came to a different conclusion. “We could out-source everything,” and by “everything” she meant storage, editing, transcoding, versioning, distribution, playout – the works. Her vision of a playout facility was that everything was in the cloud and the satellite uplink became a sort of glorified YouTube player at a data center that was close enough to the dish to be economically viable.

We all want progress, but some of the changes that are about to confront us may be uncomfortable for many. The underlying visual media business model is changing. No one knows what the right answer will be, but we all know (as consumers) that we like to be entertained and that we will probably consume more visual media, not less.

If you’re not in Vegas, why not have a read of the captions white paper for a historical perspective on why history leads us into awkward places. If you are wandering the halls of NAB, why not pop in and see us at SU8505 and we can discuss the future. That’s our business. I, personally, would like to help you feel comfortable that the future of media is a good one and that making great-looking content is going to keep us all entertained and in jobs for a long time to come!