
Jan 26, 2015
Taking MXF Interoperability to the next level

Next week, in a corner of the Bayerischer Rundfunk campus in Munich, Germany, likely without much fanfare, something fairly monumental will take place – the IRT MXF PlugFest.
 
Now in its ninth year, this event brings together vendors in the media and entertainment industry to facilitate MXF interoperability tests. Following each event, the IRT (Institut für Rundfunktechnik) publishes a report on the levels of overall interoperability, standards compliance, decoder robustness, and the common errors and interoperability issues – you can download the previous reports at http://mxf.irt.de/. All of the previous eight reports make interesting reading (particularly if read in order), but none has been more greatly anticipated than the report due from this ninth PlugFest.
 
What then, you may ask, makes this year’s event so special that we would dedicate a whole blog post to a relatively small, vendor-only event in Bavaria?
 
The UK DPP (Digital Production Partnership) has been closely watched by a number of industry organizations and groups, particularly with regard to the file specification it has published, based on AMWA AS-11, for the delivery and interchange of media files. This specification aims to end the headache of media file interoperability at the point of delivery for broadcasters and media facilities across the UK and Ireland.
 
While the issue of file compatibility is not unique to the UK, unique challenges in the German-speaking media community have dictated a slightly different approach to the creation of a standardized interchange format.
The ARD group, the Association of Public Broadcasting Corporations in the Federal Republic of Germany, is made up of 10 member broadcasters, covering regional, national and international distribution, which have the capability to exchange media at almost any point in any workflow, including news, production and archive. In July 2014, together with ZDF (in English: the Second German Television), and with support from other German-language public and private broadcasters, the ARD published two new MXF-based media file-format “profiles.”
 
At this point, you would be forgiven for asking, “Do we really need another specification/standard?”
 
In fact, the two profiles, named HDF01 and HDF02, are not too dissimilar to the AMWA Application Specifications AS-10 and AS-11. What makes the ARD-ZDF MXF profiles different is that they describe not only what the output of an MXF encoder should look like, but also the tolerances and behavior of MXF decoders. For example, MXF files compliant with the profiles shall not carry any ancillary data tracks (commonly used for the carriage of subtitles or transitory audio and aspect-ratio metadata), but to ensure interoperability, decoders are required to tolerate ancillary data tracks that may nevertheless be present.
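To make that encoder/decoder asymmetry concrete, here is a minimal, hypothetical sketch; the track names and rule set are invented for illustration and are not taken from the HDF01/HDF02 documents themselves:

```python
# Hypothetical illustration of the encoder/decoder asymmetry described above.
# Track names and rules are simplified placeholders, not the actual profile wording.

ENCODER_FORBIDDEN_TRACKS = {"ancillary_data"}

def check_encoder_output(track_types):
    """An encoder is compliant only if it writes none of the forbidden track types."""
    violations = ENCODER_FORBIDDEN_TRACKS & set(track_types)
    return not violations, sorted(violations)

def check_decoder_behavior(track_types, decode):
    """A decoder is compliant if it still decodes a file that contains track types
    the encoder profile forbids (it must tolerate them, not reject the file)."""
    try:
        decode(track_types)          # decoder under test
        return True
    except Exception:
        return False                 # rejecting the file is a compliance failure

# A file with an ancillary data track fails the encoder check,
# but a tolerant decoder must still accept it.
tracks = ["video", "audio", "ancillary_data"]
print(check_encoder_output(tracks))                      # (False, ['ancillary_data'])
print(check_decoder_behavior(tracks, lambda t: None))    # True: decoder ignores the extra track
```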
 
Specifying not only encoder output but also decoder behavior will bring massive benefits to interoperability, particularly when deploying and testing systems. Many of the properties specified in the profiles are low-level elements that frequently cause interoperability problems and require lengthy discussions between multiple vendors, users and integrators to resolve.
 
Constrained encoding profiles ensure that “problematic” files can quickly be analyzed and “non-compliant” elements identified. Without additional decoder requirements, however, applying these constraints can introduce as many challenges as it removes, leaving little or no consideration for legacy assets and little flexibility to find quick, short-term resolutions to interoperability issues in a workflow.
 
Dalet is proud to have been one of the very first vendors to have a product certified by AMWA for the creation of files compliant with the UK DPP delivery specification, and is equally pleased to be going into the first IRT MXF PlugFest since the publication of the ARD-ZDF HDF01 and HDF02 MXF profiles as one of the first few vendors to fully support the new profiles.
 
The event next week will set the baseline for a new era in media file interoperability. While reading the historic MXF PlugFest reports is interesting, I personally cannot wait to see what I expect to be the biggest change yet: the difference between the report from next week’s ninth event and the report from the 10th event later in 2015.

YOU MAY ALSO LIKE...
An IBC preview that won’t leave you dizzy
When we write these blog entries each week, we normally ensure we have a draft a few days in advance so that we have plenty of time to review, edit and make sure the content is worth publishing. This entry was late, very late. This pre-IBC post has been hugely challenging to write for two reasons:

- Drone-mounted Moccachino machines are not on the agenda – but Bruce’s post last week definitely has me avoiding marketing “spin.”
- There are so many things I could talk about, it’s been a struggle to determine what to leave out.

Earlier this year, at the NAB Show, we announced the combination of our Workflow Engine, including the Business Process Model & Notation (BPMN) 2.0-compliant workflow designer, and our Dalet AmberFin media processing platform. Now generally available in the AmberFin v11 release, we’ll be demonstrating how customers are using this system to design, automate and monitor their media transcode and QC workflows in mission-critical multi-platform distribution operations.

Talking of multi-platform distribution, our Dalet Galaxy media asset management now has the capability to publish directly to social media outlets such as Facebook and Twitter, while the new Media Packages feature simplifies the management of complex assets, enabling users to see all of the elements associated with a specific asset, such as different episodes, promos etc., visually mapped out in a clear and simple way.

Making things simple is somewhat of a theme for Dalet at IBC this year. Making ingest really easy for Adobe Premiere users, the new Adobe Panel for Dalet Brio enables users to start, stop, monitor, quality check and ingest directly from the Adobe Premiere Pro interface, with new recordings brought directly into the edit bin. We’ll also be demonstrating the newly redesigned chat and messaging module in Dalet Galaxy, Dalet WebSpace and the Dalet On-the-Go mobile application. The modern, and familiar, chat interface has support for persistent chats, group chats, messaging offline users and much more.

Legislation and consolidation of workflows mean that captioning and subtitling are a common challenge for many facilities. We are directly addressing that challenge with a standards-based, cross-platform strategy for the handling of captioning workflows across Dalet Galaxy, Dalet Brio and Dalet AmberFin. With the ability to read and write standards-constrained TTML, caption and subtitle data is searchable and editable inside the Dalet Galaxy MAM, while Dalet Brio is able to capture caption- and subtitle-carrying ancillary data packets to disk and play them back. Dalet AmberFin natively supports the extraction and insertion of subtitle and caption data to and from .SCC and .STL formats respectively, while tight integration with other vendors extends support to further formats.

There are so many other exciting new features I could talk about, but it’s probably best to see them for yourself live in Amsterdam. Of course, if you’re not going to the show, you can always get the latest by subscribing to the blog, or get in touch with your local representative for more information. There, and I didn’t even mention buzzwords 4K and cloud…

…yet!
AmsterMAM – What’s New With Dalet at IBC (Part 1)
If you’re a regular reader of this blog, you may also receive our newsletters (if not, email us and we’ll sign you up) – the latest edition of which lists 10 reasons to visit Dalet at the upcoming IBC show (stand 8.B77). Over the next couple of weeks, I’m going to be using this blog to expand on some of those reasons, starting this week with a focus on Media Asset Management (MAM) and the Dalet Galaxy platform.

Three years ago, putting together an educational seminar for SMPTE, Bruce Devlin (star of this blog and Chief Media Scientist at Dalet) interviewed a number of MAM vendors and end users about what a MAM should be and do. Pulling together the responses – starting with a large number of post-it notes and ending with a large Venn diagram – it was obvious that what “MAM” means to you is very dependent on how you want to use it. What we ended up with was a “core” of functionality that was common to all MAM-driven workflows and a number of outer circles with workflow-specific tasks. This is exactly how Dalet Galaxy is built – a unified enterprise MAM core, supporting News, Production, Sports, Archive, Program Prep and Radio, with task-specific tools unique to each business solution. At IBC we’ll be showcasing these workflows individually, but based on the same Dalet Galaxy core.

For news, we have two demonstrations. Dalet News Suite is our customizable, enterprise multimedia news production and distribution system. This IBC we’ll be showcasing new integration with social media and new tools for remote, mobile and web-based working. We’ll also be demonstrating our fully-packaged, end-to-end solution for small and mid-size newsrooms, Dalet NewsPack.

In sports workflows, quick turnaround and metadata entry are essential – we’ll be showing how Dalet Sports Factory, with new advanced logging capabilities, enables fast, high-quality sports production and distribution. IBC sees the European debut of the new Dalet Galaxy-based Dalet Radio Suite, the most comprehensive, robust and flexible radio production and playout solution available, featuring Dalet OneCut editing, a rock-solid playout module with integration with numerous third parties and class-leading multi-site operations. Dalet Media Life provides a rich set of user tools for program prep, archive and production workflows. New for IBC this year, we’ll be previewing new “track stack” functionality for multilingual and multi-channel audio workflows, extended integration with Adobe Premiere and enhanced workflow automation.

If you want to see how the Dalet Galaxy platform can support your workflow, or be central to multiple workflows, click here to book a meeting at IBC or get in touch with our sales team. You can also find out more about what we’re showing at IBC here.
More Secrets of Metadata
Followers of Bruce’s Shorts may remember an early episode on the Secrets of Metadata where I talked about concentrating on your metadata for your business, because it adds the value that you need. It seems the world is catching onto the idea of the business value of metadata, and I don’t even have to wrestle a snake to explain it!

Over the last 10 years of professional media file-based workflows, there have been many attempts at creating standardized metadata schemes. A lot of these have been generated by technologists trying to do the right thing or trying to fix a particular technical problem. Many of the initiatives have suffered from limited deployment and limited adoption because the fundamental questions they were asking centered on technology and not the business application. If you center your metadata around a business application, then you automatically take into account the workflows required to create, clean, validate, transport, store and consume that metadata. If you center the metadata around the technology, then some or all of those aspects are forgotten – and that’s where the adoption of metadata standards falls down. Why? It’s quite simple. Accurate metadata can drive business decisions that in turn improve efficiency and cover the cost of the metadata creation.

Many years ago, I was presenting with the head of a well-known post house in London. He stood on stage and said, in his best Australian accent, “I hate metadata. You guys want me to make accurate, human-oriented metadata in my facility for no cost, so that you guys can increase your profits at my expense.” Actually, he used many shorter words that I’m not able to repeat here. The message that he gave is still completely valid today: if you’re going to create accurate metadata, then who is going to consume it? If the answer is no one, ever, then you’re doing something that costs money for no results. That approach does not lead to a good long-term business.

If the metadata is consumed within your own organization, then you ask the question: “Does it automate one or many processes downstream?” The automation might be a simple error check or a codec choice or an email generation or a target for a search query. The more consuming processes there are for a metadata field, the more valuable it can become. If the metadata is consumed in a different organization, then you have added value to the content by creating metadata. The value might be expressed in financial terms or in good-will terms, but fundamentally a commercial transaction is taking place through the creation of that metadata.

The UK’s Digital Production Partnership and the IRT in Germany have both made great progress towards defining just enough metadata to reduce friction in B2B (business to business) file transfer in the broadcast world. CableLabs continues to do the same for the cable world, and standards bodies such as SMPTE are working with the EBU to make a core metadata definition that accelerates B2B e-commerce type applications. I would love to say that we’ve cracked the professional metadata problem, but the reality is that we’re still halfway through the journey. I honestly don’t know how many standards we need. A single standard that covers every media application will be too big and unwieldy. A different standard for each B2B transaction type will cost too much to implement and sustain.

I’m thinking we’ll be somewhere between these two extremes in the “Goldilocks zone,” where there are just enough schemas and the implementation cost is justified by the returns that a small number of standards can bring. As a Media Asset Management company, we spend our daily lives wrestling with the complexities of metadata. I live in hope that at least the B2B transaction element of that metadata will one day be as easy to author and as interoperable as a web page. Until then, why not check out the power of search from Luc’s blog. Without good metadata, it would be a lot less exciting.
Why Ingest to the Cloud?
With Cloud storage becoming cheaper and the data transfer to services such as Amazon S3 storage being free of charge, there are numerous reasons why ingesting to the Cloud should be part of any media organization’s workflow. So, stop trying to calculate how much storage your organization consumes by day, month or year, or whether you need a NAS, a SAN or a Grid, and find out why Cloud could be just what your organization needs.

Easy Sharing of Content
Instead of production crews or field journalists spending copious amounts of time and money shipping hard drives to the home site, or being limited by the bandwidth of an FTP server when uploading content, with object storage services like Amazon S3 or Microsoft Azure, uploading content to the Cloud has become easy and cheap. Once content is uploaded to the Cloud, anyone with secure credentials can access it from anywhere in the world.

Rights Access to Content
In recent news, cloud storage services such as Apple iCloud were hacked and private content was stolen, increasing the concern about security and access rights to content in the Cloud. With secure connections such as VPN and rights access management tools, you can specify, by user or group, who can access content on the Cloud and for how long. Both Microsoft and Amazon have set up security features to protect your data as well as to replicate content to more secure locations.

Cloud Services to Process the Data
By uploading content to the Cloud, you can set up backend services and workflows to run QC checks on the content, stream media, transcode to multiple formats, and organize the content for search and retrieval using a Media Asset Management (MAM) system hosted on the Cloud.

Cloud Scalability
Rather than buying an expensive tape library or continuing to purchase more hardware for spinning-disk storage, with cloud storage one can scale down or scale up with the click of a button. No need for over-provisioning.

Disaster Recovery
An organization can easily set up secure data replication from one site to another or institute replication rules to copy content to multiple virtual containers, offering assurance that content will not be lost. Amazon S3 provides durable infrastructure to store important data and is designed for 99.999999999% durability of objects.

Moving Towards an OPEX Model
As operations and storage move to the Cloud, you can control your investment by paying for services and storage as you use them. Instead of investing in infrastructure maintenance and support, with operations on the Cloud you can focus the investment on what makes a difference: the content, not the infrastructure to support it.

Why Upload to the Cloud?
The Cloud is no longer a technology of the future; with cloud storage adopted by Google, Facebook and Instagram, Cloud technology is the reality of today. By adopting this technology you control your investment by usage needs, back up your data and provide secure access to content for anyone with credentials, anywhere in the world. The biggest limitation now is bandwidth, and the hurdle is adjusting the current infrastructure to support Cloud operations. Many organizations are turning towards a hybrid Cloud model, where content and services are hosted both locally and via Cloud solutions.

Learning from the Cloud experience, Dalet has made initiatives over the past few years to evolve existing tools and services for the Cloud. Dalet now offers direct ingest from the Dalet Brio video server to Amazon S3 Storage and, at NAB this year in Las Vegas, Dalet showcased the first MAM-based Newsroom on the Cloud. To learn more about Dalet ingest solutions, please visit the ingest application page.
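As a small illustration of how little is involved in pushing content to object storage these days, here is a hedged Python sketch using the AWS boto3 SDK; the bucket name, file path and metadata are placeholders and not part of any Dalet product:

```python
# Minimal sketch: upload a finished media file to Amazon S3 (placeholder names throughout).
# Requires the boto3 package and AWS credentials configured in the environment.
import boto3

s3 = boto3.client("s3")

local_file = "ingest/clip_0001.mxf"        # hypothetical local ingest output
bucket = "example-broadcast-archive"       # placeholder bucket name
key = "incoming/clip_0001.mxf"             # object key in the bucket

# Server-side encryption and object metadata are optional extras shown for illustration.
s3.upload_file(
    local_file,
    bucket,
    key,
    ExtraArgs={
        "ServerSideEncryption": "AES256",
        "Metadata": {"source": "field-ingest", "codec": "xdcam-hd-50"},
    },
)
print(f"Uploaded {local_file} to s3://{bucket}/{key}")
```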
MXF AS02 and IMF: What's the Difference and Can They Work Together?
If you read my previous posts about IMF, you will already know what it is and how it works. But one of the questions I often get is: “How is IMF different from AS02, and will it replace it? After all, don’t they both claim to provide a solution to versioning problems?” In a nutshell, the answer is yes, IMF and AS02 are different, and no, IMF will not replace AS02; in fact, the two complement and enhance each other. Let me explain.

MXF AS02 (for broadcast versioning) and IMF (for movie versioning) grew up at the same time. And while both had very similar requirements in the early stages, we soon ended up in a situation where the level of sophistication required by the broadcasters’ versioning process never really reached critical industry mass. Efforts were continually made to merge the MXF AS02 work and the IMF work to prevent duplication of effort and to ensure that the widest number of interoperable applications could be met with the minimum number of specifications.

When it came to merging the AS02 and IMF work, we looked at the question of what would be a good technical solution for all of the versioning that takes place in an increasingly complex value chain. It was clear that in the studio business there was a need for IMF, and that the technical solution should recognize the scale of the challenge. It came down to a very simple technical decision, and a simple case of math. AS02 does all of its versioning using binary MXF files, while IMF does all of its versioning using human-readable XML files. There are maybe 20 or 30 really good MXF binary programmers in the world today; XML is much more generic, and there must be hundreds of thousands of top-quality XML programmers out there. Given the growing amount of localized versioning that we are now faced with, it makes sense to use a more generic technology like XML to represent the various content versions whilst maintaining the proven AS02 media wrapping to store the essence components.

In a nutshell, this is the main difference between AS02 and IMF. Both standards have exactly the same pedigree and aim to solve exactly the same problems, but IMF benefits from a more sophisticated versioning model and therefore requires a greater degree of customization – and XML is a better means of achieving this. IMF is not going to replace AS02. Rather, the goal is to get to a place where we have a standardized IMF package as a means of exchanging versioned packages within the workflow. IMF will actually enhance the AS02 bundles that represent componentized clips that are already ingested, transcoded and interchanged today.
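To illustrate why human-readable XML lowers the barrier for versioning work, here is a deliberately simplified Python sketch that assembles a toy composition playlist; the element and attribute names are invented for illustration and do not follow the actual IMF CPL schema:

```python
# Toy example: describe a localized "version" as XML that references, rather than copies,
# the underlying essence files. Element and attribute names are invented and are NOT
# the real IMF Composition Playlist schema.
import xml.etree.ElementTree as ET

def build_version_playlist(title, segments):
    root = ET.Element("VersionPlaylist", attrib={"title": title})
    for seg in segments:
        ET.SubElement(
            root,
            "Segment",
            attrib={
                "essence": seg["essence"],          # existing track file, left untouched
                "entryPoint": str(seg["entry"]),    # first frame used from that file
                "duration": str(seg["duration"]),   # number of frames used
            },
        )
    return ET.tostring(root, encoding="unicode")

# A German-language version reusing the original picture but swapping the audio reel.
german_version = build_version_playlist(
    "Feature_DE",
    [
        {"essence": "picture_reel1.mxf", "entry": 0, "duration": 43200},
        {"essence": "audio_de_reel1.mxf", "entry": 0, "duration": 43200},
    ],
)
print(german_version)
```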
Shared Storage for Media Workflows… Part 1
In part one of this article, Dalet Director of Marketing Ben Davenport lists and explains the key concepts to master when selecting storage for media workflows. Part two, authored by Quantum Senior Product Marketing Manager Janet Lafleur, focuses on storage technologies and usages.

The first time I edited any media, I did it with a razor and some sticky tape. It wasn’t a complicated edit – I was stitching together audio recordings of two movements of a Mozart piano concerto. It also wasn’t that long ago, and I confess that on every subsequent occasion I used a DAW (Digital Audio Workstation). I’m guessing that there aren’t many (or possibly any) readers of this blog who remember splicing video tape together (that died off with helical-scan), but there are probably a fair few who have, in the past, performed a linear edit with two or more tape machines and a switcher. Today, however, most media operations (even down to media consumption) are non-linear; this presents some interesting challenges when storing, and possibly more importantly, recalling media. To understand why this is so challenging, we first need to think about the elements of the media itself and then the way in which these elements are accessed.

Media Elements
The biggest element, both in terms of complexity and data, is video. High Definition (HD) video, for example, will pass “uncompressed” down a serial digital interface (SDI) cable at 1.5Gbps. Storing and moving content at these data rates is impractical for most media facilities, so we compress the signal by removing psychovisually, spatially, and often temporally redundant elements. Most compression schemes will ensure that decompressing or decoding the file requires fewer processing cycles than the compression process. However, it is inevitable that some cycles are necessary and, as video playback has a critical temporal element, it will always be necessary to “read ahead” in a video file and buffer at the playback client. Where temporally redundant components are also removed, such as in an MPEG LongGOP compression scheme like Sony XDCAM HD, the buffering requirements are significantly increased as the client will need to read all the temporal references, typically a minimum of one second of video.

When compared to video, the data rate of audio and ancillary data (captions, etc.) is small enough that it is often stored “uncompressed” and therefore requires less in the way of CPU cycles ahead of playback – this does, however, introduce some challenges for storage in the way that audio samples and ancillary data are accessed.

Media Access
Files containing video, even when compressed, are big – 50Mbps is about as low a bit rate as most media organizations will go. On its own, that might sound well within the capabilities of even consumer devices – typically a 7200rpm hard disk would have a “disk-to-buffer” transfer rate of around 1Gbps – but this is not the whole story:

- 50Mbps is the video bit rate – audio and ancillary data result in an additional 8-16Mbps.
- Many operations will run “as fast as possible” – processing cycles are often the restricting factor here, but even a playback or review process will likely include “off-speed” playback up to 8 or 16 times faster than real-time, the latter requiring over 1Gbps.
- Many operations will utilize multiple streams of video.

Sufficient bandwidth is therefore the first requirement for media operations, but it is not the only thing to consider. Take the simple example of a user reviewing a piece of long-form material, a documentary for instance, in a typical manual QC that checks the beginning, middle and end of the media. As the media is loaded into the playback client, the start of the file(s) will be read from storage and, more than likely, buffered into memory. The user’s actions here are fairly predictable, and therefore developing and optimizing a storage system with deterministic behavior in this scenario is highly achievable. However, the user then jumps to a pseudo-random point in the middle of the program; at this point the playback client needs to do a number of things. First, it is likely that the player will need to read the header (or footer) of the file(s) to find the location of the video/audio/ancillary data samples that the user has chosen – a small, contained read operation where any form of buffering is probably undesirable. The player will then read the media elements themselves, but these too are read operations of varying sizes:

- Video: if a “LongGOP” encoded file, potentially up to twice the duration of the “GOP” – in XDCAM HD, 1 sec ~6MB
- Audio: a minimum of a video frame’s worth of samples ~6KB
- Ancillary data: dependent on what is stored, but considering captions and picture descriptions ~6B

Architecting a storage system that ensures these reads of significantly different sizes happen quickly and efficiently, and that behaves responsively and deterministically for dozens of clients often accessing the exact same file(s), requires significant expertise and testing.

Check back tomorrow for part two of “Shared Storage for Media Workflows,” where Janet Lafleur looks at how storage can be designed and architected to respond to these demands!
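For a rough feel of these numbers, here is a small worked sketch based on the approximations quoted above (they are the article's own estimates, not measurements):

```python
# Back-of-the-envelope bandwidth figures based on the approximations above.
video_mbps = 50            # typical compressed video bit rate
audio_anc_mbps = 16        # upper end of the audio + ancillary data estimate
stream_mbps = video_mbps + audio_anc_mbps          # ~66 Mbps per real-time stream

offspeed_factor = 16       # "off-speed" review playback at up to 16x real time
offspeed_gbps = stream_mbps * offspeed_factor / 1000
print(f"16x off-speed playback: ~{offspeed_gbps:.2f} Gbps per client")   # ~1.06 Gbps

# One second of LongGOP video that may need to be read ahead before playback:
buffer_mb = video_mbps / 8  # megabits -> megabytes
print(f"1 second of 50 Mbps video: ~{buffer_mb:.1f} MB to read ahead")   # ~6.3 MB
```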
Shared Storage for Media Workflows… Part 2
In this guest blog post, Quantum Senior Product Marketing Manager Janet Lafleur shares in-depth insights on storage technologies as well as general usage recommendations. Read part one of this two-part series here, written by Dalet Director of Marketing Ben Davenport, which details the key challenges for storage in today’s media workflows.

Storage Technologies for Media Workflows
Video editing has always placed higher demands on storage than any other file-based applications, and with today’s higher resolution formats, streaming video content demands even more performance from storage systems, with 4K raw requiring 1210 MB/sec per stream – 7.3 times more throughput than raw HD. In the early days of non-linear editing, this level of performance could only be achieved with direct attached storage (DAS). As technology progressed, we were able to add shared collaboration even with many HD streams. Unfortunately, with the extreme demands of 4K and beyond, many workflows are resorting to DAS again, despite its drawbacks. With DAS, sharing large media files between editors and moving the content through the workflow means copying the files across the network or onto reusable media such as individual USB and Thunderbolt-attached hard drives. That’s not only expensive because it duplicates the storage capacity required; it also diminishes user productivity and can break version control protocols.

NAS vs. SAN for media workflows
For media workflows, the most common shared storage systems are scale-out Network Attached Storage (NAS), which delivers files over Ethernet, and shared SAN, which delivers content over Fibre Channel. Scale-out NAS aggregates I/O across a cluster of nodes, each with its own network connection, for far better performance than traditional NAS. However, even the industry-leading NAS solutions running on 10 Gb Ethernet struggle to deliver more than 400MB/sec for a single data stream. In contrast, shared Storage Area Network (SAN) solutions can provide the 1.6 GB/sec performance required for editing streaming video files at resolutions at or greater than 2K uncompressed. In a shared SAN, access to shared volumes is carefully controlled by a server that manages file locking, space allocation and access authorization. By keeping this server out of the data path – which runs directly between the client and the storage – a shared SAN eliminates the NAS bottleneck and improves the overall storage performance. Fortunately, there are media storage solutions that provide both NAS and SAN access from a shared storage infrastructure, giving the choice of IP or Fibre Channel protocols depending on user or application requirements.

Object storage for large-scale digital libraries
Regardless of whether it’s SAN or NAS, most disk storage systems are built with RAID. Using today’s multi-terabyte drives and RAID 6, it’s possible to manage a single RAID array of up to 12 drives with a total usable capacity of about 38 terabytes. However, even a modestly sized online asset collection requires an array larger than 12 disks, putting it at higher risk of data loss from hardware failure. The alternative is dividing data across multiple RAID arrays, which increases the cost as well as management complexity. Also, failure of a 4TB or larger drive can result in increased risk and degraded performance for 24-48 hours or more while the RAID array rebuilds, depending on the workload. Object storage offers a fundamentally different, more flexible approach to disk storage. Object storage uses a flat namespace and abstracts the data addressing from the physical storage, allowing digital libraries to scale indefinitely. Unlike RAID, object storage can be dispersed geographically to protect from disk, node, rack, or even site failures without replication. When a drive fails, the object storage redistributes the erasure-coded data without degrading user performance. Because object storage is scalable, secure and cost-effective, and enables content to be accessible at disk access speeds from multiple locations, it’s ideal for content repositories. Object storage can be deployed with a file system layer using Fibre Channel or IP connectivity, or can be integrated directly into a media asset manager or other workflow application through HTTP REST. The best object storage implementations allow both.

Choosing the right storage for every step in the workflow
An ideal storage solution allows a single content repository to be shared throughout the workflow, but stored and accessed according to the performance and cost requirements of each workflow application.

- Shared SAN for editing, ingest and delivery. To meet the high-performance storage demands of full-resolution video content, a SAN with Fibre Channel connections should be deployed for video editing workstations, ingest and delivery servers, and any other workflow operation that requires the 700 MB/sec per user read or write performance needed to stream files at 2K resolution or above.
- Object storage or scale-out NAS for transcoding, rendering and delivery. Transcoding and rendering servers should be connected to storage that can deliver 70-110 MB/sec over Ethernet with high IOPS (Input/Output Operations Per Second) performance for much smaller files, often only 4-8KB in size. While scale-out NAS and object storage can both fulfill this requirement, solutions that can be managed seamlessly alongside SAN-based online storage greatly simplify management and can reduce costs.
- Object storage or LTO/LTFS tape for archiving. For large-scale asset libraries, durability and lower costs are paramount. Both object storage and LTO/LTFS tape libraries meet these requirements. But for facilities doing content monetization, object storage offers the advantage of supporting transcode and delivery operations while also offering economical, scalable long-term data protection.
- Policy-based automation to migrate and manage all storage types. No workflow storage solution with multiple storage types is truly complete without automation. With intelligent automation, content can be easily migrated between and managed across different types of storage based on workflow-specific policies.

At a time when the digital footprint of content is growing exponentially due to higher-resolution formats, additional distribution formats, and more cameras capturing more footage, the opportunities for content creators and owners have never been greater. The trick is keeping that content readily available and easily accessible for users and workflow applications to do their magic. By choosing the right storage solutions and planning carefully, facilities can move forward with new technologies to meet new demands, without disrupting their workflow.
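For context on a couple of the figures quoted in this part, here is a short worked sketch; the per-stream rates are the article's own approximations, not vendor specifications:

```python
# Rough arithmetic behind some of the figures discussed above (approximations only).

# RAID 6 usable capacity: 12 drives, 2 of them used for parity.
drives, parity, drive_tb = 12, 2, 4
usable_tb = (drives - parity) * drive_tb
print(f"12 x {drive_tb}TB drives in RAID 6: ~{usable_tb} TB raw usable")  # ~40 TB raw, ~38 TB after overhead

# How many 2K uncompressed streams (taken here as ~700 MB/sec each) fit on each link?
stream_mb_s = 700
for name, link_mb_s in [("10GbE NAS (~400 MB/sec)", 400), ("shared SAN (~1600 MB/sec)", 1600)]:
    print(f"{name}: {link_mb_s // stream_mb_s} full-rate stream(s)")
```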
Reinheitsgebot: A clear and positive influence on the definition of European media file exchange and delivery formats
It doesn’t take much research into either Reinheitsgebot or file specifications to realise that this title is almost complete nonsense. When Reinheitsgebot, aka the “German Beer Purity Law,” was first endorsed by the duchy of Bavaria 499 years ago (23rd April 1516), it actually had nothing to do with the purity of beer and everything to do with the price of bread – banning the use of wheat in beer to ensure that there was no competition between brewers and bakers for a limited supply. Reinheitsgebot has come to represent a mark of quality in beer and something that German brewers are very proud of, but as the law spread across what is now modern Germany in the 16th century, it actually led to the disappearance of many highly regarded regional specialities and variations.

By contrast, the definition of file formats for exchange and delivery in the media industry has everything to do with the purity, or quality, of media files – indeed, the initiative that has led to the publication of the ARD-ZDF MXF Profiles in the German-speaking community was led by the group looking at quality control and management. This has represented a fairly significant change in mind-set in our approach to QC. Within reason, the file format should not really affect the “quality” of the media (assuming sufficient bit-rate). However, to have a consistent file-QC process, you need to start with consistent files, and the simplest way to do this is to restrict the “ingredients” in order to deliver a consistent “flavour” of file. By restricting the variations, we considerably simplify QC processes, mitigate the risk of both QC and workflow errors occurring downstream, and reduce the cost of implementation through decreased on-boarding requirements.

This point is critical, and for illustration, one need only refer to the results of the IRT MXF plug-fest that takes place each year. At the 2014 event, the outputs and interoperability of 24 products from 14 vendors, restricted to four common essence types and two wrapper types, were tested. Even with these restrictions, a total of 4,439 tests were conducted. Assuming each test takes an average of 60 seconds, that equates to very nearly two whole man-weeks of testing before we even consider workflow-breaking issues such as time-code support, frame accuracy, audio/video offset, etc. Constrained media file specifications equate to far fewer variations, simplifying the on-boarding process and enabling media organizations to easily facilitate thorough automated and human QC, while focusing on the quality of the media, not the interoperability of the file.

However, the file specifications themselves may not completely answer all our problems. Referring back to the German beer market, despite the regulation being lifted in 1988 following a ruling by the European Court of Justice, many breweries and beers still claim compliance with Reinheitsgebot, even though very, very few beers actually do. We have two issues in media that are equivalent – future proofing and compliance. When introduced, Reinheitsgebot specified three permitted ingredients – water, barley and hops. Unknowingly, however, brewers were adding another ingredient – either natural airborne yeast, or yeast cultivated from previous brews, a necessary addition for the fermentation process. Without launching into a convoluted discussion about “unknown unknowns,” from this we learn that we have to accept the extreme difficulty of scoping future requirements.

Reinheitsgebot was replaced in 1993 by the Provisional German Beer Law, allowing for ingredients such as yeast and wheat, without which the famous Weissbier (wheat beer) would not exist – one of the German beer industry’s biggest exports. Globally, this has led to much confusion over what Reinheitsgebot compliance means, especially with many wheat beers claiming adherence. In the media industry, the UK DPP launched a compliance program run by the AMWA, but there are many more companies claiming compliance than appear on the official list.

While I suspect that many beers have been consumed in the writing of media file specifications, in reality it is unlikely that the story of the German beer purity law has had much impact – it may still have some lessons to teach us, though. And now, time for a beer! Cheers!

Note: this article also appeared in the June 2015 issue of TV Technology Europe.
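As a footnote to the PlugFest numbers mentioned above, the “two man-weeks” estimate checks out with simple arithmetic:

```python
# Sanity check on the testing-effort estimate quoted above.
tests = 4439
seconds_per_test = 60
hours = tests * seconds_per_test / 3600
weeks = hours / 40                      # 40-hour working week
print(f"{tests} tests x {seconds_per_test}s = {hours:.0f} hours = {weeks:.2f} man-weeks")
# -> 74 hours, about 1.85 man-weeks: "very nearly two whole man-weeks"
```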
5 reasons why media delivery standards might be good for your business
Like me, I am sure that you have been to a restaurant in a group where everyone orders from the set menu EXCEPT for that one person who orders the exotic, freshly prepared fugu, which requires an extra 30 minutes of preparation from a licensed fugu chef so that the customers don’t die eating it. Restaurant etiquette means that our main courses are served at the same time, forcing everyone to spend a long time hungry, waiting for the special case. And if you split the bill equally, the special case becomes subsidised by the people wanting the set meal.

Does this model relate to the media industry? Is there a cost for being special? How can we reduce that cost? What gets done with the cost savings? How can you help? Fortunately, those 5 questions lead into 5 reasons why delivery standards might be a good idea.

1. The set meal is more efficient than the a la carte
I must confess that when I write this blog while hungry there will be a lot of food analogies. I’m quite simple really. In the “set meal” case, you can see how it’s easier for the kitchen to make a large volume of the most common meal and to deliver it more quickly and accurately than a large number of individual cases. In the file delivery world, the same is true. By restricting the number of choices to a common subset that meets a general business need, it is a lot easier to test the implementations by multiple vendors and to ensure that interoperability is maximised for minimum cost. In a world where every customer can choose a different mix of codecs, audio layout, subtitle & caption formats, you quickly end up with an untestable mess. In that chaotic world, you will also get a lot of rejects. It always surprises me how few companies have any way of measuring the cost of those rejects, even though they are known to cause pain in the workflow. A standardised, business-oriented delivery specification should help to reduce all of these problems.

2. Is there a cost for being special?
I often hear the statement: “It’s only an internal format – we don’t need to use a standard.” The justification is often that the company can react more quickly and cheaply. Unfortunately, every decision has a lifespan. These short-term special decisions often start with a single vendor implementing the special internal format. Time passes and then a second vendor implements it, then a third. Ultimately the cost of custom-engineering the special internal format is spent 3 or 4 times with different vendors. Finally, the original equipment will reach end of life and the whole archive will have to be migrated. This is often the most costly part of the life cycle, as the obsolete special internal format is carefully converted into something new and hopefully more interchangeable. Is there a cost of being special? Oh yes, and it is often paid over and over again.

3. How can we reduce costs?
The usual way to reduce costs is to increase automation and to increase “lights out” operation. In the file delivery world, this means automation of transcode AND metadata handling AND QC AND workflow. At Dalet and AmberFin, all these skills are well understood and mastered. The cost savings come about when the number of variables in the system is reduced and the reliability increases. Limiting the choices on metadata, QC metrics, transcode options and workflow branches increases the likelihood of success. Learning from the experiences of the Digital Production Partnership in the UK, it seems that tailoring a specific set of QC tests to a standardised delivery specification with standardised metadata will increase efficiency and reduce costs. The Joint Task Force on File Formats and Media Interoperability is building on the UK’s experience to create an American standard that will continue to deliver these savings.

4. What gets done with the cost savings?
The nice thing about the open standards approach is that the savings are shared between the vendors who make the software (they don’t have to spend as much money testing special formats) and the owners of that software (who spend less time and effort on-boarding, interoperability testing and regression testing when they upgrade software versions).

5. How can you help?
The easiest way is to add your user requirements to the Joint Task Force on File Formats and Media Interoperability list. These user requirements will be used to prioritise the standardisation work and help deliver a technical solution to a commercial problem. For an overview of some of the thinking behind the technology, you could check out my NAB2014 video on the subject, or the presentation given by Clyde Smith of Fox. Until next time.
LEAN Mean Versioning Machine
Between UHDTVs, smartphones, tablets and a plethora of other screens/devices/services through which to consume media, the race to deliver content has become an uphill battle. Consumers increasingly demand a wider variety of content in progressively diverse delivery mediums, putting growing pressure on content owners and broadcasters to re-version, repackage and repurpose media. However, through optimal implementation of open technologies and IT best practice, broadcasters and content owners can not only respond to this demand but also add greater flexibility, efficiency and quality to their workflows and outputs.

Media is transcoded at a number of touch points in the production and distribution process, potentially degrading the source quality over iterations. The problem is that the average number of times content is encoded and decoded is higher than the design efficiency of most codecs commonly used by broadcasters today. The average number of transcodes from content origination to its eventual destination is rising to as many as twenty. These statistics reflect the complexity of the broadcast business today. Companies who shoot or produce content aren’t necessarily those who will aggregate it, and those who aggregate content are not always the same as those who create the various accompanying media assets (trailers, promos, etc.). At every step, the file will be encoded, decoded and re-encoded several times. Content destined for overseas distribution or incoming from foreign producers/broadcasters may have to undergo yet more transcode steps in preparation for final delivery. The fact is, media takes a bit of a beating between acquisition and the various outputs, resulting in a significant impact on the technical and subjective quality of the media that the end user eventually sees. But media processing is also CPU (or GPU) intensive, making the alternative quite expensive in terms of infrastructure.

To improve quality while reducing cost, we need to consider how to minimize the number of times media is processed and ensure that the media processing that has to be done is of the highest quality. For example, creating packages and versions is far more efficient when you have a clear, standardized view of where all the “raw” components of the packages are and can “virtually” assemble and store the versions and packages as metadata, leaving the source media in its original state. In this case, we only re-encode the file at the point of delivery – employing LEAN or “just-in-time” methodology in media workflows. This also serves to insulate operators from the complexities of media manipulation and processing, leaving them confident that those automated actions “just happen” and ensuring that all their interactions with media are about making creative choices and applying human judgment to business processes. Knowing where media came from – tracking the structural and genealogical media metadata – is also critical in automating media processing (speaking of which, attend our next webinar on BPM!) and is a key part of a MAM-driven workflow.
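As a rough sketch of the “virtual assembly” idea described above, here is some hypothetical Python that stores a delivery version purely as metadata referencing untouched source files, and only resolves it into a transcode job at delivery time; the class and field names are invented for illustration:

```python
# Hypothetical illustration of "just-in-time" versioning: versions live as metadata
# until the moment of delivery, so the source essence is never repeatedly re-encoded.
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    source_file: str     # original, untouched essence
    in_frame: int
    out_frame: int

@dataclass
class Version:
    name: str
    segments: List[Segment]     # the version is just a list of references

def render_at_delivery(version: Version, delivery_codec: str) -> dict:
    """Build a single transcode job only when the version is actually delivered."""
    return {
        "output": f"{version.name}.{delivery_codec}.mxf",
        "codec": delivery_codec,
        "edits": [(s.source_file, s.in_frame, s.out_frame) for s in version.segments],
    }

uk_promo = Version(
    "feature_uk_promo",
    [Segment("feature_master.mxf", 1200, 2400), Segment("endboard_uk.mxf", 0, 250)],
)
print(render_at_delivery(uk_promo, "xdcamhd50"))   # one encode, at the point of delivery
```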
With new resolutions, frame rates and codecs constantly emerging – and an increase in crowd-sourced content driving the number and variety of devices used for acquisition – strong media awareness and understanding ensures that the “right” or, more honestly (since any processing will degrade content), “least-worst” media-processing path can be chosen. Overall, when it comes to delivering the highest image quality, the explosion in acquisition formats makes the need for good asset management more important than ever, as it allows content owners to transparently manage that additional complexity.
How to bring standards to your organisation
Back in the 1990s, I was told of an old maxim: "If you can't win the market place, win the standard." I thought that this was a cynical approach to standardisation until we looked through some examples of different markets where there are a small number of dominant players (e.g., CPUs for desktop PCs, GPU cards, tablet/smartphone OS) versus markets where there is enforced cooperation (Wi-Fi devices, network cabling, telephone equipment, USB connectivity). So, how does this affect technology in the media industry, and how can you use the power of standards in your organisation?

It seems that the media technology industry hasn't made its mind up about what's best. We have come from a history that is strong in standardisation (SDI, colour spaces, sampling grids, etc.), and this has created a TV and film environment where the interchange of live or streaming content works quite well, although maybe not as cheaply and cleanly as we would like. When the material is offline or file-based, there are many more options. Some of them are single-vendor dominant (like QuickTime), some are standards-led (like MXF), some are open source (Ogg, Theora) and others are proprietary (LXF, FLV). Over any long timeframe, commercial strength beats technical strength. This guiding principle should help explain the dynamics of some of the choices made by organisations. Over the last 10 years, we have seen QuickTime chosen as an interchange format where short-term "I want it working and I want it now" decisions have been dominant. In other scenarios – as in the case of "I am generating thousands of assets a month and I want to still use them in six years' time when Apple decides that wearables are more important than tablets" – MXF is often the standard of choice.

Looking into the future, we can see that there are a number of disruptive technologies that could impact decision-making and dramatically change the economics of the media supply chain:

- IP transport (instead of SDI)
- High Dynamic Range (HDR) video
- 4k (or higher) resolution video
- Wide colour space video
- HEVC encoding for distribution
- High / mixed frame rate production
- Time Labelling as a replacement for timecode
- Specifications for managing workflows

Some of these are clearly cooperative markets where long-term commercial reality will be a major force in the final outcome (e.g., IP transport). Other technologies could go either way – you could imagine a dominant camera manufacturer "winning" the high / mixed frame rate production world with a sexy new sensor. Actually, I don't think this will happen because we are up against the laws of physics, but you never know – there are lots of clever people out there!

This leads us to the question of how you might get your organisation ahead of the game in these or other new technology areas. In some ways being active in a new standard is quite simple – you just need to take part. This can be costly unless you focus on the right technology and standards body for your organisation. You can participate directly or hire a consultant to do this speciality work for you. Listening, learning and getting the inside track on new technology is simply a matter of turning up and taking notes. Guiding the standards and exerting influence requires a contributor who is skilled in the technology as well as the arts of politics and process. For this reason, there are a number of consultants who specialise in this tricky but commercially important area of our business.

Once you know "who" will participate, you also need to know "where" and "how." Different standards organisations have different specialties. The ITU will work on the underlying definition of colour primaries for Ultra High Definition, SMPTE will define how those media files are carried and transported, and MPEG will define how they are used during encoding for final delivery. Figuring out which standards body is best suited for the economic interests of your organisation requires a clear understanding of your organisation's economics and some vision about how exerting influence will improve those economics. Although a fun topic, it's a little outside today's scope!

So how do you bring standards to your organisation?

- Step 1: join in and listen
- Step 2: determine whether or not exerting influence is to your advantage
- Step 3: actively contribute
- Step 4: sit back and enjoy the fruits of your labour

For more on the topic, don't forget to listen to our webinars! Coming soon, I'll be talking about Business Process Management and standards – and why they matter. Until the next one...
50,000 Shades of Gray - An Exploration of HDR Pleasure and Pain!
OK, so technically High Dynamic Range Ultra High Definition TV (HDR UHDTV) might not quite reach 50,000 shades of gray (at least not just yet), but you've got to admit that you might not have read this far if I had put some math in the title! I am very excited about HDR. "Why," you ask? Read on...

The thing about having more pixels on the screen for UHDTV (think 4 or 8K) is that it only enhances the viewing experience if you can see them! In my personal case, I am short-sighted, and without my contact lenses or glasses the TV is just one HUGE blurred pixel making light on the wall. Even with my contact lenses, I cannot see the detail when I move away from the screen. High Dynamic Range, however, is not just about brighter screens; it's about seeing detail in the bright clouds while, at the same time, seeing detail in the deep blacks. It's about seeing rich, vibrant colors as well as subtle blues in the skies, instead of burned-out whites. In fact, where today HD has approximately 250 different levels of brightness ("shades of gray"), UHD will have 1000 or even 2000 – or quite possibly, more.

If high dynamic range is so good, then why haven't we seen it before? Well, we're in an interesting place in history today. Semiconductor physics has reached the point where we are able to build sensors with a response similar to that of the human eye. LCD and light panel physics has reached the stage where we can build a 10mm-thick panel of more than 2m diagonal that can produce a luminance and color range similar to the response of the human eye. Wait. I know that over half the people reading this are saying, "Bruce, what about the eye's night vision accommodation or high-end accommodation?" Well, I'm sure that it is possible to build a TV that permanently burns the image into the eyeballs, or one that is so dim that we see it in only black and white, but I'm not sure that those 50 shades of eyeball pain would be a commercial success.

So we have conquered the capture and display barriers that physics has thrown up in front of us. Surely the "middle bit" must be easy to solve. In fact, everything you can think of can be solved with enough money and software, but we want to solve this particular high dynamic range problem at a cost that allows broadcasting, movies and OTT/IPTV to exist and grow compared to the marketplace today. This is where the techno-electro-commercial politics of the media industry come to bear. We need to have a representation of color that is wider than today's BT.709 color space to match the eye's response. This is (sort of) decided with the ITU BT.2020 specification. We then need to ensure that the encoders and decoders that exist are modified to allow this broader color space with enough dynamic range to resolve the details in the whites at the same time as resolving the detail in the blacks. Although this sounds simple, we also have to ensure that the amazing efficiencies of HEVC and AVC are not broken by this act. Finally, we have to have some agreement between vendors on what the numerical values of pixels actually mean. This isn't just a case of matching the black value and the white value; we have to ensure that any two devices that capture photons produce the same numeric output for a given number of RGB photons at the input. This is known as the OETF – the Opto-Electrical Transfer Function.

At the other end of the chain we have to match the numerical values to the number of photons emitted by a display. This is known as the EOTF – the Electro-Optical Transfer Function. When you get down into the details you discover that there are many ways to do this, and there are currently several competing proposals in the standards committees vying for the position of "default transfer function." More critically for many of the devices processing video in the chain today, HDR requires more bit depth and more accurate filtering than standard dynamic range video. Anyone trying to cut corners will introduce artifacts into their images that will destroy the whole idea behind UHDTV – to give content a "WOW" factor so viewers want to own the UHDTV HDR experience.

If you missed our webinar on the basics of UHDTV that covers HDR, not to worry. I will be covering it in one of my Bruce's Shorts very soon. Be sure to sign up and you'll get an email filling you in on our latest hijinks. Oh – and there is a good chance that I'll repeat the UHD webinar if we get enough requests. There seems to be a lot of interest out there!
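For readers who want to see what a transfer function actually looks like, here is a hedged Python sketch of one of the candidate HDR curves of that era, the SMPTE ST 2084 "PQ" EOTF (constants as published in the standard); it is shown purely as an illustration, not as the committees' final choice:

```python
# SMPTE ST 2084 (PQ) EOTF: maps a normalized code value (0..1) to absolute luminance
# in cd/m^2 (nits). Shown as one example of an HDR electro-optical transfer function.
m1 = 2610 / 16384            # 0.1593017578125
m2 = 2523 / 4096 * 128       # 78.84375
c1 = 3424 / 4096             # 0.8359375
c2 = 2413 / 4096 * 32        # 18.8515625
c3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(code_value: float) -> float:
    """Decode a non-linear PQ signal value in [0, 1] to luminance in nits (peak 10,000)."""
    e = code_value ** (1 / m2)
    return 10000 * (max(e - c1, 0) / (c2 - c3 * e)) ** (1 / m1)

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"code {v:.2f} -> {pq_eotf(v):8.2f} nits")
# The curve rises very steeply towards white, which is one reason HDR needs more
# bit depth: small code steps near the top correspond to large changes in light output.
```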