
Mar 07, 2015
Could your MXF files be infected with a virus?
The behavior of ignoring unsupported or unrecognized media file labels facilitates the existence of “dark metadata,” which is a potential area of weakness in the broadcast chain. However, when you know what dark metadata you have, where it is and what it means, it can add value to the workflow chain.


We all know not to click on shifty-looking attachments in emails, or to download files from dubious websites, but as file delivery of media increases, should we be worried about viruses in media files? In the case of the common computer virus, the answer is “probably not” – the structure of media files, and of the applications used to parse or open MXF, QuickTime and other files, does not make them “good” hosts for this type of virus. Compared to an executable or any kind of XML-based file, media files are very specific in their structure and purpose – containing only metadata, video and audio – with any appropriately labeled element sent to the applicable decoder. Any labels that are not understood or supported by the parser are simply ignored.

However, this behavior of ignoring unsupported or unrecognized labels facilitates the existence of “dark metadata,” and this is a potential area of weakness in the broadcast chain. Dark metadata isn’t necessarily as menacing as the name could suggest and is most commonly used by media equipment and software vendors to store proprietary metadata that can be used downstream to inform dynamic processes – for example, to change the aspect ratio conversion mode during up or down conversion, or audio routing in a playout video server. When you know what dark metadata you have, where it is and what it means, it can add value to the workflow chain.

Since dark metadata will usually be ignored by parsers that don’t understand/support the proprietary data it carries, it can also be passed through the media lifecycle in a completely harmless way. However, if you are not aware of the existence of dark metadata and/or the values of the data it carries, then there is a risk that processes in the media path could be modified or activated unintentionally and unexpectedly. In this case, the media is in some way carrying a virus and in the worst case, could result in lost revenue.

The anti-virus software installed on your home or work PC isn’t going to be much help in this instance, but there are simple steps that can be taken to ensure that you don’t fall foul of “unknown unknowns.” 

  • Implement a “normalization” stage at the entry point for media into your workflow. You can read other articles in this blog about the benefits of using a mezzanine file format, but even if files are delivered in the same format you use in-house, a simple re-wrapping process to “clean” and normalize the files can be a very lightweight process that adds little or no latency into the workflow.
  • Talk to your suppliers and vendors to make sure you’re aware of any proprietary metadata that may be being passed into your workflow.
  • If you have an automated file-QC tool, check whether it has a “dark metadata” test and switch it on – unless you definitely use proprietary metadata in your workflow, this won’t generate false positives and shouldn’t add any significant length to the test plan.
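As a minimal illustration of the kind of check a normalization or QC stage can perform, here is a sketch (not a production MXF parser) that walks a file's KLV triplets – 16-byte UL key, BER-encoded length, value – and flags any key outside a whitelist. The UL prefixes shown are illustrative placeholders, not a complete SMPTE registry.

```python
# Sketch: flag "dark" KLV keys in an MXF-style file.
# The whitelist below is a hypothetical stand-in for a real SMPTE UL registry.
KNOWN_UL_PREFIXES = {
    bytes.fromhex("060e2b34020501"),  # illustrative prefix only
    bytes.fromhex("060e2b34025301"),  # illustrative prefix only
}

def read_ber_length(f):
    """Decode a BER length: short form, or long form (0x80 | n, then n bytes)."""
    first = f.read(1)[0]
    if first < 0x80:
        return first
    n = first & 0x7F
    return int.from_bytes(f.read(n), "big")

def find_dark_keys(path):
    """Yield (offset, hex_key) for every KLV key outside the whitelist."""
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            key = f.read(16)
            if len(key) < 16:
                break  # end of file
            length = read_ber_length(f)
            if not any(key.startswith(p) for p in KNOWN_UL_PREFIXES):
                yield offset, key.hex()
            f.seek(length, 1)  # skip the value, move to the next triplet
```

Running this over incoming files makes the "unknown unknowns" visible before they enter the workflow.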

We’ll be looking at some of the other security concerns in future blogs, but as long as you know your dark metadata, there’s little risk of viral infection from media files.

"Dalet: the art of data" - a FEED Magazine article
We all know media companies are gathering lots of data about their audiences. This can induce some anxiety in viewers, especially when they realise content providers sometimes know more about their behaviour than they know themselves. But the final goal of audience data collection is – or if it isn’t, you should really rethink your business priorities – to provide better, more useful services and, as a result, to increase revenue...
Metadata is “The New Gold”!
Plain and simple. Some might categorize the above statement as an exaggeration; nevertheless, we persist! A number of technology trends got the lion’s share of buzz and display at IBC’s 50th edition: cloud and smart hybrid infrastructure, video over IP, workflow orchestration and automation, and, last but not least, big data, machine learning and artificial intelligence (AI). As game-changing as they can be, we believe that these are in fact enablers of a much bigger trend: better serving and engaging with audiences.

Content discovery for consumers

It is well documented that the success of online video relies, in part, on metadata. Metadata-centric workflows give viewers the freedom to become more engaged. They can discover and explore more content, navigating directly to the most interesting scenes (including the intent of the scene, e.g. serene, suspense, etc.). Publishers can fully monetize viewer habits and experiences in the most effective way possible with a Media Asset Management (MAM) & Orchestration platform that allows end-to-end authoring and management of asset metadata in their production workflow. For on-demand video consumption, accurate description of content is key to helping recommendation engines narrow down to more relevant suggestions. For ad-based or product placement models, metadata helps define the optimal in-stream video insertion points, allowing publishers greater control and flexibility with their advertising strategies. Scene metadata such as character name, player name, topic, keyword, etc. becomes key. The more accurate and rich the description of these insertion points, the better advertisers can pick the slots that fit both their target audience and the brand experience they look to create.

Metadata-driven operations & workflows

Today, metadata is also at the heart of the orchestration of workflows and automation of processes, both of which become increasingly important to streamline and scale production and distribution operations.
Any process and/or action in the chain of operations can be triggered by a change to, or the fulfillment of a condition on, any field of the data model. These configurable metadata-driven workflows are extremely powerful. While the industry has moved away from the simplicity of "one profile per customer", we can today create an environment where a single workflow produces all the desired outputs just by changing the metadata that initiates a particular process.

Managing complex media objects

Metadata is core to structuring and managing complex media objects and their relations, enabling operations like versioning and packaging of track stacks or compositions. To enable better collaboration and flawless content transformation and curation, organizations need to disintermediate the supply chain. One of the key concepts is to avoid conforming a project or a composition until it actually needs to be packaged for an output. To enable this, media management platforms need to handle complex objects seamlessly so that users can work directly with the elementary components of a media package or project. This gives them the ability to manipulate, transform, share and repurpose content in a very efficient and agile way – a concept called transclusion. The Dalet Galaxy platform’s user tools, data model and workflow engine offer a robust and configurable framework to power these types of operations. They support the industry’s latest standards, such as IMF, DPP and many others.

Augmenting media tagging & operations with AI

Video is a complex medium that requires both human-authored metadata, which is flexible and traditionally more accurate, and automatically created metadata, which is quickly growing thanks to AI-powered services. AI is indeed a key next step for the media industry.
At IBC 2017, Dalet showcased its first prototypes connecting a selection of AI engines to the Dalet Galaxy platform in order to build new services that range from simple, automated content indexing and metadata generation all the way to smart assistants that augment user operations. Deployable on-premises or in the cloud, these services produce time-correlated, multi-dimensional metadata from audio and video data, unlocking new insights. Coupling these services with the Dalet Galaxy platform provides broadcasters and media organizations with an end-to-end solution that will serve as an enabler to capture tomorrow's business opportunities and generate new benefits.
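The metadata-driven trigger idea described earlier – a downstream process fired when a condition on a metadata field is fulfilled – can be sketched in a few lines. This is a hypothetical illustration, not the Dalet Galaxy API; the field names and actions are invented for the example.

```python
# Sketch of a metadata-driven workflow trigger (hypothetical model).
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Rule:
    field_name: str                    # metadata field being watched
    condition: Callable[[Any], bool]   # fires the action when true
    action: Callable[[dict], None]     # downstream process to trigger

@dataclass
class Asset:
    metadata: dict = field(default_factory=dict)
    rules: list = field(default_factory=list)

    def set_field(self, name, value):
        """Update one metadata field and fire any rule whose condition it meets."""
        self.metadata[name] = value
        for rule in self.rules:
            if rule.field_name == name and rule.condition(value):
                rule.action(self.metadata)

log = []
asset = Asset()
asset.rules.append(Rule(
    "delivery_target",
    lambda v: v == "web",
    lambda md: log.append(f"transcode {md.get('title')} to H.264"),
))
asset.set_field("title", "Episode 7")
asset.set_field("delivery_target", "web")  # fires the hypothetical transcode step
```

The same asset and the same workflow produce different outputs purely because the initiating metadata changed – which is the point the article makes about "one workflow, many profiles."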
The Power of the Dalet Search
In today’s multi-platform world, simply put, finding stuff is becoming more complex. In the past, a mere browse through the shelves would suffice. But the digital era brings forth the "hoarding" syndrome. Just think, for example, of your own collection of home pictures – I know mine are an unmanaged mess. But before we get into searching, we first need to address quantifying things. This is where a MAM's role is to be the record keeper of your valuable content and its associated information. More importantly, having a metadata model extensible enough to address the multiple levels and hierarchy of data is key to the success of your search power. As the amount of content owned, archived and distributed by broadcasters rapidly grows, it is also evolving, resulting in an exponential expansion of files that must be managed. What was once a one-to-one relationship between the "record" and the media has evolved into a model where a complex collection of elements (audio, video, text, captions, etc.) forms a record relationship. And don’t even get me started on versioning. To illustrate what I’m talking about, let’s look at the example of the TV series “24,” starring Kiefer Sutherland. You could annotate an episode with the actor’s name, the actor’s character’s name, the actor’s birthday, and so on ... and do so for each element of that collection (say, the source master, the poster, the caption). Instead, by defining a taxonomy and ontology – specifying that “24” ALWAYS has Jack Bauer in all the episodes and that the character Jack Bauer is played by the actor Kiefer Sutherland – we have a way to inherit that information down the tree for any element that is part of that tree: Series/Season/Episode. Then, for the users, simply saying that “this” video is actually 24/season2/ep7 will automatically inherit/apply all of its “parent” associated metadata... without needing to enter each individual value.
This greatly reduces the amount of data entry (and time) necessary to quantify something when considering the immense amount of content associated with any given record. But the big impact of the rich metadata engine found in our MAM is its ability not only to search but to discover as well. What I mean is that there are typically two methods of searching. The first is explicit search – the user chooses the necessary fields to conduct their search, and then enters the values to obtain a result, e.g. looking for “Videos” with “Jack Bauer” in “Season 2.” The result is a list that the user must filter through to find what they want. The second way to search is through discovery, with the MAM's ability to display facets. For example, I could type “Actor’s height” (6'2") in “Action role,” “On Location” (Los Angeles). The return would display facets organized by user-defined relevancy, such as Series, Media Type and Actor Name, to then produce a resulting list along with facet boxes that the user can "filter down" within the search. The above example would show: "I found 12 Videos with Kiefer Sutherland as an actor," and “I found 34 assets shot in Los Angeles.” And then, by checking the 12 videos of Kiefer and the 34 in Los Angeles to cross-eliminate, I would find that there are actually three assets of Kiefer in Los Angeles. And then you would also see that the character Jack Bauer also has a cameo on “The Simpsons.” Rich metadata allows us to create relationships between assets at multiple levels. Those various facets allow you not only to navigate through hundreds if not thousands of media assets, but to easily discover specific content as well. And finally, having immediate access to these results for viewing or editing is what makes the Dalet MAM a harmonious ecosystem for not only information but also action/manipulation of said assets.
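The Series/Season/Episode inheritance described above can be sketched as a small tree walk. The Node class is a hypothetical illustration, not the Dalet data model; the names come from the article's "24" example.

```python
# Sketch: hierarchical metadata inheritance down a Series/Season/Episode tree.
class Node:
    """One level of the tree; children inherit every parent field."""
    def __init__(self, name, parent=None, **metadata):
        self.name = name
        self.parent = parent
        self.metadata = metadata

    def resolved(self):
        # Merge parent metadata first, then local fields (local wins on clash).
        inherited = self.parent.resolved() if self.parent else {}
        return {**inherited, **self.metadata}

series = Node("24", character="Jack Bauer", actor="Kiefer Sutherland")
season = Node("Season 2", parent=series)
episode = Node("Episode 7", parent=season, location="Los Angeles")

# The episode carries actor/character without anyone re-keying them.
print(episode.resolved())
```

Tagging the video as 24/season2/ep7 is enough: the actor and character fields arrive through the tree, exactly as the article describes.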
CCW, SOA, FIMS and the King & Queen of the Media Industry
All-Star Panel Sessions at CCW 2014

The NAB-backed CCW held some impressive panels, and our own Stephane Guez (Dalet CTO) and Luc Comeau (Dalet Business Development Manager) participated in two of the show’s hot topics.

MAM, It’s All About Good Vocabulary – Luc Comeau, Senior Business Development Manager

The saying goes, “behind every great man, there is a greater woman.” Within the panel – “Content Acquisition and Management Platform: A Service-Oriented Approach” – there was a lot of talk about content being king. In my view, then, metadata is his queen. Metadata gives you information that a MAM can capitalize on and allows you to build the workflow to enable your business vision. Done correctly, an enterprise MAM will give you visibility into the entire organization, allowing you to better orchestrate both the technical and human processes. Because at the end of the day, it’s the visibility of the entire organization that allows you to make better decisions, like whether or not you need to make a change or adapt your infrastructure to accommodate new workflows. In our session, the conversation very quickly headed towards the topic of interoperability. Your MAM must have a common language to interface with all the players. If it doesn’t, you will spend an enormous amount of time translating so these players can work together. And if the need arises, and it usually does, to replace one component with another that speaks a foreign language, well then, you are back to square one. A common framework will ensure a smooth sequence through production and distribution. A common framework, perhaps, such as FIMS…

The One Thing Everyone Needs to Know About FIMS – Stephane Guez, Dalet CTO

I was invited by Janet Gardner, president of Perspective Media Group, Inc., to participate in the FIMS (Framework for Interoperable Media Services) conference panel she moderated at CCW 2014.
The session featured Loic Barbou, chair of the FIMS Technical Board, Jacki Guerra, VP, Media Asset Services for A+E Networks, and Roman Mackiewicz, CIO Media Group at Bloomberg – two broadcasters that are deploying FIMS-compliant infrastructures. The aim of the session was to get the broadcasters’ points of view on their usage of the FIMS standard. The FIMS project was initiated to define standards that enable media systems to be built using a Service-Oriented Architecture (SOA). FIMS has enormous potential benefits for both media organizations and the vendors/manufacturers that supply them, defining common interfaces for archetypal media operations such as capture, transfer, transform, store and QC. Global standardization of these interfaces will enable us, as an industry, to respond more quickly and cost-effectively to innovation and the constantly evolving needs and demands of media consumers. Having begun in December 2009, the FIMS project is about to enter its sixth year, but the immense scale of the task is abundantly clear, with the general opinion of the panelists being that we are at the beginning of a movement – still very much a work in progress with a lot of work ahead of us. One thing, however, was very clear from the discussion: broadcasters need to be the main driver for FIMS. In doing so, they will find there are challenges and trade-offs. FIMS cannot be adopted overnight. There are many existing, complex installations that rely on non-FIMS equipment. It will take some time before these systems can be converted to a FIMS-compliant infrastructure. Along with the technology change, there is the need to evolve the culture. For many, FIMS will put IT at the center of their production. It is a different world and skill set, and many organizations will need to adapt both their workforce and workflow to truly reap the advantages of FIMS.
More Secrets of Metadata
Followers of Bruce’s Shorts may remember an early episode on the Secrets of Metadata where I talked about concentrating on your metadata for your business, because it adds the value that you need. It seems the world is catching on to the idea of the business value of metadata, and I don’t even have to wrestle a snake to explain it! Over the last 10 years of professional media file-based workflows, there have been many attempts at creating standardized metadata schemes. A lot of these have been generated by technologists trying to do the right thing or trying to fix a particular technical problem. Many of the initiatives have suffered from limited deployment and limited adoption because the fundamental questions they were asking centered on technology and not the business application. If you center your metadata around a business application, then you automatically take into account the workflows required to create, clean, validate, transport, store and consume that metadata. If you center the metadata around the technology, then some or all of those aspects are forgotten – and that’s where the adoption of metadata standards falls down. Why? It’s quite simple. Accurate metadata can drive business decisions that in turn improve efficiency and cover the cost of the metadata creation. Many years ago, I was presenting with the head of a well-known post house in London. He stood on stage and said in his best Australian accent: “I hate metadata. You guys want me to make accurate, human-oriented metadata in my facility for no cost, so that you guys can increase your profits at my expense.” Actually, he used many shorter words that I’m not able to repeat here. The message that he gave is still completely valid today: If you’re going to create accurate metadata, then who is going to consume it? If the answer is no one, ever, then you’re doing something that costs money for no results. That approach does not lead to a good long-term business.
If the metadata is consumed within your own organization, then you ask the question: “Does it automate one or many processes downstream?” The automation might be a simple error check, or a codec choice, or an email generation, or a target for a search query. The more consuming processes there are for a metadata field, the more valuable it can become. If the metadata is consumed in a different organization, then you have added value to the content by creating metadata. The value might be expressed in financial terms or in good-will terms, but fundamentally a commercial transaction is taking place through the creation of that metadata. The UK’s Digital Production Partnership and the IRT in Germany have both made great progress towards defining just enough metadata to reduce friction in B2B (business-to-business) file transfer in the broadcast world. CableLabs continues to do the same for the cable world, and standards bodies such as SMPTE are working with the EBU to make a core metadata definition that accelerates B2B e-commerce-type applications. I would love to say that we’ve cracked the professional metadata problem, but the reality is that we’re still halfway through the journey. I honestly don’t know how many standards we need. A single standard that covers every media application will be too big and unwieldy. A different standard for each B2B transaction type will cost too much to implement and sustain. I’m thinking we’ll be somewhere between these two extremes in the “Goldilocks zone,” where there are just enough schemas and the implementation cost is justified by the returns that a small number of standards can bring. As a Media Asset Management company, we spend our daily lives wrestling with the complexities of metadata. I live in hope that at least the B2B transaction element of that metadata will one day be as easy to author and as interoperable as a web page. Until then, why not check out the power of search in Luc’s blog.
Without good metadata, it would be a lot less exciting.
Why Doesn’t Anyone Label The Audio?
The great thing about language is its ability to allow us to exchange ideas and concepts, and hopefully create a business by doing so. With the increasing number of multi-platform delivery opportunities, and increasing bandwidths and channel densities, we are also seeing an increasing opportunity for content owners to create revenue with their content. Successfully exploiting that opportunity involves tailoring the version of the content to its intended audience, reducing friction and increasing the enjoyment of the viewer/listener. The blockbuster movie community has known for a long time that efficiently making versions of a movie and its collection of trailers on a territory-by-territory basis can make a significant difference to the number of people who watch that movie. I believe that we are entering an era where turbo-charging the versioning efficiency of media companies is going to be a dominant differentiator. To reduce the costs of versioning and to make life simple for the creative human processes, it is necessary to automate the processes that can be done by machines (or in our case, software). To a company that deals with video, every issue looks like a video issue. The processes for segmenting video content and replacing elements are pretty well understood. Organizations like the UK's DPP have created standards for interchanging that segmentation information. In today’s blog, I'm going to assume that the video issues are largely understood and look at a “simple” issue that two customers approached me about here at the SMPTE Australia show. Right now, on the planet, there are many more languages spoken than there are scripts for writing those languages down. There are also many more scripts than there are countries in the world. This makes the labeling of languages and scripts an interesting challenge for any media company, as the variables are virtually endless.
There are many schemes used in the world for labeling audio and any naïve person entering the industry would assume that there must be some sort of global tag that everyone uses for identification ... right? Wrong. Traditionally, TV stations, broadcasters, content creators and others have created content for a specific market. Broadcasters, distributors, aggregators and others have sent their content to territories with only a handful of languages to cope with. Usually proprietary solutions for “track tagging” are developed and deployed. The compelling business need to streamline and standardize the labeling of audio channels hasn’t really existed until now. The internationalization of distribution compels us to find an agreed way in which labeling can be done. Thankfully, someone got there before the media folks. The internet community has been here before - and quite recently. The internet standard RFC5646 is very thorough and copes with the identification of primary languages as well as dialects, extinct languages and imaginary vocabularies such as Klingon. With such a comprehensive and interoperable specification that is widely used for the delivery of web content to billions of devices every day, you'd think that any media system designer worth his or her salt would have this electronic document in their favorites list for regular look-up. You'd think ... The MXF community knows a good thing when it sees it, so you'll find that when it comes to a standardized way to tag tracks in MXF – the SMPTE standard ST 377-4 uses RFC5646 as its vocabulary for labeling. ST 377-4 additionally recognizes that each channel of an audio mix might contain a different language. Each channel might also belong to a group intended as a stereo group, or a surround sound group, or a mono-group of one channel. This hard grouping defines the relationship of channels that should not be split. 
Going further, ST 377-4 defines groups of groups that are used as metadata to enable easy versioning so that, for example, a French group might consist of a French stereo group, a clean M&E surround mix and a French mono audio-description channel.

Reality

ST 377-4 with RFC5646 solves a difficult problem in a simple and elegant way. Up until now, it's been easier for media companies to do their own thing and invent their own metadata vocabularies with proprietary labeling methods rather than use a standard. Today, to get cost-effective interoperability, we're starting to rely on standards more and more so that we don't have to stand the cost of an infinite number of proprietary connectors to make things work. As you see more versions of more programs being created, spare a thought for the future costs and revenues of media that needs to be exchanged. A little up-front standardized metadata builds the launch ramp for a future searchable and accessible library of internationalized content. Standardized audio metadata and subtitle metadata – it may be a tiny addition to your assets, but over time it helps you find, use and monetize versioned content with no effort at all. Take action now and learn the difference between en-US and en-GB. It's more than just spelling.
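As a rough sketch of how RFC 5646 tags and ST 377-4-style grouping fit together, here is a hypothetical model of the French group-of-groups example. The class names are illustrative – ST 377-4 defines its own terminology for channel labels, soundfield groups and groups of soundfield groups – and "zxx" is the RFC 5646 subtag for "no linguistic content", a reasonable fit for an M&E mix.

```python
# Sketch: ST 377-4-style channel grouping with RFC 5646 language tags.
# Class names are illustrative, not the standard's own vocabulary.
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    label: str       # channel label, e.g. "L", "R", "M"
    language: str    # RFC 5646 tag, e.g. "fr-FR"; "zxx" = no linguistic content

@dataclass(frozen=True)
class SoundfieldGroup:
    kind: str        # "stereo", "5.1", "mono" -- a hard grouping, not to be split
    channels: tuple

# The article's French group of groups: French stereo, clean M&E surround,
# and a French mono audio-description channel.
french_version = {
    "fr_stereo": SoundfieldGroup(
        "stereo", (Channel("L", "fr-FR"), Channel("R", "fr-FR"))),
    "me_surround": SoundfieldGroup(
        "5.1", tuple(Channel(c, "zxx") for c in ("L", "R", "C", "LFE", "Ls", "Rs"))),
    "fr_audio_description": SoundfieldGroup(
        "mono", (Channel("M", "fr-FR"),)),
}

# Every language tag that appears across the version's channels.
languages = {ch.language for grp in french_version.values() for ch in grp.channels}
```

With every channel carrying a standard tag, building a German or Spanish version is a matter of swapping groups rather than guessing which track is which.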
Life before and after DPP (Digital Production Partnership)
People that know me will be aware that file-based workflows are a passion of mine. Ten years ago I was co-author of the MXF (Material Exchange Format) specification and ever since I have been engaged in taking this neat SMPTE standard and using it to create a business platform for media enterprises of every size and scale. This is why I’m so excited by the Digital Production Partnership (DPP): it represents the first ratified national Application Specification of the MXF standard and is set to revolutionize the way that media facilities and broadcasters work. To explain what I mean, let’s compare life with a DPP ecosystem to life without.

Less pain to feel the gain
In a standardized DPP world, there would be a limited amount of pain and cost felt by everybody, but this would be shared equally amongst the organizations involved and it would be a limited cost, incurred only once. After this point, our industry has a fantastic common interchange format to help encourage partnerships and build businesses. In an unstandardized world, where different facilities have decided to use different tools and variants of MXF or other formats, the major cost becomes the lack of third-party interoperability. Each time content is exchanged between different facilities, a media transcode or rewrap into the receiving format is required. This means that all vendors in all the facilities must ultimately support all the file formats and metadata. The engineering required to implement and test takes time and costs money on an ongoing basis.

Interoperable metadata helps the content creator
In a world that has adopted DPP, media and metadata interoperability is not an issue, since the format is built on a strong, detailed common interchange specification. In this homogeneous scenario the resources that would have been used in the interoperability engineering process can be used in more creative and productive ways, such as programme making. Programme making is a process where most broadcasters utilise external resources. In a world without DPP, whenever a broadcaster or production facility receives a new file from an external facility, such as a post house, the question must be asked whether this file meets the requirements of their in-house standard. That evaluation process can lead to extra QC costs in addition to possible media ingest, transcoding, conformance and metadata re-keying costs that need to be taken into account.

Building a business platform
This heterogeneous environment is an issue not just for interaction with external facilities: often different departments within the same major broadcaster will adopt slightly different file standards and metadata, making interoperability a big issue for them. As a result, today only about 70 per cent of transactions within companies are file-based – the remainder employ tape. The proportion is much lower where external agencies are involved – here, only 10-15 per cent of transactions are file-based. The essence of the problem is the lack of a common interchange format to enable these transactions. DPP is the first open public interchange format that is specifically designed to address this issue. DPP is intended to transform today’s 20 per cent trickle into an 80 per cent flood in the shortest time. To find out more about DPP and how it can transform the way your operation works, and also your effectiveness working with other organizations, read AmberFin’s White Paper on DPP.
Reinheitsgebot: A clear and positive influence on the definition of European media file exchange and delivery formats
It doesn’t take much research into either Reinheitsgebot or file specifications to realise that this title is almost complete nonsense. When Reinheitsgebot, aka the “German Beer Purity Law,” was first endorsed by the duchy of Bavaria 499 years ago (23rd April 1516), it actually had nothing to do with the purity of beer and everything to do with the price of bread – banning the use of wheat in beer to ensure that there was no competition between brewers and bakers for a limited supply. Reinheitsgebot has come to represent a mark of quality in beer and something that German brewers are very proud of, but as the law spread across what is now modern Germany in the 16th century, it actually led to the disappearance of many highly regarded regional specialities and variations. By contrast, the definition of file formats for exchange and delivery in the media industry has everything to do with the purity, or quality, of media files – indeed, the initiative that has led to the publication of the ARD-ZDF MXF Profiles in the German-speaking community was led by the group looking at quality control and management. This has represented a fairly significant change in mind-set in our approach to QC. Within reason, the file format should not really affect the “quality” of the media (assuming sufficient bit-rate). However, to have a consistent file-QC process, you need to start with consistent files, and the simplest way to do this is to restrict the “ingredients” in order to deliver a consistent “flavour” of file. By restricting the variations, we considerably simplify QC processes, mitigate the risk of both QC and workflow errors occurring downstream, and reduce the cost of implementation through decreased on-boarding requirements. This point is critical, and for illustration, one need only refer to the results of the IRT MXF plug-fest that takes place each year.
At the 2014 event, the outputs and interoperability of 24 products from 14 vendors, restricted to four common essence types and two wrapper types, were tested. Even with these restrictions, a total of 4,439 tests were conducted. Assuming each test takes an average of 60 seconds, that equates to very nearly two whole man-weeks of testing before we even consider workflow-breaking issues such as time-code support, frame accuracy, audio/video offset, etc. Constrained media file specifications equate to far fewer variations, simplifying the on-boarding process and enabling media organizations to easily facilitate thorough automated and human QC, while focusing on the quality of the media, not the interoperability of the file. However, the file specifications themselves may not completely answer all our problems. Referring back to the German beer market, despite the regulation being lifted in 1988 following a ruling by the European Court of Justice, many breweries and beers still claim compliance with Reinheitsgebot, even though very, very few beers actually do. We have two equivalent issues in media – future-proofing and compliance. When introduced, Reinheitsgebot specified three permitted ingredients – water, barley and hops. Unknowingly, however, brewers were adding another ingredient – either natural airborne yeast, or yeast cultivated from previous brews, a necessary addition for the fermentation process. Without launching into a convoluted discussion about “unknown unknowns,” from this we learn that we have to accept the extreme difficulty of scoping future requirements. Reinheitsgebot was replaced in 1993 by the Provisional German Beer Law, allowing for ingredients such as yeast and wheat, without which the famous Witbier (wheat beer) – one of the German beer industry’s biggest exports – would not exist. Globally, this has led to much confusion over what Reinheitsgebot compliance means, especially with many wheat beers claiming adherence.
In the media industry, the UK DPP launched a compliance program run by the AMWA, but there are many more companies claiming compliance than appear on the official list. While I suspect that many beers have been consumed in the writing of media file specifications, in reality it is unlikely that the story of the German beer purity law has had much impact – though it may still have some lessons to teach us. And now, time for a beer! Cheers!

Note: this article also appeared in the June 2015 issue of TV Technology Europe.
The other day, a member of our talented development team commented, quite accurately, that every time we return from an NAB Show, we nearly always refer to it as the biggest, busiest and best NAB ever. If you’ve ever watched or read one of my presentations or blogs on workflow, you may recollect that I’m a fan of the Toyota Production System and the “Kaizen” concept of continuous improvement. However, I do confess that, following my colleague’s observation, I momentarily felt a certain amount of pressure to come back from NAB 2015 with evidence that it really was bigger, busier and better than previous years. Earlier today, though, I was talking to the editor of one of our excellent industry magazines about the most likely themes and trends for this year’s show, and something struck me. Although I’m not much of a fan of “buzzword bingo,” given the host of announcements we at Dalet have for this year’s show, I’d place a bet on us sweeping the board.

Even before the show, we’ll bring UHD to Dalet AmberFin – supporting UHD inputs in our next release at the end of March. By decoupling format from transport mechanism, video over IP is one of the most revolutionary changes to the industry in some time, and our Dalet Brio video server platform is spearheading that charge. Building on all of this, Dalet Galaxy, our media asset management platform, continues to facilitate and enhance collaborative workflows with new features for user interaction and geographically dispersed operations – I can barely contain myself from mentioning the “C” word! It doesn’t stop there, though. Back in September, we got quite emotional about being one of the first vendors to have a product certified for the creation of UK DPP files. The DPP has led the way in specifying standards and operational guidelines for file delivery and, as other regions have followed, Dalet has been right there supporting them.
Demonstrating our continued commitment to international standards that improve, ease and simplify the lives of our customers, we’ve now implemented the FIMS capture service in the Brio video server. I believe that initiatives like FIMS become ever more important as the video world increasingly leverages IT technology – specifically, interaction between control and capture devices – as we move to an era of hybrid SDI and IP acquisition. Despite regulatory rulings in the US and elsewhere, captioning and subtitling technology has seen little innovation in the last few years. Since Dalet and AmberFin came together a year ago, we’ve really focused on this as an area where our knowledge and expertise can benefit the industry as a whole. We’re now ready to show you what we’ve been up to and how we can simplify captioning workflows and bring them into multi-platform, multi-version workflows in an effective and efficient way. You’re probably aware of the Dalet Academy, which was launched with much fanfare in January this year. The response from the wider industry has simply been immense, and we now have many thousands of followers subscribed to the Bruce’s Shorts videos and reading our educational blog. For NAB 2015, we’ll be donning our robes and mortarboards to bring the Dalet Academy to the stage, live on our booth (SL4525). Bruce will be there – in his actual shorts – to present special live editions of the video series, with support from other Dalet and industry experts delivering more short seminars. All of the presentations at the show will be followed by a special round-table discussion (limited seating). And while you’re keeping your media knowledge in good shape, there will also be an opportunity to win prizes that are sure to keep you in good shape too!
To make sure the excitement doesn’t overwhelm you too much, we’re keeping a couple of bits of news to ourselves until the show itself, but if you want to find out more on any of the topics I’ve touched on here, be sure to get in touch, book an appointment, or read more on our dedicated NAB page. As for our development team – sorry guys, I can already tell you that this year is going to be the biggest, busiest and best NAB Show so far!
Media Asset Data Models in MAM Systems – An Evolutionary View
As I was working on a presentation I gave recently on the data model our Dalet Galaxy MAM system is built upon, I realized that looking at the evolution of this data model was a nice way of explaining it. It only made sense to share it with a wider audience. By illustrating how media assets are tracked and cataloged within a MAM system, and how that model has changed significantly over time, I hope to provide a deeper understanding of the changing needs in our industry and how we can not only continue to address these needs, but also begin to predict and plan for new ones.

Let’s take a look at what I call the “Dark Ages of MAM,” when our operations were almost exclusively tape-based and there was no real MAM system. What we had were tapes with stickers (“metadata”) on them or, best-case scenario, a tape management database.

Figure 1 - The Dark Ages - Tapes and Stickers

Then, as professional media workflows started to introduce file-based workflows, we saw the first MAM systems appear, i.e., a digital catalog to organize and track your media assets. This is a time I call the “Stone Age.” The asset was represented in a very simple way: one descriptive metadata set that pointed to one media file (audio or video). This was intended to allow users to search for and find their media assets, along with some information on them.

Figure 2 - The Stone Age - One file at a time

Then, things got a bit more complex in the “Iron Age.” We no longer had a single file attached to a metadata record. You needed multiple versions of that media asset, in multiple formats: let’s say one version for proxy viewing, and a few different versions for archiving, distribution to FTP or web sites, etc.

Figure 3 - The Iron Age - Multiple versions of the file
And then again, as time went by, things became even more advanced, and we reached what I would call the “Industrial Age.” The asset was not just a single media file anymore; it became a combination of many individual building blocks, with a master video track, individual audio tracks for multiple languages, caption or subtitle files, and even secondary video files and still images. And from this you then had to create different “virtual versions,” each with a different subset of files and their own specific metadata, in order to manage and track delivery to the many new linear or non-linear platforms. And of course, all of these needed to be linked in order to track the various relationships.

This “Industrial Age,” as I like to call it, is the time we are in today. The complex data model I describe above allows us to automate production and delivery workflows in an efficient way, by building media production factories for delivering multilingual, multiplatform content. And since a number of standards have recently emerged for delivering these complex bundles (AS-02 and IMF, for example), we have really reached a point where the full preparation, assembly and delivery workflows can be highly optimized.

Figure 4 – The Industrial Age - An asset is more than just one media file but a bundle with various versions derived from it

As the term “evolution” would imply, this “Industrial Age” is just another phase in the progression of MAM platforms, which are only going to become more advanced and more complex in the future. The next challenge for MAM platforms (or, more accurately, the engineers who develop them) will be to include in their data model all the new requirements and paradigms of social media platforms and semantic technologies. The MAM data model will need to be aware not only of what’s happening inside the media factory but also of everything happening in the whole wide world of the semantic web.
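The “Industrial Age” bundle described above can be sketched in a few lines of code. To be clear, every class and field name here is a hypothetical illustration of the concept – not Dalet Galaxy’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MediaFile:
    path: str
    kind: str           # "video", "audio", "caption", "still"
    language: str = ""  # used for audio tracks and caption files

@dataclass
class VirtualVersion:
    name: str                  # e.g. "France VOD", "UK broadcast"
    files: list                # a subset of the asset's building blocks
    metadata: dict = field(default_factory=dict)  # version-specific metadata

@dataclass
class Asset:
    title: str
    master: MediaFile                              # the master video track
    components: list = field(default_factory=list)  # audio, captions, stills
    versions: list = field(default_factory=list)    # delivery-specific subsets

# One asset, several language components, one delivery version.
master = MediaFile("master.mxf", "video")
en = MediaFile("audio_en.wav", "audio", "en")
fr = MediaFile("audio_fr.wav", "audio", "fr")
subs = MediaFile("subs_fr.stl", "caption", "fr")

asset = Asset("Documentary", master, components=[en, fr, subs])
asset.versions.append(
    VirtualVersion("France VOD", [master, fr, subs], {"aspect": "16:9"}))
```

The key point the sketch shows is that a “version” is not a new copy of the media: it is a linked subset of the bundle’s building blocks plus its own metadata, which is exactly what makes automated multiplatform delivery tractable.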
For us, this will be the next step in this long journey of constantly evolving our products’ data models. We have already begun the process, and it looks like it’s going to be a lot of fun for our engineering teams ;-).

Figure 5 – The Networked Age – Metadata relations will include Social Media and Semantic technologies
5 Reasons why we need more than ultra HD to save TV
If you were lucky (or unlucky) enough to get to CES in Las Vegas this year, then you will know that UHD (Ultra High Definition TV) was the talking point of the show. By and large, the staff on the booths were there to sell UHD TVs as pieces of furniture, and few of them knew the techno-commercial difficulties of putting great pictures onto those big, bright, curved(?) and really, really thin displays. In my upcoming webinar on the 29th January, I will be looking into the future and predicting some of the topics that I think will need to be addressed over the next few years if TV as we know it is to survive.

1. Interoperability

The number of screens and display devices is increasing. The amount of content available for viewing is going up, but the number of viewers is not changing greatly. This means that we either have to extract more revenue from each user or reduce the cost of making that content. Having systems that don’t effectively inter-operate adds cost, wastes time and delivers no value to the consumer. Essence interoperability (video & audio) is gradually improving, thanks to education campaigns (from AmberFin and others) as well as vendors with proprietary formats reverting to open standards because the cost of maintaining the proprietary formats is too great. Metadata interoperability is the next BIG THING. Tune in to the webinar to discover the truth about essence interoperability, and then imagine how much unnecessary cost exists in the broken metadata flows that exist between companies and between departments.

2. Interlace must die

UHD may be the next big thing, but just like HDTV, it is going to have to show a lot of old content to be a success. Flick through the channels tonight and ask yourself, “How much of this content was shot & displayed progressively?” On a conventional TV channel the answer is probably “none.” Showing progressive content on a progressive screen via an interlaced TV value chain is nuts. It reduces quality and increases bitrate.
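To see why interlace makes re-use of content so awkward, it helps to remember that an interlaced frame is really two fields, captured at different instants and stored on alternating lines. A minimal sketch, using a toy numeric array as a stand-in for the lines of an image:

```python
import numpy as np

# Stand-in for an 8-line video frame: each row represents one scan line.
frame = np.arange(8 * 4).reshape(8, 4)

# In an interlaced signal, even and odd lines belong to two different
# temporal samples ("fields"), captured 1/50 or 1/60 of a second apart.
top_field = frame[0::2]     # even lines - field 1
bottom_field = frame[1::2]  # odd lines - field 2

print(top_field.shape, bottom_field.shape)  # (4, 4) (4, 4)
```

Because the two fields are from different instants, simply stacking them back together on a progressive display combs on any motion; a deinterlacer has to interpolate, which is exactly the quality cost the article is pointing at.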
Anyone looking at some of the poor pictures shown at CES will recognise the signs of demonstrations conceived by marketers who did not understand the effects of interlace on an end-to-end chain. Re-using old content involves up-scaling & deinterlacing existing content – 90% of which is interlaced. In the webinar, I’ll use AmberFin’s experience in making the world’s finest progressive pictures to explain why interlace is evil and what you can do about it.

3. Automating infrastructure

Reducing costs means spending money on the things that are important and balancing expenditure between what is important today and what is important tomorrow. There is no point in investing money in MAMs and automation if your infrastructure won’t support it and give you the flexibility you need. You’ll end up redesigning your automation strategy forever. The folks behind explain this much more succinctly and cleverly than I could ever do. In the webinar, I’ll explain the difference between different virtualization techniques and why they’re important.

4. Trust, confidence & QC

More and more automation brings efficiency, cost savings and scale, but it also means that a lot of the visibility of content is lost. Test and measurement give you the metrics to know about that content. Quality control gives you decisions that can be used to change your quality assurance processes. These processes, in turn, allow your business to deliver media product with the right technical quality for the creative quality your business is based on. So here’s the crunch: the more you automate, the less you interact with the media, and the more you have to trust the metadata and pre-existing knowledge about the media. How do you know it’s right? How do you know that the trust you have in that media is well founded? For example: a stranger walks up to you in the street and offers you a glass of water. Would you drink it? Probably not.
If that person was your favourite TV star, with a camera crew filming you – would you drink it now? Probably. Trust means a lot in life and in business. I’ll explore more of this in the webinar.

5. Separating the pipe from the content

If, like me, you’re seeing more grey hair appearing on the barber’s floor with each visit, then you may remember the good old days when the capture standard (PAL) was the same as the contribution standard (PAL), the mixing desk standard (PAL), the editing standard (PAL), the playout standard (PAL) and the transmission standard (PAL). Today we could have a capture format (RED), a contribution standard (Aspera FASP), a mixing desk standard (HD-SDI), an editing standard (MXF DNxHD), a playout standard (XDCAM-HDSDI) and a transmission standard (DVB-T2) that are all different. The world is moving to IP. What does that mean? How does it behave? A quick primer on the basics will be included in the webinar.

Why not sign up below before it’s too late? Places are limited – I know it will be a good one. Register for our next webinar on Wednesday 29th January at 1pm GMT, 2pm CET, 8am EST, 5am PST OR 5pm GMT, 6pm CET, 12pm EST, 9am PST. ’Til next time. I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
Does it take you 1/2 million years to test your workflow?
It is now obligatory to start every broadcast technology blog post, article or presentation with a statement reminding us that we are now living in a multi-format, multi-platform world, where consumers want to view the content they choose, when they want it, where they want it, on the device they want. However, unlike other marketing platitudes, this one is actually true. Many of us in this industry spend our days trying to develop infrastructures that will allow us to deliver content to different platforms, ageing prematurely in the process because, to be honest, it’s a really hard thing to do.

So why is it so hard? Let me explain. For each device, you have to define the resolution: a new iPad has more pixels than HDTV, for example (2048 wide), and is 4:3 aspect ratio. Android phones have different screen sizes and resolutions. Don’t even get me started on interlaced versus progressive. That video has to be encoded using the appropriate codec – and, of course, different devices use different codecs. Along with the pictures there will be sound, which could be mono, stereo or surround – and surround in turn could be 5.1, 7.1 or something more exotic. The sound could be encoded in a number of different ways: digital audio sampling could be at 44.1kHz or 48kHz, with a whole range of bit depths. Then the audio and video need to be brought together with the appropriate metadata in a wrapper. The wrapper needs to be put into a delivery stream. If it is for mobile use, we now routinely adopt one of the three different adaptive bitrate formats, which means essentially we have to encode the content at three different data rates for the target device to switch between.

If you want to achieve the admirable aim of making your content available on all common platforms, then you have to take into consideration every combination of resolution, video codec, audio codec, track layout, timecode options, metadata and ancillary data formats, and bitrate options.
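To get a feel for how quickly those combinations multiply, here is a back-of-the-envelope sketch. Every per-dimension count below is an assumed, illustrative figure, not real platform data:

```python
# Illustrative counts only - each value is an assumption for this sketch.
dimensions = {
    "resolutions": 8,
    "video_codecs": 6,
    "audio_codecs": 4,
    "track_layouts": 5,
    "timecode_options": 3,
    "wrappers": 4,
    "bitrate_ladders": 3,
}

outputs = 1
for n in dimensions.values():
    outputs *= n                  # combinations multiply, dimension by dimension

inputs = 20                       # assumed number of distinct source formats
paths = inputs * outputs          # input-to-output conversion paths to test

minutes_per_test = 3              # real-time check of a three-minute clip
years = paths * minutes_per_test / (60 * 24 * 365)
print(outputs, paths, round(years, 1))  # 34560 691200 3.9
```

Even with these deliberately modest assumptions, you get hundreds of thousands of conversion paths and several years of continuous real-time testing – and every extra dimension or variant multiplies the total, which is how fuller, realistic counts balloon toward the timescales discussed below.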
This is a very large number. And it does not stop there: that is only the output side. What about the input? How many input formats do you have to support? Are you getting SD and HD originals? What about 2K and, in the not-too-distant future, 4K originated material? If you are producing in-house, you may have ARRI raw and REDCODE (R3D) files floating around. The content will arrive in different forms, on different platforms, with different codecs and in different wrappers. We are on to the third revision of the basic MXF specification, for example. Any given end-to-end workflow could involve many, many thousands of input-to-output processes, each with its own special variants of audio, video, control and metadata formats, wrappers and bitrates. Each time a new input or output type is defined, the number increases many-fold.

Quality Control

All of which is just mind-boggling – until you consider quality control. If you were to test, in real time, every variant of, say, a three-minute pop video, it would take a couple of hundred years. This is clearly not going to happen. It’s all right, I hear you say: all we need do is define a test matrix so that we know we can transform content from any source to any destination. If the test matrix works, then we know that real content will work, too. Well, up to a point. I have done the calculations on this, and to complete a test matrix that really does cover every conceivable input format, through every server option, to every delivery format for every service provider, on every variant of essence and metadata, is likely to take you half a million years. Maybe a bit more. So are you going to start at workflow path one and test every case, working until some time after the sun explodes? Of course not. But what is the solution? Do you just ignore all the possible content flows and focus on the relatively few that make you money?
Do you accept standardized processing, which may make you look just like your competitors, or do you implement something special for key workflows, even though the cost of doing it – and testing it – may be significant? We have never had to face these questions before. Apart from one pass through a standards converter for content to cross the Atlantic, everything worked pretty much the same way. Now we have to consider tough questions about guaranteeing the quality of experience, and make difficult commercial judgments on the right way to go. If you want to find out more about how to solve your interoperability dilemma, why don’t you register for our next webinar on Wednesday 29th January at 1pm GMT, 2pm CET, 8am EST, 5am PST OR 5pm GMT, 6pm CET, 12pm EST, 9am PST? I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?