
Jul 09, 2015
Why Ingest to the Cloud?
Control your investment and secure your content by ingesting to the Cloud. Learn why the sky is no longer the limit when it comes to securely storing your media. Dalet lays down the not-so-cloudy facts in this week’s blog post.


With Cloud storage becoming cheaper and data transfer in to services such as Amazon S3 free of charge, there are numerous reasons why ingesting to the Cloud should be part of any media organization’s workflow. So, stop trying to calculate how much storage your organization consumes per day, month or year, or whether you need a NAS, a SAN or a grid, and find out why the Cloud could be just what your organization needs.

Easy Sharing of Content

Production crews and field journalists once spent copious amounts of time and money shipping hard drives back to the home site, or were limited by the bandwidth of an FTP server when uploading content. With object storage services like Amazon S3 or Microsoft Azure Blob Storage, uploading content to the Cloud has become easy and cheap. Once content is uploaded, anyone with secure credentials can access it from anywhere in the world.
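Under the hood, large media files typically go up to object storage as a multipart upload: the file is split into fixed-size parts that can be transferred in parallel and retried individually. Below is a minimal, stdlib-only sketch of planning those parts; the 100 MB part size is an illustrative assumption, not a Dalet or AWS requirement (a real upload would then hand each range to the storage SDK):

```python
def plan_multipart_upload(total_bytes, part_size=100 * 1024 * 1024):
    """Split a large media file into (part_number, start, end) byte ranges
    suitable for a multipart upload to an object store such as Amazon S3."""
    if total_bytes <= 0:
        return []
    parts = []
    start = 0
    part_number = 1  # S3 part numbers are 1-based
    while start < total_bytes:
        end = min(start + part_size, total_bytes)
        parts.append((part_number, start, end))
        part_number += 1
        start = end
    return parts

# A 250 MB camera card split into 100 MB parts yields three ranges.
parts = plan_multipart_upload(250 * 1024 * 1024)
```

Because each part is independent, a flaky uplink from the field only forces a retry of the failed part, not the whole file.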

Access Rights to Content

In recent news, cloud storage services such as Apple iCloud were hacked and private content was stolen, increasing concern about security and access rights to content in the Cloud. With secure connections such as VPNs and rights-management tools, you can specify access rights per user or group, and set how long content remains accessible in the Cloud. Both Microsoft and Amazon have set up security features to protect your data, as well as to replicate content to more secure locations.
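Time-limited access is typically granted with a signed token, much like an S3 presigned URL: the server signs the object key, user and expiry, and rejects any request whose signature does not match or whose grant has lapsed. The sketch below is a generic stdlib illustration of that idea, not Amazon's or Microsoft's actual signing algorithm, and the secret key is a placeholder:

```python
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # illustrative only; real services manage signing keys for you

def sign_access(object_key, user, expires_at, secret=SECRET):
    """Return a token granting `user` access to `object_key` until `expires_at`."""
    message = f"{object_key}|{user}|{expires_at}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def check_access(object_key, user, expires_at, token, now=None, secret=SECRET):
    """Reject the request if the grant has expired or the token is forged."""
    now = time.time() if now is None else now
    if now > expires_at:
        return False
    expected = sign_access(object_key, user, expires_at, secret)
    return hmac.compare_digest(expected, token)
```

Because the expiry is inside the signed message, a user cannot simply extend their own grant; only the holder of the secret can.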

Cloud Services to Process the Data

By uploading content to the Cloud, you can set up backend services and workflows to run QC checks on the content, stream media, transcode to multiple formats, and organize the content for search and retrieval using a Media Asset Management (MAM) system hosted in the Cloud.
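Such a backend workflow is essentially a chain of steps applied to each ingested asset, with QC gating the rest. A minimal sketch of that chaining, with toy stand-ins for the real QC and transcode services (the QC rule and format names are illustrative, not Dalet specifics):

```python
def qc_check(asset):
    """A trivial QC rule: flag zero-length media (illustrative only)."""
    asset["qc_passed"] = asset.get("duration_s", 0) > 0
    return asset

def transcode(asset, fmt):
    """Record a new rendition (a real step would invoke a transcoder)."""
    asset.setdefault("renditions", []).append(fmt)
    return asset

def run_ingest_workflow(asset, formats=("h264-proxy", "xdcam-hd")):
    """Run each step in order, stopping early if QC fails."""
    asset = qc_check(asset)
    if not asset["qc_passed"]:
        return asset
    for fmt in formats:
        asset = transcode(asset, fmt)
    return asset
```

A real orchestration engine adds queuing, retries and fan-out, but the shape, a gated pipeline of per-asset steps, is the same.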

Cloud Scalability

Rather than buying an expensive tape library or continuing to purchase more hardware for spinning-disk storage, with cloud storage you can scale up or down with the click of a button. No need for over-provisioning.

Disaster Recovery

An organization can easily set up secure data replication from one site to another, or institute replication rules to copy content to multiple virtual containers, offering assurance that content will not be lost. Amazon S3 provides durable infrastructure to store important data and is designed for 99.999999999% (eleven nines) annual durability of objects.
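To put eleven nines in perspective, the expected number of objects lost per year is simply the object count times the loss probability. A quick back-of-the-envelope check:

```python
def expected_annual_losses(object_count, durability=0.99999999999):
    """Expected number of objects lost per year at a given annual durability."""
    return object_count * (1 - durability)

# Ten million objects at eleven-nines durability: roughly one ten-thousandth
# of an object expected to be lost per year.
losses = expected_annual_losses(10_000_000)
```

In other words, an archive of ten million assets would statistically expect to lose a single object about once every ten thousand years.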

Moving Towards an OPEX Model

As operations and storage move to the Cloud, you can control your investment by paying for services and storage as you use them. Instead of investing in infrastructure maintenance and support, you can focus your investment on what makes the difference: the content, not the infrastructure that supports it.

Why Upload to the Cloud?

The Cloud is no longer a technology of the future. With cloud storage adopted by Google, Facebook and Instagram, Cloud technology is the reality of today. By adopting it, you control your investment according to usage needs, back up your data, and provide secure access to content to anyone with credentials, anywhere in the world. The biggest limitation now is bandwidth, and the hurdle is adjusting current infrastructure to support Cloud operations. Many organizations are turning towards a hybrid Cloud model, where content and services are hosted both locally and via Cloud solutions.
 
Learning from the Cloud experience, Dalet has made initiatives over the past few years to evolve existing tools and services for the Cloud. Dalet now offers direct ingest from the Dalet Brio video server to Amazon S3 storage and, at NAB this year in Las Vegas, Dalet showcased the first MAM-based Newsroom on the Cloud.
 
To learn more about Dalet ingest solutions, please visit the ingest application page.
YOU MAY ALSO LIKE...
Virtualization - is it always best? And is cloud a one-size-fits-all solution?
I like to be told that I'm wrong. It usually means that I've made some broad, sweeping assumption that over-simplifies the world. My most recent blunder was assuming that the whole world will obviously move 100% of its media operations to the cloud. It seems to me that in the space of a few short years, the media industry has changed its mindset from "cloud is unsafe," through a brief dally with "cloud is good," and has now ended up with "everything cloud" as the way to go. Considering current global events around mobility and remote working, this is a highly topical discussion.

One-size-fits-all solutions do not exist!

In the unused bit at the back of my brain, I know that there is no such thing as a one-size-fits-all solution, but at the same time I cling to the everything-cloud marketing philosophy as some kind of justification for forward motion. Very often, it's a mix of technologies that gives the best performance for a given price, and it's the ability to choose the right technology at the right time for the right job at the right price that ensures that any business continues to thrive. Transcoding is a curious business. To select a service or a device, you first must be sure that it meets your needs for scaling, deinterlacing, frame rate conversion, image filtering, SDR and HDR conversion, range of codecs, compression efficiency and compression quality. In today's time-pressed environment, choices are often made from a service rate card rather than by testing with real content and real people. This is a shame, because very often the idea of taking a high-quality device with a Capex price tag is eliminated, even though the per-transcode costs of an alternative service can be higher for a lower quality.

Nothing is ever simple - what's the real business problem?

So why all this heavy philosophy? Dalet asked me to look at a hardware accelerator for an offline transcoder. I initially thought that I had stepped into a time machine, because that sort of solution is just not fashionable now. I stopped and thought about it for a while in the context of today's reduced operating margins, remote infrastructure requirements and ever-increasing platform support requirements. If you have a fixed and stable volume of content that needs to be converted every day / week / month, then the costing of that core transcode is a key fixed cost of the business. If a hardware accelerator reduces that fixed cost with a one-off investment rather than a pay-as-you-go continuous commitment, then it is a no-brainer, provided you still have a local data center to house it and the ability to manage it remotely.

There is a business sweet spot for accelerators!

So I found myself looking at an HEVC encoding accelerator, connected to a cloud-enabled Dalet AmberFin transcode farm, and realized that it was the right solution for many customers to fulfil their core needs of doing a lot of transcoding for the minimum TCO (Total Cost of Ownership). Like many things in engineering, it might not be fashionable or glamorous, but for the right application it makes good business sense. It serves the needs of working and managing remotely, since you can build a hybrid architecture that works in the background and yet can be accessed anytime, anywhere, assuming your data center has some solid business continuity in place (let’s face it, who doesn’t these days?). As 2020 makes its way, with major issues at global scale, it seems that there is a sweet spot for hardware accelerators - high throughput with less energy consumption than a raw software solution. It also seems that I should avoid jumping on today's fashionable technology for everything and keep my mind open to a wider range of practical solutions to real business problems!
The story behind Dalet StoreFront: open innovation, team collaboration and workflow expansion
Economists have an unusual word to describe the value in simple commodities like gold and platinum. It’s “fungible,” meaning that a substance is exchangeable. One piece of gold is the same as another piece. When you’re ordering gold, you don’t need to specify anything apart from how much of it you want to buy. You can split it up, mould it, melt it, recombine it and absolutely nothing has changed. You still only ever need to specify the weight of gold that you want to buy - or sell. Most things are not like that. Cars aren’t fungible. Nor are houses. And nor is media. You can’t buy media by the ounce. Media has many more dimensions and characteristics, all of which affect its value. But that’s only part of the story, because any given piece of media will have a different value to different buyers. Wildlife footage has very little value to an organisation that specialises in motor racing.

Let’s look at this in more detail. The media landscape today is significantly different in almost every way to how it was thirty years ago. Films are now files. Negatives are numbers. Cupboards full of tapes and reels have migrated to the cloud. And “supervising” all of this is a Media Asset Management (MAM) system. Files are not physical things, and that opens up an incredible range of possibilities, but, because you can’t store non-physical things on shelves, you need an all-embracing MAM system like Dalet to keep track of all the ephemeral properties of millions of blobs of data.

What is Dalet StoreFront?

Dalet StoreFront is a window into the hidden value of a media organisation's media assets. It allows existing Dalet users to display their content to other media organisations, safely and simply. Essentially, it uses Dalet’s ability to orchestrate content to provide a browsing and fulfillment back-end to Dalet StoreFront’s users. The beauty of this arrangement is that there is virtually zero extra effort needed to prepare media. All the information - the metadata - about the media would have been input to the Dalet MAM as part of the normal process of onboarding media files. This is likely to include information about the title, authors, rights (including restrictions about usage) and also data about what’s contained within the clip, possibly including timecode references. There would also be information about format, resolution and whether or not the clip is in HDR, for example. This metadata, which needs to be there anyway as part of the normal usage of a Dalet MAM system, is exactly what’s needed as the basis for a transaction with a potential buyer. And because of the richness of the metadata, Dalet StoreFront is able to make sure that a media purchaser only sees content that it is allowed to acquire.

Dalet StoreFront in Use

Imagine a subscription-based television provider specialising in travel and wildlife programming. Their world-class media content – programs, trailers, and B-roll content – needs to be distributed to a global network of broadcasters and partners. In a traditional model, the broadcaster or partner would need to email a request for materials. This request could be for marketing material to promote a program, highlights or materials to create the highlights, or the program itself. On the receiving end of the request, the television provider would need to check the rights of the content and the agreement with the partner, search the materials and send over a selection of proxy assets. Once confirmed, yet another step is needed to finalize the transaction and send the assets, hopefully in the right format, via a file transfer service like Aspera. Every step requires manual interaction and investigation. When pressed for time, corners get cut and only a sampling of what could be offered from the rich archives is shared for consideration. It’s a daunting process that affects the entire operation and, more importantly, could shortchange the impact of the final material if a lesser-quality asset was provided.

Marketers Love Self-Serve

Partners and broadcasters require marketing materials to promote programming. By eliminating one-to-one requests, access to assets is predetermined, so only pre-approved marketing content is exposed to shoppers. Not only does this simplify the mass distribution of marketing material for new shows, it also makes it far more efficient to serve those broadcasters looking for a very specific asset...one that could promote the re-airing of an older program in a specific region.

Find that B-Roll!

Even a five-second shot used as B-roll can make all the difference to a producer looking for a specific shot to use in their highlight reel or production. Dalet StoreFront flips the traditional model and lets producers browse the catalog, as opposed to an individual sending producers a handful of shots that may or may not be relevant. Dalet StoreFront broadens the selection to include ALL suitable content available for use. Requests for assets are sent to sales, bringing the process down to a few simple steps.

Prep Your Content for Global Delivery

Much like marketing, handling localization of programming content under the traditional model involved many steps and inefficiencies. New programs slated for worldwide distribution often need to be dubbed or subtitled in multiple languages. Dalet StoreFront presents localization entities (e.g. companies like SDI Media, a Dalet customer) with the required proxy videos to begin their work. This eliminates the guesswork of who is to translate what, along with the transfer back and forth of materials. The Dalet MAM back-end manages the delivery in the right file format, and delivers it to the relevant, pre-configured endpoint (e.g. a secure cloud storage location, a CMS, etc.).

Add More Angles to Your Fast-Breaking News

With news organizations constantly updating their catalog, Dalet StoreFront answers the call for immediacy and access to assets that will help journalists deliver hyper-local reporting. News organisations can share and deliver media as soon as content has been ingested and logged into the Dalet MAM. It doesn’t matter whether content has been on the system for years, or has just arrived. It’s all equally available, giving newsrooms the material they need to build breaking stories and journalists the right media to localize their stories or bring in historical context.

Open Up Your Archives… Safely!

In the world of sports, archived content becomes even more valuable with time. Iconic plays and players are safely preserved in well-guarded content vaults. The sheer value of the material means no direct access for outside partners. Dalet StoreFront connects to the Dalet MAM archive, creating a separate security layer that tethers the archived assets in a safe manner. This allows clubs, leagues and other partners to browse the archives and select the materials they want to use in their productions, whether it’s for highlights, programs or game recaps. The Dalet back-end manages the entire process, from presenting the materials to requests for assets and delivery. Running on Amazon Web Services, Dalet StoreFront makes these and many other workflow scenarios happen. Every shape and form of content becomes searchable, browsable - and obtainable. It’s safe, efficient, and is set to transform the way businesses find, acquire and incorporate content into their own productions.

Do You Need Dalet StoreFront?

If your organization needs to seamlessly connect and expose content inventory to your community, empowering discoverability of untapped content, ripe for monetization and licensing, then Dalet StoreFront is the right solution for you!
A Cloud-native SaaS service running on Amazon Web Services, Dalet StoreFront brings in untapped revenues connecting content to clients. Learn more and request a Dalet StoreFront demo at https://www.dalet.com/business-services/storefront.
A Brand New Knowledge Base for Ooyala Flex Media Platform
Now part of Dalet, the Ooyala Flex Media Platform and complementary offerings are constantly being refreshed with new features and a fully revamped user interface. We continuously strive to bring our clients the best experience and, with that in mind, we have fully refreshed the Ooyala Flex Media Platform Knowledge Base, aligned to our new product design. Ta-daah! Check out those slick icons.

Increased collaboration, quality and regularity

Learning from product-development CI/CD best practices, we have brought this pipeline into how we document the Ooyala Flex Media Platform. This means increased collaboration, quality and regularity with which we update our documentation. We have taken a fresh look at the information architecture and the way users access content by looking at three key personas: developers, administrators and end users. To achieve this, we have created independent guides for configuring the platform, developing with the API/SDK, and using each application. Within these guides, a user can search, navigate, and identify the category their question fits into, speeding the route to the information required. Each section starts with a menu that targets hot topics and recent updates, highlighting the functionality we, and you, get excited about. For any other feature, the release notes are all accessible.

Better navigation, faster results

Delving into the guides, the reader will find that every article has its own helpful table of contents with anchored section titles. The “copy link” icon facilitates quick and easy sharing of content. So take a look at our new Knowledge Base, also available from the Ooyala Flex Media Platform landing page on dalet.com. We’ve received lots of positive feedback from our users, so please do get in touch with any suggestions for improvement. Happy reading :)
The Power of the Dalet Search
In today’s multi-platform world, simply put, finding stuff is becoming more complex. In the past, a mere browse through the shelves would suffice. But the digital era brings forth the "hoarding" syndrome. Just think, for example, of your own collection of home pictures – I know mine are in an unmanaged mess. But before we get into searching, we first need to address quantifying things. This is where a MAM's role is to be the record keeper of your valuable content and its associated information. More importantly, having a metadata model extensible enough to address the multiple levels and hierarchies of data is key to the success of your search power. As the amount of content owned, archived and distributed by broadcasters rapidly grows, it is also evolving, resulting in an exponential expansion of files that must be managed. What was once a one-to-one relationship between the "record" and the media has evolved into a model where a complex collection of elements (audio, video, text, captions, etc.) forms a record relationship. And don’t even get me started on versioning.

To illustrate what I’m talking about, let’s look at the example of the TV series “24,” starring Kiefer Sutherland. You could annotate an episode with the actor’s name, the actor’s character’s name, the actor’s birthday, and so on ... and do so for each element of that collection (say, the source master, the poster, the caption). Instead, by defining a taxonomy and ontology, when I specify that “24” ALWAYS has Jack Bauer in all the episodes and that the character Jack Bauer is played by actor Kiefer Sutherland, we then have a way to inherit that information down the tree for any element that is part of that tree: Series/Season/Episode. Then, for the users, simply saying that “this” video is actually 24/season2/ep7 will automatically inherit and apply all its “parent” metadata... without needing to enter each individual value.
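The inheritance idea above can be sketched as walking up the Series/Season/Episode tree and merging fields from the root down, so children override parents. The tree contents below are illustrative values for the "24" example, not real catalog data:

```python
def resolve_metadata(node, tree):
    """Merge metadata from the root of the hierarchy down to `node`;
    fields set closer to the leaf override inherited ones."""
    chain = []
    while node is not None:
        chain.append(node)
        node = tree[node].get("parent")
    merged = {}
    for name in reversed(chain):  # root first, leaf last
        merged.update(tree[name].get("fields", {}))
    return merged

# Illustrative hierarchy: fields entered once at the series level
# flow down to every season and episode automatically.
tree = {
    "24": {"parent": None,
           "fields": {"character": "Jack Bauer", "actor": "Kiefer Sutherland"}},
    "season2": {"parent": "24", "fields": {"year": 2003}},
    "ep7": {"parent": "season2", "fields": {"title": "Episode 7"}},
}
meta = resolve_metadata("ep7", tree)
```

Tagging a clip as 24/season2/ep7 thus yields the actor, character and year for free, with only the episode-specific fields entered by hand.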
This greatly reduces the amount of data entry (and time) necessary to quantify something when considering the immense amount of content associated with any given record. But the big impact of the rich metadata engine found in our MAM is its ability not only to search but to discover as well. What I mean is that there are typically two methods of searching. The first is explicit search – the user chooses the necessary fields to conduct their search, and then enters the values to obtain a result, e.g. looking for “Videos” with “Jack Bauer” in “Season 2.” The result is a list that the user must filter through to find what they want. The second way to search is through discovery, with the MAM's ability to display facets. For example, I could type “Actor’s height” (6'2") in “Action role,” “On Location” (Los Angeles). The return would display facets organized by user-defined relevancy, such as Series, Media Type and Actor Name, to then produce a resulting list along with facet boxes that the user can "filter down" within the search. The above example would show: "I found 12 Videos with Kiefer Sutherland as an actor," and “I found 34 assets shot in Los Angeles.” And then, by checking the 12 videos of Kiefer and the 34 in Los Angeles to cross-eliminate, I would find that there are actually three assets of Kiefer in Los Angeles. And then you would also see that the character Jack Bauer also has a cameo on “The Simpsons.” Rich metadata allows us to create relationships between assets at multiple levels. Those various facets allow you not only to navigate through hundreds if not thousands of media assets, but to easily discover specific content as well. And finally, having immediate access to these results for viewing or editing is what makes the Dalet MAM a harmonious ecosystem for not only information but also action on and manipulation of said assets.
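The faceted "cross-eliminate" described above boils down to two operations: counting how many assets carry each value of a field (the facet boxes) and intersecting the checked values (the filter). A minimal sketch over a toy catalog (the asset records are illustrative, not a real Dalet index):

```python
from collections import Counter

def facet_counts(assets, field):
    """Count how many assets carry each value of `field` (one facet box)."""
    return Counter(a[field] for a in assets if field in a)

def cross_filter(assets, **criteria):
    """Keep only assets matching every checked facet value."""
    return [a for a in assets if all(a.get(k) == v for k, v in criteria.items())]

# Illustrative mini-catalog.
assets = [
    {"id": 1, "actor": "Kiefer Sutherland", "location": "Los Angeles"},
    {"id": 2, "actor": "Kiefer Sutherland", "location": "Washington"},
    {"id": 3, "actor": "Other", "location": "Los Angeles"},
]
hits = cross_filter(assets, actor="Kiefer Sutherland", location="Los Angeles")
```

A production search engine builds these counts from an inverted index rather than a list scan, but the user-facing behavior, counts per facet plus progressive intersection, is exactly this.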
CCW, SOA, FIMS and the King & Queen of the Media Industry
All-Star Panel Sessions at CCW 2014

The NAB-backed CCW held some impressive panels, and our own Stephane Guez (Dalet CTO) and Luc Comeau (Dalet Business Development Manager) participated in two of the show’s hot topics.

MAM, It’s All About Good Vocabulary – Luc Comeau, Senior Business Development Manager

The saying goes, “behind every great man, there is a greater woman.” Within the panel – “Content Acquisition and Management Platform: A Service-Oriented Approach” – there was a lot of talk about content being king. In my view then, metadata is his queen. Metadata gives you information that a MAM can capitalize on, and allows you to build the workflow to enable your business vision. Done correctly, an enterprise MAM will give you visibility into the entire organization, allowing you to better orchestrate both the technical and human processes. Because at the end of the day, it’s the visibility of the entire organization that allows you to make better decisions, like whether or not you need to make a change or adapt your infrastructure to accommodate new workflows. In our session, the conversation very quickly headed towards the topic of interoperability. Your MAM must have a common language to interface with all the players. If it doesn’t, you will spend an enormous amount of time translating so these players can work together. And if the need arises, and it usually does, to replace one component with another that speaks a foreign language, well then, you are back to square one. A common framework will ensure a smooth sequence through production and distribution. A common framework, perhaps, such as FIMS…

The One Thing Everyone Needs to Know About FIMS – Stephane Guez, Dalet CTO

I was invited by Janet Gardner, president of Perspective Media Group, Inc., to participate in the FIMS (Framework for Interoperable Media Services) conference panel she moderated at CCW 2014. The session featured Loic Barbou, chair of the FIMS Technical Board, Jacki Guerra, VP, Media Asset Services for A+E Networks, and Roman Mackiewicz, CIO Media Group at Bloomberg – two broadcasters that are deploying FIMS-compliant infrastructures. The aim of the session was to get the broadcasters’ points of view on their usage of the FIMS standard. The FIMS project was initiated to define standards that enable media systems to be built using a Service-Oriented Architecture (SOA). FIMS has enormous potential benefits for both media organizations and the vendors/manufacturers that supply them, defining common interfaces for archetypal media operations such as capture, transfer, transform, store and QC. Global standardization of these interfaces will enable us, as an industry, to respond more quickly and cost-effectively to innovation and the constantly evolving needs and demands of media consumers. Having begun in December 2009, the FIMS project is about to enter its 6th year, but the immense scale of the task is abundantly clear, with the general opinion of the panelists being that we are at the beginning of a movement – still very much a work-in-progress with a lot of work ahead of us. One thing, however, was very clear from the discussion: broadcasters need to be the main driver for FIMS. In doing so, they will find there are challenges and trade-offs. FIMS cannot be adopted overnight. There are many existing, complex installations that rely on non-FIMS equipment. It will take some time before these systems can be converted to a FIMS-compliant infrastructure. Along with the technology change, there is the need to evolve the culture. For many, FIMS will put IT at the center of their production. A different world and skill set: many organizations will need to adapt both their workforce and workflow to truly reap the advantages of FIMS.
An IBC preview that won’t leave you dizzy
When we write these blog entries each week, we normally ensure we have a draft a few days in advance to make sure we have plenty of time to review, edit and make sure that the content is worth publishing. This entry was late, very late. This pre-IBC post has been hugely challenging to write for two reasons: Drone-mounted Moccachino machines are not on the agenda – but Bruce’s post last week definitely has me avoiding marketing “spin.” There are so many things I could talk about, it’s been a struggle to determine what to leave out. Earlier this year, at the NAB Show, we announced the combination of our Workflow Engine, including the Business Process Model & Notation (BPMN) 2.0-compliant workflow designer, and our Dalet AmberFin media processing platform. Now generally available in the AmberFin v11 release, we’ll be demonstrating how customers are using this system to design, automate and monitor their media transcode and QC workflows, in mission-critical multi-platform distribution operations. Talking of multi-platform distribution, our Dalet Galaxy media asset management now has the capability to publish directly to social media outlets such as Facebook and Twitter, while the new Media Packages feature simplifies the management of complex assets, enabling users to see all of the elements associated with a specific asset, such as different episodes, promos etc., visually mapped out in a clear and simple way. Making things simple is somewhat of a theme for Dalet at IBC this year. Making ingest really easy for Adobe Premiere users, the new Adobe Panel for Dalet Brio enables users to start, stop, monitor, quality check and ingest directly from the Adobe Premiere Pro interface with new recordings brought directly into the edit bin. We’ll also be demonstrating the newly redesigned chat and messaging module in Dalet Galaxy, Dalet WebSpace and the Dalet On-the-Go mobile application. 
The modern, and familiar, chat interface has support for persistent chats, group chats, messaging offline users and much more. Legislation and consolidation of workflows mean that captioning and subtitling are a common challenge for many facilities. We are directly addressing that challenge with a standards-based, cross-platform strategy for handling captioning workflows across Dalet Galaxy, Dalet Brio and Dalet AmberFin. With the ability to read and write standards-constrained TTML, caption and subtitle data is searchable and editable inside the Dalet Galaxy MAM, while Dalet Brio is able to capture caption- and subtitle-containing ancillary data packets to disk and play them back. Dalet AmberFin natively supports the extraction and insertion of caption and subtitle data to and from .SCC and .STL formats respectively, while tight integration with other vendors extends support to additional formats. There are so many other exciting new features I could talk about, but it’s probably best to see them for yourself live in Amsterdam. Of course, if you’re not going to the show, you can always get the latest by subscribing to the blog, or get in touch with your local representative for more information. There, and I didn’t even mention buzzwords 4K and cloud… …yet!
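One reason TTML makes caption data searchable is that it is plain XML: each caption is a timed `<p>` element, so any XML parser can lift the text and timings out for indexing. A minimal stdlib sketch on a tiny hand-written TTML fragment (the caption text is illustrative; real documents carry styling and region metadata as well):

```python
import xml.etree.ElementTree as ET

# A tiny illustrative TTML document with two timed captions.
TTML = """<tt xmlns="http://www.w3.org/ns/ttml">
  <body><div>
    <p begin="00:00:01.000" end="00:00:03.000">Hello, Amsterdam.</p>
    <p begin="00:00:04.000" end="00:00:06.500">Welcome to IBC.</p>
  </div></body>
</tt>"""

def extract_captions(ttml_text):
    """Return (begin, end, text) for every caption paragraph in the document."""
    ns = {"tt": "http://www.w3.org/ns/ttml"}
    root = ET.fromstring(ttml_text)
    return [(p.get("begin"), p.get("end"), p.text)
            for p in root.findall(".//tt:p", ns)]

captions = extract_captions(TTML)
```

Once extracted like this, the caption text can be indexed alongside the rest of an asset's metadata, which is what makes subtitles searchable inside a MAM.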
AmsterMAM – What’s New With Dalet at IBC (Part 1)
If you’re a regular reader of this blog, you may also receive our newsletters (if not, email us and we’ll sign you up) – the latest edition of which lists 10 reasons to visit Dalet at the upcoming IBC show (stand 8.B77). Over the next couple of weeks, I’m going to be using this blog to expand on some of those reasons, starting this week with a focus on Media Asset Management (MAM) and the Dalet Galaxy platform. Three years ago, putting together an educational seminar for SMPTE, Bruce Devlin (star of this blog and Chief Media Scientist at Dalet) interviewed a number of MAM vendors and end users about what a MAM should be and do. Pulling together the responses – starting with a large number of post-it notes and ending with a large Venn diagram – it was obvious that what “MAM” means to you is very dependent on how you want to use it. What we ended up with was a “core” of functionality that was common to all MAM-driven workflows and a number of outer circles with workflow-specific tasks. This is exactly how Dalet Galaxy is built – a unified enterprise MAM core, supporting News, Production, Sports, Archive, Program Prep and Radio, with task-specific tools unique to each business solution. At IBC we’ll be showcasing these workflows individually, but based on the same Dalet Galaxy core. For news, we have two demonstrations. Dalet News Suite is our customizable, Enterprise multimedia news production and distribution system. This IBC we’ll be showcasing new integration with social media and new tools for remote, mobile and web-based working. We’ll also be demonstrating our fully-packaged, end-to-end solution for small and mid-size newsrooms, Dalet NewsPack. In sports workflows, quick turnaround and metadata entry is essential – we’ll be showing how Dalet Sports Factory, with new advanced logging capabilities, enables fast, high-quality sports production and distribution. 
IBC sees the European debut of the new Dalet Galaxy-based Dalet Radio Suite, the most comprehensive, robust and flexible radio production and playout solution available, featuring Dalet OneCut editing, a rock-solid playout module with integration with numerous third parties and class-leading multi-site operations. Dalet Media Life provides a rich set of user tools for program prep, archive and production workflows. New for IBC this year, we’ll be previewing new “track stack” functionality for multilingual and multi-channel audio workflows, extended integration with Adobe Premiere and enhanced workflow automation. If you want to see how the Dalet Galaxy platform can support your workflow, or be central to multiple workflows, click here to book a meeting at IBC or get in touch with our sales team. You can also find out more about what we’re showing at IBC here.
More Secrets of Metadata
Followers of Bruce’s Shorts may remember an early episode on the Secrets of Metadata, where I talked about concentrating on the metadata for your business, because it adds the value that you need. It seems the world is catching onto the idea of the business value of metadata, and I don’t even have to wrestle a snake to explain it! Over the last 10 years of professional media file-based workflows, there have been many attempts at creating standardized metadata schemes. A lot of these have been generated by technologists trying to do the right thing or trying to fix a particular technical problem. Many of the initiatives have suffered from limited deployment and limited adoption because the fundamental questions they were asking centered on technology and not the business application. If you center your metadata around a business application, then you automatically take into account the workflows required to create, clean, validate, transport, store and consume that metadata. If you center the metadata around the technology, then some or all of those aspects are forgotten – and that’s where the adoption of metadata standards falls down. Why? It’s quite simple. Accurate metadata can drive business decisions that in turn improve efficiency and cover the cost of the metadata creation. Many years ago, I was presenting with the head of a well-known post house in London. He stood on stage and said, in his best Australian accent, “I hate metadata. You guys want me to make accurate, human-oriented metadata in my facility for no cost, so that you guys can increase your profits at my expense.” Actually, he used many shorter words that I’m not able to repeat here. The message that he gave is still completely valid today: if you’re going to create accurate metadata, then who is going to consume it? If the answer is no one, ever, then you’re doing something that costs money for no results. That approach does not lead to a good long-term business.
If the metadata is consumed within your own organization, then you ask the question: “Does it automate one or many processes downstream?” The automation might be a simple error check, a codec choice, an email generation or a target for a search query. The more consuming processes there are for a metadata field, the more valuable it can become. If the metadata is consumed in a different organization, then you have added value to the content by creating metadata. The value might be expressed in financial terms or in goodwill terms, but fundamentally a commercial transaction is taking place through the creation of that metadata.

The UK’s Digital Production Partnership and the IRT in Germany have both made great progress towards defining just enough metadata to reduce friction in B2B (business-to-business) file transfer in the broadcast world. CableLabs continues to do the same for the cable world, and standards bodies such as SMPTE are working with the EBU to make a core metadata definition that accelerates B2B e-commerce-type applications.

I would love to say that we’ve cracked the professional metadata problem, but the reality is that we’re still halfway through the journey. I honestly don’t know how many standards we need. A single standard that covers every media application will be too big and unwieldy. A different standard for each B2B transaction type will cost too much to implement and sustain. I’m thinking we’ll be somewhere between these two extremes in the “Goldilocks zone,” where there are just enough schemas and the implementation cost is justified by the returns that a small number of standards can bring.

As a Media Asset Management company, we spend our daily lives wrestling with the complexities of metadata. I live in hope that at least the B2B transaction element of that metadata will one day be as easy to author and as interoperable as a web page. Until then, why not check out the power of search in Luc’s blog.
Without good metadata, it would be a lot less exciting.
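The “does it automate one or many processes downstream?” test lends itself to a concrete sketch. Below is a minimal, hypothetical Python example; the field names, rules and codec identifiers are all illustrative and not from any published schema. It shows two downstream consumers of the same metadata record, an error check and a codec decision, which is exactly what makes a field worth creating:

```python
# Hypothetical sketch: two downstream processes consuming the same metadata.
# Field names and rules are illustrative, not from any published standard.

def check_errors(md):
    """A simple automated error check driven by metadata."""
    issues = []
    if md.get("duration_s", 0) <= 0:
        issues.append("missing or zero duration")
    if not md.get("title"):
        issues.append("missing title")
    return issues

def choose_codec(md):
    """A codec choice driven by the same metadata record."""
    return "xdcamhd50" if md.get("destination") == "broadcast" else "h264_proxy"

clip = {"title": "Mozart concerto", "duration_s": 1325, "destination": "broadcast"}

# The more consumers a field has, the more its creation cost is repaid.
assert check_errors(clip) == []
assert choose_codec(clip) == "xdcamhd50"
```

Each additional consumer (an email trigger, a search index, a QC branch) reuses the same creation effort, which is the economic argument the post is making.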
MXF AS02 and IMF: What's the Difference and Can They Work Together?
If you read my previous posts about IMF, you will already know what it is and how it works. But one of the questions I often get is: “How is IMF different from AS02, and will it replace it? After all, don’t they both claim to provide a solution to versioning problems?” In a nutshell, the answer is yes, IMF and AS02 are different, and no, IMF will not replace AS02; in fact the two complement and enhance each other. Let me explain: MXF AS02 (for broadcast versioning) and IMF (for movie versioning) grew up at the same time. And while both had very similar requirements in the early stages, we soon ended up in a situation where the level of sophistication required by the broadcasters’ versioning process never really reached critical industry mass. Efforts were continually made to merge the MXF AS02 work and the IMF work to prevent duplication of effort and to ensure that the widest number of interoperable applications could be met with the minimum number of specifications. When it came to merging the AS02 and IMF work, we looked at the question of what would be a good technical solution for all of the versioning that takes place in an increasingly complex value chain. It was clear that in the studio business there was a need for IMF, and that the technical solution should recognize the scale of the challenge. It came down to a very simple technical decision, and a simple case of math. AS02 does all of its versioning using binary MXF files, while IMF does all of its versioning using human-readable XML files. There are maybe 20 or 30 really good MXF binary programmers in the world today; XML is much more generic, and there must be hundreds of thousands of top-quality XML programmers out there. Given the growing amount of localized versioning that we are now faced with, it makes sense to use a more generic technology like XML to represent the various content versions whilst maintaining the proven AS02 media wrapping to store the essence components.
In a nutshell, this is the main difference between AS02 and IMF. Both standards have exactly the same pedigree and aim to solve exactly the same problems, but IMF benefits from a more sophisticated versioning model and therefore requires a greater degree of customization, and XML is a better means of achieving this. IMF is not going to replace AS02. Rather, the goal is to get to a place where we have a standardized IMF package as a means of exchanging versioned packages within the workflow. IMF will actually enhance the AS02 bundles that represent componentized clips that are already ingested, transcoded and interchanged today.
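The XML-versus-binary point is easy to demonstrate. The fragment below is a drastically simplified, deliberately non-conformant “composition playlist” sketch (real IMF CPLs are defined in SMPTE ST 2067 and look quite different); it exists only to show why a text-based versioning layer can be authored, read and diffed by any of those hundreds of thousands of XML programmers and their generic tooling:

```python
# Illustrative sketch only: a drastically simplified, NOT spec-conformant
# "composition playlist", showing why XML-based versioning (IMF) is more
# accessible than binary MXF structures (AS02). Element and attribute
# names are invented for illustration.
import xml.etree.ElementTree as ET

cpl = ET.Element("CompositionPlaylist", id="urn:uuid:example")
seq = ET.SubElement(cpl, "Sequence", language="fr-FR")  # a localized version
ET.SubElement(seq, "Resource", track="video", entry="0", duration="1440")
ET.SubElement(seq, "Resource", track="audio-fr", entry="0", duration="1440")

xml_text = ET.tostring(cpl, encoding="unicode")

# Any generic XML tool can inspect or diff this version description,
# whereas the equivalent AS02 version data lives inside binary MXF.
assert "audio-fr" in xml_text
```

The essence itself would still sit in MXF wrappers, exactly as the post describes: XML for the version description, proven AS02-style wrapping for the media.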
Shared Storage for Media Workflows… Part 1
In part one of this article, Dalet Director of Marketing Ben Davenport lists and explains the key concepts to master when selecting storage for media workflows. Part two, authored by Quantum Senior Product Marketing Manager Janet Lafleur, focuses on storage technologies and usages.

The first time I edited any media, I did it with a razor and some sticky tape. It wasn’t a complicated edit: I was stitching together audio recordings of two movements of a Mozart piano concerto. It also wasn’t that long ago, and I confess that on every subsequent occasion I have used a DAW (Digital Audio Workstation). I’m guessing that there aren’t many (or possibly any) readers of this blog who remember splicing video tape together (that died off with helical scan), but there are probably a fair few who have, in the past, performed a linear edit with two or more tape machines and a switcher. Today, however, most media operations (even down to media consumption) are non-linear; this presents some interesting challenges when storing, and possibly more importantly, recalling media. To understand why this is so challenging, we first need to think about the elements of the media itself and then the way in which these elements are accessed.

Media Elements

The biggest element, both in terms of complexity and volume of data, is video. High Definition (HD) video, for example, will pass “uncompressed” down a serial digital interface (SDI) cable at 1.5Gbps. Storing and moving content at these data rates is impractical for most media facilities, so we compress the signal by removing psychovisually, spatially, and often temporally redundant elements. Most compression schemes will ensure that decompressing or decoding the file requires fewer processing cycles than the compression process. However, it is inevitable that some cycles are necessary and, as video playback has a critical temporal element, it will always be necessary to “read ahead” in a video file and buffer at the playback client.
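A quick back-of-envelope calculation shows why compression is unavoidable at these rates. The sketch below uses the 1.5Gbps SDI figure above and a typical 50Mbps compressed rate (decimal units assumed throughout):

```python
# Back-of-envelope arithmetic for the rates discussed above (decimal units).
sdi_rate_gbps = 1.5      # "uncompressed" HD over an SDI cable
xdcam_rate_mbps = 50     # a typical compressed bit rate (e.g. XDCAM HD)

hour = 3600  # seconds

uncompressed_gb_per_hour = sdi_rate_gbps * hour / 8          # gigabytes
compressed_gb_per_hour = xdcam_rate_mbps * hour / 8 / 1000   # gigabytes

assert round(uncompressed_gb_per_hour) == 675
assert round(compressed_gb_per_hour, 1) == 22.5
# Compression buys roughly a 30x reduction in storage and bandwidth.
```

At 675 gigabytes per hour of material, even a modest uncompressed library would overwhelm most facility storage, which is the point being made above.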
Where temporally redundant components are also removed, such as in an MPEG Long-GOP compression scheme like Sony XDCAM HD, the buffering requirements are significantly increased, as the client will need to read all the temporal references, typically a minimum of one second of video, or 1Gb of data. When compared to video, the data rate of audio and ancillary data (captions, etc.) is small enough that it is often stored “uncompressed” and therefore requires less in the way of CPU cycles ahead of playback. This does, however, introduce some challenges for storage in the way that audio samples and ancillary data are accessed.

Media Access

Files containing video, even when compressed, are big: 50Mbps is about as low a bit rate as most media organizations will go. On its own, that might sound well within the capabilities of even consumer devices (typically a 7200rpm hard disk will have a “disk-to-buffer” transfer rate of around 1Gbps), but this is not the whole story:

- 50Mbps is the video bit rate; audio and ancillary data result in an additional 8-16Mbps
- Many operations will run “as fast as possible” (although processing cycles are often the restricting factor here), and even a playback or review process will likely include “off-speed” playback up to 8 or 16 times faster than real-time, the latter requiring over 1Gbps
- Many operations will utilize multiple streams of video

Sufficient bandwidth is therefore the first requirement for media operations, but it is not the only thing to consider. Take the simple example of a user reviewing a piece of long-form material, a documentary for instance, performing a typical manual QC check of the beginning, middle and end of the media. As the media is loaded into the playback client, the start of the file(s) will be read from storage and, more than likely, buffered into memory.
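The bandwidth points above can be sanity-checked with simple arithmetic, taking the upper end of the audio/ancillary range:

```python
# The arithmetic behind the ">1Gbps" claim above (decimal units).
video_mbps = 50
audio_anc_mbps = 16            # upper end of the 8-16Mbps range
stream_mbps = video_mbps + audio_anc_mbps

shuttle_speed = 16             # 16x "off-speed" playback
required_mbps = stream_mbps * shuttle_speed

assert required_mbps == 1056   # just over 1Gbps for a single client
# Multiple simultaneous streams multiply this requirement again.
```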
The user’s actions here are fairly predictable, and therefore developing and optimizing a storage system with deterministic behavior in this scenario is highly achievable. However, the user then jumps to a pseudo-random point in the middle of the program; at this point the playback client needs to do a number of things. First, it is likely that the player will need to read the header (or footer) of the file(s) to find the location of the video/audio/ancillary data samples that the user has chosen: a small, contained read operation where any form of buffering is probably undesirable. The player will then read the media elements themselves, but these too are read operations of varying sizes:

- Video: if a Long-GOP encoded file, potentially up to twice the duration of the GOP; in XDCAM HD, 1 sec ~6MB
- Audio: a minimum of a video frame’s worth of samples, ~6KB
- Ancillary data: dependent on what is stored, but considering captions and picture descriptions, ~6B

Architecting a storage system that ensures these reads of significantly different orders of magnitude happen quickly and efficiently, providing a responsive and deterministic experience for dozens of clients often accessing the exact same file(s), requires significant expertise and testing. Check back tomorrow for part two of “Shared Storage for Media Workflows,” where Janet Lafleur looks at how storage can be designed and architected to respond to these demands!
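The read sizes quoted above span roughly six orders of magnitude. Assuming 25fps video and 48kHz/24-bit PCM audio (typical broadcast values, not stated explicitly above), the arithmetic works out as follows:

```python
# Rough arithmetic behind the ~6MB / ~6KB read sizes above.
# Assumes 25fps video and 48kHz/24-bit (3-byte) PCM audio - typical
# broadcast values, stated here as assumptions.
one_sec_video_bytes = 50_000_000 // 8     # 1s of XDCAM HD at 50Mbps
audio_frame_bytes = (48_000 // 25) * 3    # one video frame of one channel

assert one_sec_video_bytes == 6_250_000   # ~6MB
assert audio_frame_bytes == 5_760         # ~6KB
# Serving MB-, KB- and byte-sized reads with low latency from the same
# storage is exactly the architectural challenge described above.
```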
Shared Storage for Media Workflows… Part 2
In this guest blog post, Quantum Senior Product Marketing Manager Janet Lafleur shares in-depth insights on storage technologies as well as general usage recommendations. Read part one of this two-part series here, written by Dalet Director of Marketing Ben Davenport, which details the key challenges for storage in today’s media workflows.

Storage Technologies for Media Workflows

Video editing has always placed higher demands on storage than any other file-based application, and with today’s higher-resolution formats, streaming video content demands even more performance from storage systems, with 4K raw requiring 1210 MB/sec per stream, 7.3 times more throughput than raw HD. In the early days of non-linear editing, this level of performance could only be achieved with direct-attached storage (DAS). As technology progressed, we were able to add shared collaboration even with many HD streams. Unfortunately, with the extreme demands of 4K and beyond, many workflows are resorting to DAS again, despite its drawbacks. With DAS, sharing large media files between editors and moving the content through the workflow means copying the files across the network or on removable media such as individual USB and Thunderbolt-attached hard drives. That’s not only expensive because it duplicates the storage capacity required; it also diminishes user productivity and can break version control protocols.

NAS vs. SAN for media workflows

For media workflows, the most common shared storage systems are scale-out Network Attached Storage (NAS), which delivers files over Ethernet, and shared SAN, which delivers content over Fibre Channel. Scale-out NAS aggregates I/O across a cluster of nodes, each with its own network connection, for far better performance than traditional NAS. However, even industry-leading NAS solutions running on 10Gb Ethernet struggle to deliver more than 400MB/sec for a single data stream.
In contrast, shared Storage Area Network (SAN) solutions can provide the 1.6 GB/sec performance required for editing streaming video files at resolutions at or greater than 2K uncompressed. In a shared SAN, access to shared volumes is carefully controlled by a server that manages file locking, space allocation and access authorization. By keeping this server outside the data path between the client and the storage, a shared SAN eliminates the NAS bottleneck and improves overall storage performance. Fortunately, there are media storage solutions that provide both NAS and SAN access from a shared storage infrastructure, giving the choice of IP or Fibre Channel protocols depending on user or application requirements.

Object storage for large-scale digital libraries

Regardless of whether it’s SAN or NAS, most disk storage systems are built with RAID. Using today’s multi-terabyte drives and RAID 6, it’s possible to manage a single RAID array of up to 12 drives with a total usable capacity of about 38 terabytes. However, even a modestly sized online asset collection requires more than a 12-disk array, putting it at higher risk of data loss from hardware failure. The alternative, dividing data across multiple RAID arrays, increases cost as well as management complexity. Also, failure of a 4TB or larger drive can result in increased risk and degraded performance for 24-48 hours or more while the RAID array rebuilds, depending on the workload. Object storage offers a fundamentally different, more flexible approach to disk storage. Object storage uses a flat namespace and abstracts the data addressing from the physical storage, allowing digital libraries to scale indefinitely. Unlike RAID, object storage can be dispersed geographically to protect against disk, node, rack, or even site failures without replication. When a drive fails, the object store redistributes the erasure-coded data without degrading user performance.
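The “about 38 terabytes” figure above is easy to reproduce. The sketch below assumes 4TB drives and a nominal 5% filesystem/formatting overhead; both are assumptions for illustration, not stated in the text:

```python
# Sketch of the RAID 6 capacity arithmetic above.
# Assumptions (not stated in the text): 4TB drives, ~5% formatting overhead.
drives = 12
drive_tb = 4.0
parity_drives = 2      # RAID 6 dedicates two drives' worth of capacity to parity

raw_usable_tb = (drives - parity_drives) * drive_tb
formatted_tb = raw_usable_tb * 0.95   # assumed filesystem/formatting overhead

assert raw_usable_tb == 40.0
assert round(formatted_tb) == 38      # matches the "about 38 terabytes" figure
```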
Because object storage is scalable, secure and cost-effective, and enables content to be accessible at disk access speeds from multiple locations, it’s ideal for content repositories. Object storage can be deployed with a file system layer using Fibre Channel or IP connectivity, or can be integrated directly into a media asset manager or other workflow application through HTTP REST. The best object storage implementations allow both.

Choosing the right storage for every step in the workflow

An ideal storage solution allows a single content repository to be shared throughout the workflow, but stored and accessed according to the performance and cost requirements of each workflow application.

Shared SAN for editing, ingest and delivery. To meet the high-performance storage demands of full-resolution video content, a SAN with Fibre Channel connections should be deployed for video editing workstations, ingest and delivery servers, and any other workflow operation that requires the 700 MB/sec per-user read or write performance needed to stream files at 2K resolution or above.

Object storage or scale-out NAS for transcoding, rendering and delivery. Transcoding and rendering servers should be connected to storage that can deliver 70-110 MB/sec over Ethernet with high IOPS (Input/Output Operations Per Second) performance for much smaller files, often only 4-8KB in size. While scale-out NAS and object storage can both fulfill this requirement, solutions that can be managed seamlessly alongside SAN-based online storage greatly simplify management and can reduce costs.

Object storage or LTO/LTFS tape for archiving. For large-scale asset libraries, durability and lower costs are paramount. Both object storage and LTO/LTFS tape libraries meet these requirements. But for facilities doing content monetization, object storage offers the advantage of supporting transcode and delivery operations while also offering economical, scalable long-term data protection.
Policy-based automation to migrate and manage all storage types. No workflow storage solution with multiple storage types is truly complete without automation. With intelligent automation, content can be easily migrated between and managed across different types of storage based on workflow-specific policies.

At a time when the digital footprint of content is growing exponentially due to higher-resolution formats, additional distribution formats, and more cameras capturing more footage, the opportunities for content creators and owners have never been greater. The trick is keeping that content readily available and easily accessible for users and workflow applications to do their magic. By choosing the right storage solutions and planning carefully, facilities can adopt new technologies to meet new demands without disrupting their workflows.
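Policy-based tiering of the kind described above can be sketched in a few lines. The tier names, record fields and thresholds below are purely illustrative, not taken from any product:

```python
# Hypothetical sketch of policy-based storage tiering as described above.
# Tier names, fields and thresholds are illustrative, not from any product.
def choose_tier(asset):
    """Map an asset to a storage tier using workflow-specific policy."""
    if asset["in_active_project"]:
        return "san"              # high-bandwidth editing/ingest tier
    if asset["days_since_last_access"] < 30:
        return "nas"              # transcode/render/delivery tier
    return "object_store"         # long-term library tier

assert choose_tier({"in_active_project": True,
                    "days_since_last_access": 0}) == "san"
assert choose_tier({"in_active_project": False,
                    "days_since_last_access": 7}) == "nas"
assert choose_tier({"in_active_project": False,
                    "days_since_last_access": 400}) == "object_store"
```

In a real deployment the policy engine would also trigger the migration itself and update the asset manager’s location records; the point here is only that the policy is ordinary, testable logic.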
5 reasons why media delivery standards might be good for your business
I am sure that, like me, you have been to a restaurant in a group where everyone orders from the set menu EXCEPT for that one person who orders the exotic, freshly prepared fugu, which requires an extra 30 minutes of preparation from a licensed fugu chef so that the customers don’t die eating it. Restaurant etiquette means that our main courses are served at the same time, forcing everyone to spend a long time hungry, waiting for the special case. And if you split the bill equally, the special case is subsidised by the people wanting the set meal. Does this model relate to the media industry? Is there a cost for being special? How can we reduce that cost? What gets done with the cost savings? How can you help? Fortunately, those 5 questions lead into 5 reasons why delivery standards might be a good idea.

1. The set meal is more efficient than the a la carte

I must confess that when I write this blog while hungry there will be a lot of food analogies. I’m quite simple really. In the “set meal” case, you can see how it’s easier for the kitchen to make a large volume of the most common meal and to deliver it more quickly and accurately than a large number of individual dishes. In the file delivery world, the same is true. By restricting the number of choices to a common subset that meets a general business need, it is a lot easier to test the implementations by multiple vendors and to ensure that interoperability is maximised for minimum cost. In a world where every customer can choose a different mix of codecs, audio layouts, and subtitle and caption formats, you quickly end up with an untestable mess. In that chaotic world, you will also get a lot of rejects. It always surprises me how few companies have any way of measuring the cost of those rejects, even though they are known to cause pain in the workflow. A standardised, business-oriented delivery specification should help to reduce all of these problems.

2. Is there a cost for being special?
I often hear the statement: “It’s only an internal format; we don’t need to use a standard.” The justification is often that the company can react more quickly and cheaply. Unfortunately, every decision has a lifespan. These short-term special decisions often start with a single vendor implementing the special internal format. Time passes, and then a second vendor implements it, then a third. Ultimately, the cost of custom-engineering the special internal format is spent 3 or 4 times with different vendors. Finally, the original equipment will reach end of life and the whole archive will have to be migrated. This is often the most costly part of the life cycle, as the obsolete special internal format is carefully converted into something new and hopefully more interchangeable. Is there a cost of being special? Oh yes, and it is often paid over and over again.

3. How can we reduce costs?

The usual way to reduce costs is to increase automation and to increase “lights out” operation. In the file delivery world, this means automation of transcode AND metadata handling AND QC AND workflow. At Dalet and AmberFin, all these skills are well understood and mastered. The cost savings come about when the number of variables in the system is reduced and the reliability increases. Limiting the choices of metadata, QC metrics, transcode options and workflow branches increases the likelihood of success. Learning from the experience of the Digital Production Partnership in the UK, it seems that tailoring a specific set of QC tests to a standardised delivery specification with standardised metadata will increase efficiency and reduce costs. The Joint Task Force on File Formats and Media Interoperability is building on the UK’s experience to create an American standard that will continue to deliver these savings.

4. What gets done with the cost savings?
The nice thing about the open standards approach is that the savings are shared between the vendors who make the software (they don’t have to spend as much money testing special formats) and the owners of that software (who spend less time and effort on onboarding, interoperability testing and regression testing when they upgrade software versions).

5. How can you help?

The easiest way is to add your user requirements to the Joint Task Force on File Formats and Media Interoperability list. These user requirements will be used to prioritise the standardisation work and help deliver a technical solution to a commercial problem. For an overview of some of the thinking behind the technology, you could check out my NAB2014 video on the subject, or the presentation given by Clyde Smith of Fox. Until next time.