
Aug 25, 2015
An IBC preview that won’t leave you dizzy


When we write these blog entries each week, we normally ensure we have a draft a few days in advance so there is plenty of time to review, edit and confirm that the content is worth publishing. This entry was late, very late. This pre-IBC post has been hugely challenging to write for two reasons:
  1. Drone-mounted mochaccino machines are not on the agenda – but Bruce’s post last week definitely has me avoiding marketing “spin.”
  2. There are so many things I could talk about, it’s been a struggle to determine what to leave out. 

Earlier this year, at the NAB Show, we announced the combination of our Workflow Engine, including the Business Process Model & Notation (BPMN) 2.0-compliant workflow designer, and our Dalet AmberFin media processing platform. Now generally available in the AmberFin v11 release, we’ll be demonstrating how customers are using this system to design, automate and monitor their media transcode and QC workflows, in mission-critical multi-platform distribution operations.

Talking of multi-platform distribution, our Dalet Galaxy media asset management now has the capability to publish directly to social media outlets such as Facebook and Twitter, while the new Media Packages feature simplifies the management of complex assets, enabling users to see all of the elements associated with a specific asset, such as different episodes, promos etc., visually mapped out in a clear and simple way.

Making things simple is something of a theme for Dalet at IBC this year. Making ingest really easy for Adobe Premiere users, the new Adobe Panel for Dalet Brio lets editors start, stop, monitor, quality-check and ingest directly from the Adobe Premiere Pro interface, with new recordings brought straight into the edit bin.

We’ll also be demonstrating the newly redesigned chat and messaging module in Dalet Galaxy, Dalet WebSpace and the Dalet On-the-Go mobile application. The modern, and familiar, chat interface has support for persistent chats, group chats, messaging offline users and much more.

Legislation and consolidation of workflows mean that captioning and subtitling are a common challenge for many facilities. We are directly addressing that challenge with a standards-based, cross-platform strategy for handling captioning workflows across Dalet Galaxy, Dalet Brio and Dalet AmberFin. With the ability to read and write standards-constrained TTML, caption and subtitle data is searchable and editable inside the Dalet Galaxy MAM, while Dalet Brio is able to capture caption- and subtitle-carrying ancillary data packets to disk and play them back. Dalet AmberFin natively supports the extraction and insertion of caption and subtitle data to and from the .SCC and .STL formats respectively, while tight integration with partner vendors extends support to additional formats.
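To make TTML captions searchable, a system needs to pull the text and timing out of the document. As a rough illustration only (not Dalet's actual implementation), a standards-constrained TTML file can be parsed with a few lines of Python:

```python
import xml.etree.ElementTree as ET

# TTML documents live in the http://www.w3.org/ns/ttml namespace.
TT_NS = {"tt": "http://www.w3.org/ns/ttml"}

def extract_captions(ttml_path):
    """Yield (begin, end, text) tuples from a TTML caption file."""
    tree = ET.parse(ttml_path)
    for p in tree.getroot().iterfind(".//tt:p", TT_NS):
        # Join nested text nodes (spans, breaks) into one string.
        text = " ".join(t.strip() for t in p.itertext() if t.strip())
        yield p.get("begin"), p.get("end"), text

# Each (begin, end, text) tuple can then be indexed, which is what makes
# caption text searchable and editable alongside the asset's metadata.
```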

There are so many other exciting new features I could talk about, but it’s probably best to see them for yourself live in Amsterdam. Of course, if you’re not going to the show, you can always get the latest by subscribing to the blog, or contact your local representative for more information.

There, and I didn’t even mention the buzzwords 4K and cloud… yet!

YOU MAY ALSO LIKE...
Dalet Wins TVBEurope Best of Show Award at IBC2018
TVBEurope has announced that Dalet OnePlay has won the Best of Show award at IBC 2018. TVBEurope’s Best of Show Awards are judged by a panel of engineers and industry experts on the criteria of innovation, feature set, cost efficiency and performance in serving the industry. Dalet OnePlay is an extension of Dalet Galaxy five that not only automates the control of all devices in the studio, it also fully leverages the MAM, NRCS and workflow orchestration capabilities of the platform to open up new forms of audience engagement and revenue opportunities, all the while optimizing the costs of the entire operation. Dalet OnePlay benefits any production of scripted shows, making it an ideal solution for newscasts, sports magazines, and live and live-to-tape studio shows. Learn more about Dalet OnePlay and about the TVBEurope Best of Show Awards.
The Power of the Dalet Search
In today’s multi-platform world, simply put, finding stuff is becoming more complex. In the past, a mere browse through the shelves would suffice. But the digital era brings forth the "hoarding" syndrome. Just think, for example, of your own collection of home pictures – I know mine are in an unmanaged mess.

But before we get into searching, we first need to address quantifying things. This is where a MAM's role is to be the record keeper of your valuable content and its associated information. More importantly, having a metadata model extensible enough to address the multiple levels and hierarchies of data is key to the success of your search power. As the amount of content owned, archived and distributed by broadcasters rapidly grows, it is also evolving, resulting in an exponential expansion of files that must be managed. What was once a one-to-one relationship between the "record" and the media has evolved into a model where a complex collection of elements (audio, video, text, captions, etc.) forms a record relationship. And don’t even get me started on versioning.

To illustrate what I’m talking about, let’s look at the example of the TV series “24,” starring Kiefer Sutherland. You could annotate an episode with the actor’s name, the actor’s character’s name, the actor’s birthday, and so on... and do the same for each element of that collection (say, the source master, the poster, the caption). Instead, having the ability to define a taxonomy and ontology means that when I specify that “24” ALWAYS has Jack Bauer in all the episodes, and that the character Jack Bauer is played by actor Kiefer Sutherland, we then have a way to inherit that information down the tree to any element that is part of that tree: Series/Season/Episode. Then, for the users, simply saying that “this” video is actually 24/Season 2/Episode 7 will automatically inherit and apply all its “parent” metadata... without needing to enter each individual value. This greatly reduces the amount of data entry (and time) necessary to quantify something, considering the immense amount of content associated with any given record.

But the big impact of the rich metadata engine found in our MAM is its ability not only to search but to discover as well. What I mean is that there are typically two methods of searching. The first is explicit search – the user chooses the necessary fields to conduct their search, and then enters the values to obtain a result, e.g. looking for “Videos” with “Jack Bauer” in “Season 2.” The result is a list that the user must filter through to find what they want. The second way to search is through discovery, with the MAM's ability to display facets. For example, I could type “Actor’s height” (6'2") in “Action role,” “On Location” (Los Angeles). The return would display facets organized by user-defined relevancy, such as Series, Media Type or Actor Name, producing a result list along with facet boxes that the user can "filter down" within the search. The above example would show: "I found 12 Videos with Kiefer Sutherland as an actor," and “I found 34 assets shot in Los Angeles.” Then, by checking the 12 videos of Kiefer and the 34 in Los Angeles to cross-eliminate, I would find that there are actually three assets of Kiefer in Los Angeles. And then you would also see that the character Jack Bauer also has a cameo on “The Simpsons.” Rich metadata allows us to create relationships between assets at multiple levels.
Those various facets allow you to not only navigate through hundreds if not thousands of media assets, but to easily discover specific content as well. And finally, having immediate access to these results for viewing or editing is what makes the Dalet MAM a harmonious ecosystem for not only information but also action/manipulation of said assets.
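As a back-of-the-envelope illustration of the inheritance idea described above (a sketch only, with made-up field names, not the Dalet data model), hierarchical metadata might look like this:

```python
# Hypothetical Series/Season/Episode tree: each node inherits its
# ancestors' metadata, so values are entered once at the top.
class Node:
    def __init__(self, name, parent=None, **metadata):
        self.name, self.parent, self.metadata = name, parent, metadata

    def effective_metadata(self):
        """Merge metadata down the tree; children override parents."""
        inherited = self.parent.effective_metadata() if self.parent else {}
        return {**inherited, **self.metadata}

series = Node("24", character="Jack Bauer", actor="Kiefer Sutherland")
season2 = Node("Season 2", parent=series)
ep7 = Node("Episode 7", parent=season2, location="Los Angeles")

# Saying "this video is 24/Season 2/Episode 7" pulls in the parents'
# metadata automatically -- no re-keying of actor or character names.
print(ep7.effective_metadata())
# {'character': 'Jack Bauer', 'actor': 'Kiefer Sutherland', 'location': 'Los Angeles'}
```

Facet counts of the kind described above then fall out naturally: they are just tallies over each field of the effective (inherited plus local) metadata of the matching assets.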
CCW, SOA, FIMS and the King & Queen of the Media Industry
All-Star Panel Sessions at CCW 2014

The NAB-backed CCW held some impressive panels, and our own Stephane Guez (Dalet CTO) and Luc Comeau (Dalet Business Development Manager) participated in two of the show’s hot topics.

MAM, It’s All About Good Vocabulary – Luc Comeau, Senior Business Development Manager

The saying goes, “behind every great man, there is a greater woman.” Within the panel – “Content Acquisition and Management Platform: A Service-Oriented Approach” – there was a lot of talk about content being king. In my view, then, metadata is his queen. Metadata gives you information that a MAM can capitalize on, and allows you to build the workflow to enable your business vision. Done correctly, an enterprise MAM will give you visibility into the entire organization, allowing you to better orchestrate both the technical and human processes. Because at the end of the day, it’s that visibility across the entire organization that allows you to make better decisions, like whether or not you need to make a change or adapt your infrastructure to accommodate new workflows.

In our session, the conversation very quickly headed towards the topic of interoperability. Your MAM must have a common language to interface with all the players. If it doesn’t, you will spend an enormous amount of time translating so these players can work together. And if the need arises, as it usually does, to replace one component with another that speaks a foreign language, you are back to square one. A common framework will ensure a smooth sequence through production and distribution. A common framework, perhaps, such as FIMS…

The One Thing Everyone Needs to Know About FIMS – Stephane Guez, Dalet CTO

I was invited by Janet Gardner, president of Perspective Media Group, Inc., to participate in the FIMS (Framework for Interoperable Media Services) conference panel she moderated at CCW 2014. The session featured Loic Barbou, chair of the FIMS Technical Board, Jacki Guerra, VP, Media Asset Services for A+E Networks, and Roman Mackiewicz, CIO Media Group at Bloomberg – two broadcasters that are deploying FIMS-compliant infrastructures. The aim of the session was to get the broadcasters’ points of view on their usage of the FIMS standard.

The FIMS project was initiated to define standards that enable media systems to be built using a Service-Oriented Architecture (SOA). FIMS has enormous potential benefits for both media organizations and the vendors/manufacturers that supply them, defining common interfaces for archetypal media operations such as capture, transfer, transform, store and QC. Global standardization of these interfaces will enable us, as an industry, to respond more quickly and cost-effectively to innovation and the constantly evolving needs and demands of media consumers.

Having begun in December 2009, the FIMS project is about to enter its sixth year, but the immense scale of the task is abundantly clear, with the general opinion of the panelists being that we are at the beginning of a movement – still very much a work-in-progress with a lot of work ahead of us. One thing, however, was very clear from the discussion: Broadcasters need to be the main driver for FIMS. In doing so, they will find there are challenges and trade-offs. FIMS cannot be adopted overnight. There are many existing, complex installations that rely on non-FIMS equipment. It will take some time before these systems can be converted to a FIMS-compliant infrastructure.
Along with the technology change, there is the need to evolve the culture. For many, FIMS will put IT at the center of their production. It is a different world and skill set, and many organizations will need to adapt both their workforce and workflow to truly reap the advantages of FIMS.
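FIMS itself specifies formal SOAP/REST service interfaces; purely as a loose analogy for the service-oriented idea (the names below are invented, not the FIMS schema), a common "transform" interface might be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a FIMS-style job request: every vendor's
# transform service accepts the same request shape, so components can
# be swapped without re-plumbing the whole workflow.
@dataclass
class TransformRequest:
    source_uri: str
    output_profile: str   # e.g. "XDCAM-HD-50"
    notify_endpoint: str  # where to report job status

class TransformService:
    """Common interface; each vendor supplies its own implementation."""
    def submit(self, request: TransformRequest) -> str:
        raise NotImplementedError  # returns a job ID

class VendorATranscoder(TransformService):
    def submit(self, request: TransformRequest) -> str:
        # Vendor-specific logic hidden behind the shared interface.
        print(f"Transcoding {request.source_uri} to {request.output_profile}")
        return "job-0001"

# The MAM/orchestrator only ever talks to TransformService, which is
# what lets a non-FIMS component be replaced without re-integration.
job_id = VendorATranscoder().submit(
    TransformRequest("s3://bucket/master.mxf", "XDCAM-HD-50", "http://mam/notify"))
```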
AmsterMAM – What’s New With Dalet at IBC (Part 1)
If you’re a regular reader of this blog, you may also receive our newsletters (if not, email us and we’ll sign you up) – the latest edition of which lists 10 reasons to visit Dalet at the upcoming IBC show (stand 8.B77). Over the next couple of weeks, I’m going to be using this blog to expand on some of those reasons, starting this week with a focus on Media Asset Management (MAM) and the Dalet Galaxy platform.

Three years ago, putting together an educational seminar for SMPTE, Bruce Devlin (star of this blog and Chief Media Scientist at Dalet) interviewed a number of MAM vendors and end users about what a MAM should be and do. Pulling together the responses – starting with a large number of post-it notes and ending with a large Venn diagram – it was obvious that what “MAM” means to you is very dependent on how you want to use it. What we ended up with was a “core” of functionality that was common to all MAM-driven workflows and a number of outer circles with workflow-specific tasks. This is exactly how Dalet Galaxy is built – a unified enterprise MAM core, supporting News, Production, Sports, Archive, Program Prep and Radio, with task-specific tools unique to each business solution. At IBC we’ll be showcasing these workflows individually, but based on the same Dalet Galaxy core.

For news, we have two demonstrations. Dalet News Suite is our customizable, enterprise multimedia news production and distribution system. This IBC we’ll be showcasing new integration with social media and new tools for remote, mobile and web-based working. We’ll also be demonstrating our fully-packaged, end-to-end solution for small and mid-size newsrooms, Dalet NewsPack.

In sports workflows, quick turnaround and metadata entry are essential – we’ll be showing how Dalet Sports Factory, with new advanced logging capabilities, enables fast, high-quality sports production and distribution. IBC sees the European debut of the new Dalet Galaxy-based Dalet Radio Suite, the most comprehensive, robust and flexible radio production and playout solution available, featuring Dalet OneCut editing, a rock-solid playout module with integrations with numerous third parties, and class-leading multi-site operations. Dalet Media Life provides a rich set of user tools for program prep, archive and production workflows. New for IBC this year, we’ll be previewing new “track stack” functionality for multilingual and multi-channel audio workflows, extended integration with Adobe Premiere and enhanced workflow automation.

If you want to see how the Dalet Galaxy platform can support your workflow, or be central to multiple workflows, click here to book a meeting at IBC or get in touch with our sales team. You can also find out more about what we’re showing at IBC here.
A Three-Platform Approach: Dalet Galaxy, Dalet Brio and Dalet AmberFin
So far, 2014 has been the year of mergers and acquisitions within the broadcast industry. As previously reported on this blog, not all this M&A activity is driven by the same customer-focused aims. However, in the case of Dalet, our recent strategic acquisition of AmberFin has the customer clearly in mind. The merging of the two companies enables our new, enlarged and enriched company to cover significantly more bases within file-based workflow environments. From IBC 2014, Dalet will offer three technology platforms: Dalet Galaxy, Dalet Brio and Dalet AmberFin, leveraging the knowledge and technologies of both companies to deliver a broader and deeper set of solutions. It’s worth looking under the hood and understanding why this is so important. For readers who are new to some parts of the Dalet product family, let me shed a little light on these platforms:

Dalet Galaxy is the latest and most advanced version of the Dalet Media Asset Management (MAM) platform and the most recent evolution of Dalet Enterprise Edition. This landmark development initiative leverages more than 10 years of successful MAM development and customer input. Dalet Galaxy is the industry's first business-centric MAM platform developed to manage media workflows, systems and assets throughout the multimedia production and distribution chain.

Dalet Brio is an innovative and cost-effective platform for broadcast customers looking for non-proprietary solutions to digitize and play back their content. Constructed using Dalet Brio servers (IT-based ingest and playout servers for SD and HD content), it also provides a powerful set of user tools and applications to help deliver video workflows.

Dalet AmberFin is a high-quality, scalable transcoding platform with fully integrated ingest, mastering, QC and review functionality, enabling facilities to make great pictures in a scalable, reliable and interoperable way. AmberFin software runs on cost-effective, commodity IT hardware that can adapt and grow as the needs of your business change.

Advanced Integration Capabilities to deliver new workflows

As a specialist in MAM-driven workflows, Dalet has been actively looking at delivering end-to-end workflows, and we all know that one of the biggest problems we encounter is making the various workflow components work together efficiently and intelligently. This is the reason we, at Dalet and AmberFin, have always been strong supporters of industry standards as a means to ease integration issues when building workflows. Each of the three Dalet platforms possesses powerful integration capabilities, based on standards and APIs, which enable every product built on these platforms to be integrated within overall workflows.

Most importantly, we believe that the greatest added value we can bring to our customers comes from tight integration between these three platforms, empowering workflow optimization that was previously unimaginable. This vision goes well beyond what any industry standard or even proprietary API can achieve. Let’s take an example: in today’s modern workflows, media will be transcoded at a variety of touch points in the production and distribution process, potentially degrading the source quality over successive generations. At Dalet, we strive within the AmberFin platform to minimize quality degradation at each step of the process, but we recognize this is not enough. In fact, we still believe that “the best transcode is no transcode.” This can only be achieved by exploiting key metadata (technical, editorial and rights metadata) stored in the MAM platform in order to make smart decisions on when to transcode or not, and what type of transcode profile to apply. And this is just one of the ideas we have.

At IBC this year, we will be showcasing some fantastic new features and facilities that are possible using the new, extended and enriched Dalet portfolio of workflow solutions. Check out our exciting theater line-up here for the next few days. We’re still booking demos, so it’s not too late to book a meeting: http://www.dalet.com/events/ibc-amsterdam-2014. To learn more about Dalet’s strategic acquisition of AmberFin, download the following white paper: http://www.dalet.com/white-paper/dalet-and-amberfin.
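As a toy illustration of that metadata-driven "no transcode" decision (invented field names and values, not the actual Dalet logic), the check might look something like this:

```python
# Hypothetical technical metadata held in the MAM for a media asset.
asset = {"codec": "XDCAM-HD", "bitrate_mbps": 50, "resolution": "1920x1080"}

# Delivery target profile (again, illustrative values only).
target = {"codec": "XDCAM-HD", "bitrate_mbps": 50, "resolution": "1920x1080"}

def plan_delivery(asset, target):
    """Decide from metadata whether a transcode is needed at all."""
    if all(asset.get(k) == v for k, v in target.items()):
        return "rewrap-only"   # the best transcode is no transcode
    return f"transcode-to-{target['codec']}"

print(plan_delivery(asset, target))  # -> rewrap-only
```

The point of the sketch is that the decision is made from metadata alone, without touching (and therefore without degrading) the essence.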
An Amsterdam Education! … No, Not That Type of Education
Maybe it’s a result of having two teachers as parents, but I am passionate about education and, particularly, education in our industry. Technology and innovation move forward so fast in our business that even a seasoned industry professional can sometimes find it tricky to keep pace. That’s why I’m so excited to be doing something a little different with the Dalet Theater at IBC this year – here’s what we’ve got going on.

Dalet @ IBC
One of the primary reasons for visiting the IBC Show is to find out what’s new. Each morning, about an hour after the show opens, we will host a short presentation to explore all the key announcements that Dalet is making at IBC. Whatever your reasons for visiting IBC, this is a great opportunity to find out what’s new.

Bruce’s (Orange) Shorts
After a short break, Bruce Devlin (aka Mr. MXF) will be back on stage to preview a brand new series of Bruce’s Shorts, due out later this year. Every day at 13:00 and 16:00, Bruce will present two short seminars on new technologies and trends.

Partners with Dalet
Across the globe, Dalet works with a number of distributors and resellers who package Dalet solutions and applications with other tools to meet the needs of their geographies. We’ve invited some of our partners to talk about how they’ve used Dalet and other technologies to address the needs of their regions (12:00).

Product Focus
If you want to know a little bit more about Dalet products and give your feet a bit of a rest, at 14:00 each day we’ll be focusing on part of the Dalet portfolio. Click here to see what’s on when!

Case Studies
There’s no better way to learn than from someone else’s success. We will feature a number of case studies at 15:00, followed by Q&A, based on the most cutting-edge deployments of the past year.

Dalet Keynote
The big one… each day of the show (Friday through Monday), at 17:00, we’ve partnered with industry giants, including Adobe, Quantum and others, to bring you Dalet Keynotes, which will focus on the biggest challenges facing our industry today. There will also be some light refreshments and an opportunity to network with speakers and peers after the presentation. We’re expecting standing-room-only for the Dalet Keynote sessions, so register your interest (Dalet+Adobe; Dalet+Quantum) and we’ll do our best to save you a seat.

It’s going to be an amazing lineup with something for everybody – be sure to check the full Dalet Theater schedule and stop by the stand during the show for the latest additions and updates. Of course, if you want to talk one-on-one with a Dalet solutions expert or have an in-depth demo tailored to your requirements, you can click here to book a meeting with us at the show. We'll be in hall 8, stand 8.B77. We can’t wait to see you there – but if you’re more of a planner and want to know what to expect elsewhere on the Dalet stand, visit our dedicated IBC page on the Dalet website. Who knows, you might even stumble across some intriguing bits of information or a clue (or two) for what we might be announcing at the show (hint, hint!). We’re looking forward to seeing you in Amsterdam! Until then…
More Secrets of Metadata
Followers of Bruce’s Shorts may remember an early episode on the Secrets of Metadata, where I talked about concentrating on your metadata for your business, because it adds the value that you need. It seems the world is catching onto the idea of the business value of metadata, and I don’t even have to wrestle a snake to explain it!

Over the last 10 years of professional media file-based workflows, there have been many attempts at creating standardized metadata schemes. A lot of these have been generated by technologists trying to do the right thing or trying to fix a particular technical problem. Many of the initiatives have suffered from limited deployment and limited adoption because the fundamental questions they were asking centered on technology and not the business application. If you center your metadata around a business application, then you automatically take into account the workflows required to create, clean, validate, transport, store and consume that metadata. If you center the metadata around the technology, then some or all of those aspects are forgotten – and that’s where the adoption of metadata standards falls down. Why? It’s quite simple. Accurate metadata can drive business decisions that in turn improve efficiency and cover the cost of the metadata creation.

Many years ago, I was presenting with the head of a well-known post house in London. He stood on stage and said, in his best Australian accent, “I hate metadata. You guys want me to make accurate, human-oriented metadata in my facility for no cost, so that you guys can increase your profits at my expense.” Actually, he used many shorter words that I’m not able to repeat here. The message that he gave is still completely valid today: If you’re going to create accurate metadata, then who is going to consume it?

If the answer is no one, ever, then you’re doing something that costs money for no results. That approach does not lead to a good long-term business. If the metadata is consumed within your own organization, then you ask the question: “Does it automate one or many processes downstream?” The automation might be a simple error check, a codec choice, an email generation or a target for a search query. The more consuming processes there are for a metadata field, the more valuable it can become. If the metadata is consumed in a different organization, then you have added value to the content by creating metadata. The value might be expressed in financial terms or in good-will terms, but fundamentally a commercial transaction is taking place through the creation of that metadata.

The UK’s Digital Production Partnership and the IRT in Germany have both made great progress towards defining just enough metadata to reduce friction in B2B (business-to-business) file transfer in the broadcast world. CableLabs continues to do the same for the cable world, and standards bodies such as SMPTE are working with the EBU to make a core metadata definition that accelerates B2B e-commerce-type applications.

I would love to say that we’ve cracked the professional metadata problem, but the reality is that we’re still halfway through the journey. I honestly don’t know how many standards we need. A single standard that covers every media application will be too big and unwieldy. A different standard for each B2B transaction type will cost too much to implement and sustain.
I’m thinking we’ll be somewhere between these two extremes in the “Goldilocks zone,” where there are just enough schemas and the implementation cost is justified by the returns that a small number of standards can bring. As a Media Asset Management company, we spend our daily lives wrestling with the complexities of metadata. I live in hope that at least the B2B transaction element of that metadata will one day be as easy to author and as interoperable as a web page. Until then, why not check out the power of search from Luc’s blog. Without good metadata, it would be a lot less exciting.
Why Ingest to the Cloud?
With Cloud storage becoming cheaper and data transfer in to services such as Amazon S3 being free of charge, there are numerous reasons why ingesting to the Cloud should be part of any media organization’s workflow. So, stop trying to calculate how much storage your organization consumes by day, month or year, or whether you need a NAS, a SAN or a Grid, and find out why the Cloud could be just what your organization needs.

Easy Sharing of Content
Instead of production crews or field journalists spending copious amounts of time and money shipping hard drives to the home site, or being limited by the bandwidth of an FTP server when uploading content, with object storage services like Amazon S3 or Microsoft Azure, uploading content to the Cloud has become easy and cheap. Once content is uploaded to the Cloud, anyone with secure credentials can access it from anywhere in the world.

Rights Access to Content
In recent news, cloud storage services such as Apple iCloud were hacked and private content was stolen, increasing concern about security and access rights to content in the Cloud. With secure connections such as VPNs and rights access management tools, you can specify, by user or group, what content can be accessed on the Cloud and for how long. Both Microsoft and Amazon have set up security features to protect your data as well as to replicate content to more secure locations.

Cloud Services to Process the Data
By uploading content to the Cloud, you can set up backend services and workflows to run QC checks on the content, stream media, transcode to multiple formats, and organize the content for search and retrieval using a Media Asset Management (MAM) system hosted on the Cloud.

Cloud Scalability
Rather than buying an expensive tape library or continuing to purchase more hardware for spinning-disk storage, with cloud storage one can scale down or scale up with the click of a button. No need for over-provisioning.

Disaster Recovery
An organization can easily set up secure data replication from one site to another, or institute replication rules to copy content to multiple virtual containers, offering assurance that content will not be lost. Amazon S3 provides durable infrastructure to store important data and is designed for 99.999999999% durability of objects.

Moving Towards an OPEX Model
As operations and storage move to the Cloud, you can control your investment by paying for services and storage as you use them. Instead of investing in infrastructure maintenance and support, you can focus the investment on what makes a difference: the content, not the infrastructure that supports it.

Why Upload to the Cloud?
The Cloud is no longer a technology of the future; with cloud storage adopted by the likes of Google, Facebook and Instagram, it is the reality of today. By adopting this technology you control your investment by usage needs, back up your data and provide secure access to content for anyone with credentials, anywhere in the world. The biggest limitation now is bandwidth, and the hurdle is adjusting the current infrastructure to support Cloud operations. Many organizations are turning towards a hybrid Cloud model, where content and services are hosted both locally and via Cloud solutions. Learning from the Cloud experience, Dalet has spent the past few years evolving existing tools and services for the Cloud.
Dalet now offers direct ingest from the Dalet Brio video server to Amazon S3 Storage and, at NAB this year in Las Vegas, Dalet showcased the first MAM-based Newsroom on the Cloud. To learn more about Dalet ingest solutions, please visit the ingest application page.
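As a minimal sketch of what a direct-to-S3 ingest step looks like from code (the bucket and file names are placeholders, and this is not the Dalet Brio implementation), using the AWS boto3 SDK:

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the environment/IAM role

# Upload a finished recording to an S3 bucket (names are placeholders).
s3.upload_file("recording.mxf", "my-ingest-bucket",
               "ingest/2015-08-25/recording.mxf")

# Hand colleagues a time-limited link instead of shipping hard drives:
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-ingest-bucket",
            "Key": "ingest/2015-08-25/recording.mxf"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```

The presigned URL is one simple way to give "anyone with secure credentials" controlled, expiring access to content, which is the rights-access point made above.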
MXF AS02 and IMF: What's the Difference and Can They Work Together?
If you read my previous posts about IMF, you will already know what it is and how it works. But one of the questions I often get is "how is IMF different from AS02, and will it replace it? After all, don’t they both claim to provide a solution to versioning problems?" In a nutshell, the answer is yes, IMF and AS02 are different, and no, IMF will not replace AS02; in fact the two complement and enhance each other. Let me explain.

MXF AS02 (for broadcast versioning) and IMF (for movie versioning) grew up at the same time. And while both had very similar requirements in the early stages, we soon ended up in a situation where the level of sophistication required by the broadcasters’ versioning process never really reached critical industry mass. Efforts were continually made to merge the MXF AS02 work and the IMF work to prevent duplication of effort and to ensure that the widest number of interoperable applications could be met with the minimum number of specifications.

When it came to merging the AS02 and IMF work, we looked at the question of what would be a good technical solution for all of the versioning that takes place in an increasingly complex value chain. It was clear that in the studio business there was a need for IMF, and that the technical solution should recognize the scale of the challenge. It came down to a very simple technical decision, and a simple case of math. AS02 does all of its versioning using binary MXF files, while IMF does all of its versioning using human-readable XML files. There are maybe 20 or 30 really good MXF binary programmers in the world today; XML is much more generic, and there must be hundreds of thousands of top-quality XML programmers out there. Given the growing amount of localized versioning that we are now faced with, it makes sense to use a more generic technology like XML to represent the various content versions whilst maintaining the proven AS02 media wrapping to store the essence components.

In a nutshell, this is the main difference between AS02 and IMF. Both standards have exactly the same pedigree and aim to solve exactly the same problems, but IMF benefits from a more sophisticated versioning model and therefore requires a greater degree of customization – and XML is a better means of achieving this. IMF is not going to replace AS02. Rather, the goal is to get to a place where we have a standardized IMF package as a means of exchanging versioned packages within the workflow. IMF will actually enhance the AS02 bundles that represent componentized clips that are already ingested, transcoded and interchanged today.
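To make the XML point concrete, here is a deliberately simplified, made-up version description – illustrative of the idea only, and NOT the actual SMPTE IMF Composition Playlist schema:

```python
# A simplified, hypothetical "version playlist": human-readable XML that
# sequences immutable essence files into one territory's version.
# (Illustrative only -- not the real IMF CPL schema.)
version_playlist = """
<CompositionPlaylist title="Feature - German Broadcast Version">
  <Segment>
    <VideoResource file="feature_video.mxf" entry="00:00:00:00" duration="01:30:00:00"/>
    <AudioResource file="audio_german.mxf" entry="00:00:00:00" duration="01:30:00:00"/>
    <SubtitleResource file="subs_german.xml"/>
  </Segment>
</CompositionPlaylist>
"""

# The essence files above are written once (AS02-style wrapped media);
# each new territory or edit only needs a small new XML file like this,
# which any XML-literate developer can generate, diff or validate.
print(version_playlist)
```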
Shared Storage for Media Workflows… Part 1
In part one of this article, Dalet Director of Marketing Ben Davenport lists and explains the key concepts to master when selecting storage for media workflows. Part two, authored by Quantum Senior Product Marketing Manager Janet Lafleur, focuses on storage technologies and usages.

The first time I edited any media, I did it with a razor and some sticky tape. It wasn’t a complicated edit – I was stitching together audio recordings of two movements of a Mozart piano concerto. It also wasn’t that long ago, and I confess that on every subsequent occasion I have used a DAW (Digital Audio Workstation). I’m guessing that there aren’t many (or possibly any) readers of this blog who remember splicing video tape together (that died off with helical-scan), but there are probably a fair few who have, in the past, performed a linear edit with two or more tape machines and a switcher. Today, however, most media operations (even down to media consumption) are non-linear; this presents some interesting challenges when storing, and possibly more importantly, recalling media. To understand why this is so challenging, we first need to think about the elements of the media itself and then the way in which these elements are accessed.

Media Elements

The biggest element, both in terms of complexity and data rate, is video. High Definition (HD) video, for example, will pass “uncompressed” down a serial digital interface (SDI) cable at 1.5Gbps. Storing and moving content at these data rates is impractical for most media facilities, so we compress the signal by removing psychovisually, spatially, and often temporally redundant elements. Most compression schemes will ensure that decompressing or decoding the file requires fewer processing cycles than the compression process. However, it is inevitable that some cycles are necessary and, as video playback has a critical temporal element, it will always be necessary to “read ahead” in a video file and buffer at the playback client. Where temporally redundant components are also removed, such as in an MPEG LongGOP compression scheme like Sony XDCAM HD, the buffering requirements are significantly increased, as the client will need to read all the temporal references – typically a minimum of one second of video.

When compared to video, the data rate of audio and ancillary data (captions, etc.) is small enough that it is often stored “uncompressed” and therefore requires less in the way of CPU cycles ahead of playback – this does, however, introduce some challenges for storage in the way that audio samples and ancillary data are accessed.

Media Access

Files containing video, even when compressed, are big – 50Mbps is about as low a bit rate as most media organizations will go. On its own, that might sound well within the capabilities of even consumer devices – typically a 7200rpm hard disk will have a “disk-to-buffer” transfer rate of around 1Gbps – but this is not the whole story:
- 50Mbps is the video bit rate – audio and ancillary data result in an additional 8-16Mbps
- Many operations will run “as fast as possible” – although processing cycles are often the restricting factor here, even a playback or review process will likely include “off-speed” playback up to 8 or 16 times faster than real-time – the latter requiring over 1Gbps
- Many operations will utilize multiple streams of video

Sufficient bandwidth is therefore the first requirement for media operations, but this is not the only thing to consider.
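A quick back-of-the-envelope check of those numbers, using the example figures from the list above:

```python
# Storage bandwidth for one review client, using the article's figures:
# 50Mbps video plus 8-16Mbps of audio and ancillary data.
video_mbps = 50
audio_anc_mbps = 16          # upper end of the 8-16Mbps range
offspeed_factor = 16         # "off-speed" review playback at 16x real-time

per_stream = video_mbps + audio_anc_mbps   # 66 Mbps in real-time
offspeed = per_stream * offspeed_factor    # 1056 Mbps at 16x

print(f"Real-time read rate : {per_stream} Mbps")
print(f"16x off-speed read  : {offspeed} Mbps (over 1 Gbps from ONE client)")

# Multiply by the number of concurrent streams and clients, and it is
# clear why a single consumer-grade disk cannot feed a facility.
```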
If we take the simple example of a user reviewing a piece of long-form material – a documentary, for instance – a typical manual QC involves checking the beginning, middle and end of the media. As the media is loaded into the playback client, the start of the file(s) will be read from storage and, more than likely, buffered into memory. The user’s actions here are fairly predictable, and therefore developing and optimizing a storage system with deterministic behavior in this scenario is highly achievable. However, the user then jumps to a pseudo-random point in the middle of the program; at this point the playback client needs to do a number of things. First, it is likely that the player will need to read the header (or footer) of the file(s) to find the location of the video/audio/ancillary data samples that the user has chosen – a small, contained read operation where any form of buffering is probably undesirable. The player will then read the media elements themselves, but these too are read operations of varying sizes:
- Video: if a “LongGOP”-encoded file, potentially up to twice the duration of the “GOP” – in XDCAM HD, 1 sec ~6MB
- Audio: a minimum of a video frame’s worth of samples ~6KB
- Ancillary data: dependent on what is stored, but considering captions and picture descriptions ~6B

Architecting a storage system that ensures these reads of significantly different orders of magnitude happen quickly and efficiently – providing a responsive and deterministic experience even with dozens of clients often accessing the exact same file(s) – requires significant expertise and testing. Check back tomorrow for part two of “Shared Storage for Media Workflows,” where Janet Lafleur looks at how storage can be designed and architected to respond to these demands!
Shared Storage for Media Workflows… Part 2
In this guest blog post, Quantum Senior Product Marketing Manager Janet Lafleur shares in-depth insights on storage technologies as well as general usage recommendations. Read part one of this two-part series here, written by Dalet Director of Marketing Ben Davenport, which details the key challenges for storage in today’s media workflows.

Storage Technologies for Media Workflows

Video editing has always placed higher demands on storage than any other file-based application, and with today’s higher-resolution formats, streaming video content demands even more performance from storage systems, with 4K raw requiring 1210 MB/sec per stream – 7.3 times more throughput than raw HD. In the early days of non-linear editing, this level of performance could only be achieved with direct-attached storage (DAS). As technology progressed, we were able to add shared collaboration even with many HD streams. Unfortunately, with the extreme demands of 4K and beyond, many workflows are resorting to DAS again, despite its drawbacks. With DAS, sharing large media files between editors and moving the content through the workflow means copying the files across the network or on removable media such as individual USB and Thunderbolt-attached hard drives. That’s not only expensive because it duplicates the storage capacity required; it also diminishes user productivity and can break version-control protocols.

NAS vs. SAN for media workflows

For media workflows, the most common shared storage systems are scale-out Network Attached Storage (NAS), which delivers files over Ethernet, and shared SAN, which delivers content over Fibre Channel. Scale-out NAS aggregates I/O across a cluster of nodes, each with its own network connection, for far better performance than traditional NAS. However, even the industry-leading NAS solutions running on 10 Gb Ethernet struggle to deliver more than 400MB/sec for a single data stream. In contrast, shared Storage Area Network (SAN) solutions can provide the 1.6 GB/sec performance required for editing streaming video files at resolutions at or greater than 2K uncompressed. In a shared SAN, access to shared volumes is carefully controlled by a server that manages file locking, space allocation and access authorization. By placing this server outside the data path – so that data moves directly between the client and the storage – shared SAN eliminates the NAS bottleneck and improves overall storage performance. Fortunately, there are media storage solutions that provide both NAS and SAN access from a shared storage infrastructure, giving the choice of IP or Fibre Channel protocols depending on user or application requirements.

Object storage for large-scale digital libraries

Regardless of whether it’s SAN or NAS, most disk storage systems are built with RAID. Using today’s multi-terabyte drives and RAID 6, it’s possible to manage a single RAID array of up to 12 drives with a total usable capacity of about 38 terabytes. However, even a modestly sized online asset collection requires an array larger than 12 disks, putting it at higher risk of data loss from hardware failure. The alternative is dividing data across multiple RAID arrays, which increases the cost as well as the management complexity. Also, failure of a 4TB or larger drive can result in increased risk and degraded performance for 24-48 hours or more while the RAID array rebuilds, depending on the workload. Object storage offers a fundamentally different, more flexible approach to disk storage.
Object storage uses a flat namespace and abstracts the data addressing from the physical storage, allowing digital libraries to scale indefinitely. Unlike RAID, object storage can be dispersed geographically to protect from disk, node, rack, or even site failures without replication. When a drive fails, the object storage redistributes the erasure-code data without degrading user performance. Because object storage is scalable, secure and cost-effective, and enables content to be accessible at disk access speeds from multiple locations, it’s ideal for content repositories. Object storage can be deployed with a file system layer using Fibre Channel or IP connectivity, or can be integrated directly into a media asset manager or other workflow application through HTTP REST. The best object storage implementations allow both.

Choosing the right storage for every step in the workflow

An ideal storage solution allows a single content repository to be shared throughout the workflow, but stored and accessed according to the performance and cost requirements of each workflow application.

Shared SAN for editing, ingest and delivery. To meet the high-performance storage demands of full-resolution video content, a SAN with Fibre Channel connections should be deployed for video editing workstations, ingest and delivery servers, and any other workflow operation that requires the 700 MB/sec per-user read or write performance needed to stream files at 2K resolution or above.

Object storage or scale-out NAS for transcoding, rendering and delivery. Transcoding and rendering servers should be connected to storage that can deliver 70-110 MB/sec over Ethernet with high IOPS (Input/Output Operations Per Second) performance for much smaller files, often only 4-8KB in size. While scale-out NAS and object storage can both fulfill this requirement, solutions that can be managed seamlessly alongside SAN-based online storage greatly simplify management and can reduce costs.

Object storage or LTO/LTFS tape for archiving. For large-scale asset libraries, durability and lower costs are paramount. Both object storage and LTO/LTFS tape libraries meet these requirements. But for facilities doing content monetization, object storage offers the advantage of supporting transcode and delivery operations while also offering economical, scalable long-term data protection.

Policy-based automation to migrate and manage all storage types. No workflow storage solution with multiple storage types is truly complete without automation. With intelligent automation, content can be easily migrated between and managed across different types of storage based on workflow-specific policies.

At a time when the digital footprint of content is growing exponentially due to higher-resolution formats, additional distribution formats, and more cameras capturing more footage, the opportunities for content creators and owners have never been greater. The trick is keeping that content readily available and easily accessible for users and workflow applications to do their magic. By choosing the right storage solutions and planning carefully, facilities can move forward with new technologies to meet new demands, without disrupting their workflow.
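For reference, the RAID sizing mentioned above works out roughly like this (illustrative arithmetic assuming 4TB drives; real usable capacity varies with file system overhead):

```python
# Rough RAID 6 sizing using the figures from the article: 12 drives of
# 4TB each, with two drives' worth of capacity consumed by parity.
drives = 12
drive_tb = 4
parity_drives = 2            # RAID 6 tolerates two simultaneous failures

raw_tb = drives * drive_tb                       # 48 TB raw
usable_tb = (drives - parity_drives) * drive_tb  # 40 TB before formatting

print(f"Raw capacity   : {raw_tb} TB")
print(f"Usable (RAID 6): {usable_tb} TB, ~38 TB after formatting overhead")

# Anything bigger than one such array means multiple RAID groups -- more
# cost and management complexity, which is where object storage with
# erasure coding starts to win.
```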
Life before and after DPP (Digital Production Partnership)
People that know me will be aware that file-based workflows are a passion of mine. Ten years ago I was co-author of the MXF (Material Exchange Format) specification, and ever since I have been engaged in taking this neat SMPTE standard and using it to create a business platform for media enterprises of every size and scale. This is why I’m so excited by the Digital Production Partnership (DPP): it represents the first ratified national Application Specification of the MXF standard and is set to revolutionize the way that media facilities and broadcasters work. To explain what I mean, let’s compare life with a DPP ecosystem to life without.

Less pain to feel the gain
In a standardized DPP world, there would be a limited amount of pain and cost felt by everybody, but it would be shared equally amongst the organizations involved and incurred only once. After this point, our industry has a fantastic common interchange format to help encourage partnerships and build businesses. In an unstandardized world, where different facilities have decided to use different tools and variants of MXF or other formats, the major cost becomes the lack of third-party interoperability. Each time content is exchanged between different facilities, a media transcode or rewrap into the receiving format is required. This means that vendors in all the facilities will ultimately have to support all the file formats and metadata. The engineering required to implement and test takes time and costs money on an on-going basis.
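The combinatorics behind that cost are easy to sketch: without a common format, every pair of in-house formats potentially needs its own conversion path, whereas a shared interchange format needs only one mapping per facility. A quick illustration (my arithmetic, not a DPP figure):

```python
# Conversion paths between N facilities, each with its own in-house
# format, versus everyone mapping once to a common interchange format.
def pairwise_paths(n):
    return n * (n - 1) // 2   # every distinct pair needs a converter

def hub_paths(n):
    return n                  # one mapping per facility to the common format

for n in (5, 10, 20):
    print(f"{n:>2} facilities: {pairwise_paths(n):>3} pairwise vs {hub_paths(n)} with a common format")
#  5 facilities:  10 pairwise vs 5 with a common format
# 10 facilities:  45 pairwise vs 10 with a common format
# 20 facilities: 190 pairwise vs 20 with a common format
```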
Interoperable metadata helps the content creator

In a world that has adopted DPP, media and metadata interoperability is not an issue, since the format is built on a strong, detailed common interchange specification. In this homogeneous scenario, the resources that would have been used in the interoperability engineering process can be used in more creative and productive ways, such as programme making. Programme making is a process where most broadcasters utilise external resources. In a world without DPP, whenever a broadcaster or production facility receives a new file from an external facility, such as a post house, the question must be asked whether this file meets the requirements of their in-house standard. That evaluation process can lead to extra QC costs, in addition to possible media ingest, transcoding, conformance and metadata re-keying costs that need to be taken into account.

Building a business platform
This heterogeneous environment is an issue not just for interaction with external facilities: often different departments within the same major broadcaster will adopt slightly different file standards and metadata, making interoperability a big issue for them. As a result, today only about 70 per cent of transactions within companies are file-based – the remainder employ tape. The figure is much lower where external agencies are involved – here, only 10-15 per cent of transactions are file-based. The essence of the problem is the lack of a common interchange format to enable these transactions. DPP is the first open public interchange format that is specifically designed to address this issue. DPP is intended to transform today’s 20 per cent trickle into an 80 per cent flood in the shortest possible time. To find out more about DPP and how it can transform the way your operation works, as well as your effectiveness working with other organizations, read AmberFin’s White Paper on DPP.