
Sep 09, 2015
CCW, SOA, FIMS and the King & Queen of the Media Industry
The NAB-backed CCW held some impressive panels, and our own Stephane Guez (Dalet CTO) and Luc Comeau (Dalet Business Development Manager) participated in two of the show’s hot topics.


All-Star Panel Sessions at CCW 2014



MAM, It’s All About Good Vocabulary – Luc Comeau, Senior Business Development Manager
 
The saying goes, “behind every great man, there is a greater woman.” Within the panel – “Content Acquisition and Management Platform: A Service-Oriented Approach” – there was a lot of talk about content being king. In my view then, metadata is his queen. Metadata gives you information that a MAM can capitalize on, and allows you to build the workflows that enable your business vision. Done correctly, an enterprise MAM will give you visibility into the entire organization, allowing you to better orchestrate both the technical and human processes. Because at the end of the day, it’s the visibility of the entire organization that allows you to make better decisions, like whether or not you need to make a change or adapt your infrastructure to accommodate new workflows.

In our session, the conversation very quickly headed towards the topic of interoperability. Your MAM must have a common language to interface with all the players. If it doesn’t, you will spend an enormous amount of time translating so these players can work together. And if the need arises, as it usually does, to replace one component with another that speaks a foreign language, then you are back to square one. A common framework will ensure a smooth sequence through production and distribution. A common framework, perhaps, such as FIMS…
 
The One Thing Everyone Needs to Know About FIMS – Stephane Guez, Dalet CTO
 
I was invited by Janet Gardner, president of Perspective Media Group, Inc., to participate in the FIMS (Framework for Interoperable Media Services) conference panel she moderated at CCW 2014. The session featured Loic Barbou, chair of the FIMS Technical Board; Jacki Guerra, VP of Media Asset Services for A+E Networks; and Roman Mackiewicz, CIO of the Media Group at Bloomberg – two broadcasters that are deploying FIMS-compliant infrastructures. The aim of the session was to get the broadcasters’ points of view on their usage of the FIMS standard.
 
The FIMS project was initiated to define standards that enable media systems to be built using a Service-Oriented Architecture (SOA). FIMS has enormous potential benefits for both media organizations and the vendors/manufacturers that supply them, defining common interfaces for archetypal media operations such as capture, transfer, transform, store and QC. Global standardization of these interfaces will enable us, as an industry, to respond more quickly and cost-effectively to innovation and the constantly evolving needs and demands of media consumers.
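As a loose illustration of the idea of a common service interface (the class and method names below are invented for this sketch, not actual FIMS definitions):

```python
from abc import ABC, abstractmethod

# A common, vendor-neutral job interface: every archetypal operation
# (capture, transfer, transform, QC...) is driven the same way, so
# swapping one vendor's service for another needs no new glue code.
class MediaService(ABC):
    @abstractmethod
    def start_job(self, asset_id: str, params: dict) -> str:
        """Launch an operation, returning a job ID."""

    @abstractmethod
    def job_status(self, job_id: str) -> str:
        """Return 'queued', 'running', 'done' or 'failed'."""

class TransformService(MediaService):
    """A toy transform service implementing the common interface."""
    def __init__(self):
        self.jobs = {}

    def start_job(self, asset_id, params):
        job_id = f"job-{len(self.jobs) + 1}"
        self.jobs[job_id] = "done"   # a real service would queue work here
        return job_id

    def job_status(self, job_id):
        return self.jobs[job_id]

svc = TransformService()
jid = svc.start_job("asset-42", {"codec": "h264"})
print(jid, svc.job_status(jid))  # → job-1 done
```

Because the orchestration layer talks only to the abstract interface, a capture or QC service from a different vendor can be dropped in behind the same calls.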
 
Having begun in December 2009, the FIMS project is about to enter its sixth year, but the immense scale of the task is abundantly clear, with the general opinion of the panelists being that we are at the beginning of a movement – still very much a work in progress with a lot of work ahead of us.
 
One thing, however, was very clear from the discussion: broadcasters need to be the main driver for FIMS. In doing so, they will find there are challenges and trade-offs. FIMS cannot be adopted overnight. There are many existing, complex installations that rely on non-FIMS equipment, and it will take some time before these systems can be converted to a FIMS-compliant infrastructure. Along with the technology change, there is the need to evolve the culture. For many, FIMS will put IT at the center of their production. It is a different world with a different skill set, and many organizations will need to adapt both their workforce and workflows to truly reap the advantages of FIMS.

YOU MAY ALSO LIKE...
The story behind Dalet StoreFront: open innovation, team collaboration and workflow expansion
Economists have an unusual word to describe the value in simple commodities like gold and platinum. It’s “fungible,” meaning that a substance is exchangeable. One piece of gold is the same as another piece. When you’re ordering gold, you don’t need to specify anything apart from how much of it you want to buy. You can split it up, mould it, melt it, recombine it, and absolutely nothing has changed. You still only ever need to specify the weight of gold that you want to buy - or sell.

Most things are not like that. Cars aren’t fungible. Nor are houses. And nor is media. You can’t buy media by the ounce. Media has many more dimensions and characteristics, all of which affect its value. But that’s only part of the story, because any given piece of media will have a different value to different buyers. Wildlife footage has very little value to an organisation that specialises in motor racing.

Let’s look at this in more detail. The media landscape today is significantly different in almost every way to how it was thirty years ago. Films are now files. Negatives are numbers. Cupboards full of tapes and reels have migrated to the cloud. And “supervising” all of this is a Media Asset Management (MAM) system. Files are not physical things, and that opens up an incredible range of possibilities. But because you can’t store non-physical things on shelves, you need an all-embracing MAM system like Dalet to keep track of all the ephemeral properties of millions of blobs of data.

What is Dalet StoreFront?

Dalet StoreFront is a window into the hidden value of a media organisation's media assets. It allows existing Dalet users to display their content to other media organisations, safely and simply. Essentially, it uses Dalet’s ability to orchestrate content to provide a browsing and fulfillment back-end for Dalet StoreFront’s users. The beauty of this arrangement is that there is virtually zero extra effort needed to prepare media.
All the information - the metadata - about the media would have been input to the Dalet MAM as part of the normal process of onboarding media files. This is likely to include information about the title, authors and rights (including restrictions on usage), as well as data about what’s contained within the clip, possibly including timecode references. There would also be information about format, resolution and whether or not the clip is in HDR, for example. This metadata, which needs to be there anyway as part of the normal usage of a Dalet MAM system, is exactly what’s needed as the basis for a transaction with a potential buyer. And because of the richness of the metadata, Dalet StoreFront is able to make sure that a media purchaser only sees content that it is allowed to acquire.

Dalet StoreFront in Use

Imagine a subscription-based television provider specialising in travel and wildlife programming. Their world-class media content – programs, trailers, and B-roll – needs to be distributed to a global network of broadcasters and partners. In a traditional model, the broadcaster or partner would need to email a request for materials. This request could be for marketing material to promote a program, for highlights or the materials to create them, or for the program itself. On the receiving end of the request, the television provider would need to check the rights of the content and the agreement with the partner, search the materials, and send over a selection of proxy assets. Once confirmed, yet another step is needed to finalize the transaction and send the assets, hopefully in the right format, via a file transfer service like Aspera. Every step requires manual interaction and investigation. When pressed for time, corners get cut and only a sampling of what could be offered from the rich archives is shared for consideration.
It’s a daunting process that affects the entire operation and, more importantly, could shortchange the impact of the final material if a lesser-quality asset was provided.

Marketers Love Self-Serve

Partners and broadcasters require marketing materials to promote programming. By eliminating one-to-one requests, access to assets is predetermined, so only pre-approved marketing content is exposed to shoppers. Not only does this simplify the mass distribution of marketing material for new shows, it also makes it far more efficient to serve those broadcasters looking for a very specific asset...one that could promote the re-airing of an older program in a specific region.

Find that B-Roll!

Even a five-second shot used as B-roll can make all the difference to a producer looking for a specific shot to use in their highlight reel or production. Dalet StoreFront flips the traditional model and lets producers browse the catalog, as opposed to an individual sending producers a handful of shots that may or may not be relevant. Dalet StoreFront broadens the selection to include ALL suitable content available for use. Requests for assets are sent to sales, bringing the process down to a few simple steps.

Prep Your Content for Global Delivery

Much like marketing, handling localization of programming content under the traditional model involves many steps and inefficiencies. New programs slated for worldwide distribution often need to be dubbed or subtitled in multiple languages. Dalet StoreFront presents localization entities (e.g. companies like SDI Media, a Dalet customer) with the required proxy videos to begin their work. This eliminates the guesswork of who is to translate what, along with the transfer back and forth of materials. The Dalet MAM back-end manages the delivery in the right file format and delivers it to the relevant, pre-configured endpoint (e.g. a secure cloud storage location, a CMS, etc.).
Add More Angles to Your Fast-Breaking News

With news organizations constantly updating their catalogs, Dalet StoreFront answers the call for immediacy and access to assets that will help journalists deliver hyper-local reporting. News organisations can share and deliver media as soon as content has been ingested and logged into the Dalet MAM. It doesn’t matter whether content has been on the system for years or has just arrived. It’s all equally available, giving newsrooms the material they need to build breaking stories and journalists the right media to localize their stories or bring in historical context.

Open Up Your Archives… Safely!

In the world of sports, archived content becomes even more valuable with time. Iconic plays and players are safely preserved in well-guarded content vaults. The sheer value of the material means no direct access for outside partners. Dalet StoreFront connects to the Dalet MAM archive, creating a separate security layer that tethers the archived assets in a safe manner. This allows clubs, leagues and other partners to browse the archives and select the materials they want to use in their productions, whether for highlights, programs or game recaps. The Dalet back-end manages the entire process, from presenting the materials to handling requests for assets and delivery.

Running on Amazon Web Services, Dalet StoreFront makes these and many other workflow scenarios happen. Every shape and form of content becomes searchable, browsable - and obtainable. It’s safe, efficient, and set to transform the way businesses find, acquire and incorporate content into their own productions.

Do You Need Dalet StoreFront?

If your organization needs to seamlessly connect and expose content inventory to your community, empowering discoverability of untapped content ripe for monetization and licensing, then Dalet StoreFront is the right solution for you!
A cloud-native SaaS service running on Amazon Web Services, Dalet StoreFront brings in untapped revenue by connecting content to clients. Learn more and request a Dalet StoreFront demo at https://www.dalet.com/business-services/storefront.
A Brand New Knowledge Base for Ooyala Flex Media Platform
Now part of Dalet, the Ooyala Flex Media Platform and its complementary offerings are constantly being refreshed with new features and a fully revamped user interface. We continuously strive to bring our clients the best experience, and with that in mind, we have fully refreshed the Ooyala Flex Media Platform Knowledge Base, aligned with our new product design. Ta-daah! Check out those slick icons.

Increased collaboration, quality and regularity

Learning from CI/CD best practices in product development, we have brought this pipeline into how we document the Ooyala Flex Media Platform. This means increased collaboration, quality and regularity with which we update our documentation. We have taken a fresh look at the information architecture and the way users access content by looking at three key personas: developers, administrators and end users. To achieve this, we have created independent guides for configuring the platform, developing with the API/SDK, and using each application. Within these guides, a user can search, navigate, and identify the category their question fits into, speeding the route to the information required. Each section starts with a menu that targets hot topics and recent updates, highlighting the functionality that we, and you, get excited about. For any other feature, the release notes are all accessible.

Better navigation, faster results

Delving into the guides, the reader will find that every article has its own helpful table of contents with anchored section titles. The “copy link” icon facilitates quick and easy sharing of content. So take a look at our new Knowledge Base, also available from the Ooyala Flex Media Platform landing page on dalet.com. We’ve received lots of positive feedback from our users, so please do get in touch with any suggestions for improvement. Happy reading :)
"Dalet: the art of data" - a FEED Magazine article
We all know media companies are gathering lots of data about their audiences. This can induce some anxiety in viewers, especially when they realise content providers sometimes know more about their behaviour than they know themselves. But the final goal of audience data collection is – or if it isn’t, you should really rethink your business priorities – to provide better, more useful services and, as a result, to increase revenue. Read the full article in FEED Magazine.
Metadata is “The New Gold”!
Plain and simple. Some might categorize the above statement as an exaggeration; nevertheless, we persist! A number of technology trends got the lion’s share of buzz and display at IBC’s 50th edition: cloud and smart hybrid infrastructure, video over IP, workflow orchestration and automation, and last but not least big data, machine learning and artificial intelligence (AI). As game-changing as they can be, we believe that these are in fact enablers of a much bigger trend: better serving and engaging with audiences.

Content discovery for consumers

It is well documented that the success of online video relies, in part, on metadata. Metadata-centric workflows give viewers the freedom to become more engaged. They can discover and explore more content, navigating directly to the most interesting scenes (including by the intent of the scene, e.g. serene, suspenseful, etc.). Publishers can fully monetize viewer habits and experiences in the most effective way possible with a Media Asset Management (MAM) & Orchestration platform that allows end-to-end authoring and management of asset metadata in their production workflow. For on-demand video consumption, accurate description of content is key to helping recommendation engines narrow in on more relevant suggestions. For ad-based or product placement models, metadata helps define the optimal in-stream video insertion points, allowing publishers greater control and flexibility with their advertising strategies. Scene metadata such as character name, player name, topic and keyword become key. The more accurate and rich the description of these insertion points, the better advertisers can pick the slots that fit both their target audience and the brand experience they look to create.

Metadata-driven operations & workflows

Today, metadata is also at the heart of the orchestration of workflows and automation of processes, both of which become increasingly important to streamline and scale production and distribution operations.
Any process or action in the chain of operations can be triggered by a change to, or the fulfillment of a condition on, any field of the data model. These configurable metadata-driven workflows are extremely powerful. While the industry has moved away from the simplicity of "one profile per customer," we can today create an environment where a single workflow produces all the desired outputs just by changing the metadata that initiates a particular process.

Managing complex media objects

Metadata is core to structuring and managing complex media objects and their relations, enabling operations like versioning and packaging of track stacks or compositions. To enable better collaboration and flawless content transformation and curation, organizations need to disintermediate the supply chain. One of the key concepts is to avoid conforming a project or a composition until it actually needs to be packaged for an output. To enable this, media management platforms need to handle complex objects seamlessly so that users can work directly with the elementary components of a media package or project. This gives them the ability to manipulate, transform, share and repurpose content in a very efficient and agile way - a concept called transclusion. The Dalet Galaxy platform’s user tools, data model, and workflow engine offer a robust and configurable framework to power these types of operations. They support the industry's latest standards, such as IMF and DPP, among many others.

Augmenting media tagging & operations with AI

Video is a complex medium that requires both human-authored metadata, which is flexible and traditionally more accurate, and automatically-created metadata, which is quickly growing thanks to AI-powered services. AI is indeed a key next step for the media industry.
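Returning to the metadata-driven workflows described above: the rule-based triggering can be sketched in a few lines (all field names, values and process names here are invented for illustration, not actual Dalet configuration):

```python
# Each rule pairs a condition on metadata fields with the process it launches.
# Fields like "qc_status" and processes like "create_h264_proxy" are
# hypothetical examples, not real platform identifiers.
RULES = [
    (lambda md: md.get("qc_status") == "passed" and md.get("target") == "web",
     "create_h264_proxy"),
    (lambda md: md.get("qc_status") == "failed",
     "notify_qc_operator"),
]

def on_metadata_change(metadata):
    """Return every process whose condition the current metadata satisfies."""
    return [process for condition, process in RULES if condition(metadata)]

print(on_metadata_change({"qc_status": "passed", "target": "web"}))
# → ['create_h264_proxy']
print(on_metadata_change({"qc_status": "failed"}))
# → ['notify_qc_operator']
```

The point of the design is that changing only the metadata on an asset is enough to route it down a different branch of the same workflow.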
Dalet has recently showcased, at IBC 2017, some first prototypes connecting a selection of AI engines to the Dalet Galaxy platform in order to build new services that range from simple, automated content indexing and metadata generation, all the way to smart assistants that augment user operations. Deployable on-premises or in the cloud, these services produce time-correlated, multi-dimensional metadata from audio and video data, unlocking new insights. Coupling these services with the Dalet Galaxy platform provides broadcasters and media organizations with an end-to-end solution that will serve as an enabler to capture tomorrow's business opportunities and generate new benefits.
The Power of the Dalet Search
In today’s multi-platform world, simply put, finding stuff is becoming more complex. In the past, a mere browse through the shelves would suffice. But the digital era brings forth the "hoarding" syndrome. Just think, for example, of your own collection of home pictures – I know mine are an unmanaged mess. But before we get into searching, we first need to address quantifying things. This is where a MAM's role is to be the record keeper of your valuable content and its associated information. More importantly, having a metadata model extensible enough to address the multiple levels and hierarchy of data is key to the success of your search power.

As the amount of content owned, archived and distributed by broadcasters rapidly grows, it is also evolving, resulting in an exponential expansion of files that must be managed. What was once a one-to-one relationship between the "record" and the media has evolved into a model where a complex collection of elements (audio, video, text, captions, etc.) forms a record relationship. And don’t even get me started on versioning.

To illustrate what I’m talking about, let’s look at the example of the TV series “24,” starring Kiefer Sutherland. You could annotate an episode with the actor’s name, the actor’s character’s name, the actor’s birthday, and so on... and do so for each element of that collection (say, the source master, the poster, the caption). By defining a taxonomy and ontology so that “24” ALWAYS has Jack Bauer in all the episodes, and that the character Jack Bauer is played by actor Kiefer Sutherland, we then have a way to inherit that information down the tree for any element that is part of that tree: Series/Season/Episode. Then, for users, simply saying that “this” video is actually 24/season2/ep7 will automatically inherit and apply all of its “parent” metadata... without needing to enter each individual value.
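A minimal sketch of how such inheritance down a Series/Season/Episode tree might work (the data model here is heavily simplified; it is not the actual Dalet schema):

```python
class Node:
    """One level of the Series/Season/Episode hierarchy."""
    def __init__(self, name, metadata=None, parent=None):
        self.name = name
        self.metadata = metadata or {}
        self.parent = parent

    def resolved_metadata(self):
        # Walk up the tree; a child's own values override inherited ones.
        inherited = self.parent.resolved_metadata() if self.parent else {}
        return {**inherited, **self.metadata}

series = Node("24", {"character": "Jack Bauer", "actor": "Kiefer Sutherland"})
season2 = Node("Season 2", parent=series)
ep7 = Node("Episode 7", parent=season2)

# Attaching a video to 24/season2/ep7 pulls down all parent metadata
# without any manual entry at the episode level.
print(ep7.resolved_metadata())
# → {'character': 'Jack Bauer', 'actor': 'Kiefer Sutherland'}
```

Setting a value once at the series level makes it available to every season, episode and element underneath, which is exactly the data-entry saving described above.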
This greatly reduces the amount of data entry (and time) necessary to quantify something, considering the immense amount of content associated with any given record. But the big impact of the rich metadata engine found in our MAM is its ability not only to search but to discover as well. What I mean is that there are typically two methods of searching. The first is explicit search – the user chooses the necessary fields to conduct their search, then enters the values to obtain a result, e.g. looking for “Videos” with “Jack Bauer” in “Season 2.” The result is a list that the user must filter through to find what they want. The second way to search is through discovery, with the MAM's ability to display facets. For example, I could type “Actor’s height” (6'2") in “Action role,” “On Location” (Los Angeles). The return would display facets organized by user-defined relevancy, such as Series, Media Type and Actor Name, to then produce a resulting list along with facet boxes that the user can "filter down" within the search. The above example would show: "I found 12 Videos with Kiefer Sutherland as an actor," and “I found 34 assets shot in Los Angeles.” Then, by checking the 12 videos of Kiefer and the 34 in Los Angeles to cross-eliminate, I would find that there are actually three assets of Kiefer in Los Angeles. And then you would also see that the character Jack Bauer also has a cameo on “The Simpsons.” Rich metadata allows us to create relationships between assets at multiple levels. These various facets allow you not only to navigate through hundreds if not thousands of media assets, but to easily discover specific content as well. And finally, having immediate access to these results for viewing or editing is what makes the Dalet MAM a harmonious ecosystem for not only information but also the action/manipulation of said assets.
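Faceted discovery of this sort can be sketched in a few lines (the asset records below are a toy catalog invented for illustration):

```python
from collections import Counter

# A toy catalog; every record and value is invented for this sketch.
assets = [
    {"type": "video", "actor": "Kiefer Sutherland", "location": "Los Angeles"},
    {"type": "video", "actor": "Kiefer Sutherland", "location": "New York"},
    {"type": "poster", "actor": "Kiefer Sutherland", "location": "Los Angeles"},
    {"type": "video", "actor": "Elisha Cuthbert", "location": "Los Angeles"},
]

def facets(results, fields):
    """Count distinct values per field so the UI can show facet boxes."""
    return {f: Counter(a[f] for a in results if f in a) for f in fields}

def filter_down(results, **criteria):
    """Apply the facet values a user has ticked, narrowing the list."""
    return [a for a in results if all(a.get(k) == v for k, v in criteria.items())]

print(facets(assets, ["actor", "location"]))
# Cross-eliminating two facets narrows the result set:
narrowed = filter_down(assets, actor="Kiefer Sutherland", location="Los Angeles")
print(len(narrowed))  # → 2
```

Each facet box is just a count of distinct values over the current result set, and ticking boxes re-runs the filter, which is what makes discovery feel like navigation rather than query writing.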
An IBC preview that won’t leave you dizzy
When we write these blog entries each week, we normally ensure we have a draft a few days in advance to make sure we have plenty of time to review, edit and make sure that the content is worth publishing. This entry was late, very late. This pre-IBC post has been hugely challenging to write for two reasons: first, drone-mounted Moccachino machines are not on the agenda – and Bruce’s post last week definitely has me avoiding marketing “spin”; second, there are so many things I could talk about that it’s been a struggle to determine what to leave out.

Earlier this year, at the NAB Show, we announced the combination of our Workflow Engine, including the Business Process Model & Notation (BPMN) 2.0-compliant workflow designer, and our Dalet AmberFin media processing platform. Now generally available in the AmberFin v11 release, we’ll be demonstrating how customers are using this system to design, automate and monitor their media transcode and QC workflows in mission-critical multi-platform distribution operations.

Talking of multi-platform distribution, our Dalet Galaxy media asset management platform now has the capability to publish directly to social media outlets such as Facebook and Twitter, while the new Media Packages feature simplifies the management of complex assets, enabling users to see all of the elements associated with a specific asset, such as different episodes, promos etc., visually mapped out in a clear and simple way.

Making things simple is somewhat of a theme for Dalet at IBC this year. Making ingest really easy for Adobe Premiere users, the new Adobe Panel for Dalet Brio enables users to start, stop, monitor, quality check and ingest directly from the Adobe Premiere Pro interface, with new recordings brought directly into the edit bin. We’ll also be demonstrating the newly redesigned chat and messaging module in Dalet Galaxy, Dalet WebSpace and the Dalet On-the-Go mobile application.
The modern, and familiar, chat interface has support for persistent chats, group chats, messaging offline users and much more. Legislation and the consolidation of workflows mean that captioning and subtitling are a common challenge for many facilities. We are directly addressing that challenge with a standards-based, cross-platform strategy for handling captioning workflows across Dalet Galaxy, Dalet Brio and Dalet AmberFin. With the ability to read and write standards-constrained TTML, caption and subtitle data is searchable and editable inside the Dalet Galaxy MAM, while Dalet Brio is able to capture caption- and subtitle-containing ancillary data packets to disk and play them back. Dalet AmberFin natively supports the extraction and insertion of caption and subtitle data to and from the .SCC and .STL formats respectively, while tight integration with third-party products extends support to additional formats. There are so many other exciting new features I could talk about, but it’s probably best to see them for yourself live in Amsterdam. Of course, if you’re not going to the show, you can always get the latest by subscribing to the blog, or get in touch with your local representative for more information. There, and I didn’t even mention the buzzwords 4K and cloud… …yet!
AmsterMAM – What’s New With Dalet at IBC (Part 1)
If you’re a regular reader of this blog, you may also receive our newsletters (if not, email us and we’ll sign you up) – the latest edition of which lists 10 reasons to visit Dalet at the upcoming IBC show (stand 8.B77). Over the next couple of weeks, I’m going to be using this blog to expand on some of those reasons, starting this week with a focus on Media Asset Management (MAM) and the Dalet Galaxy platform. Three years ago, putting together an educational seminar for SMPTE, Bruce Devlin (star of this blog and Chief Media Scientist at Dalet) interviewed a number of MAM vendors and end users about what a MAM should be and do. Pulling together the responses – starting with a large number of post-it notes and ending with a large Venn diagram – it was obvious that what “MAM” means to you is very dependent on how you want to use it. What we ended up with was a “core” of functionality that was common to all MAM-driven workflows and a number of outer circles with workflow-specific tasks. This is exactly how Dalet Galaxy is built – a unified enterprise MAM core, supporting News, Production, Sports, Archive, Program Prep and Radio, with task-specific tools unique to each business solution. At IBC we’ll be showcasing these workflows individually, but based on the same Dalet Galaxy core. For news, we have two demonstrations. Dalet News Suite is our customizable, enterprise multimedia news production and distribution system. This IBC we’ll be showcasing new integration with social media and new tools for remote, mobile and web-based working. We’ll also be demonstrating our fully-packaged, end-to-end solution for small and mid-size newsrooms, Dalet NewsPack. In sports workflows, quick turnaround and metadata entry are essential – we’ll be showing how Dalet Sports Factory, with new advanced logging capabilities, enables fast, high-quality sports production and distribution.
IBC sees the European debut of the new Dalet Galaxy-based Dalet Radio Suite, the most comprehensive, robust and flexible radio production and playout solution available, featuring Dalet OneCut editing, a rock-solid playout module with integration with numerous third parties, and class-leading multi-site operations. Dalet Media Life provides a rich set of user tools for program prep, archive and production workflows. New for IBC this year, we’ll be previewing new “track stack” functionality for multilingual and multi-channel audio workflows, extended integration with Adobe Premiere, and enhanced workflow automation. If you want to see how the Dalet Galaxy platform can support your workflow, or be central to multiple workflows, click here to book a meeting at IBC or get in touch with our sales team. You can also find out more about what we’re showing at IBC here.
More Secrets of Metadata
Followers of Bruce’s Shorts may remember an early episode on the Secrets of Metadata, where I talked about concentrating your metadata on your business, because it adds the value that you need. It seems the world is catching on to the idea of the business value of metadata, and I don’t even have to wrestle a snake to explain it! Over the last 10 years of professional media file-based workflows, there have been many attempts at creating standardized metadata schemes. A lot of these have been generated by technologists trying to do the right thing or trying to fix a particular technical problem. Many of the initiatives have suffered from limited deployment and limited adoption because the fundamental questions they were asking centered on technology and not the business application. If you center your metadata on a business application, then you automatically take into account the workflows required to create, clean, validate, transport, store and consume that metadata. If you center the metadata on the technology, then some or all of those aspects are forgotten – and that’s where the adoption of metadata standards falls down. Why? It’s quite simple. Accurate metadata can drive business decisions that in turn improve efficiency and cover the cost of the metadata creation. Many years ago, I was presenting with the head of a well-known post house in London. He stood on stage and said in his best Australian accent: “I hate metadata. You guys want me to make accurate, human-oriented metadata in my facility for no cost, so that you guys can increase your profits at my expense.” Actually, he used many shorter words that I’m not able to repeat here. The message he gave is still completely valid today: If you’re going to create accurate metadata, then who is going to consume it? If the answer is no one, ever, then you’re doing something that costs money for no results. That approach does not lead to a good long-term business.
If the metadata is consumed within your own organization, then you ask the question: “Does it automate one or many processes downstream?” The automation might be a simple error check, a codec choice, an email generation or a target for a search query. The more consuming processes there are for a metadata field, the more valuable it can become. If the metadata is consumed in a different organization, then you have added value to the content by creating metadata. The value might be expressed in financial terms or in goodwill terms, but fundamentally a commercial transaction is taking place through the creation of that metadata. The UK’s Digital Production Partnership and the IRT in Germany have both made great progress towards defining just enough metadata to reduce friction in B2B (business-to-business) file transfer in the broadcast world. CableLabs continues to do the same for the cable world, and standards bodies such as SMPTE are working with the EBU to make a core metadata definition that accelerates B2B e-commerce-type applications. I would love to say that we’ve cracked the professional metadata problem, but the reality is that we’re still halfway through the journey. I honestly don’t know how many standards we need. A single standard that covers every media application will be too big and unwieldy. A different standard for each B2B transaction type will cost too much to implement and sustain. I’m thinking we’ll be somewhere between these two extremes, in the “Goldilocks zone,” where there are just enough schemas and the implementation cost is justified by the returns that a small number of standards can bring. As a Media Asset Management company, we spend our daily lives wrestling with the complexities of metadata. I live in hope that at least the B2B transaction element of that metadata will one day be as easy to author and as interoperable as a web page. Until then, why not check out the power of search in Luc’s blog.
Without good metadata, it would be a lot less exciting.
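The “consuming processes” idea above can be sketched in a few lines. The asset fields, rules and thresholds below are invented for illustration – they are not from any real MAM schema – but they show how one metadata record can feed several downstream consumers at once.

```python
# Hypothetical sketch: the value of a metadata field grows with the number
# of downstream processes that consume it. All names here are illustrative.

ASSET = {
    "title": "Evening News 2014-11-12",
    "language": "en-GB",
    "codec": "AVC-Intra 100",
    "duration_s": 1795,
}

def check_duration(asset):
    # Simple error check: assume a 30-minute slot allows at most 1800 s.
    return asset["duration_s"] <= 1800

def choose_transcode(asset):
    # Codec choice for a hypothetical web delivery target.
    return "h264-web" if asset["codec"].startswith("AVC") else "prores-proxy"

def search_terms(asset):
    # Target for a search query: index title words plus the language tag.
    return asset["title"].lower().split() + [asset["language"]]

# Three consumers for the same record: each one adds to the field's value.
consumers = [check_duration, choose_transcode, search_terms]
for consumer in consumers:
    print(consumer.__name__, "->", consumer(ASSET))
```

The more such consumers exist, the easier it is to justify the cost of authoring the metadata accurately in the first place.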
Why Doesn’t Anyone Label The Audio?
The great thing about language is its ability to let us exchange ideas and concepts, and hopefully create a business by doing so. With the growing number of multi-platform delivery opportunities, and ever-increasing bandwidths and channel densities, content owners have a growing opportunity to create revenue with their content. Successfully exploiting that opportunity means tailoring each version of the content to its audience, reducing friction and increasing the enjoyment of the viewer or listener. The blockbuster movie community has known for a long time that efficiently making versions of a movie and its collection of trailers on a territory-by-territory basis can make a significant difference to the number of people who watch that movie. I believe that we are entering an era where turbo-charging the versioning efficiency of media companies is going to be a dominant differentiator.

To reduce the costs of versioning and to make life simple for the creative human processes, it is necessary to automate the processes that can be done by machines (or in our case, software). To a company that deals with video, every issue looks like a video issue. The processes for segmenting video content and replacing elements are pretty well understood, and organizations like the UK's DPP have created standards for interchanging that segmentation information. In today’s blog, I'm going to assume that the video issues are largely understood and look at a “simple” issue that two customers approached me about here at the SMPTE Australia show.

Right now, on the planet, there are many more languages spoken than there are scripts for writing those languages down. There are also many more scripts than there are countries in the world. This makes the labeling of languages and scripts an interesting challenge for any media company, as the variables are virtually endless.
There are many schemes used in the world for labeling audio, and any naïve person entering the industry would assume that there must be some sort of global tag that everyone uses for identification ... right? Wrong. Traditionally, TV stations, broadcasters, content creators and others have created content for a specific market. Broadcasters, distributors, aggregators and others have sent their content to territories with only a handful of languages to cope with, and usually proprietary solutions for “track tagging” are developed and deployed. The compelling business need to streamline and standardize the labeling of audio channels hasn’t really existed until now. The internationalization of distribution compels us to find an agreed way in which labeling can be done.

Thankfully, someone got there before the media folks. The internet community has been here before - and quite recently. The internet standard RFC 5646 is very thorough and copes with the identification of primary languages as well as dialects, extinct languages and constructed languages such as Klingon. With such a comprehensive and interoperable specification, widely used for the delivery of web content to billions of devices every day, you'd think that any media system designer worth his or her salt would have this electronic document in their favorites list for regular look-up. You'd think ...

The MXF community knows a good thing when it sees it, so you'll find that when it comes to a standardized way to tag tracks in MXF, the SMPTE standard ST 377-4 uses RFC 5646 as its vocabulary for labeling. ST 377-4 additionally recognizes that each channel of an audio mix might contain a different language. Each channel might also belong to a group intended as a stereo group, a surround-sound group, or a mono group of one channel. This hard grouping defines the relationship of channels that should not be split.
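To make the tags concrete, here is a minimal sketch of pulling an RFC 5646 tag apart. It covers only the common "language[-script][-region]" shape – the full grammar also allows variants, extensions, private use and grandfathered tags, so treat this as illustration, not a conformant parser.

```python
import re

# Minimal RFC 5646 parsing sketch: common "language[-script][-region]" shape only.
TAG_RE = re.compile(
    r"^(?P<language>[a-z]{2,3})"          # ISO 639 code, e.g. "en", "tlh"
    r"(?:-(?P<script>[A-Z][a-z]{3}))?"    # ISO 15924 script, e.g. "Latn"
    r"(?:-(?P<region>[A-Z]{2}|\d{3}))?$"  # ISO 3166 region or UN M.49 number
)

def parse_tag(tag):
    m = TAG_RE.match(tag)
    if not m:
        raise ValueError(f"not a simple RFC 5646 tag: {tag!r}")
    # Keep only the subtags that are actually present.
    return {k: v for k, v in m.groupdict().items() if v}

print(parse_tag("en-GB"))       # {'language': 'en', 'region': 'GB'}
print(parse_tag("sr-Latn-RS"))  # Serbian, Latin script, Serbia
print(parse_tag("tlh"))         # yes, Klingon has a registered subtag
```

Even this simplified decomposition is enough to drive versioning decisions, such as picking the right audio group for a territory.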
Going further, ST 377-4 defines groups of groups that are used as metadata to enable easy versioning so that, for example, a French group might consist of a French stereo group, a clean M&E surround mix and a French mono audio description channel.

In reality, ST 377-4 with RFC 5646 solves a difficult problem in a simple and elegant way. Up until now, it's been easier for media companies to do their own thing and invent their own metadata vocabularies with proprietary labeling methods rather than use a standard. Today, to get cost-effective interoperability, we're starting to rely on standards more and more so that we don't have to stand the cost of an infinite number of proprietary connectors to make things work. As you see more versions of more programs being created, spare a thought for the future costs and revenues of media that needs to be exchanged. A little up-front standardized metadata builds the launch ramp for a future searchable and accessible library of internationalized content. Standardized audio metadata and subtitle metadata - it may be a tiny addition to your assets, but over time it helps you find, use and monetize versioned content with no effort at all. Take action now and learn the difference between en-US and en-GB. It's more than just spelling.
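The group-of-groups idea can be sketched as a small data structure. The group names and channel layout below are invented for the example – ST 377-4 defines the actual structures and labels – but the shape of the problem is the same: hard groups that must never be split, and per-version soft groups that reference them.

```python
# Illustrative sketch of ST 377-4-style grouping; names and layout are made up.

# Hard groups: channels that must never be split apart.
hard_groups = {
    "fr_stereo":   {"language": "fr", "channels": [1, 2]},
    "me_surround": {"language": None, "channels": [3, 4, 5, 6, 7, 8]},  # clean M&E 5.1
    "fr_ad_mono":  {"language": "fr", "channels": [9]},   # audio description
    "en_stereo":   {"language": "en-GB", "channels": [10, 11]},
}

# Groups of groups: one entry per deliverable version.
versions = {
    "fr": ["fr_stereo", "me_surround", "fr_ad_mono"],
    "en-GB": ["en_stereo", "me_surround"],
}

def channels_for_version(version):
    """Flatten the hard groups referenced by a version into channel numbers."""
    return [ch for g in versions[version] for ch in hard_groups[g]["channels"]]

print(channels_for_version("fr"))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With metadata like this in the file, assembling the French deliverable is a lookup rather than a manual conform.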
Why Ingest to the Cloud?
With Cloud storage becoming cheaper and data transfer to services such as Amazon S3 being free of charge, there are numerous reasons why ingesting to the Cloud should be part of any media organization’s workflow. So, stop trying to calculate how much storage your organization consumes by day, month or year, or whether you need a NAS, a SAN or a Grid, and find out why Cloud could be just what your organization needs.

Easy Sharing of Content

Instead of production crews or field journalists spending copious amounts of time and money shipping hard drives to the home site, or being limited by the bandwidth of an FTP server when uploading content, with object storage services like Amazon S3 or Microsoft Azure, uploading content to the Cloud has become easy and cheap. Once content is uploaded to the Cloud, anyone with secure credentials can access it from anywhere in the world.

Rights Access to Content

In recent news, cloud storage services such as Apple iCloud were hacked and private content was stolen, increasing concern about security and access rights to content in the Cloud. With secure connections such as VPN and rights access management tools, you can specify access rights by user or group, and how long content can be accessed on the Cloud. Both Microsoft and Amazon have set up security features to protect your data as well as to replicate content to more secure locations.

Cloud Services to Process the Data

By uploading content to the Cloud, you can set up backend services and workflows to run QC checks on the content, stream media, transcode to multiple formats, and organize the content for search and retrieval using a Media Asset Management (MAM) system hosted on the Cloud.

Cloud Scalability

Rather than buying an expensive tape library or continuing to purchase more hardware for spinning-disk storage, with cloud storage one can scale down or scale up with the click of a button. No need for over-provisioning.
Disaster Recovery

An organization can easily set up secure data replication from one site to another, or institute replication rules to copy content to multiple virtual containers, offering assurance that content will not be lost. Amazon S3 provides durable infrastructure to store important data and is designed for 99.999999999% durability of objects.

Moving Towards an OPEX Model

As operations and storage move to the Cloud, you can control your investment by paying as you use services and store content on the Cloud. Instead of investing in infrastructure maintenance and support, with operations on the Cloud you can focus the investment on what makes a difference: the content, not the infrastructure to support it.

Why Upload to the Cloud?

The Cloud is no longer a technology of the future; with cloud storage adopted by Google, Facebook and Instagram, Cloud technology is the reality of today. By adopting this technology you control your investment by usage needs, back up your data and provide secure access to content for anyone with credentials, anywhere in the world. The biggest limitation now is bandwidth, and the hurdle is adjusting the current infrastructure to support Cloud operations. Many organizations are turning towards a hybrid Cloud model, where content and services are hosted both locally and via Cloud solutions.

Learning from the Cloud experience, Dalet has worked over the past few years to evolve existing tools and services for the Cloud. Dalet now offers direct ingest from the Dalet Brio video server to Amazon S3 storage and, at NAB this year in Las Vegas, Dalet showcased the first MAM-based Newsroom on the Cloud. To learn more about Dalet ingest solutions, please visit the ingest application page.
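As a minimal illustration of cloud ingest, the sketch below builds a predictable S3 object key for ingested material so that anyone with credentials can find it later. The bucket name and key convention are invented for the example; the actual upload is one call with the AWS SDK (boto3), shown in a comment.

```python
from datetime import date

# Hypothetical key convention: ingest/<date>/<show-slug>/<language-tag>/<file>.
def ingest_key(show, language_tag, filename, day=None):
    """Build an S3 object key, e.g. ingest/2014-11-12/evening-news/en-GB/clip001.mxf"""
    day = day or date.today()
    slug = show.lower().replace(" ", "-")
    return f"ingest/{day.isoformat()}/{slug}/{language_tag}/{filename}"

key = ingest_key("Evening News", "en-GB", "clip001.mxf", day=date(2014, 11, 12))
print(key)  # ingest/2014-11-12/evening-news/en-GB/clip001.mxf

# The upload itself, assuming AWS credentials are configured and the
# (hypothetical) bucket "my-ingest-bucket" exists:
#   import boto3
#   boto3.client("s3").upload_file("clip001.mxf", "my-ingest-bucket", key)
```

A consistent key scheme like this is what makes the downstream Cloud services (QC, transcode, search) able to locate content without a manual handoff.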
Life before and after DPP (Digital Production Partnership)
People who know me will be aware that file-based workflows are a passion of mine. Ten years ago I was co-author of the MXF (Material Exchange Format) specification, and ever since I have been engaged in taking this neat SMPTE standard and using it to create a business platform for media enterprises of every size and scale. This is why I’m so excited by the Digital Production Partnership (DPP): it represents the first ratified national Application Specification of the MXF standard and is set to revolutionize the way that media facilities and broadcasters work. To explain what I mean, let’s compare life with a DPP ecosystem to life without.

Less pain to feel the gain
In a standardized DPP world, there would be a limited amount of pain and cost felt by everybody, but this would be shared equally amongst the organizations involved, and it would be a limited cost, incurred only once. After this point, our industry has a fantastic common interchange format to help encourage partnerships and build businesses. In an unstandardized world, where different facilities have decided to use different tools and variants of MXF or other formats, the major cost becomes the lack of third-party interoperability. Each time content is exchanged between different facilities, a media transcode or rewrap into the receiving format is required. This means that all vendors in all the facilities must ultimately support all the file formats and metadata. The engineering required to implement and test takes time and costs money on an ongoing basis.

Interoperable metadata helps the content creator
In a world that has adopted DPP, media and metadata interoperability is not an issue, since the format is built on a strong, detailed common interchange specification. In this homogeneous scenario, the resources that would have been used in interoperability engineering can be used in more creative and productive ways, such as programme making. Programme making is a process where most broadcasters utilise external resources. In a world without DPP, whenever a broadcaster or production facility receives a new file from an external facility, such as a post house, the question must be asked whether this file meets the requirements of their in-house standard. That evaluation process can lead to extra QC costs, in addition to possible media ingest, transcoding, conformance and metadata re-keying costs that need to be taken into account.

Building a business platform
This heterogeneous environment is an issue not just for interaction with external facilities: often, different departments within the same major broadcaster will adopt slightly different file standards and metadata, making interoperability a big issue for them. As a result, today only about 70 per cent of transactions within companies are file-based – the remainder employ tape. The figure is far lower where external agencies are involved – there, only 10–15 per cent of transactions are file-based. The essence of the problem is the lack of a common interchange format to enable these transactions. DPP is the first open public interchange format that is specifically designed to address this issue, and it is intended to transform today’s 20 per cent trickle into an 80 per cent flood in the shortest time. To find out more about DPP, and how it can transform both the way your operation works and your effectiveness working with other organizations, read AmberFin’s White Paper on DPP.
FIMS: A Plug-and-Play Solution
With so many systems with proprietary interfaces in existence, IT-based media projects these days require a great deal of integration. At Dalet alone, our applications must integrate with more than a hundred different third-party systems. With customers around the world using customized workflows and any number of different tools and solutions, it’s our job to make sure our own platforms can operate seamlessly within their setup. However, this makes solutions very complex and costly to deploy and maintain. Enter FIMS.

FIMS is the Framework for Interoperable Media Services. This framework simplifies the integration problem through better and stronger standards, which means vendors no longer need to build a custom integration for every single installation. Part of the deal involves developing applications with a standard interface. This could be simple, like transfer services, or more complex, such as the repository interface, which has many operations and options. In addition to these standard interfaces, FIMS employs a data model – a common representation for media assets – which incorporates standard metadata (such as EBUCore) that has been developed separately.

The idea behind the set of standard interfaces and the data model is to enable a Service-Oriented Architecture (SOA). By exposing media applications as services and allowing for a flexible architecture, we can leverage standard IT technologies and enable customers to build best-of-breed solutions. As a result, we create an ecosystem of standard interfaces that simplify the design, building and deployment of systems, as well as their maintenance over time. Because of this flexibility, exposing one’s system as a FIMS system, or integrating another tool through a FIMS interface, does not require a complete architectural change. For vendors, this means we can build more elaborate integrations at a much lower cost.
And because we reduce the number of custom interfaces, the cost to upgrade any given system is also reduced. What’s more, vendors can offer customers more benefit through improved core applications, as – ideally – the time and money saved by not developing custom integrations can be reallocated towards developing media-specific applications. From the media and broadcast company perspective – because let’s face it, it’s all about the customer at the end of the day – FIMS enables much better tracking and task management, as well as the ability to evolve seamlessly over time. For example, if you want to take advantage of a new transfer accelerator without needing to develop an elaborate custom interface, FIMS provides the framework to facilitate this. With new and improved technology being made available all the time, being able to readily integrate new solutions gives broadcasters a huge advantage.

So – why aren’t all systems FIMS-compliant? FIMS is an ongoing effort and, as such, is not without its challenges. We work in an industry that is undergoing constant change, which makes this effort a moving target. Companies have to agree to build on the standards, meaning that they must accept the limits those standards impose. If you were thinking that FIMS sounded too good to be true, you may be right: a FIMS-compliant system does have its tradeoffs. With the standardization and simplicity of design, cost savings, etc., comes slightly looser integration and performance. But we see the long-term benefits far outweighing these short-term issues. In any case, for FIMS to fulfill its destiny as a plug-and-play solution for the broadcast and media industry, it’s crucial that every actor in the ecosystem plays the game. By cultivating an ecosystem of applications that can all play nicely together, broadcasters will be able to build best-of-breed solutions that can evolve over the years to come while saving them money.
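The service-oriented idea behind FIMS can be sketched very simply: every vendor exposes the same small job interface, so swapping one transfer service for another needs no custom integration. The field names, service names and URIs below are invented for the example – the real FIMS schemas are far richer – but the shape of the interaction is the point.

```python
# Illustrative sketch of a common service interface; not the real FIMS schema.

def make_transfer_job(source, destination, priority="medium"):
    """Build one vendor-neutral job description (hypothetical field names)."""
    return {
        "jobType": "transfer",
        "priority": priority,
        "source": source,
        "destination": destination,
    }

class TransferService:
    """Stands in for any vendor's implementation behind the common interface."""
    def __init__(self, name):
        self.name = name
        self.jobs = []

    def submit(self, job):
        # A real service would queue the transfer; here we just record it.
        self.jobs.append(job)
        return {"service": self.name, "status": "queued", "job": job}

# The MAM can target either vendor with the same payload - no custom glue.
job = make_transfer_job("s3://ingest/clip001.mxf", "nas://playout/clip001.mxf")
for service in (TransferService("vendor-a"), TransferService("vendor-b")):
    receipt = service.submit(job)
    print(receipt["service"], receipt["status"])
```

This is the integration saving in miniature: one job format, many interchangeable implementations.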
Now in its sixth year of existence, FIMS continues to gain awareness, slowly but surely. But when it comes to making this solution a widespread reality, it’s in the hands of the broadcast and media companies to drive the demand. Want to know more about FIMS? Sign up now to receive our video presentation on what FIMS is, who benefits and why, direct to your inbox, as well as our FIMS White Paper, coming this summer.