
May 29, 2015
Dalet @ Broadcast Asia 2015

As we look forward to another exciting show at BCA 2015, it’s a good opportunity to reflect on the media and entertainment industry in the Asia-Pacific region.

There are a few things that always stand out when working in Asia. First is the wide availability and rapid adoption of the latest consumer technology. In M&E, this has driven the requirement to deliver to more and more platforms.
While multi-version workflows have always been core to many projects in the region, especially where content is distributed in so many languages across a wide geography, this expansion to support multiple platforms has added a further dimension and really brought home the value and return on investment that well-designed and well-deployed MAM-driven workflows can bring to an organization: MAM-driven workflows such as the Dalet Galaxy-based solutions that have been, and are currently being, deployed at big-name broadcasters, content owners and content distributors across the whole region.

Of course, implementing a MAM and a MAM-driven workflow can represent a big change for the large number of media industry professionals here in Asia. Ensuring that we manage that change as we implement systems is just as important as the deployment of the technology itself. In recognition of this, Dalet has continued to expand our project management and training teams in the region, ensuring that all the support you need before, during and after a project is deployed is ready and available whenever you need it. Indeed, the team has grown so big that we’re moving to a new office – look out for a change of address soon.

If you’ll be at BroadcastAsia next week, we’d love to see you. We’ll be exhibiting in booth 5A5-12 and invite you to schedule a one-on-one meeting with a Dalet media workflow expert. And if you haven’t done so already, be sure to register for BCA today! Hope to see you there.

YOU MAY ALSO LIKE...
Watch out for these traps when using AI
From mobile devices to home appliances and even classrooms, artificial intelligence (AI) is powering a huge part of our daily lives. Thanks to the falling cost of hardware and services, AI has become more accessible to businesses. Cloud computing and neural networks are more readily available, driving adoption of technologies like facial recognition, computer vision, and natural language processing (NLP). The media industry is no different. Even moderate-sized media companies are migrating to intelligent systems. However, implementing AI isn’t like plugging in a USB drive; there are many factors a broadcaster must consider before embarking on AI projects. In an interview with Tech Wire Asia, Arnaud Elnecave, Vice President of Marketing, Dalet, discussed the common traps businesses fall into when dealing with AI.
Dalet Galaxy five Brings AI, Social Media and Hybrid Workflows to BroadcastAsia2018
Dalet, a leading provider of solutions and services for broadcasters and content professionals, today announced that it will showcase its award-winning Dalet Galaxy five media asset management and workflow orchestration platform for the first time at the BroadcastAsia2018 show, held in Singapore from June 26-28, on stand 4R4-06.

“Dalet customers are looking for efficient ways to produce new forms of content, generated with efficient, AI-powered multiplatform productions and designed for cross-channel, complementary experiences that drive better engagement,” comments César Camacho, general manager, Dalet Asia Pacific. “At BroadcastAsia2018, Dalet will showcase how its evolutive Dalet Galaxy five platform is augmenting media operations with smart, orchestrated workflows, and enabling forward-thinking business models.”

Dalet Galaxy five delivers three key enhancements: the first enables broadcasters and media organizations to leverage Artificial Intelligence (AI) across the workflow, the second leverages hybrid infrastructures with on-premises and Cloud deployments, and the third puts social media at the core of operations. BroadcastAsia2018 attendees can book a private demonstration or workflow consultation with a Dalet expert to learn more about the following Dalet Galaxy five feature highlights:

Artificial Intelligence Framework
Powered by Dalet Media Cortex, the new Dalet Galaxy five AI framework connects, orchestrates and fine-tunes purpose-driven combinations of AI models, enabling media organizations to build intelligent workflows that assist users with recommendations, facilitate collaboration with smart matching, and use predictive analytics for better provisioning and automated decision-making. Dalet Content Discovery, the first application to leverage this new AI framework, uses data generated by cognitive services, combined with existing metadata, to build smart content recommendations for editorial and creative teams.

Social Media Framework
The Dalet Social Media framework enables newsrooms to treat social media as an integrated part of their overall news operation. Journalists can harvest, analyze, produce and deliver fast-paced news on social media platforms alongside traditional outlets. The story-centric workflow offers familiar indicators such as number of views, likes and shares, as well as audience comments and threads. Visual engagement data lets journalists know how their posts are performing with their audience and discover new angles audiences are expecting. With analytical tools and dynamic content at their fingertips, journalists can quickly evolve posts into deeper stories or segue into a new angle. New capabilities include social media popularity indicators, a Twitter harvester, scheduling and approval of social posts, an expanded emoji library, contextual graphics and subtitles.

Hybrid Workflows and Operations
Dalet Galaxy five introduces new integrations with AWS infrastructure services, enabling hybrid, scalable architectures that help minimize content handling costs and introduce more mobility in the user experience while enhancing the security of content.

Enhanced Workflow Orchestration
The new version of the Dalet Workflow Engine included in Dalet Galaxy five offers enhanced task management, high availability and new API services that simplify integration with third-party systems. Recently added capabilities shown at BroadcastAsia2018 include a top-down view of group tasks and shared task lists, as well as task time tracking and data aggregation for reporting and analysis.

Component-Based Workflows Power Multiplatform Productions
Dalet Galaxy five disintermediates the media supply chain, allowing users to produce, manage and distribute content at the component level, a much more efficient and flexible way to facilitate complex, high-volume content workflows such as multiplatform news production and international programs preparation, localization and versioning. The model is particularly adapted to new families of standards such as the Interoperable Master Format, or IMF. Dalet Galaxy five features a comprehensive set of tools within Dalet WebSpace to manage end-to-end component-based IMF workflows from acquisition to production, versioning, packaging and distribution. This includes receiving IMF packages; referencing IMF assets in a central repository; leveraging IMF metadata, both technical and editorial, to search on all assets and data; generating proxies for preview of track files and CPLs in Dalet WebSpace; visualizing asset relationships (track files and CPLs, CPLs and OPLs); automating the creation of new versions; and connecting CPLs using EIDR (Entertainment Identifier Registry) and ISAN (International Standard Audiovisual Number) IDs.

Enhanced Collaboration
Beyond media management, Dalet Galaxy five is a full communication and collaboration platform for connected teams. New collaboration enhancements include Media Bins, which provide the ultimate in media sharing flexibility; Dalet On-the-Go, which brings programs management tools to your phone and tablet; and updated Dalet Chat capabilities that allow users to work by thread and embed assets such as graphics, audio and video files to provide a more comprehensive communication stream.

Comprehensive Editing Experience
Dalet Galaxy five offers a more comprehensive and collaborative editing experience to fit any workflow and editor’s needs. Dalet WebSpace features a story-boarder for cut editing anywhere; the Dalet OneCut built-in desktop editor gets richer with more editing tracks, new transitions and effects; and the Dalet Xtend integration with Adobe Premiere Pro CC expands to support Adobe Projects as well as the Dalet CG-on-the-Timeline model.

Enhanced Show Automation
Expanding production control automation capabilities, Dalet Galaxy five offers a new version of Dalet On-Air, featuring new automation event macros for control of studio devices such as lighting, audio and camera position, as well as deep integration of these automation events with the full orchestration layer of the platform.

Business Intelligence
The new version of the Dalet Report Center features a new set of data extractions and a data model that supports analysis and reporting on a new set of business indicators. At BroadcastAsia2018, Dalet will illustrate this with a first set of business reports focused on a typical news operation and its contents.

About Dalet Digital Media Systems
Dalet solutions and services enable media organizations to create, manage and distribute content faster and more efficiently, fully maximizing the value of assets. Dalet products are built on three distinct platforms that, when combined, form versatile business solutions that power end-to-end workflows for news, sports, program preparation, production, archive and radio.
Individually, Dalet platforms and products offer targeted applications with key capabilities to address critical media workflow functions such as ingest, QC, edit, transcode and multiplatform distribution. The foundation for Dalet productivity-enhancing workflow solutions, Dalet Galaxy is the enterprise Media Asset Management (MAM) & Orchestration platform that unifies the content chain by managing assets, metadata, workflows and processes across multiple and diverse production and distribution systems. Specially tailored for news and media workflows, this unique technology platform helps broadcasters and media professionals increase productivity while providing operational and business visibility.

Dalet AmberFin is the high-quality, scalable transcoding platform with fully integrated ingest, mastering, QC and review functionalities, enabling facilities to make great pictures in a scalable, reliable and interoperable way. Addressing the demanding needs of studio production, multi-camera ingest, sports logging and highlights production, the innovative Dalet Brio video server platform combines density and cost-effectiveness with high reliability. Adopted by leading broadcasters, Dalet Cube is a suite of applications to create, manage and deliver graphics in a newsroom scenario.

Dalet supports customers from the initial planning stages to well beyond project execution. Our global presence includes 17 offices strategically located throughout Europe, the Middle East, Asia Pacific, North America and South America, and a network of more than 60 professional partners serving 87 countries worldwide. This collective experience and knowledge enables our customers to realize potential increases in productivity, efficiency and value of their assets. The comprehensive Dalet Care program ensures deployments remain up and running with 24/7 support 365 days a year.

Dalet systems are used around the world by many thousands of individual users at hundreds of TV and Radio content producers, including public broadcasters (ABS-CBN, BBC, CBC, DR, FMM, France TV, RAI, RFI, Russia Today, RT Malaysia, VOA), commercial networks and operators (Canal+, FOX, eTV, MBC Dubai, MediaCorp, Mediaset, Orange, Time Warner Cable, Warner Bros, Sirius XM Radio), and government organizations (Canadian House of Commons, Australian Parliament and UK Parliament).

Dalet is traded on the NYSE-EURONEXT stock exchange (Eurolist C): ISIN: FR0011026749, Bloomberg DLT:FP, Reuters: DALE.PA. Dalet® is a registered trademark of Dalet Digital Media Systems. All other products and trademarks mentioned herein belong to their respective owners.
The Power of the Dalet Search
In today’s multi-platform world, simply put, finding stuff is becoming more complex. In the past, a mere browse through the shelves would suffice. But the digital era brings forth the "hoarding" syndrome. Just think, for example, of your own collection of home pictures – I know mine are in an unmanaged mess.

But before we get into searching, we first need to address quantifying things. This is where a MAM's role is to be the record keeper of your valuable content and its associated information. More importantly, having a metadata model extensible enough to address the multiple levels and hierarchy of data is key to the success of your search power. As the amount of content owned, archived and distributed by broadcasters rapidly grows, it is also evolving, resulting in an exponential expansion of files that must be managed. What was once a one-to-one relationship between the "record" and the media has evolved into a model where a complex collection of elements (audio, video, text, captions, etc.) forms a record relationship. And don’t even get me started on versioning.

To illustrate what I’m talking about, let’s look at the example of the TV series “24,” starring Kiefer Sutherland. You could annotate an episode with the actor’s name, the actor’s character’s name, the actor’s birthday, and so on ... and do so for each element of that collection (say the source master, the poster, the caption). Instead, by defining a taxonomy and ontology stating that “24” ALWAYS has Jack Bauer in all the episodes, and that the character Jack Bauer is played by actor Kiefer Sutherland, we have a way to inherit that information down the tree for any element that is part of that tree: Series/Season/Episode. Then, for the users, simply saying that “this” video is actually 24/Season 2/Episode 7 will automatically inherit and apply all of its “parent” metadata, without needing to enter each individual value. This greatly reduces the amount of data entry (and time) necessary to quantify something when considering the immense amount of content associated with any given record.

But the big impact of the rich metadata engine found in our MAM is its ability not only to search but to discover as well. What I mean is that there are typically two methods of searching. The first is explicit search – the user chooses the necessary fields to conduct their search, and then enters the values to obtain a result, e.g. looking for “Videos” with “Jack Bauer” in “Season 2.” The result is a list that the user must filter through to find what they want. The second way to search is through discovery, with the MAM's ability to display facets. For example, I could search on “Actor’s height” (6'2"), “Action role” and “On Location” (Los Angeles). The return would display facets organized by user-defined relevancy, such as Series, Media Type and Actor Name, producing a result list along with facet boxes that the user can "filter down" within the search. The above example would show: "I found 12 Videos with Kiefer Sutherland as an actor" and “I found 34 assets shot in Los Angeles.” By checking the 12 videos of Kiefer and the 34 in Los Angeles to cross-eliminate, I would find that there are actually three assets of Kiefer in Los Angeles. And then you would also see that the character Jack Bauer also has a cameo on “The Simpsons.” Rich metadata allows us to create relationships between assets at multiple levels.
Those various facets allow you to not only navigate through hundreds if not thousands of media assets, but to easily discover specific content as well. And finally, having immediate access to these results for viewing or editing is what makes the Dalet MAM a harmonious ecosystem for not only information but also action/manipulation of said assets.
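To make the inheritance and facet ideas above a little more concrete, here is a minimal, hypothetical sketch in Python. It is not Dalet code and the class and field names are invented for illustration; it only mirrors the concepts described in this post: parent metadata cascading down a Series/Season/Episode tree, and facet counts derived from the resulting records.

```python
# Hypothetical illustration of metadata inheritance and facet counting.
# Names and structures are invented; this is not a Dalet API.
from collections import Counter

class Node:
    def __init__(self, name, metadata=None, parent=None):
        self.name = name
        self.own_metadata = metadata or {}
        self.parent = parent

    def metadata(self):
        """Effective metadata: values inherited from ancestors, overridden locally."""
        inherited = self.parent.metadata() if self.parent else {}
        return {**inherited, **self.own_metadata}

# Build the tree: Series -> Season -> Episode
series = Node("24", {"character": "Jack Bauer", "actor": "Kiefer Sutherland"})
season2 = Node("Season 2", parent=series)
ep7 = Node("Episode 7", {"location": "Los Angeles"}, parent=season2)

# Tagging a video as 24/Season 2/Episode 7 pulls in all parent metadata automatically.
print(ep7.metadata())
# {'character': 'Jack Bauer', 'actor': 'Kiefer Sutherland', 'location': 'Los Angeles'}

# Faceted discovery: count assets per field value, then let the user filter down.
assets = [ep7.metadata(), series.metadata(), {"actor": "Kiefer Sutherland"}]
location_facet = Counter(a.get("location") for a in assets if a.get("location"))
print(location_facet)  # Counter({'Los Angeles': 1})
```

The point of the sketch is simply that a value entered once at the series level is available on every child element, and that facets are just counts over those effective metadata records.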
CCW, SOA, FIMS and the King & Queen of the Media Industry
All-Star Panel Sessions at CCW 2014
The NAB-backed CCW held some impressive panels, and our own Stephane Guez (Dalet CTO) and Luc Comeau (Dalet Business Development Manager) participated in two of the show’s hot topics.

MAM, It’s All About Good Vocabulary – Luc Comeau, Senior Business Development Manager
The saying goes, “behind every great man, there is a greater woman.” Within the panel – “Content Acquisition and Management Platform: A Service-Oriented Approach” – there was a lot of talk about content being king. In my view, then, metadata is his queen. Metadata gives you information that a MAM can capitalize on and allows you to build the workflow to enable your business vision. Done correctly, an enterprise MAM will give you visibility into the entire organization, allowing you to better orchestrate both the technical and human processes. Because at the end of the day, it’s the visibility of the entire organization that allows you to make better decisions, like whether or not you need to make a change or adapt your infrastructure to accommodate new workflows.

In our session, the conversation very quickly headed towards the topic of interoperability. Your MAM must have a common language to interface with all the players. If it doesn’t, you will spend an enormous amount of time translating so these players can work together. And if the need arises, and it usually does, to replace one component with another that speaks a foreign language, well then, you are back to square one. A common framework will ensure a smooth sequence through production and distribution. A common framework, perhaps, such as FIMS…

The One Thing Everyone Needs to Know About FIMS – Stephane Guez, Dalet CTO
I was invited by Janet Gardner, president of Perspective Media Group, Inc., to participate in the FIMS (Framework for Interoperable Media Services) conference panel she moderated at CCW 2014. The session featured Loic Barbou, chair of the FIMS Technical Board, Jacki Guerra, VP, Media Asset Services for A+E Networks, and Roman Mackiewicz, CIO Media Group at Bloomberg – two broadcasters that are deploying FIMS-compliant infrastructures. The aim of the session was to get the broadcasters’ points of view on their usage of the FIMS standard.

The FIMS project was initiated to define standards that enable media systems to be built using a Service-Oriented Architecture (SOA). FIMS has enormous potential benefits for both media organizations and the vendors/manufacturers that supply them, defining common interfaces for archetypal media operations such as capture, transfer, transform, store and QC. Global standardization of these interfaces will enable us, as an industry, to respond more quickly and cost-effectively to innovation and the constantly evolving needs and demands of media consumers.

Having begun in December 2009, the FIMS project is about to enter its 6th year, but the immense scale of the task is abundantly clear, with the general opinion of the panelists being that we are at the beginning of a movement – still very much a work-in-progress with a lot of work ahead of us. One thing, however, was very clear from the discussion: broadcasters need to be the main driver for FIMS. In doing so, they will find there are challenges and trade-offs. FIMS cannot be adopted overnight. There are many existing, complex installations that rely on non-FIMS equipment. It will take some time before these systems can be converted to a FIMS-compliant infrastructure.
Along with the technology change, there is the need to evolve the culture. For many, FIMS will put IT at the center of their production. That is a different world and skill set, and many organizations will need to adapt both their workforce and workflow to truly reap the advantages of FIMS.
An IBC preview that won’t leave you dizzy
When we write these blog entries each week, we normally ensure we have a draft a few days in advance to make sure we have plenty of time to review, edit and make sure that the content is worth publishing. This entry was late, very late. This pre-IBC post has been hugely challenging to write for two reasons:

- Drone-mounted mochaccino machines are not on the agenda – but Bruce’s post last week definitely has me avoiding marketing “spin.”
- There are so many things I could talk about, it’s been a struggle to determine what to leave out.

Earlier this year, at the NAB Show, we announced the combination of our Workflow Engine, including the Business Process Model & Notation (BPMN) 2.0-compliant workflow designer, and our Dalet AmberFin media processing platform. Now generally available in the AmberFin v11 release, we’ll be demonstrating how customers are using this system to design, automate and monitor their media transcode and QC workflows in mission-critical multi-platform distribution operations.

Talking of multi-platform distribution, our Dalet Galaxy media asset management now has the capability to publish directly to social media outlets such as Facebook and Twitter, while the new Media Packages feature simplifies the management of complex assets, enabling users to see all of the elements associated with a specific asset, such as different episodes, promos etc., visually mapped out in a clear and simple way.

Making things simple is somewhat of a theme for Dalet at IBC this year. Making ingest really easy for Adobe Premiere users, the new Adobe Panel for Dalet Brio enables users to start, stop, monitor, quality check and ingest directly from the Adobe Premiere Pro interface, with new recordings brought directly into the edit bin. We’ll also be demonstrating the newly redesigned chat and messaging module in Dalet Galaxy, Dalet WebSpace and the Dalet On-the-Go mobile application. The modern, and familiar, chat interface has support for persistent chats, group chats, messaging offline users and much more.

Legislation and consolidation of workflows mean that captioning and subtitling are a common challenge for many facilities. We are directly addressing that challenge with a standards-based, cross-platform strategy for the handling of captioning workflows across Dalet Galaxy, Dalet Brio and Dalet AmberFin. With the ability to read and write standards-constrained TTML, caption and subtitle data is searchable and editable inside the Dalet Galaxy MAM, while Dalet Brio is able to capture caption- and subtitle-containing ancillary data packets to disk and play them back. Dalet AmberFin natively supports the extraction and insertion of subtitle and caption data to and from .SCC and .STL formats respectively, while tight integration with other vendors extends support to further formats.

There are so many other exciting new features I could talk about, but it’s probably best to see them for yourself live in Amsterdam. Of course, if you’re not going to the show, you can always get the latest by subscribing to the blog, or get in touch with your local representative for more information. There, and I didn’t even mention buzzwords 4K and cloud… …yet!
AmsterMAM – What’s New With Dalet at IBC (Part 1)
If you’re a regular reader of this blog, you may also receive our newsletters (if not, email us and we’ll sign you up) – the latest edition of which lists 10 reasons to visit Dalet at the upcoming IBC show (stand 8.B77). Over the next couple of weeks, I’m going to be using this blog to expand on some of those reasons, starting this week with a focus on Media Asset Management (MAM) and the Dalet Galaxy platform.

Three years ago, while putting together an educational seminar for SMPTE, Bruce Devlin (star of this blog and Chief Media Scientist at Dalet) interviewed a number of MAM vendors and end users about what a MAM should be and do. Pulling together the responses – starting with a large number of post-it notes and ending with a large Venn diagram – it was obvious that what “MAM” means to you is very dependent on how you want to use it. What we ended up with was a “core” of functionality that was common to all MAM-driven workflows and a number of outer circles with workflow-specific tasks. This is exactly how Dalet Galaxy is built – a unified enterprise MAM core, supporting News, Production, Sports, Archive, Program Prep and Radio, with task-specific tools unique to each business solution. At IBC we’ll be showcasing these workflows individually, but based on the same Dalet Galaxy core.

For news, we have two demonstrations. Dalet News Suite is our customizable, enterprise multimedia news production and distribution system. This IBC we’ll be showcasing new integration with social media and new tools for remote, mobile and web-based working. We’ll also be demonstrating our fully packaged, end-to-end solution for small and mid-size newsrooms, Dalet NewsPack.

In sports workflows, quick turnaround and metadata entry are essential – we’ll be showing how Dalet Sports Factory, with new advanced logging capabilities, enables fast, high-quality sports production and distribution. IBC sees the European debut of the new Dalet Galaxy-based Dalet Radio Suite, the most comprehensive, robust and flexible radio production and playout solution available, featuring Dalet OneCut editing, a rock-solid playout module with integration with numerous third parties, and class-leading multi-site operations.

Dalet Media Life provides a rich set of user tools for program prep, archive and production workflows. New for IBC this year, we’ll be previewing new “track stack” functionality for multilingual and multi-channel audio workflows, extended integration with Adobe Premiere and enhanced workflow automation. If you want to see how the Dalet Galaxy platform can support your workflow, or be central to multiple workflows, click here to book a meeting at IBC or get in touch with our sales team. You can also find out more about what we’re showing at IBC here.
More Secrets of Metadata
Followers of Bruce’s Shorts may remember an early episode on the Secrets of Metadata where I talked about concentrating on your metadata for your business, because it adds the value that you need. It seems the world is catching onto the idea of the business value of metadata, and I don’t even have to wrestle a snake to explain it!

Over the last 10 years of professional media file-based workflows, there have been many attempts at creating standardized metadata schemes. A lot of these have been generated by technologists trying to do the right thing or trying to fix a particular technical problem. Many of the initiatives have suffered from limited deployment and limited adoption because the fundamental questions they were asking centered on technology and not the business application. If you center your metadata around a business application, then you automatically take into account the workflows required to create, clean, validate, transport, store and consume that metadata. If you center the metadata around the technology, then some or all of those aspects are forgotten – and that’s where the adoption of metadata standards falls down.

Why? It’s quite simple. Accurate metadata can drive business decisions that in turn improve efficiency and cover the cost of the metadata creation. Many years ago, I was presenting with the head of a well-known post house in London. He stood on stage and said in his best Australian accent, “I hate metadata. You guys want me to make accurate, human-oriented metadata in my facility for no cost, so that you guys can increase your profits at my expense.” Actually he used many shorter words that I’m not able to repeat here. The message that he gave is still completely valid today: if you’re going to create accurate metadata, then who is going to consume it?

If the answer is no one, ever, then you’re doing something that costs money for no results. That approach does not lead to a good long-term business. If the metadata is consumed within your own organization, then you ask the question: “Does it automate one or many processes downstream?” The automation might be a simple error check, a codec choice, an email generation or a target for a search query. The more consuming processes there are for a metadata field, the more valuable it can become. If the metadata is consumed in a different organization, then you have added value to the content by creating metadata. The value might be expressed in financial terms or in good-will terms, but fundamentally a commercial transaction is taking place through the creation of that metadata.

The UK’s Digital Production Partnership and the IRT in Germany have both made great progress towards defining just enough metadata to reduce friction in B2B (business-to-business) file transfer in the broadcast world. CableLabs continues to do the same for the cable world, and standards bodies such as SMPTE are working with the EBU to make a core metadata definition that accelerates B2B ecommerce-type applications. I would love to say that we’ve cracked the professional metadata problem, but the reality is that we’re still halfway through the journey. I honestly don’t know how many standards we need. A single standard that covers every media application will be too big and unwieldy. A different standard for each B2B transaction type will cost too much to implement and sustain.
I’m thinking we’ll be somewhere between these two extremes in the “Goldilocks zone,” where there are just enough schemas and the implementation cost is justified by the returns that a small number of standards can bring. As a Media Asset Management company, we spend our daily lives wrestling with the complexities of metadata. I live in hope that at least the B2B transaction element of that metadata will one day be as easy to author and as interoperable as a web page. Until then, why not check out the power of search from Luc’s blog. Without good metadata, it would be a lot less exciting.
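As a loose illustration of the “consuming process” test described above, here is a small hypothetical sketch showing one metadata record feeding several downstream consumers, such as a codec choice and an automated error check. The field names and rules are invented for illustration and are not tied to any particular product or standard.

```python
# Hypothetical sketch: one metadata record feeding several downstream consumers.
# The more processes that consume a field, the more value that field carries.

asset = {
    "title": "Evening News 2015-05-29",
    "delivery_target": "web",       # consumed by the codec chooser below
    "duration_seconds": 1795,       # consumed by the automated error check
    "language": "en",               # consumed by search indexing (not shown)
}

def choose_codec(md):
    # Invented rule: web deliveries get H.264, everything else stays on a mezzanine codec.
    return "h264" if md["delivery_target"] == "web" else "xdcamhd422"

def duration_check(md):
    # Invented error check: flag items that would overrun a 30-minute slot.
    return md["duration_seconds"] <= 1800

print(choose_codec(asset))    # h264
print(duration_check(asset))  # True
```

If no process ever reads these fields, the effort of entering them is wasted; each additional consumer makes the same field cheaper to justify.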
Why Ingest to the Cloud?
With Cloud storage becoming cheaper and data transfer to services such as Amazon S3 being free of charge, there are numerous reasons why ingesting to the Cloud should be part of any media organization’s workflow. So, stop trying to calculate how much storage your organization consumes by day, month or year, or whether you need a NAS, a SAN or a Grid, and find out why Cloud could be just what your organization needs.

Easy Sharing of Content
Instead of production crews or field journalists spending copious amounts of time and money shipping hard drives to the home site, or being limited by the bandwidth of an FTP server when uploading content, with object storage services like Amazon S3 or Microsoft Azure, uploading content to the Cloud has become easy and cheap. Once content is uploaded to the Cloud, anyone with secure credentials can access it from anywhere in the world.

Rights Access to Content
In recent news, cloud storage services such as Apple iCloud were hacked and private content was stolen, increasing the concern about security and access rights to content in the Cloud. With secure connections such as VPNs and rights access management tools, you can specify, by user or group, access rights and how long content can be accessed in the Cloud. Both Microsoft and Amazon have set up security features to protect your data as well as to replicate content to more secure locations.

Cloud Services to Process the Data
By uploading content to the Cloud, you can set up backend services and workflows to run QC checks on the content, stream media, transcode to multiple formats, and organize the content for search and retrieval using a Media Asset Management (MAM) system hosted in the Cloud.

Cloud Scalability
Rather than buying an expensive tape library or continuing to purchase more hardware for spinning disk storage, with cloud storage one can scale down or scale up with the click of a button. No need for over-provisioning.

Disaster Recovery
An organization can easily set up secure data replication from one site to another or institute replication rules to copy content to multiple virtual containers, offering assurance that content will not be lost. Amazon S3 provides durable infrastructure to store important data and is designed for 99.999999999% durability of objects.

Moving Towards an OPEX Model
As operations and storage move to the Cloud, you can control your investment by paying as you use services and store content in the Cloud. Instead of investing in infrastructure maintenance and support, with operations in the Cloud you can focus the investment on what makes a difference: the content, not the infrastructure to support it.

Why Upload to the Cloud?
The Cloud is no longer a technology of the future; with cloud storage adopted by Google, Facebook and Instagram, Cloud technology is the reality of today. By adopting this technology you control your investment by usage needs, back up your data and provide secure access to content for anyone with credentials, anywhere in the world. The biggest limitation now is bandwidth, and the hurdle is adjusting the current infrastructure to support Cloud operations. Many organizations are turning towards a hybrid Cloud model, where content and services are hosted both locally and via Cloud solutions. Learning from the Cloud experience, Dalet has made initiatives over the past few years to evolve existing tools and services for the Cloud.
Dalet now offers direct ingest from the Dalet Brio video server to Amazon S3 Storage and, at NAB this year in Las Vegas, Dalet showcased the first MAM-based Newsroom on the Cloud. To learn more about Dalet ingest solutions, please visit the ingest application page.
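As a minimal sketch of the “ingest to the Cloud” step described above, here is how a finished recording might be pushed to an Amazon S3 bucket with the boto3 SDK. The bucket name and file path are placeholders, credentials are assumed to be configured in the environment, and this is purely illustrative; it is not Dalet’s ingest implementation.

```python
# Minimal, hypothetical example of pushing ingested media to Amazon S3.
# Assumes AWS credentials are already configured (environment or ~/.aws/credentials)
# and that the target bucket exists.
import boto3

s3 = boto3.client("s3")

local_file = "/media/ingest/story_0142.mxf"   # placeholder path
bucket = "example-news-ingest"                # placeholder bucket name
key = "2015/05/29/story_0142.mxf"

# Server-side encryption keeps the object protected at rest;
# who can then read it is controlled with IAM policies per user or group.
s3.upload_file(
    local_file,
    bucket,
    key,
    ExtraArgs={"ServerSideEncryption": "AES256"},
)
print(f"Uploaded s3://{bucket}/{key}")
```

From there, the downstream QC, transcode and MAM registration steps mentioned in the post can be triggered against the object in the bucket rather than against a shipped hard drive.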
Shared Storage for Media Workflows… Part 1
In part one of this article, Dalet Director of Marketing Ben Davenport lists and explains the key concepts to master when selecting storage for media workflows. Part two, authored by Quantum Senior Product Marketing Manager Janet Lafleur, focuses on storage technologies and usages.

The first time I edited any media, I did it with a razor and some sticky tape. It wasn’t a complicated edit – I was stitching together audio recordings of two movements of a Mozart piano concerto. It also wasn’t that long ago, and I confess that on every subsequent occasion I used a DAW (Digital Audio Workstation). I’m guessing that there aren’t many (or possibly any) readers of this blog who remember splicing video tape together (that died off with helical scan), but there are probably a fair few who have, in the past, performed a linear edit with two or more tape machines and a switcher. Today, however, most media operations (even down to media consumption) are non-linear; this presents some interesting challenges when storing, and possibly more importantly, recalling media. To understand why this is so challenging, we first need to think about the elements of the media itself and then the way in which these elements are accessed.

Media Elements
The biggest element, both in terms of complexity and data, is video. High Definition (HD) video, for example, will pass “uncompressed” down a serial digital interface (SDI) cable at 1.5Gbps. Storing and moving content at these data rates is impractical for most media facilities, so we compress the signal by removing psychovisually, spatially, and often temporally redundant elements. Most compression schemes will ensure that decompressing or decoding the file requires fewer processing cycles than the compression process. However, it is inevitable that some cycles are necessary and, as video playback has a critical temporal element, it will always be necessary to “read ahead” in a video file and buffer at the playback client. Where temporally redundant components are also removed, such as in an MPEG LongGOP compression scheme like Sony XDCAM HD, the buffering requirements are significantly increased, as the client will need to read all the temporal references – typically a minimum of one second of video (at 50Mbps, roughly 6MB of data).

When compared to video, the data rate of audio and ancillary data (captions, etc.) is small enough that it is often stored “uncompressed” and therefore requires less in the way of CPU cycles ahead of playback – this does, however, introduce some challenges for storage in the way that audio samples and ancillary data are accessed.

Media Access
Files containing video, even when compressed, are big – 50Mbps is about as low a bit rate as most media organizations will go. On its own, that might sound well within the capabilities of even consumer devices – typically a 7200rpm hard disk will have a “disk-to-buffer” transfer rate of around 1Gbps – but this is not the whole story:

- 50Mbps is the video bit rate – audio and ancillary data result in an additional 8-16Mbps
- Many operations will run “as fast as possible” – although processing cycles are often the restricting factor here, even a playback or review process will likely include “off-speed” playback up to 8 or 16 times faster than real-time – the latter requiring over 1Gbps
- Many operations will utilize multiple streams of video

Sufficient bandwidth is therefore the first requirement for media operations, but this is not the only thing to consider.
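The back-of-the-envelope numbers above can be checked with a few lines of arithmetic. The sketch below simply restates the article’s own figures (50Mbps video, 8-16Mbps of audio and ancillary data, up to 16x off-speed playback, roughly one second of LongGOP read-ahead) as a calculation; it is illustrative only, not a storage sizing tool.

```python
# Rough arithmetic behind the storage bandwidth figures quoted above.

VIDEO_MBPS = 50          # about the lowest video bit rate most organizations will use
AUDIO_ANC_MBPS = 16      # upper end of the additional audio + ancillary data
OFF_SPEED_FACTOR = 16    # fastest off-speed review playback mentioned above

stream_mbps = VIDEO_MBPS + AUDIO_ANC_MBPS
print(f"Single real-time stream: {stream_mbps} Mbps")

# 16x shuttle of a single stream already exceeds 1 Gbps.
print(f"16x off-speed playback: {stream_mbps * OFF_SPEED_FACTOR / 1000:.2f} Gbps")

# LongGOP read-ahead: ~1 second of 50 Mbps video is roughly 6 MB of data.
gop_read_mb = VIDEO_MBPS / 8
print(f"One-second GOP read-ahead: ~{gop_read_mb:.1f} MB")
```

Multiply the off-speed figure by the number of concurrent users and streams and the aggregate bandwidth requirement grows very quickly.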
Take a simple example of a user reviewing a piece of long-form material, a documentary for instance, in a typical manual QC that checks the beginning, middle and end of the media. As the media is loaded into the playback client, the start of the file(s) will be read from storage and, more than likely, buffered into memory. The user’s actions here are fairly predictable, and therefore developing and optimizing a storage system with deterministic behavior in this scenario is highly achievable. However, the user then jumps to a pseudo-random point in the middle of the program; at this point the playback client needs to do a number of things. First, it is likely that the player will need to read the header (or footer) of the file(s) to find the location of the video/audio/ancillary data samples that the user has chosen – a small, contained read operation where any form of buffering is probably undesirable. The player will then read the media elements themselves, but these too are read operations of varying sizes:

- Video: if a “LongGOP” encoded file, potentially up to twice the duration of the “GOP” – in XDCAM HD, 1 sec ~6MB
- Audio: a minimum of a video frame’s worth of samples ~6KB
- Ancillary data: dependent on what is stored, but considering captions and picture descriptions ~6B

Architecting a storage system that ensures these reads of significantly different sizes happen quickly and efficiently, and that provides a responsive and deterministic experience for dozens of clients often accessing the exact same file(s), requires significant expertise and testing. Check back tomorrow for part two of “Shared Storage for Media Workflows,” where Janet Lafleur looks at how storage can be designed and architected to respond to these demands!
5 reasons why media delivery standards might be good for your business
Like me, I am sure that you have been to a restaurant in a group where everyone orders from the set menu EXCEPT for that one person who orders the exotic, freshly prepared fugu, which requires an extra 30 minutes of preparation from a licensed fugu chef so that the customers don't die eating it. Restaurant etiquette means that our main courses are served at the same time, forcing everyone to spend a long time hungry, waiting for the special case. And if you split the bill equally, the special case becomes subsidised by the people wanting the set meal.

Does this model relate to the media industry? Is there a cost for being special? How can we reduce that cost? What gets done with the cost savings? How can you help? Fortunately, those 5 questions lead into 5 reasons why delivery standards might be a good idea.

1. The set meal is more efficient than the a la carte
I must confess that when I write this blog while hungry there will be a lot of food analogies. I'm quite simple really. In the "set meal" case, you can see how it's easier for the kitchen to make a large volume of the most common meal and to deliver it more quickly and accurately than a large number of individual cases. In the file delivery world, the same is true. By restricting the number of choices to a common subset that meets a general business need, it is a lot easier to test the implementations by multiple vendors and to ensure that interoperability is maximised for minimum cost. In a world where every customer can choose a different mix of codecs, audio layouts and subtitle & caption formats, you quickly end up with an untestable mess. In that chaotic world, you will also get a lot of rejects. It always surprises me how few companies have any way of measuring the cost of those rejects, even though they are known to cause pain in the workflow. A standardised, business-oriented delivery specification should help to reduce all of these problems.

2. Is there a cost for being special?
I often hear the statement: "It's only an internal format – we don't need to use a standard." The justification is often that the company can react more quickly and cheaply. Unfortunately, every decision has a lifespan. These short-term special decisions often start with a single vendor implementing the special internal format. Time passes and then a second vendor implements it, then a third. Ultimately the cost of custom-engineering the special internal format is spent 3 or 4 times with different vendors. Finally the original equipment will reach end of life and the whole archive will have to be migrated. This is often the most costly part of the life cycle, as the obsolete special internal format is carefully converted into something new and hopefully more interchangeable. Is there a cost of being special? Oh yes, and it is often paid over and over again.

3. How can we reduce costs?
The usual way to reduce costs is to increase automation and to increase "lights out" operation. In the file delivery world, this means automation of transcode AND metadata handling AND QC AND workflow. At Dalet and AmberFin, all these skills are well understood and mastered. The cost savings come about when the number of variables in the system is reduced and the reliability increases. Limiting the choices on metadata, QC metrics, transcode options and workflow branches increases the likelihood of success.
Learning from the experiences of the Digital Production Partnership in the UK, it seems that tailoring a specific set of QC tests to a standardised delivery specification with standardised metadata will increase efficiency and reduce costs. The Joint Task Force on File Formats and Media Interoperability is building on the UK's experience to create an American standard that will continue to deliver these savings.

4. What gets done with the cost savings?
The nice thing about the open standards approach is that the savings are shared between the vendors who make the software (they don't have to spend as much money testing special formats) and the owners of that software (who spend less time and effort on-boarding, interoperability testing and regression testing when they upgrade software versions).

5. How can you help?
The easiest way is to add your user requirements to the Joint Task Force on File Formats and Media Interoperability list. These user requirements will be used to prioritise the standardisation work and help deliver a technical solution to a commercial problem. For an overview of some of the thinking behind the technology, you could check out my NAB2014 video on the subject, or the presentation given by Clyde Smith of Fox. Until next time.
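To illustrate why a constrained delivery specification is easier to automate and test, here is a small hypothetical validator that checks a file's technical metadata against an allowed subset of codecs, audio layouts and caption formats. The allowed values are invented for illustration and are not taken from the DPP or Joint Task Force documents.

```python
# Hypothetical delivery-spec check: restrict choices to a small, testable common subset.
# The allowed values below are invented for illustration only.

DELIVERY_SPEC = {
    "video_codec": {"xdcamhd422"},
    "audio_layout": {"stereo", "5.1"},
    "caption_format": {"ttml"},
}

def validate(file_metadata):
    """Return a list of rejection reasons; an empty list means the file conforms."""
    problems = []
    for field, allowed in DELIVERY_SPEC.items():
        value = file_metadata.get(field)
        if value not in allowed:
            problems.append(f"{field}={value!r} not in allowed set {sorted(allowed)}")
    return problems

incoming = {"video_codec": "prores", "audio_layout": "stereo", "caption_format": "scc"}
for reason in validate(incoming):
    print("REJECT:", reason)
```

The fewer permitted combinations there are, the smaller the test matrix each vendor has to cover, which is exactly the "set meal" argument made above.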
Dalet acquires AmberFin – One Year On
To be precise, on the day this is published, it is one year, one month, one week and one day since Dalet acquired AmberFin on the 6th of April 2014. It seems like an appropriate opportunity to reflect on the last 13 months.

It wasn’t entirely by accident, but we were certainly fortunate, that we finalised the acquisition on the eve of NAB 2014. This not only presented an ideal opportunity for both teams to come together (a rarity with Dalet spread across 18 offices worldwide) and jointly talk to customers, but also established a milestone by which to annually measure the integration of people and technology.

People are at the heart of any business, but given the level of professional services that Dalet provides to our customers, they are absolutely core to ours. It was immediately obvious at NAB 2014 that the AmberFin team were going to merge well with Dalet, and over the past year we have been able to “blend” skills and start sharing knowledge across the now united organisation. By way of example, Arnaud Elnecave, a long-serving Dalet employee, recently assumed the position of General Manager of Products, taking global responsibility for our packaged product and solutions business, including Dalet AmberFin and Dalet Brio, while Simon Adler, formerly of AmberFin, took over Arnaud’s previous role as General Manager for our West Coast operations.

Integrating AmberFin technology into the Dalet offering started immediately following the acquisition, and at IBC last September we showcased how AmberFin technology could be used in multi-lingual/multi-version workflows, using the transcoder as a render engine under the Dalet Track Stack tool. At NAB, we showed customers further benefits of the acquisition with the combination of the Dalet Workflow Engine and Dalet AmberFin as a user-intuitive solution for orchestrating media-centric workflows.

Of course, we also brought together a huge wealth of knowledge. Following on from the success of the “AmberFin Academy” – the free educational program – and its feature series, “Bruce’s Shorts,” we launched the Dalet Academy in January of this year, featuring a much broader topic set with more contributors (including partners, customers and consultants), as well as more blogs, webinars and live events at trade shows and conferences. In the background, we have of course merged the operations of the two companies. This is never an easy task and I take this opportunity to thank those involved for making it happen so smoothly.

So, what’s next? I can’t reveal too much of the roadmap, but it is safe to say that our investment in AmberFin will continue to reap benefits for our customers, whether they come to us for a standalone transcoder or an enterprise-wide media asset management system. Acquisitions aren’t necessarily easy to get right; in fact, one article from Business Review Europe cites a Harvard Business Review report showing M&A failure rates as high as 90%. We like to consider ourselves part of the 10%. One year on, Dalet, our partners and our customers are seeing, and will continue to see, the benefits of an excellent match in vision, technology and people.
How to bring standards to your organisation
Back in the 1990s, I was told of an old maxim: "If you can't win the market place, win the standard." I thought that this was a cynical approach to standardisation until we looked through some examples of different markets where there are a small number of dominant players (e.g., CPUs for desktop PCs, GPU cards, tablet/smartphone OS) versus markets where there is enforced cooperation (Wi-Fi devices, network cabling, telephone equipment, USB connectivity). So, how does this affect technology in the media industry, and how can you use the power of standards in your organisation?

It seems that the media technology industry hasn't made its mind up about what's best. We have come from a history that is strong in standardisation (SDI, colour spaces, sampling grids, etc.), and this has created a TV and film environment where the interchange of live or streaming content works quite well, although maybe not as cheaply and cleanly as we would like. When the material is offline or file-based, there are many more options. Some of them are single-vendor dominant (like QuickTime), some are standards-led (like MXF), some are open source (Ogg, Theora) and others are proprietary (LXF, FLV).

Over any long timeframe, commercial strength beats technical strength. This guiding principle should help explain the dynamics of some of the choices made by organisations. Over the last 10 years, we have seen QuickTime chosen as an interchange format where short-term "I want it working and I want it now" decisions have been dominant. In other scenarios – as in the case of "I am generating thousands of assets a month and I want to still use them in six years' time when Apple decides that wearables are more important than tablets" – MXF is often the standard of choice.

Looking into the future, we can see that there are a number of disruptive technologies that could impact decision-making and dramatically change the economics of the media supply chain:

- IP transport (instead of SDI)
- High Dynamic Range (HDR) video
- 4K (or higher) resolution video
- Wide colour space video
- HEVC encoding for distribution
- High / mixed frame rate production
- Time labelling as a replacement for timecode
- Specifications for managing workflows

Some of these are clearly cooperative markets where long-term commercial reality will be a major force in the final outcome (e.g., IP transport). Other technologies could go either way – you could imagine a dominant camera manufacturer "winning" the high / mixed frame rate production world with a sexy new sensor. Actually, I don't think this will happen because we are up against the laws of physics, but you never know – there are lots of clever people out there!

This leads us to the question of how you might get your organisation ahead of the game in these or other new technology areas. In some ways being active in a new standard is quite simple – you just need to take part. This can be costly unless you focus on the right technology and standards body for your organisation. You can participate directly or hire a consultant to do this speciality work for you. Listening, learning and getting the inside track on new technology is simply a matter of turning up and taking notes. Guiding the standards and exerting influence requires a contributor who is skilled in the technology as well as the arts of politics and process. For this reason, there are a number of consultants who specialise in this tricky but commercially important area of our business.
Once you know "who" will participate, you also need to know "where" and "how." Different standards organisations have different specialties. The ITU will work on the underlying definition of colour primaries for Ultra High Definition, SMPTE will define how those media files are carried and transported, and MPEG will define how they are used during encoding for final delivery. Figuring out which standards body is best suited to the economic interests of your organisation requires a clear understanding of your organisation's economics and some vision about how exerting influence will improve those economics. Although a fun topic, it's a little outside today's scope!

So how do you bring standards to your organisation?

- Step 1: join in and listen
- Step 2: determine whether or not exerting influence is to your advantage
- Step 3: actively contribute
- Step 4: sit back and enjoy the fruits of your labour

For more on the topic, don't forget to listen to our webinars! Coming soon, I'll be talking about Business Process Management and standards – and why they matter. Until the next one...