
Jan 21, 2015
Dalet and Adobe make sense of content creation and sharing in a super connected mobile world


As we heard from Adobe’s Dennis Radeke at the June 25th Dalet/Adobe event that we recently staged in New York, change in the video industry is never evolutionary: it’s always revolutionary. Dennis pointed out that just as we’re getting comfortable with HD, all of a sudden we need to do it in ultraHD!





Today we’re experiencing an explosion of content, largely created by a corresponding explosion in HD content capture devices, backed up by today’s super connected world.

Against this background, Adobe is striving to achieve and maintain a market leadership position, and doing so requires strong relationships with like-minded companies whose technologies and platforms complement its own. The purpose of the well-attended New York event was to spotlight the relationship between Dalet and Adobe, illustrating how customers can combine the two companies’ offerings to move beyond traditional file-based workflows to a more interactive and collaborative model.

In his presentation, Dennis explained that evolving new and improved feature sets and enabling new workflows within media enterprise operations is central to maintaining Adobe’s leadership position. In creating Adobe Premiere CC 2014, the company has effectively re-written the rulebook on file-based workflows, and the platform is enjoying tremendous success with users and media operations worldwide.

However, Dennis was swift to acknowledge that no company offers a complete solution, from cradle to grave. How users get into and out of the Adobe section of the production workflow is key to their success. To this end, Adobe has invested heavily in an open architecture approach called Content Panels, which provides the doorway into Adobe Premiere.

Some time ago at Dalet, we recognized the potential that Content Panels offer to create a new breed of supercharged production workflows and interactions. The combination of Adobe Premiere and Dalet’s Galaxy MAM platform is a prime example of the superior extended workflows that users can create with a single integrated user interface.

Dennis also explained how Adobe is moving into the area of collaborative editing. With Adobe Anywhere, the company has developed an enterprise-class editing platform that is generating a lot of interest in the post community. Combined with a MAM system sitting alongside it, this platform is an ideal solution for organizations wanting more exposure, collaboration and oversight of their rich content.

So, how does the Dalet Adobe Premiere plug-in work?
At Dalet, we have developed an intuitive Adobe Premiere plug-in that enables users to employ our well-established browse, preview (in high-res or proxy resolution) and search facilities within the Premiere user interface.




Loading an asset into this interface requires no media movement, and editing happens in place and collaboratively with other editors, even while recording is in progress. Importantly, Dalet MAM asset and logging metadata flows through the editing process, providing the editor with relevant metadata from producers (such as editorial notes) or from automated systems (such as closed captions or QC).

With Dalet WebSpace tools, this editorial collaboration and contribution from other team members can happen concurrently with the editors’ work. Our web-based Dalet Storyboarder gives any user a simple, intuitive means of quickly and accurately gathering shots in private or shared bins accessible to both Galaxy and Premiere users.




This approach can be used for anything from simple clipping, accessible to all, to full-fledged storyboarding, where you can assemble and review a sequence before sharing it with craft editors for more complex editing, or save the storyboard as an asset to be reused by all later.

This means that within Adobe Premiere, all Dalet MAM audio, video, clip and EDL assets can be opened in place, with all of the parent asset’s metadata flowing through. Finished Premiere projects or sequences can be saved as Dalet assets as well, making them accessible to all authorized users for concurrent reuse. And finally, conformed Premiere projects inherit all of the metadata of the parent assets used in their creation, such as editorial notes, captions and rights, which in turn flows through to the finished piece in the MAM catalog. So, for example, you don’t need to re-caption the promo if the sources you used had captions.
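To illustrate the kind of flow-through described above, here is a minimal sketch (ours, not Dalet’s actual implementation) of how metadata from several source assets might be merged into a conformed project’s catalog entry:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A media asset with descriptive metadata (hypothetical model)."""
    name: str
    metadata: dict = field(default_factory=dict)

def conform_metadata(sources: list[Asset]) -> dict:
    """Merge metadata from every source asset used in a sequence,
    so the finished piece inherits notes, captions, rights, etc."""
    inherited: dict = {}
    for asset in sources:
        for key, value in asset.metadata.items():
            # Collect every distinct value contributed by the sources
            inherited.setdefault(key, set()).add(value)
    # Flatten single-valued fields for readability
    return {k: next(iter(v)) if len(v) == 1 else sorted(v)
            for k, v in inherited.items()}
```

In this toy model, a promo cut from two captioned sources would automatically carry the caption metadata forward, matching the re-captioning example above.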

Finally, with the advent of Adobe Anywhere, the same Dalet plug-in will soon maintain the same user experience while connected to the Dalet MAM catalog, but with the content served to the editor, say in the field, via the Anywhere streaming and rendering platform.

At Dalet, we are really excited about the potential that this collaborative development project offers to broadcasters and facilities of all sizes. It illustrates what can be produced when like-minded companies come together with a shared vision and determination. If you would like to know more about this exciting development, then a good starting point is our short video on the issue, which you can download here.

Pictionary, Standards and MXF Interoperability
Four weeks ago, I posted in this blog about the IRT MXF plugfest, the new MXF profiles that were published in Germany this year by the ARD and ZDF, and how these new profiles would bring forth a new era in interoperability. This week, the first results of that plugfest and reaction from some of the end users and vendors were presented at a conference on file-based production, also hosted by the IRT in Munich. As usual, the results were fascinating.

As with all statistics, they could be manipulated to back up any point you wanted to make, but for me there were a couple of highlights. First, as mentioned in my last post, this was the 9th such MXF plugfest, so we have a good historical dataset. Comparing previous years, there is an obvious and steady increase in both the interoperability of exchanged files and compliance with MXF specifications. For most of the codecs and variants tested by the 15 vendors who took part, over 90% of files now exchange successfully (up from 70-80% five or more years ago). In one case, the new ARD-ZDF MXF profile HDF03a, 100% of the files submitted interchanged successfully.

Quite interestingly, the same files all failed a standards compliance test using the IRT MXF analyser. This highlights one of the difficulties the industry faces today with file interoperability, even with constrained specifications such as the AMWA Application Specifications and the ARD-ZDF MXF profiles. The IRT MXF analyser categorises test results as pass, fail, or with warning. It is notable that all files with MPEG-2 essence (e.g. XDCAM HD) either failed or had warnings, while AVC-Intra and DNx files each had a significant number that “passed.” However, when it came to interoperability, the differences between the codecs were much less obvious.
One theory is that because MPEG-2 in MXF is the oldest and most widely used MXF variant, it has become a near de facto standard that enables a reasonably high degree of interoperability – despite the fact that most of these files are not compliant with specifications. I mentioned in my previous post that the new ARD-ZDF profiles have accommodated this deviation from specification in legacy files by specifying broader decode parameters than encode parameters. This was the focus of my presentation at the conference this week, illustrated through the use of children’s toys and the game of Pictionary.

However, the additional decoder requirements specified are not without issue. For example, it’s certainly impractical, if not impossible, to test all the potential variations covered by the broader decoder specification, given that it would be difficult to find test sources that exercise every possible combination of deviation from the encoder specification. In another area, while the profile says that the decoder should be able to accommodate files with ancillary data tracks, there is no guidance as to what should be done with the ancillary data, should it be present. As a vendor, that’s particularly problematic when trying to build a standard product for all markets, where the requirements in such areas may vary by region.

Overall though, while there are improvements that can, and will, be made, it’s clear that for vendors and end users alike the new profiles are a big step forward, and media facilities in Germany are likely to start seeing the benefit in the next 6-12 months. Exciting times lie ahead.
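For the curious, the kind of per-codec tallying behind these plugfest figures can be sketched in a few lines of Python (a hypothetical model of our own – not the IRT’s actual tooling):

```python
from collections import Counter

def summarize_plugfest(results):
    """Summarize plugfest results per codec/profile.

    results: iterable of (codec, interchanged_ok, compliance) tuples,
    where compliance is 'pass', 'fail' or 'warning' as reported by
    an analyser such as the IRT MXF analyser."""
    interop, compliance = {}, {}
    for codec, ok, verdict in results:
        interop.setdefault(codec, []).append(ok)
        compliance.setdefault(codec, Counter())[verdict] += 1
    return {
        codec: {
            # Fraction of files that interchanged successfully
            "interop_rate": sum(flags) / len(flags),
            # Breakdown of analyser verdicts
            "compliance": dict(compliance[codec]),
        }
        for codec, flags in interop.items()
    }
```

A dataset like the HDF03a case above would show an interoperability rate of 1.0 alongside a compliance breakdown of all failures – exactly the apparent paradox discussed in the post.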
3 ways to fix QC Errors – Part 2 – What the DPP is doing about QC
Recently I spoke at a symposium on media QC run by the ARD-ZDF Medien-Akademie and IRT in Munich, Germany. Andy Quested of the BBC, who spoke on behalf of the EBU, opened his presentation by asking how many of the 150 or so representatives of German-language broadcasters in the audience were actually using automated QC in their workflows. Despite most of those in attendance having purchased and commissioned automated QC systems, those responding positively could be counted on one hand.

In a previous blog post I wrote about how automated QC systems were under-utilized and suggested three simple steps that can be taken to reduce the number of QC errors in a typical workflow. Following up on that post, here is how the work of the UK’s Digital Production Partnership (DPP) and the EBU QC group reflects those suggestions.

Reducing the number of QC tests
When the EBU QC group started looking at automated QC tools, it counted a staggering 471 different QC tests. By rationalizing differently named or similar tests and removing those deemed unnecessary, the list was whittled down into the periodic table of QC – now containing just over 100 different tests. This is still a large number, so the DPP has reduced it to a list of about 40 critical tests for file delivery. The failure actions for these tests have also been identified as either absolute requirements (must pass) or technical and editorial warnings.

QC test visualization
Each test in the EBU periodic table of QC has been categorized into one of four groups:

Regulatory – making sure that the media conforms to regulations or legislation such as the CALM Act in the US or EBU R128 in Europe. A failure here may not actually mean that the quality of the media is poor.

Absolute – physical parameters that can be measured against a published standard or recommendation.
Objective – parameters that can be measured, but for which there is no published standard to describe what is or isn’t acceptable. Often, pass/fails in this category will require human judgment.

Subjective – artifacts in video and audio that require human eyes and ears to detect.

These last two categories in particular require QC events to be presented to operators in a way that enables effective evaluation.

EBU focuses on how to QC the workflow
The work of the EBU group is ongoing: having defined a common set of QC tests and categories, its current and future work focuses on QC workflows and on developing KPIs (Key Performance Indicators) that will demonstrate exactly how efficient media workflows are with regard to QC. This is a key area, and one where the EBU is well positioned to see the initiative come to fruition. As the EBU has stated, “Broadcasters moving to file-based production facilities have to consider how to use automated Quality Control (QC) systems. Manual quality control is simply not adequate anymore and it does not scale.” The EBU recognized QC as a key topic for the media industry in 2010, and in 2011 it started an EBU Strategic Programme on Quality Control, with the aim of collecting requirements and experiences and creating recommendations for broadcasters implementing file-based QC in their facilities.

I left Munich with the clear impression that real momentum is being generated by organizations such as the EBU and DPP in the field of media quality control. It is reassuring to see that what you have been advising customers for years is supported by leading broadcast industry bodies: QC is key! At AmberFin, QC has been a passion of ours for many years. To understand our approach to this critical component of file-based workflows, why not download our free white paper on the issue. I hope you found this blog post interesting and helpful.
If so, why not sign up to receive notifications of new blog posts as they are published?
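As a footnote for the technically minded, the EBU/DPP categorization described above maps naturally onto a small data model. The sketch below is our own illustration, not an official EBU or DPP schema:

```python
from enum import Enum

class QCCategory(Enum):
    """The four groups in the EBU periodic table of QC."""
    REGULATORY = "regulatory"   # e.g. CALM Act in the US, EBU R128 in Europe
    ABSOLUTE = "absolute"       # measurable against a published standard
    OBJECTIVE = "objective"     # measurable, but pass/fail needs judgment
    SUBJECTIVE = "subjective"   # needs human eyes and ears to detect

class FailureAction(Enum):
    """DPP-style failure actions for file-delivery tests."""
    MUST_PASS = "must pass"
    TECHNICAL_WARNING = "technical warning"
    EDITORIAL_WARNING = "editorial warning"

def needs_operator_review(category: QCCategory) -> bool:
    """Objective and subjective results must be presented to an
    operator, since no published standard can decide them alone."""
    return category in (QCCategory.OBJECTIVE, QCCategory.SUBJECTIVE)
```

A QC dashboard built on this model could route the objective and subjective events straight to an operator’s review queue while letting the machine-decidable categories pass or fail automatically.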
When is a workflow not a workflow - can airports learn from modern media workflows
Passing through Frankfurt airport last week, I was reminded of the chaos at Amsterdam Schiphol airport when returning from IBC earlier this year. Like many airports, Frankfurt and Schiphol have replaced friendly-faced check-in clerks with automated check-in and bag drop. As visitors returning from the conference and exhibitions queued up to use the shiny new automated bag drop, what started as friendly chatter about the previous five days’ events turned into increasingly vocal protests about the delays the new system was causing. The delays were largely caused by bags that slightly exceeded the weight or size limits, or were simply the wrong shape to fit the uniform dimensions of the drop-off – problems that a small amount of human judgment would have easily resolved. Eventually, a large team of KLM staff was dispatched to the scene to calm the mounting insurrection, reduce the growing delays and ensure people caught their flights.

Workflow automation does not always increase efficiency and throughput
It seems mad that a system billed as expediting the check-in process for customers and reducing costs for the airline actually had the opposite effect – but we are in danger of doing something very similar in the media industry. From the airline’s perspective, the process of checking in a passenger and their baggage is actually very similar to the process of ingesting media. Before online check-in and automated bag drops, a check-in clerk would have verified a passenger’s ID, issued their boarding pass, asked the appropriate security questions, and weighed and checked their baggage.

Can we replace men with machines in media workflows?
In a traditional ingest scenario, we would have taken a tape, placed it in a VTR, visually verified the content and checked that it was successfully written to disk.
Whether or not QC was formally a part of ingest, a human operator was likely to be interacting in some way with the media and able to apply judgment as to whether there was any issue with it. With automation in media systems as advanced as it is, it is now possible to pass media through a workflow without a human ever viewing it end-to-end. Much like in an airport, if everything about the passengers and their baggage is within the defined constraints, the process is quick and efficient – issues only arise when there is an exception: when the passenger’s bag is a kilo overweight, or the media file fails an automated QC check.

Combining automation with a human touch
The challenge we face in the media industry, as file-based delivery increases and SDI disappears, is how to handle these exceptions in the workflow in a fast and effective way, combining automation with the human touch to ensure the quality of our output. To do this, we need to unify manual and automated QC through a single interface that enables users both to make judgments on automated measurements and to add commentary to QC reports. Taking this approach ensures that media “failed” by automated QC can quickly move on (or back) in the workflow, and where an error has been “over-ruled” by a human, the certificate of trust can follow the content. Once trusted, the media should pass through the rest of the workflow without issue before flying off into the sunset.

At AmberFin, we have learned that while automation is good, there is still an important place for human intervention in media workflows. I can’t help wondering how long it will take – and how many travelers’ journeys will be affected – before the airlines come to the same conclusion. If you would like to learn more about AmberFin’s approach to enterprise-class workflow automation, please get in touch. I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
Three steps to QC heaven – Practical hints to fix file-based QC errors
As previously written about on this blog, automated Quality Control (QC) within file-based production facilities has been regarded as a key issue for a number of years. Back in 2010, the EBU recognized QC as a key topic for the media industry and has subsequently stated that manual quality control processes are simply not adequate anymore and do not scale. So, you could be forgiven for thinking that this would have heralded a boom period for QC tool manufacturers. However, if you look at this market more carefully, that prediction does not appear accurate. Following impressive launches and demonstrations at NAB and IBC in 2006, the potential savings in op-ex and gains in efficiency that automated QC tools offered grabbed the attention of budgeting and planning teams in media facilities worldwide. But nearly eight years later, and despite some really significant advances in the functionality, accuracy and performance of these tools, many of the automated QC tools bought and installed sadly lie dormant or, at best, under-utilized. The most frequently given reason is simply that the systems generated so many errors across so many metrics that it was nearly impossible for a piece of media to pass.

At AmberFin, we hate waste and love efficiency, so here are three simple steps to fix QC errors and make the best use of your automated QC.

1. Turn off the QC tests. No, really! Perhaps not all of them, but work out which ones are actually going to identify real problems downstream in the workflow or presentation of the media, and turn off the remainder. Just last week we were talking to a customer whose every piece of media was failing QC due to audio peak levels. Clearly, there could have been an issue here, but the previous step in the workflow was to normalize the audio to meet EBU R128 loudness specifications, which it did – so the peak-level errors were not only spurious, but the test itself unnecessary.

2. Visualize it!
If you take the event data generated by an automated QC tool and present it in a clear, interactive way, it becomes much quicker and easier for operators to make sound judgments and distinguish real errors from marginal issues or “false positives” / “false negatives”. This is why AmberFin created UQC and uses it to validate our own ingest and transcode tools in iCR. The timeline gives a clear view of any problems detected and, alongside video and audio playback, makes it considerably faster and more efficient to identify genuine problems.

3. QC the workflow. Toyota gained a reputation for building high-quality cars at a low price. Their QC process did not involve a single gigantic QC operation at the end of the production line. Instead, they implemented a production system where the processes themselves were checked – the theory being that if you start with the right input and have the right processes, then the output will also be right. We can implement the same idea in media workflows by identifying issues introduced in the workflow and fixing the workflow, rather than fixing individual items of media. This should, in turn, reduce the number of error events reported by automated QC tools and further increase efficiency.

Don’t let your automated QC tool sit idle! If you have an automated QC tool sitting idle and unloved, why not try these three easy steps to get closer to those promised savings and gains. If you are still trying to get your head around this important issue, you can learn a great deal if you download AmberFin’s QC white paper – Unified Quality Control from AmberFin.
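Step 1 above lends itself to a simple sketch: given a map of which workflow steps make which QC tests redundant (our own hypothetical rule set, not an AmberFin product feature), the active test list can be derived automatically:

```python
def active_tests(all_tests, workflow_steps, redundant_given):
    """Return the QC tests still worth running for a given workflow.

    redundant_given maps a workflow step to the set of tests it makes
    unnecessary - e.g. an R128 loudness normalizer upstream makes a
    peak-level test redundant, as in the customer example above."""
    suppressed = set()
    for step in workflow_steps:
        suppressed |= redundant_given.get(step, set())
    # Preserve the original test ordering, minus the suppressed ones
    return [t for t in all_tests if t not in suppressed]
```

Encoding the rules this way also documents *why* each test is off, so the decision can be revisited when the workflow changes.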
The sidecar is dead – long live the sidecar
The DPP (Digital Production Partnership) has “no intention of taking over Europe”, let alone the world; however, that has not stopped the world looking on with great interest and, in most cases, great admiration. Models established for the UK media industry by the DPP will undoubtedly be adopted across the globe, and that makes the announcements made during the DPP event at IBC highly significant. Notable among the host of updates to the file delivery specification – which will be the preferred method of delivery to UK broadcasters from 1st October 2014 – is, perhaps controversially, the deprecation of the XML sidecar.

The DPP Technical Standards for file delivery, and the AMWA AS-11 specification on which they are based, specify that the descriptive metadata shall be stored within the MXF media file. Previous versions of the DPP specification also required an XML sidecar carrying the same descriptive metadata, resulting in a duplication of the metadata. Removing the requirement for the XML sidecar greatly simplifies management and manipulation of the media, as the descriptive metadata is no longer stored in multiple locations. A single storage location for the metadata facilitates easier interchange and interoperability and reduces the risk of erroneous or incomplete metadata.

However, many file-based delivery operations have become dependent on XML sidecars to ‘register’ the receipt of media. This sidecar-driven registration of the media file is unlikely to go away for some time, but the inclusion of the DPP metadata within the media file itself means that the sidecar can become focused on transactional and operational metadata, e.g. QC (quality control) data, which has inherent value equal to that of descriptive metadata (in some cases having a direct relationship to revenue) but is of a much more transient nature. The perpetual nature of descriptive metadata means that its natural home is within the media file.
Until such time as an infrastructure exists for the exchange of transactional metadata associated with the transfer of media files between facilities, the only practical home for this data is in a sidecar. For now at least, the sidecar lives on!
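To make the idea concrete, here is a small illustrative sketch of a sidecar that carries only transactional QC data, leaving descriptive metadata inside the media file itself. The element names are invented for illustration and are not part of any DPP or AMWA schema:

```python
import xml.etree.ElementTree as ET

def transactional_sidecar(media_filename, qc_events):
    """Build a sidecar carrying only transient transactional/QC data.

    Descriptive metadata stays inside the MXF file, so the sidecar
    holds just the per-transfer facts: which file, and what the QC
    process found on this occasion."""
    root = ET.Element("sidecar", media=media_filename)
    qc = ET.SubElement(root, "qc")
    for event in qc_events:
        ET.SubElement(qc, "event",
                      test=event["test"], result=event["result"])
    return ET.tostring(root, encoding="unicode")
```

Because nothing in this sidecar duplicates the embedded descriptive metadata, there is no second copy to drift out of sync – the core argument of the post above.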
Five Things that you Need to Know about DPP
When organizations use the term ‘revolutionary’ to describe a concept, I find that it's normally a pretty good cue to turn off and move on to the next thing. Usually, puffed-up descriptions conceal a flaky or fundamentally compromised proposition. The first thing I can tell you about the Digital Production Partnership (DPP) is that it is not snake oil.

Efficient & cost-effective
DPP is a real-life platform that offers broadcasters and facilities of all types the potential to revolutionize their production workflows. It enables organizations to adopt digital file-based workflows in ways that are both efficient and cost-effective, and to adopt file-based technologies for intra- and inter-company media transfers. It finally consigns the ‘sneaker-net’ to the rubbish heap of history.

An industry-funded initiative
So, the next big question is: who is behind DPP? Is it the brainchild of some multi-national corporation, developed to encourage broadcasters to buy more kit? No, just the opposite. DPP is an initiative formed by the UK’s public service broadcasters to help producers and broadcasters maximise the benefits of digital production. The partnership is funded by the BBC, ITV and Channel 4, with representation from Channel 5, Sky, S4C and the independent sector on its working groups. DPP draws on industry experts from the worlds of technology and broadcast production to help fulfil its remit.

Building on the success of the Material Exchange Format (MXF)
Today, DPP is unique. It has taken all the hard work that went into creating the SMPTE MXF specification nearly 10 years ago and developed a set of Application Specifications for the UK industry that transform this technical standard into a real-world business platform. Looking at it from an international perspective, DPP is the first of these Application Specifications to receive national-scale adoption. I'm pretty certain it won’t be the last.

DPP is already a winner
DPP has been successful in establishing a road map for digital production in the UK. It provides a framework that enables the UK industry to come together and share best practice in digital production, helping producers and broadcasters maximize the potential of the digital revolution. It also leads the standardization of technical and metadata requirements within the UK, helping to ensure digital video content can be easily and cost-effectively distributed to audiences via multiple platforms.

Strong vendor support
DPP is supported by many of the leading broadcast technology vendors. At a recent DPP Vendor Day, I counted 13 manufacturers present in the room – all co-operating to develop a harmonized digital file-based working environment. At AmberFin, we’re proud to say that we are at the leading edge of this cross-industry co-operation. Already, we have introduced a family of new DPP-compliant media ingest, transcode, playback and quality control products that will, for the first time, provide broadcasters and content owners with efficient, targeted and cost-effective production tools.

At AmberFin, we whole-heartedly support the DPP initiative here in the UK. We believe it has the ability to transform the UK broadcast industry. Furthermore, we believe it provides a blueprint that could easily be adopted in many other international markets. If you're not up to speed on DPP, then I recommend having a good read of our white paper and then checking out the DPP website (the URL is in the white paper).
Why do we all need Broadcast IT Training?
When I'm standing in front of 100 engineers all expecting words of wisdom from me, it gives me a few moments to reflect on the fact that there are many individuals in the audience who know a LOT more than me about the subjects I am about to deliver. In fact, it's remarkable that most lectures I give are to people who already have a lot of knowledge! The most recent lecture series was delivered on behalf of SMPTE for the SMPTE regional seminars on the topic of file-based working. We covered a large range of topics, including:

* video basics
* file basics
* transfer basics
* database and identification basics
* how to glue workflows together
* how to optimise transcodes

Everyone in the audience learned something, and EVERY INSTRUCTOR learned something as a result of the Q&A sessions. In many ways it was disappointing that there were only 100 engineers listening. It was obvious from the audience that the information covered was vital to the running of their media businesses. In a world where the business rules are constantly changing and we need to use technology to keep our businesses running, the most valuable resource in a media company is still the people. The people who understand VIDEO and AUDIO and METADATA and STORAGE and NETWORKS and DATABASES and SYSTEMS ADMIN and the BUSINESS are like gold dust, and command both respect and decent salaries. It came as a surprise, therefore, that one of the SMPTE seminars had to be cancelled because only 9 people had registered. I learn a lot by sharing my knowledge with others, and I often feel that I need a bigger brain to hold all the facts inside. I hope you gain knowledge, and maybe a little wealth, from the knowledge shared in the AmberFin Academy. Great to have you on board!
How to Maximize your time in Sports Bars and Airport Lounges with Closed Captions
If, like me, you spend far more time in airports than is good for you, then you will be familiar with the television sets dotted around the lounges, largely silent but with the subtitles or closed captions on. Usually tuned to a news program, the captions become hypnotic, and you cannot help but read them. I’m told that the same thing also happens in sports bars, but obviously I have far less practical knowledge of such establishments myself. It seems that the mere appearance of the words forces you to read.

This phenomenon was first formally observed by Brij Kothari, an Indian then studying at Cornell in the States. He was trying to learn Spanish, but the local cinemas that showed films from Spain put English subtitles on them, which made it much harder to hear the original language. He realized that if the films had Spanish captions it would be much easier to learn the language, with the written script reinforcing the sound of the spoken words. “Then it occurred to me that if all Indian television programming in Hindi was subtitled in Hindi, India would become literate faster,” recalled Kothari, now a professor at the Indian Institute of Management.

Today one of the most popular programs on Indian television is the Sunday night sing-along: Bollywood hits with same-language subtitles. Not only do people read, listen, sing and learn, but children copy the lyrics down so they can sing them with their friends later. This karaoke-for-literacy effort reaches 200 million viewers a week. In the last nine years, functional literacy in the areas covered has more than doubled. A researcher focusing on one particular town found that newspaper reading has risen by more than 50%, so the population is better informed. Women are now able to read bus timetables, so social mobility is boosted. Literacy is liberating in so many ways. Now here is the killer message: this does not only work in developing countries.
Research in the USA by Nielsen’s ORG Center for Social Research found that same-language subtitling doubles the number of functional readers among primary school children. Across the developed world there is a huge number of adults who, while not illiterate, cannot read fluently. According to the World Literacy Foundation, one in five adults in the UK struggles with basic reading. If they do not feel they can pick up a newspaper, or read a bus timetable, they cannot take a full role in society. I’m not suggesting that same-language subtitling of every MTV broadcast is the complete solution. But it looks like it would help, and with today’s technology it is a low-cost win. Literate viewers are more responsive to advertising, too, so there are potential returns. So next time you find yourself in an airport, or even a bar, remember that same-language subtitling is not just for those who cannot hear the words, for whatever reason. It could – and should – be changing people’s lives.
Taking MXF Interoperability to the next level
Next week, in a corner of the Bayerischer Rundfunk campus in Munich, Germany, likely without much fanfare, something fairly monumental will take place – the IRT MXF PlugFest. Now in its ninth year, this event brings together vendors in the media and entertainment industry to facilitate MXF interoperability tests. Following each event, the IRT (Institut für Rundfunktechnik) publishes a report on the levels of overall interoperability, standards compliance, decoder robustness, and the common errors and interoperability issues – you can download the previous reports here. All of the previous eight reports make interesting reading (particularly if read in order), but none has been more greatly anticipated than the report due from this ninth PlugFest. What then, you may ask, makes this year’s event so special that we would dedicate a whole blog post to a relatively small, vendor-only event in Bavaria?

The UK DPP (Digital Production Partnership) has been closely watched by a number of industry organizations and groups, particularly with regard to the file specification it has published, based on AMWA AS-11, for the delivery and interchange of media files. This specification aims to end the headache of media file interoperability at the point of delivery for broadcasters and media facilities across the UK and Ireland. While the issue of file compatibility is not unique to the UK, unique challenges in the German-speaking media community have dictated a slightly different approach to the creation of a standardized interchange format. The ARD group, the Association of Public Broadcasting Corporations in the Federal Republic of Germany, is made up of 10 member broadcasters, covering regional, national and international distribution, who have the capability to exchange media at almost any point in any workflow, including news, production and archive.
In July this year, together with ZDF (in English: the Second German Television), and with support from other German-language public and private broadcasters, the ARD published two new MXF-based media file-format “profiles.” At this point, you would be forgiven for asking, “Do we really need another specification/standard?” In fact, the two profiles, named HDF01 and HDF02, are not too dissimilar to the AMWA Application Specifications AS-10 and AS-11. What makes the ARD-ZDF MXF profiles different is that they describe not only what the output of an MXF encoder should look like, but also the tolerances and behavior of MXF decoders. For example, MXF files compliant with the profiles shall not contain any ancillary data tracks (commonly used for the carriage of subtitles or transitory audio and aspect-ratio metadata), but to ensure interoperability, decoders are required to be tolerant of any ancillary data tracks that may be present.

Specifying not only encoder but also decoder behavior will have a massive benefit for interoperability, particularly when deploying and testing systems. Many of the properties specified in the profiles are low-level elements that frequently cause interoperability problems, requiring lengthy discussions between multiple vendors, users and integrators to resolve. Constrained encoding profiles ensure that “problematic” files can quickly be analyzed and “non-compliant” elements identified. But without additional decoder requirements, applying these constraints can introduce as many challenges as it removes, with little or no consideration for legacy assets and no flexibility to find quick, short-term resolutions to interoperability issues in a workflow.
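The “strict encoder, tolerant decoder” principle described above can be sketched in a few lines. This is a purely illustrative sketch with simplified, hypothetical track labels – the real HDF01/HDF02 profiles constrain many more low-level MXF properties:

```python
# Illustrative sketch of the "strict encoder, tolerant decoder" idea behind
# the ARD-ZDF profiles. Track names and rules here are assumptions for the
# example, not the actual HDF01/HDF02 specification.

def validate_encoder_output(tracks):
    """Encoder side: a compliant file must not contain ancillary data tracks."""
    return all(t != "ancillary_data" for t in tracks)

def decode(tracks):
    """Decoder side: tolerate (skip) ancillary data tracks rather than
    rejecting the whole file, so legacy assets remain playable."""
    return [t for t in tracks if t != "ancillary_data"]

# A legacy asset that carries subtitles in an ancillary data track:
legacy_file = ["video", "audio", "ancillary_data"]

print(validate_encoder_output(legacy_file))  # False: an encoder must not produce this...
print(decode(legacy_file))                   # ...but a decoder still plays the rest
```

Pairing the two rules is what makes deployments testable: a non-compliant file is easy to flag at the encoder, yet it never brings a downstream decoder to a halt.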
Dalet is proud to have been one of the very first vendors to have a product certified by AMWA for the creation of files compliant with the UK DPP delivery specification, and is equally pleased to be going into the first IRT MXF PlugFest since the publication of the HDF01 and HDF02 ARD-ZDF MXF profiles as one of the first few vendors to fully support the new profiles. The event next week will set the baseline for a new era in media file interoperability and, while reading the historic MXF PlugFest reports is interesting, I personally cannot wait to see what I expect to be the biggest change yet, between the reports for next week’s ninth event and 2015’s tenth.
Change Management: 4 Things to Consider When Implementing MAM
We’ve already discussed MAM’s role as an enabler in broadcast operations change management (here) and the subsequent areas where MAM can drive change. But none of these benefits will be realized if the staff who will be using the system day in and day out have not understood and bought in to what is an inevitable disruption to their normal practices. When production is ongoing, facilities are huge, and companies comprise hundreds or sometimes even thousands of employees, what must media organizations consider when it comes to change management?

Explain what a MAM is

Sounds simple, but don’t assume that everyone understands what media asset management means. Most people will have heard of the concept, many will have experienced it, and not all of these experiences will have been positive. Set a solid foundation for the impact on your staff’s routines by emphasizing the value of the change to them. Remove the technical complexity and explain how new workflows help them and their teammates do their jobs more quickly and more easily, allowing more time for creativity and, ultimately, better programming of which everyone can be proud.

Start simple

A MAM can be as simple or as complex as you wish, but nothing will deter people from using a system more than if it appears complicated from the get-go. Get users to buy into the process by handing them control. Work alongside the client so that they can manage the documentation of asset management forms at their own pace. Demonstrate how linking fields and glossaries can be especially helpful in adding richness to the MAM’s search functionality without over-complicating the process. But most important to this process is starting: the longer you wait to implement change, the more difficult it will be down the road; conversely, the more complicated the configuration you start with, the more difficult it will be to streamline things later on.
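The value of linking form fields to glossaries can be shown with a tiny sketch. The field names and glossary values below are hypothetical, not drawn from any particular MAM product; the point is that a controlled vocabulary keeps data entry simple while making search reliable:

```python
# Illustrative sketch: a glossary-linked form field enriches MAM search
# without complicating data entry. All names here are hypothetical.

GENRE_GLOSSARY = {"news", "sport", "drama", "documentary"}

def validate_entry(form):
    """Reject free-text values outside the controlled vocabulary, so every
    asset is tagged consistently and stays findable later."""
    errors = []
    if form.get("genre") not in GENRE_GLOSSARY:
        errors.append(f"genre must be one of {sorted(GENRE_GLOSSARY)}")
    if not form.get("title"):
        errors.append("title is required")
    return errors

def search_by_genre(catalog, genre):
    """Because every genre value comes from one glossary, a single
    exact-match query finds all relevant assets -- no fuzzy matching."""
    return [asset for asset in catalog if asset["genre"] == genre]

catalog = [
    {"title": "Evening Bulletin", "genre": "news"},
    {"title": "Cup Final Highlights", "genre": "sport"},
]
print(validate_entry({"title": "Evening Bulletin", "genre": "news"}))  # []
print(search_by_genre(catalog, "sport"))
```

A half-completed form (‘garbage in’) is caught at entry time rather than surfacing months later as an asset nobody can find.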
Train superusers

Train your facility’s key superusers in the creation, management and modification of asset management forms, and empower them, in turn, to champion the benefits of the MAM and to share their knowledge and enthusiasm with colleagues. Don’t treat training as a top-down tutorial, either. It is only natural to resist change, especially if it is seen as directed from above. Break down those barriers by involving the production teams fully in all decisions and by listening to their concerns. Find your champions and superusers at all levels of the organization.

Show them the future

In certain environments it can be difficult to convince users of the value of asset management forms, since they feel that filling them out is a waste of time. Yet it is very important that forms are completed accurately and comprehensively. Show them the future: communicate the value of MAM for tracking, finding and sharing content from the past. Illustrate with ‘garbage in, garbage out’ examples the difficulties a facility can experience if forms are half completed, or not completed at all. Emphasize that the benefits of MAM accrue to everyone and will only increase in value with consistent data entry. What looks like a lot of tedious work now, or, even worse, a waste of time, will amount to a more seamless, integrated and content-rich media repository in the future.

Implementing a MAM is not a one-and-done process. It takes time, and while the benefits are immense, the installation can be somewhat disruptive. But by taking all of the above into consideration, you can lessen the impact while also creating some excitement around the change. After all, isn’t a unified and intuitive workflow the stuff dreams are made of in the broadcast industry? Can’t we all just live in a world where our assets are linked and organized logically, where our media organizations operate with speed and efficiency? Admit it, I know you’re with me on this one.
If you’ve recently been through, or are considering, a major change in your organization, particularly with regard to MAM, what are some of the challenges you’ve encountered? How do you deal with change management? Let’s talk! Email us to talk change management and MAM, vent your frustrations, and more.
4K, HDR & UHD: A Look at CES Trends Impacting the Media Industry
Last week, the famous Consumer Electronics Show (better known as CES) took place in Las Vegas. While it’s not an event that many vendors like ourselves in the media industry attend, it is a show that we watch carefully – acknowledging that our customers’ business is consumer-led, and that innovation in consumer electronics will often drive our customers’ future needs. Of course, the big highlight from CES this year was wearable technology for pets (or, if you’re my mother-in-law, the new automated sewing machine from Brother). But amongst the wearable, virtual reality and Internet of Things technology making the CES headlines, there are one or two trends that will undoubtedly carry over into conversations back in Las Vegas, in April, at NAB.

Although there were incremental advancements in 4K technology and content availability/distribution, HDR dominated the CES announcements in video. As Bruce pointed out in his post-IBC blog, High Dynamic Range (explained rather wonderfully here by David Wood) is not new, but with providers like Netflix committed to bringing HDR to consumers, and even stating that HDR will make a greater difference to viewers than more pixels, it’s pretty clear that this trend is going to gain traction with consumers.

Now for that bit from Netflix. Chief product officer Neil Hunt told The Telegraph, “With 4K, there are enough pixels on the screen that your eyeball can’t really perceive any more detail, so now the quest for more realism turns into, can we put better pixels on the screen? […] I think that’s actually a more important quality improvement to get to the brightness and detail in the picture than the 4K is by itself.” It’s no coincidence that while Netflix is making these statements, other players are launching rival services and following suit, not only responding to the trend of “cord-cutting” but looking at ways to deliver UHD (Ultra High Definition) content and expand markets.
In fact, some of the industry’s biggest players are teaming up to make UHD content (which will soon incorporate HDR) transparent to consumers while establishing open and flexible standards that will make the oft-talked-about ideals and features of UHD a consistent and sustainable reality. Dubbed the UHD Alliance, founding members include Technicolor, DirecTV, Dolby, Netflix, Panasonic Corporation, Samsung Electronics Co., Ltd., Sharp Corporation, Sony Visual Product Inc., The Walt Disney Company, Twentieth Century Fox and Warner Bros. Entertainment. (Learn more about the UHD Alliance in the official press release from Technicolor.)

Which brings us to standards. For content producers, owners and distributors, new distribution methods, formats and expanding markets can only mean one thing: more versions. Looking ahead to NAB, this is the common theme that will run through announcements and demonstrations at the show, resulting in:

- Enhancements to encoders to create versions faster, with higher quality and lower bit rates – relating to new codecs, new technologies for encoding or simply new operating points
- Wider adoption and implementation of international and local standards for multi-version creation and file-based media delivery, such as the IMF or UK DPP formats
- Tools and workflows for smart management of multi-version media assets

Like CES, it’s possible that for the media industry at NAB 2015 we may not see any groundbreaking innovation – but I’m sure we’re going to see vendors like ourselves using technology to improve the business of creating, managing and distributing content.
Altering the economics for channel operations
As consumer demand for video increases, the range of delivery models continues to expand. The challenge facing today’s broadcasters and service providers is to select technology that can keep pace with this explosion in demand, while anticipating future channel delivery infrastructures and behavioral trends. Compute and storage clouds promise to supplement and ultimately replace today’s media factories, but for these implementations to be truly beneficial, the applications and solutions they deploy must be designed to offer maximum flexibility and control.

A file-based media services workflow will typically involve receiving very large files from a variety of sources, often at unpredictable times and in uneven volumes. The operations to be performed on those files also vary, from customer to customer and file to file. This is a work pattern that is particularly suited to a highly scalable, highly flexible, programmable infrastructure.

Unique among available server architectures, Dalet Brio combines an IT-based video server with a set of workflow tools that make it extremely easy to integrate into SDI environments. The fully IT-based open technology that runs on Dalet Brio makes it an enterprise-class server, delivering failsafe security and reliability with built-in redundancy but without the locked-off inflexibility of proprietary equipment. Its high-density, modular design enables Brio to accommodate as many input and output ports as required, and it is massively scalable to meet growth. Units can work with their own local storage, directly attached to a SAN, or in a hybrid configuration. The open design affords easy integration of multiple boxes on one shared storage environment, ready for the needs of tomorrow. What’s more, operators can buy Brio as a turnkey video server or as a kit, with video cards and timecode cards sourced according to particular IT infrastructure needs.
A rich set of apps built around Dalet Brio, from confidence monitoring to graphics insertion, makes it easier to ingest and play back large quantities of very large files. Common compute resources can perform an auto QC at one moment and then be re-allocated to perform a transcode at another, allowing operators to achieve much greater efficiency from their infrastructure. The flexibility of a software-based open IT server extends to codec support: Dalet Brio supports a very wide range of software codecs and can play any supported files, including a mix of SD and HD, on the same timeline. It allows for on-the-fly cross-, up- and down-conversion of the video signal, as well as aspect-ratio modifications.

Over the next few years, operators will increasingly look to harness the flexibility and power of open IT technology, offering very high levels of channel resilience and interoperability, with the understanding that non-proprietary systems can significantly alter the economic basis for channel creation and operations. To learn more, see the Dalet Brio brochure for full information.