
Jun 12, 2015
FIMS: A Plug-and-Play Solution
FIMS, the Framework for Interoperable Media Services, enables a plug-and-play ecosystem that allows broadcasters and media companies to create customized workflows without costly custom integrations. Learn more about the challenges and benefits of FIMS.


Dalet CTO and world FIMS expert, Stephane Guez

With so many systems with proprietary interfaces in existence, IT-based media projects these days require a great deal of integration. At Dalet alone, our applications must integrate with more than a hundred different third-party systems. With customers around the world using customized workflows and any number of different tools and solutions, it's our job to make sure our own platforms can operate seamlessly within their setup. All of this custom integration work, however, makes solutions very complex and costly to deploy and maintain.

Enter: FIMS.

FIMS is the Framework for Interoperable Media Services. The framework simplifies the integration problem through better and stronger standards, which means vendors no longer need to build a custom integration for every single installation. Part of the deal involves exposing applications through standard interfaces. These can be simple, like the transfer service, or more complex, such as the repository interface, which has many operations and options. In addition to these standard interfaces, FIMS employs a data model – a common representation for media assets – which incorporates metadata standards (such as EBUCore) that were developed separately.
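To make the idea concrete, here is a rough sketch of what calling a FIMS-style transfer service boils down to. The endpoint, field names and payload below are simplified placeholders for illustration – they are not the normative FIMS schema:

```python
import requests  # third-party HTTP library

# A FIMS-style transfer request, reduced to its essentials.
TRANSFER_SERVICE = "http://mam.example.com/transferservice/jobs"  # hypothetical endpoint

job = {
    "jobType": "transfer",
    "priority": "medium",
    "source": "smb://ingest/incoming/interview_0612.mxf",
    "destination": "smb://archive/2015/06/",
    "notifyAt": "http://caller.example.com/callbacks/jobs",  # async completion callback
}

# Because the interface is standard, the same request shape works
# whichever vendor's transfer engine sits behind the endpoint.
response = requests.post(TRANSFER_SERVICE, json=job, timeout=10)
response.raise_for_status()
print("Job accepted:", response.json().get("jobId"))  # "jobId" is illustrative
```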
 
The idea behind the set of standard interfaces and data model is to enable a Service-Oriented Architecture (SOA). By exposing media applications as services and allowing for a flexible architecture, we can leverage standard IT technologies and enable customers to build best-of-breed solutions. As a result, we create an ecosystem of standard interfaces that simplify the design, building and deployment of systems, as well as their maintenance over time. And because the architecture is flexible, exposing your own system through FIMS, or integrating another tool through a FIMS interface, does not require a complete architectural change.
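In code terms, designing to a service rather than to a product looks something like the following minimal sketch. The class names are invented for illustration – this shows the design pattern, not Dalet's implementation. Note how a new tool, say a transfer accelerator, becomes a drop-in swap:

```python
from abc import ABC, abstractmethod

class TransferService(ABC):
    """The service contract: workflow code depends on this, never on a vendor API."""

    @abstractmethod
    def transfer(self, source: str, destination: str) -> str:
        """Move an asset and return a job identifier."""

class FtpTransfer(TransferService):
    def transfer(self, source, destination):
        # ... plain FTP move, elided ...
        return "ftp-job-42"

class AcceleratedTransfer(TransferService):
    """A new transfer accelerator, adopted without touching the callers."""
    def transfer(self, source, destination):
        # ... UDP-accelerated move, elided ...
        return "accel-job-42"

def archive_asset(svc: TransferService, asset: str) -> str:
    # Workflow code is written against the interface only.
    return svc.transfer(asset, "smb://archive/2015/06/")

print(archive_asset(FtpTransfer(), "clip.mxf"))
print(archive_asset(AcceleratedTransfer(), "clip.mxf"))  # drop-in swap
```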
 
For vendors, this means we can build more elaborate integrations at a much lower cost. And because we reduce the number of custom interfaces, the cost to upgrade any given system is also reduced. What's more, vendors can offer customers more benefit through improved core applications, as – ideally – the time and money saved by not developing custom integrations can be reallocated towards developing media-specific applications.
 
From the media and broadcast company perspective – because let’s face it, it’s all about the customer at the end of the day – FIMS enables much better tracking and task management, as well as the ability to evolve seamlessly over time. For example, if you want to take advantage of a new transfer accelerator without needing to develop an elaborate custom interface, FIMS provides the framework to facilitate this. With new and improved technology being made available all the time, being able to readily integrate new solutions gives broadcasters a huge advantage.
 
So – why aren't all systems FIMS-compliant? FIMS is an ongoing effort and, as such, is not without its challenges. We work in an industry that is undergoing constant change, which makes this effort a moving target. Companies have to agree to build on the standards, meaning that they must accept the limits those standards impose. If you were thinking that FIMS sounded too good to be true, you may be right; a FIMS-compliant system does have its trade-offs. Alongside the standardization, simplicity of design and cost savings comes somewhat looser integration and performance. But we see the long-term benefits far outweighing these short-term issues.
 
In any case, for FIMS to fulfill its destiny as a plug-and-play solution for the broadcast and media industry, it's crucial that every actor in the ecosystem plays the game. By cultivating an ecosystem of applications that all play nicely together, broadcasters will be able to build best-of-breed solutions that can evolve over the years to come while saving money. Now in its sixth year of existence, FIMS continues to gain awareness slowly but surely. But when it comes to making this solution a widespread reality, it's in the hands of the broadcast and media companies to create the demand.
 
Want to know more about FIMS? Sign up now to receive our video presentation on what FIMS is, who benefits and why, direct to your inbox, as well as our FIMS White Paper, coming this summer.
YOU MAY ALSO LIKE...
CCW, SOA, FIMS and the King & Queen of the Media Industry
All-Star Panel Sessions at CCW 2014

The NAB-backed CCW held some impressive panels, and our own Stephane Guez (Dalet CTO) and Luc Comeau (Dalet Business Development Manager) participated in two of the show's hot topics.

MAM, It's All About Good Vocabulary – Luc Comeau, Senior Business Development Manager

The saying goes, "behind every great man, there is a greater woman." Within the panel – "Content Acquisition and Management Platform: A Service-Oriented Approach" – there was a lot of talk about content being king. In my view, then, metadata is his queen. Metadata gives you information that a MAM can capitalize on and allows you to build the workflow to enable your business vision. Done correctly, an enterprise MAM will give you visibility into the entire organization, allowing you to better orchestrate both the technical and human processes. Because at the end of the day, it's the visibility of the entire organization that allows you to make better decisions, like whether or not you need to make a change or adapt your infrastructure to accommodate new workflows.

In our session, the conversation very quickly headed towards the topic of interoperability. Your MAM must have a common language to interface with all the players. If it doesn't, you will spend an enormous amount of time translating so these players can work together. And if the need arises, and it usually does, to replace one component with another that speaks a foreign language, well then, you are back to square one. A common framework will ensure a smooth sequence through production and distribution. A common framework, perhaps, such as FIMS…

The One Thing Everyone Needs to Know About FIMS – Stephane Guez, Dalet CTO

I was invited by Janet Gardner, president of Perspective Media Group, Inc., to participate in the FIMS (Framework for Interoperable Media Services) conference panel she moderated at CCW 2014. The session featured Loic Barbou, chair of the FIMS Technical Board, Jacki Guerra, VP, Media Asset Services for A+E Networks, and Roman Mackiewicz, CIO Media Group at Bloomberg – two broadcasters that are deploying FIMS-compliant infrastructures. The aim of the session was to get the broadcasters' points of view on their usage of the FIMS standard.

The FIMS project was initiated to define standards that enable media systems to be built using a Service-Oriented Architecture (SOA). FIMS has enormous potential benefits for both media organizations and the vendors/manufacturers that supply them, defining common interfaces for archetypal media operations such as capture, transfer, transform, store and QC. Global standardization of these interfaces will enable us, as an industry, to respond more quickly and cost-effectively to innovation and the constantly evolving needs and demands of media consumers.

Having begun in December 2009, the FIMS project is about to enter its sixth year, but the immense scale of the task is abundantly clear, with the general opinion of the panelists being that we are at the beginning of a movement – still very much a work-in-progress with a lot of work ahead of us. One thing, however, was very clear from the discussion: broadcasters need to be the main driver for FIMS. In doing so, they will find there are challenges and trade-offs. FIMS cannot be adopted overnight. There are many existing, complex installations that rely on non-FIMS equipment. It will take some time before these systems can be converted to a FIMS-compliant infrastructure.
Along with the technology change, there is the need to evolve the culture. For many, FIMS will put IT at the center of their production. IT is a different world and skill set, and many organizations will need to adapt both their workforce and workflows to truly reap the advantages of FIMS.
What’s really going on in the industry?
My inbox is a confusing place before a trade show. I get sincere emails asking if I'm interested in a drone-mounted 3ME Production Switcher and familiar emails asking when I last considered networking my toaster and water cooler to save BIG on my IT infrastructure. The reality is that prior to a great trade show like IBC, I want to see a glimpse into the future; I want to know what's really on the radar in our industry, not what happened in the past, or some mumbo jumbo about unrealistic technological achievements. I am personally very lucky that I spend quality time with the folks who set the standards in SMPTE, because this is one place in the world where the future of the industry is hammered out detail by tiny detail until a picture of the future presents itself like some due process Rorschach test. With the permission of SMPTE's Standards Vice President Alan Lambshead, here's a little glimpse of some of those details that you'll get to see in the weeks, months and years to come.

UHDTV – Images

Ultra High Definition TV – it's more than just 4k pixels. In fact, SMPTE has published a number of standards, including ST 2036 (parameters) and ST 2084 (Perceptual Quantization High Dynamic Range), that define how the professional media community can create pictures that give consumers the WOW factor when they upgrade. But there's a lot more to come. How do we map all those pixels onto SDI, 3G SDI, 12G SDI, IP links and into files? SMPTE is actively looking at all those areas, as well as the ecosystem needed for High Dynamic Range production.
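As a taste of what these documents actually pin down: the ST 2084 PQ curve maps a normalised code value to an absolute luminance. Here it is in a few lines of Python – a sketch for illustration, using the constants published in the standard; consult ST 2084 itself for the normative definition:

```python
def pq_eotf(code_value: float) -> float:
    """SMPTE ST 2084 (PQ) EOTF: normalised 0-1 code value -> luminance in cd/m2."""
    m1 = 2610 / 16384        # ~0.1593
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    p = code_value ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

print(round(pq_eotf(1.0)))     # 10000 cd/m2 -- peak luminance
print(round(pq_eotf(0.5), 1))  # ~92.2 cd/m2 -- half the code range sits well below half the light
```

That steep, perceptually tuned allocation of code values is what lets 10 or 12 bits span 0.0001 to 10,000 cd/m2 without visible banding.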
Time Code

Oh, Time Code. How we love you. Possibly the most familiar and widely used of all SMPTE's standards, it needs some major updates to be able to cope with the proposals for higher frame rates and other UHDTV enhancements. Beyond Time Code, however, we have the prospect of synchronizing media with arbitrary sample rates over generic IP networks. SMPTE is working on ways of achieving just that, and it means that proprietary mechanisms won't be needed. That also means different vendors' kit should simply work!

IMF

I've written and lectured extensively about IMF's ability to help you manage and deploy multi-versioned content in an environment of standardized interoperability. As this toolset for a multi-platform ecosystem rolls out into the marketplace, the specifications are continually evolving with the developing needs of the market, as well as with the needs of individuals on the design team who influence the feature set.

UHDTV – Immersive Sound

I remember back in the 1980s at the BBC, when we proved that great sound improves the perceived quality of pictures. These fundamental principles never change, and the desire to create immersive audio-scapes through the use of many channels, objects or advanced sound fields requires standards to ensure that all the stakeholders in the value chain can move the audio from capture to consumption whilst creating the immersive experience we all strive for. SMPTE is the place where that future is being recorded today.

TTML

The humble caption file. Internationally, it is nearly always legal to broadcast black and silence, providing that it's captioned. There's really only one international format that can generate captions and subtitles without proprietary lock-in, and that's TTML. SMPTE is active in the use of TTML in the professional space and its constraints for IMF. Whether your view on captioning is good or bad, TTML is the only open show in town, and SMPTE's helping to write the script.

ProRes What?

Apple disclosing ProRes? Yes, it's true. As the world requires more interoperability and better visibility, the excellent folks at Apple have created a SMPTE Registered Disclosure Document describing the way that ProRes appears in files. One file format may not seem like a big deal, but the fact that SMPTE is the place where companies that are serious about working together write down the technical rules of engagement is exactly what makes SMPTE the perfect place to plot trajectories for the future. To quote one of my intellectual heroes, Niels Bohr, "Prediction is difficult, especially if it's about the future." SMPTE won't tell you the future, but by participating, you're more likely to spot the trajectories that will hit and those that will miss.

If any of these topics interest you, excite you or put you into an incandescent rage of "How could they!", then you can participate in 3 easy steps:

1. Join SMPTE
2. Add Standards membership from your My Account page on the SMPTE site
3. Register and turn up in Paris for the meetings on 16th Sept 2015

Until then, you can always check out more visions of the future on our blog or find out all about IMF on the Dalet Academy Webinar Replay on YouTube. Now, where's my drone-mounted Mochaccino maker? Until next time…
How to bring standards to your organisation
Back in the 1990s, I was told of an old maxim: "If you can't win the market place, win the standard." I thought that this was a cynical approach to standardisation until we looked through some examples of different markets where there are a small number of dominant players (e.g., CPUs for desktop PCs, GPU cards, tablet/smartphone OS) versus markets where there is enforced cooperation (Wi-Fi devices, network cabling, telephone equipment, USB connectivity). So, how does this affect technology in the media industry, and how can you use the power of standards in your organisation?

It seems that the media technology industry hasn't made its mind up about what's best. We have come from a history that is strong in standardisation (SDI, colour spaces, sampling grids, etc.), and this has created a TV and film environment where the interchange of live or streaming content works quite well, although maybe not as cheaply and cleanly as we would like. When the material is offline or file-based, there are many more options. Some of them are single-vendor dominant (like QuickTime), some are standards-led (like MXF), some are open source (Ogg, Theora) and others are proprietary (LXF, FLV).

Over any long timeframe, commercial strength beats technical strength. This guiding principle should help explain the dynamics of some of the choices made by organisations. Over the last 10 years, we have seen QuickTime chosen as an interchange format where short-term "I want it working and I want it now" decisions have been dominant. In other scenarios – as in the case of "I am generating thousands of assets a month and I want to still use them in six years' time when Apple decides that wearables are more important than tablets" – MXF is often the standard of choice.

Looking into the future, we can see that there are a number of disruptive technologies that could impact decision-making and dramatically change the economics of the media supply chain:

- IP transport (instead of SDI)
- High Dynamic Range (HDR) video
- 4k (or higher) resolution video
- Wide colour space video
- HEVC encoding for distribution
- High / mixed frame rate production
- Time Labelling as a replacement for timecode
- Specifications for managing workflows

Some of these are clearly cooperative markets where long-term commercial reality will be a major force in the final outcome (e.g., IP transport). Other technologies could go either way – you could imagine a dominant camera manufacturer "winning" the high / mixed frame rate production world with a sexy new sensor. Actually, I don't think this will happen because we are up against the laws of physics, but you never know – there are lots of clever people out there!

This leads us to the question of how you might get your organisation ahead of the game in these or other new technology areas. In some ways, being active in a new standard is quite simple – you just need to take part. This can be costly unless you focus on the right technology and standards body for your organisation. You can participate directly or hire a consultant to do this speciality work for you. Listening, learning and getting the inside track on new technology is simply a matter of turning up and taking notes. Guiding the standards and exerting influence requires a contributor who is skilled in the technology as well as the arts of politics and process. For this reason, there are a number of consultants who specialise in this tricky but commercially important area of our business.
Once you know "who" will participate, you also need to know "where" and "how." Different standards organisations have different specialties. The ITU will work on the underlying definition of colour primaries for Ultra High Definition, SMPTE will define how those media files are carried and transported, and MPEG will define how they are used during encoding for final delivery. Figuring out which standards body is best suited to the economic interests of your organisation requires a clear understanding of your organisation's economics and some vision about how exerting influence will improve those economics. Although a fun topic, it's a little outside today's scope!

So how do you bring standards to your organisation?

Step 1: Join in and listen
Step 2: Determine whether or not exerting influence is to your advantage
Step 3: Actively contribute
Step 4: Sit back and enjoy the fruits of your labour

For more on the topic, don't forget to listen to our webinars! Coming soon, I'll be talking about Business Process Management and standards – and why they matter. Until the next one...
Live from the NAB – sort of!
Normally during a big trade show, such as NAB or IBC, we would have a blog from the show floor – primarily to give our readers not attending the show a glimpse into the main topics of discussion and general vibe at the convention center. The sharp-eyed among you will, therefore, have noted that we're a little late on this one – sorry – the show was simply that busy that we never had the opportunity. To make it up to you, we've compiled show highlights from some of our Academy all-stars.

Ben: The one comment that has stuck with me from the show was, "It's nice to have a focus on technology after all the mergers and acquisitions of last year!" I don't think it's fair to say that the industry stopped innovating or releasing new products last year; it's simply that the news and talk at both last year's NAB and IBC was largely around the quantity and nature of all the M&A activity and, as a result, many key developments were overlooked. This year, not only did we start to see some of the benefits of the merging of disciplines and technologies, such as the combination of the Dalet Galaxy Workflow Engine and Dalet AmberFin transcoder, but also some significant steps forward in support of 4K/UHD workflows, IP and virtualization.

Kevin: With another busy NAB, I had very little time to walk the floor. But in meeting a lot of present and future customers and partners, I noted two key takeaways. It seems that our industry is getting past all the Cloud "buzz" and entering a time where there are actual professional applications for it. It feels like everyone is much more educated on the topic of Cloud. Broadcasters, media organizations and vendors alike better understand the challenges and opportunities that it brings from a business point of view, and how it can and should fit in their operations. I think we are finally in a position where we can start to use the cloud for smart workflows, and I was really happy with the warm reception for our various cloud initiatives, particularly the showcase of our "Newsroom in the Cloud."

Collaboration was another major highlight at the show. Everyone seemed highly interested in the topic. In Dalet systems, we have been implementing and promoting collaboration tools for many years, whether in the facility, across different locations or for users in the field. But this year, the interest and feedback we received about our latest improvements (like bringing some social collaboration tools into the professional world) was way beyond any response we'd gotten in the past. Having various talents collaborate to produce better content now seems to be a priority for our customers, and I'm happy we are in as good a position as ever to help them do it.

Bruce: Many discussions of how to ready a business for UHD – and whether that UHD would be higher resolution, higher frame rate, higher dynamic range, higher colour profiles or all of the above – led to discussions on IMF, the Interoperable Master Format. Personally, I find this to be excellent news. Seven years on from specifying AS02, it is reassuring to see it reborn with shiny SMPTE IMF specifications and a better understanding in the industry as to the commercial benefits of working with media in a componentised form. Seeing the level of understanding amongst our customers leads me to believe that the transition to IT thinking is now firmly in train. No longer is "IT-based" a technology that you buy; it is a way of architecting and thinking about the business problems to be solved.
Stephane: It is interesting to see that this industry continues to evolve rapidly year after year. Information technology is an integral part of the future of radio and television. In the early days of Dalet, I used to say informally that our mission was to bring the best of IT technology to the broadcast and media industry. This continues today with Cloud-based solutions, IP distribution, Service Oriented Architecture (SOA) and Business Process Management, and Dalet is at the forefront of that trend, bringing innovations that are changing and improving the workflows to produce and distribute content. Constant change indeed, but with the need to link the old and the new, whether in formats, protocols or workflows, to preserve valuable content produced in the past and make it available on an ever-increasing range of distribution platforms. These are factors of complexity: will IT help us resolve these challenges? At Dalet, we believe that emerging standards and industry initiatives such as IMF or FIMS should help reduce that complexity. The whole industry should take part in and benefit from these efforts.

Bruce summarizes: The big takeaway for me from both the show and this discussion is that there is no single dominant technological driver any more – there are a number that are pushing and pulling the industry in different directions. No single human can understand every nuance of the technological drivers, and so community education becomes more and more important. The great turnout that we had for all the free Dalet Academy presentations and workshops is a testament to the fact that our customers, partners, competitors and newcomers to the industry all need access to the latest information. I can't predict the future, but I can be confident that the breadth of our work here at Dalet is helping prepare a broad section of the industry to be ready for that future.
6 workflow tips on SOA, BPM and Web Services
You can't move today without someone mentioning workflow and the latest whizzy tool being thrust under your nose with a massive "Buy Me" ticket strapped to it. You'll also find that descriptions of these products are so different that it's hard to figure out a sensible way to compare them. Well, fear not! Here are 6 tips to help you navigate some of the vocabulary so that you can tell a well-engineered solution from a hyped-up marketing campaign.

Tip #1: Service Oriented Architecture (SOA) is a way of thinking

If you read a press release that says, "We've implemented Service Oriented Architecture...", then it probably means that the writer didn't understand the subject or they're bluffing. The Wikipedia entry for SOA starts with "... SOA is a design pattern..." In other words, it is a way of designing. It's not a technology, and it's not something you can buy in a shop. Specifically, SOA is a way of thinking about the problem that forces you to separate the service that is being delivered to you from how that service is performed.

Take a simple, but slightly silly, example. As a builder, I need to make holes in walls so that I can hang beams, pictures and other items. If I went to the shops and asked for a hole delivery service, the person behind the desk would probably laugh at me for a while and then sell me a drill. The reality is a bit strange: I don't want to own a drill, I don't really need a drill, but I do need a service that delivers precision holes to the right wall in the right shape at the right time. Today, I choose to buy a drill as my hole delivery service. Tomorrow – who knows? Maybe a team of students with high-powered lasers and a disregard for personal safety might offer a better service.

Tip #2: Your service boundaries might not line up with today's functional boundaries

From the early days of TV, we have been battling with physics. It's been a miracle to get moving images on the screen at the right time. In the past 8-10 years things have moved very quickly, but old equipment still defines the workflow boundaries in many organisations. A recent example of this is the DPP workflow in the UK. Before the file-based delivery specification, QC was performed both in the post house and at the broadcaster. By looking at the service being delivered by the post house, the broadcaster's thinking goes: "I am paying for a trusted file delivery service." Part of the trust element of the business relationship is to trust that the QC was done correctly upstream. As a result, a QC certificate now travels with the media from post house to broadcaster, making the overall supply chain more uniform and efficient. When designing with SOA principles, it's always important to look at the service that is being delivered and figure out what has to flow across that service boundary for that service to be classed as successful. Sometimes it will be media, sometimes it's metadata, sometimes it's just performance metrics and sometimes it's everything!

Tip #3: Business Process Management (BPM) is a methodology

If SOA is a way of thinking, then BPM is a way of doing. A process is simply a set of instructions or a method for accomplishing a task. If you are thinking in SOA terms, then a process will be delivering a service to you. For any given service (e.g., a transcode service), two different businesses might have completely different processes for achieving that service. This is because those processes will be designed to optimise something. In transcoding, you may want the maximum throughput or the minimum latency or the maximum quality or the minimum cost or some other optimisation. Each optimisation will define a process, and the BPM engine will usually be able to decide which process is best for a given service, provided you give it enough metadata (e.g., cost, throughput, latency or other metrics). A BPM methodology can deliver an SOA design. A toy illustration of that decision follows below.
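Here is that decision in miniature – a sketch with invented process names and numbers, not any particular BPM engine:

```python
# Per-process metadata the BPM engine can reason over (invented numbers).
PROCESSES = {
    "farm_transcode": {"cost": 8.0, "latency_min": 40, "quality": 0.95},
    "rush_transcode": {"cost": 20.0, "latency_min": 6, "quality": 0.90},
}

def choose_process(optimise_for: str) -> str:
    """Pick the process that best serves the requested optimisation."""
    if optimise_for == "quality":
        # Higher is better for quality...
        return max(PROCESSES, key=lambda p: PROCESSES[p]["quality"])
    # ...lower is better for cost and latency.
    key = {"cost": "cost", "latency": "latency_min"}[optimise_for]
    return min(PROCESSES, key=lambda p: PROCESSES[p][key])

print(choose_process("latency"))  # rush_transcode
print(choose_process("cost"))     # farm_transcode
```

The same transcode service is delivered either way; only the process behind it changes – which is exactly the separation SOA asks for.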
Tip #4: BPM has standards and terminology – such as BPMN

There are quite a few formal standards, de-facto standards and commonly used tools in the BPM world. Learning some of the terminology helps conversations with vendors and technologists. One key word is the orchestrator. This is the component of a BPM system that controls the dispatching of tasks. Just like the conductor in the middle of the orchestra, the orchestrator is responsible for checking that everything is happening at the right time with the right dependencies, that nothing is stuck and that there are no errors. If errors or problems occur, then special tasks are executed to handle them. The orchestrator does that handling too.

Another key piece of terminology is BPMN – Business Process Model & Notation. BPMN 2.0 is, in part, a graphical standard that allows you to draw your processes in a standardised way so that you can understand the diagrams in different systems. There are many systems out there today that use their own proprietary representations of workflows, and it's challenging to relearn the meanings of visually similar symbols that in fact have quite different underlying effects. BPMN gives a standard look as well as a standard XML representation of those elements. This means that both humans and machines now stand a chance of exchanging interoperable models.

Tip #5: Web services can be good, bad and ugly

If BPM is a methodology, then Web Services are the technologies that deliver the processes. There are three main Web Services technologies in use today: SOAP (Simple Object Access Protocol), RESTful (Representational State Transfer) and RPC (Remote Procedure Calls). Each of these technologies allows you to create a distributed system connected by IP (Internet Protocol) that behaves like a single system. There is not enough space here to describe the differences between all these systems (and the dozens of alternatives), but if you really want some training in this area then email acadamy@dalet.com and I'll put a webinar together, provided enough people ask.

Good web services tend to be stateless – in other words, you shouldn't have to remember previous results to interpret the current results. A simple example is this dialogue between two pretend servers:

Stateful conversation:
Server A – Hello, what's your name?
Server B – Mister
Server A – Hello, what's your name?
Server B – John
Server A – Hello, what's your name?
Server B – Smith
Server A – Hello, what's your name?
Server B – <End of Transmission>

Stateless conversation:
Server A – Hello, what's your name?
Server B – Mister John Smith
Server A – Hello, what's your name?
Server B – Mister John Smith
Server A – Hello, what's your name?
Server B – Mister John Smith
Server A – Hello, what's your name?

Sometimes you can't make communications stateless, but you can recognise a bad web service when statelessness is possible but has not been achieved.
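The same dialogue as a minimal Python sketch, with invented class names, just to make the contrast concrete:

```python
class StatefulNameService:
    """Each reply depends on the call history -- the caller must track state."""
    def __init__(self):
        self._parts = ["Mister", "John", "Smith"]
        self._calls = 0
    def whats_your_name(self) -> str:
        part = self._parts[self._calls]  # a fourth call fails: <End of Transmission>
        self._calls += 1
        return part

class StatelessNameService:
    """Every reply is complete on its own -- no history required."""
    def whats_your_name(self) -> str:
        return "Mister John Smith"

stateful = StatefulNameService()
print([stateful.whats_your_name() for _ in range(3)])  # ['Mister', 'John', 'Smith']

stateless = StatelessNameService()
print([stateless.whats_your_name() for _ in range(3)])  # same full answer, three times
```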
Designing good web services is difficult. Good ones are released and don't have to be changed for many, many years because they just work. Bad web services probably don't work at all, and ugly ones grow new features and versions in an inconsistent way and drive developers mad. I won't name any products, but if you know any developers, ask them to name their least favourite web service and then sit down and drink a lot of tea as they tell you why.

Tip #6: Web services – it's not religion

The various web service technologies can usually be interchanged to achieve the same overall results. By selecting different technologies, you are usually optimising a different facet of the process that you are trying to implement. Very often, products with which you integrate will come with a fixed web service interface that requires an adapter to convert the product's web service technology to the web service technology used by the BPM system. This is just the way life is, and it needs to be factored into the overall project. I know many people who get upset by the fact that there are many incompatible web service technologies in circulation. This is one of the prices we pay for the rapid development of internet technologies. You can optimise design speed, or the number of people who adopt the design on day one, or the cost of the design. Choose any two.

For more on these topics, come and see us at NAB! As Ben said in a previous post, if this year's NAB is a buzzword Bingo, we're sure to fill the board.
Broadcasting – all steam ahead. Wait. Which way is forward?
Which way is forward? IBC is over and so are the September SMPTE standards meetings, which means that I've spent the last 10 days continuously talking about the industry and the direction in which we're going. I thought that it would be interesting to share some of those thoughts – you are welcome to disagree! In fact, we welcome the discourse, so please feel free to share this with others to open up the conversation.

Three strange consequences of removing tape

Removing tape from the interchange of content between facilities has been progressing for the last 10 years, and not everything has been going smoothly. Readers of this blog will be familiar with my views on the excellent work done by the DPP on delivery specifications. The success of this initiative in the UK is causing other international groups to contact DPP enthusiasts, such as myself, to find out how they can replicate that success in their territory. The thing I tell people is that the success is due largely to those responsible for DPP management. They realized that the implementation of the delivery specification, and the subsequent impact on the workflow around it, depended solely on the humans in the chain and their willingness to accept change. If you want a unified delivery specification, it requires cooperation between different broadcasters, post houses and manufacturers within a region to find the compromise that works for that region. No single broadcaster or post house can do it in isolation.

This brings me to consequence number 1: removing tape to cut costs and increase efficiency requires competing production, post-production and broadcast companies to cooperate in areas that add no value to their businesses. Having a different delivery spec for every company adds no value either, so cooperation is required.

Consequence number 2: as soon as there are an economically significant number of users of a single file delivery specification, the workflow effect ripples upstream into file creation and downstream into file acceptance – and these effects have human consequences. Companies that get change management right will do better than those that just hope it will all work out. Reading some of the DPP's guides will help this process.

Consequence number 3: when a large group of people performs financial transactions against a common specification, the likelihood greatly increases that automated tests of that common specification will be required. This, in turn, increases the chances that the manufacturing community will create specific tests for that specification. These tests become standardised and increase the robustness of the specification. This is a virtuous circle that can only happen if there are enough stakeholders in the success of the interchange standard. No single company is big enough to make this happen. The results are starting to appear in the DPP QC specification, the EBU QC harmonisation work and in the AMWA certification process, of which Dalet is proud to be a member.

Is this direction moving us forward?

In my view, harmonising delivery standards is most definitely going in the right direction. It removes unnecessary costs and allows more brainpower to be applied to the areas of the industry that add value. This brings me to 4k, high frame rate and high dynamic range. Is this moving us forward? I would argue yes. It is increasingly obvious on a bright domestic TV screen that content from the 1980s, 1990s and 2000s can be distinguished if you know what to look for.
The progression of acquisition and production quality is now visible to the end user when reasonable HD bitrates are provided. If you project these developments into the future, then it is reasonable to expect that content from the 2010s and 2020s will be distinguishable to viewers in the 2030s. This makes it tough for today's content creators to know which of the UHD TV horses to back. Is 4k 10-bit YUV enough, or do you need high dynamic range to really secure your content for 20 years?

High dynamic range and 4k are not new. There is a lot of high dynamic range work in the cinema world, but even for big-budget movies, this does not mean that 4k resolution is used everywhere. Mixed 2k and 4k workflows are the norm, and careful attention to colour and dynamic range makes it all invisible to the viewer. My personal view is that we'll end up with a variety of high dynamic range workflows that will require automated software to insulate the operational staff from the complexities of the underlying system. This will be a good thing, as it will allow facilities to be less tied to a single resolution and a single frame rate in their workflows. In turn, this should increase the number of target distribution channels on which content can be deployed. Most importantly for me, it could herald the end of interlace, fractional frame rates, drop-frame timecode anomalies and other strange elements of our industry that we have put up with for 50+ years because the inertia of investment has made it too expensive to change direction.

So, to recap… Are we moving forward? Yes. Are we all going in the same direction? No. Will everyone need more versatile metadata-driven tools to stay in the media business? Yes! Do you want to find out more about metadata-driven software workflows in the Dalet Galaxy MAM platform? Click here for yes. Click here for no (and hover over the cartoon).