
Jul 30, 2019
France
Dalet Brio Provides a Clear Path to IP with SMPTE ST 2110
Flexible, high-density I/O platform enables controlled transition to IP; Passes JT-NM SMPTE ST 2110 testing

Controlled Transition to IP

The Dalet Brio I/O platform provides media organizations a clear path and controlled transition to IP with support for SMPTE ST 2110.

Dalet, a leading provider of solutions and services for broadcasters and content professionals, is providing media organizations a clear path and controlled transition to IP with support for SMPTE ST 2110 in the latest release of its Dalet Brio I/O platform. Supporting both SMPTE ST 2110 and standard SDI workflows, the high-density ingest and playout platform allows media facilities to invest in their future IP infrastructure without disrupting current operations. The cornerstone of advanced, IP-ready media operations, Dalet Brio adapts to new production and distribution environments with advanced capabilities that manage ingest, transfers, and playout to and from a wide range of systems and devices. Its extensive IP support enables users to process many parallel streams, including SMPTE ST 2110, ST 2022-2 and NDI for linear channels, and RTMP for digital platforms like Facebook Live, YouTube and Twitter.

"With the latest version of the Dalet Brio product, our customers will be able to easily mix SDI and SMPTE ST 2110 workflows, transitioning to full IP with confidence and more importantly, at their own pace,” states Matthieu Fasani, Director of Product Marketing, Dalet. “Media professionals know that IP is the future, yet for most operations, it is not an overnight transformation. Unless you are re-architecting your entire media supply chain, a controlled transition to IP is the best strategy.” 

Fasani adds, "Ingest and playout solutions are key to the media operation and therefore need careful consideration when upgrading. Dalet Brio meets the needs of the new generation IP workflows. Its performance and support for SMPTE ST 2110 workflows are backed by trusted interoperability tests led by the Joint Task Force on Networked Media (JT-NM) in April 2019, ensuring that you are implementing a solution that is going to be compatible with the industry standard. Dalet Brio is an investment that will take your media operation into the future.”

Bruce Devlin, Dalet Chief Media Scientist and SMPTE Standards Vice President, comments on the importance of IP workflows and SMPTE standards like ST 2110: “The migration to IP transport for professional media is a key enabler for new live workflows. IP transport, and ST 2110 in particular, can give more flexibility and better utilisation of studio infrastructure than SDI is able to provide. Regular interoperability testing and industry collaboration see an ever-increasing ecosystem of ST 2110 equipment that can be combined to create working systems. The IP future is being delivered now, and ST 2110 equipment is at the heart of it.”
 

About Dalet Brio


Built on an IT-based input and output video platform, Dalet Brio is an integral part of fast-paced professional media workflows, whether as part of a Dalet Galaxy five enterprise-wide solution, integrated with third-party platforms, or as a standalone product. The Dalet Brio suite of applications – Ingest Scheduler, Multicam Manager, Media Logger and Media Navigator – comprises purpose-built tools that allow broadcasters to expand the platform's capabilities to include multi-camera control, comprehensive logging, and studio production ingest and playout. Dalet customers who have put Dalet Brio at the core of their media foundation range from enterprise broadcasters Euronews, France TV, Fox Networks Group Europe and Mediacorp, to iconic sports teams like the San Jose Sharks, to leading post-production and digital distribution facility VDM.

For more information on Dalet Brio and other Dalet solutions, please visit https://www.dalet.com/platforms/brio.
 

About Dalet Digital Media Systems

Dalet solutions and services enable media organisations to create, manage and distribute content faster and more efficiently, maximising the value of their assets. Based on an agile foundation, Dalet offers rich collaborative tools empowering end-to-end workflows for news, sports, program preparation, post-production, archives and enterprise content management, radio, education, governments and institutions.

Dalet platforms are scalable and modular. They offer targeted applications with key capabilities to address critical functions of small to large media operations – such as planning, workflow orchestration, ingest, cataloguing, editing, chat & notifications, transcoding, playout automation, multi-platform distribution and analytics.

Dalet solutions and services are used around the world at hundreds of content producers and distributors, including public broadcasters (BBC, CBC, France TV, RAI, RFI, Russia Today, RT Malaysia, SBS Australia, VOA), commercial networks and operators (Canal+, FOX, MBC Dubai, Mediacorp, Mediaset, Orange, Charter Spectrum, Warner Bros, Sirius XM Radio) and government organisations (UK Parliament, NATO, United Nations, Veterans Affairs, NASA).

Dalet is traded on the NYSE-EURONEXT stock exchange (Eurolist C): ISIN: FR0011026749, Bloomberg DLT:FP, Reuters: DALE.PA.

Dalet® is a registered trademark of Dalet Digital Media Systems. All other products and trademarks mentioned herein belong to their respective owners. 

YOU MAY ALSO LIKE
A Three-Platform Approach: Dalet Galaxy, Dalet Brio and Dalet AmberFin
So far, 2014 has been the year of mergers and acquisitions within the broadcast industry. As previously reported on this blog, not all this M&A activity is driven by the same customer-focused aims. However, in the case of Dalet, our recent strategic acquisition of AmberFin has the customer clearly in mind. The merger enables our enlarged and enriched company to cover significantly more bases within file-based workflow environments. From IBC 2014, Dalet will offer three technology platforms: Dalet Galaxy, Dalet Brio and Dalet AmberFin, leveraging the knowledge and technologies of both companies to deliver a broader and deeper set of solutions. It's worth looking under the hood and understanding why this is so important.

For readers who are new to some parts of the Dalet product family, let me shed a little light on these platforms:

Dalet Galaxy is the latest and most advanced version of the Dalet Media Asset Management (MAM) platform and the most recent evolution of Dalet Enterprise Edition. This landmark development initiative leverages more than 10 years of successful MAM development and customer input. Dalet Galaxy is the industry's first business-centric MAM platform developed to manage media workflows, systems and assets throughout the multimedia production and distribution chain.

Dalet Brio is an innovative and cost-effective platform for broadcast customers looking for non-proprietary solutions to digitize and play back their content. Constructed using Dalet Brio servers (IT-based ingest and playout servers for SD and HD content), it also provides a powerful set of user tools and applications to help deliver video workflows.

Dalet AmberFin is a high-quality, scalable transcoding platform with fully integrated ingest, mastering, QC and review functionality, enabling facilities to make great pictures in a scalable, reliable and interoperable way. AmberFin software runs on cost-effective, commodity IT hardware that can adapt and grow as the needs of your business change.

Advanced Integration Capabilities to deliver new workflows

As a specialist in MAM-driven workflows, Dalet has been actively looking at delivering end-to-end workflows, and we all know that one of the biggest problems we encounter is making the various workflow components work together efficiently and intelligently. This is the reason we, at Dalet and AmberFin, have always been strong supporters of industry standards as a means to ease integration issues when building workflows. Each of the three Dalet platforms possesses powerful integration capabilities, based on standards and APIs, which enable every product built on these platforms to be integrated within overall workflows. Most importantly, we believe that the greatest added value we can bring to our customers comes from tight integration between these three platforms, empowering workflow optimization that previously was unimaginable. This vision goes well beyond what any industry standard or even proprietary API can achieve.

Let's take an example: in today's modern workflows, media will be transcoded at a variety of touch points in the production and distribution process, potentially degrading the source quality over successive generations. At Dalet, we strive within the AmberFin platform to minimize quality degradation at each step of the process, but we recognize this is not enough. In fact, we still believe that "the best transcode is no transcode." This can only be achieved by exploiting key metadata (technical, editorial and rights metadata) stored in the MAM platform in order to make smart decisions on when to transcode or not, and what type of transcode profile to apply. And this is just one of the ideas we have.

At IBC this year, we will be showcasing some fantastic new features and facilities that are possible using the new extended and enriched Dalet portfolio of workflow solutions. Check out our exciting theatre line-up for the next few days. We're still booking demos, so it's not too late to book a meeting: http://www.dalet.com/events/ibc-amsterdam-2014. To learn more about Dalet's strategic acquisition of AmberFin, download the following white paper: http://www.dalet.com/white-paper/dalet-and-amberfin.
MXF AS02 and IMF: What's the Difference and Can They Work Together?
If you read my previous posts about IMF, you will already know what it is and how it works. But one of the questions I often get is "how is IMF different from AS02 and will it replace it? After all, don't they both claim to provide a solution to versioning problems?". In a nutshell, the answer is yes, IMF and AS02 are different, and no, IMF will not replace AS02; in fact the two complement and enhance each other. Let me explain:

MXF AS02 (for broadcast versioning) and IMF (for movie versioning) grew up at the same time. And while both had very similar requirements in the early stages, we soon ended up in a situation where the level of sophistication required by the broadcasters' versioning process never really reached critical industry mass. Efforts were continually made to merge the MXF AS02 work and the IMF work to prevent duplication of effort and to ensure that the widest number of interoperable applications could be met with the minimum number of specifications.

When it came to merging the AS02 and IMF work, we looked at the question of what would be a good technical solution for all of the versioning that takes place in an increasingly complex value chain. It was clear that in the studio business there was a need for IMF, and that the technical solution should recognize the scale of the challenge. It came down to a very simple technical decision, and a simple case of math. AS02 does all of its versioning using binary MXF files, while IMF does all of its versioning using human-readable XML files. There are maybe 20 or 30 really good MXF binary programmers in the world today; XML is much more generic, and there must be hundreds of thousands of top-quality XML programmers out there. Given the growing amount of localized versioning that we are now faced with, it makes sense to use a more generic technology like XML to represent the various content versions whilst maintaining the proven AS02 media wrapping to store the essence components.

In a nutshell, this is the main difference between AS02 and IMF. Both standards have exactly the same pedigree and aim to solve exactly the same problems, but IMF benefits from a more sophisticated versioning model and therefore requires a greater degree of customization – and XML is a better means of achieving this. IMF is not going to replace AS02. Rather, the goal is to get to a place where we have a standardized IMF package as a means of exchanging versioned packages within the workflow. IMF will actually enhance the AS02 bundles that represent componentized clips that are already ingested, transcoded and interchanged today.
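To make the XML-versus-binary point concrete, here is a minimal sketch of what human-readable XML versioning looks like. It is purely illustrative: real IMF Composition Playlists (SMPTE ST 2067-3) use formal namespaces, UUIDs and far more elements than shown here, and the element names and identifiers below are simplified assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified "IMF-style" composition playlist.
# A new language version reuses the same immutable track files and
# only swaps the references - no essence is re-wrapped or re-encoded.
cpl = ET.Element("CompositionPlaylist")
ET.SubElement(cpl, "ContentTitle").text = "Feature Title (German Version)"
segment = ET.SubElement(ET.SubElement(cpl, "SegmentList"), "Segment")
for track_file_id, kind in [
    ("urn:uuid:1111-video", "MainImage"),     # shared picture track
    ("urn:uuid:2222-audio-de", "MainAudio"),  # German dub swapped in
]:
    resource = ET.SubElement(segment, "Resource")
    ET.SubElement(resource, "TrackFileId").text = track_file_id
    ET.SubElement(resource, "Kind").text = kind

# Any XML-literate programmer (or a plain diff tool) can read and edit
# this, which is the core argument for XML over binary MXF versioning.
print(ET.tostring(cpl, encoding="unicode"))
```

The essence itself stays in its proven media wrapping; only the lightweight playlist changes from version to version.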
Life before and after DPP (Digital Production Partnership)
People who know me will be aware that file-based workflows are a passion of mine. Ten years ago I was co-author of the MXF (Material Exchange Format) specification, and ever since I have been engaged in taking this neat SMPTE standard and using it to create a business platform for media enterprises of every size and scale. This is why I'm so excited by the Digital Production Partnership (DPP): it represents the first ratified national Application Specification of the MXF standard and is set to revolutionize the way that media facilities and broadcasters work. To explain what I mean, let's compare life with a DPP ecosystem to life without.

Less pain to feel the gain
In a standardized DPP world, there would be a limited amount of pain and cost felt by everybody, but this would be shared equally amongst the organizations involved and it would be a limited cost, incurred only once. After this point, our industry has a fantastic common interchange format to help encourage partnerships and build businesses. In an unstandardized world, where different facilities have decided to use different tools and variants of MXF or other formats, the major cost becomes the lack of third-party interoperability. Each time content is exchanged between different facilities, a media transcode or rewrap into that format is required. This means that all vendors in all the facilities will ultimately support all the file formats and metadata. The engineering required to implement and test takes time and costs money on an on-going basis.

Interoperable metadata helps the content creator
In a world that has adopted DPP, media and metadata interoperability is not an issue since the format is built on a strong, detailed common interchange specification. In this homogeneous scenario, the resources that would have been used in the interoperability engineering process can be used in more creative and productive ways, such as programme making. Programme making is a process where most broadcasters utilise external resources. In a world without DPP, whenever a broadcaster or production facility receives a new file from an external facility, such as a post house, the question must be asked whether this file meets the requirements of their in-house standard. That evaluation process can lead to extra QC costs in addition to possible media ingest, transcoding, conformance and metadata re-keying costs that need to be taken into account.

Building a business platform
This heterogeneous environment is an issue not just for interaction with external facilities: often different departments within the same major broadcaster will adopt slightly different file standards and metadata, making interoperability a big issue for them. As a result, today only about 70 per cent of transactions within companies are file-based – the remainder employ tape. The proportion is much lower where external agencies are involved – here, only 10-15 per cent of transactions are file-based. The essence of the problem is the lack of a common interchange format to enable these transactions. DPP is the first open public interchange format that is specifically designed to address this issue. DPP is intended to transform today's 20 per cent trickle into an 80 per cent flood in the shortest time. To find out more about DPP and how it can transform the way your operation works, and also your effectiveness working with other organizations, read AmberFin's white paper on DPP.
What’s really going on in the industry?
My inbox is a confusing place before a trade show. I get sincere emails asking if I'm interested in a drone-mounted 3ME production switcher and familiar emails asking when I last considered networking my toaster and water cooler to save BIG on my IT infrastructure. The reality is that prior to a great trade show like IBC, I want to see a glimpse into the future; I want to know what's really on the radar in our industry, not what happened in the past, or some mumbo jumbo about unrealistic technological achievements. I am personally very lucky that I spend quality time with the folks who set the standards in SMPTE, because this is one place in the world where the future of the industry is hammered out detail by tiny detail until a picture of the future presents itself like some due-process Rorschach test. With the permission of SMPTE's Standards Vice President Alan Lambshead, here's a little glimpse of some of those details that you'll get to see in the weeks, months and years to come.

UHDTV – Images

Ultra High Definition TV – it's more than just 4k pixels. In fact, SMPTE has published a number of standards including ST 2036 (parameters) and ST 2084 (Perceptual Quantization High Dynamic Range) that define how the professional media community can create pictures that give consumers the WOW factor when they upgrade. But there's a lot more to come. How do we map all those pixels onto SDI, 3G SDI, 12G SDI, IP links and into files? SMPTE is actively looking at all those areas as well as the ecosystem needed for High Dynamic Range production.

Time Code

Oh Time Code. How we love you. Possibly the most familiar and widely used of all SMPTE's standards, it needs some major updates to be able to cope with the proposals for higher frame rates and other UHDTV enhancements. Beyond Time Code, however, we have the prospect of synchronizing media with arbitrary sample rates over generic IP networks. SMPTE is working on ways of achieving just that, and it means that proprietary mechanisms won't be needed. That also means different vendors' kit should simply work!

IMF

I've written and lectured extensively about IMF's ability to help you manage and deploy multi-versioned content in an environment of standardized interoperability. As this toolset for a multi-platform ecosystem rolls out into the marketplace, the specifications are continually evolving with the developing needs of the market, as well as with the needs of individuals on the design team who influence the feature set.

UHDTV – Immersive Sound

I remember back in the 1980s at the BBC, when we proved that great sound improves the quality of pictures. These fundamental principles never change, and the desire to create immersive audio-scapes through the use of many channels, objects or advanced sound fields requires standards to ensure that all the stakeholders in the value chain can move the audio from capture to consumption whilst creating the immersive experience we all strive for. SMPTE is the place where that future is being recorded today.

TTML

The humble caption file. Internationally it is nearly always legal to broadcast black and silence, providing that it's captioned. There's really only one international format that can generate captions and subtitles without proprietary lock-in, and that's TTML. SMPTE is active in the use of TTML in the professional space and its constraints for IMF. Whether your view on captioning is good or bad, TTML is the only open show in town and SMPTE's helping to write the script.

ProRes

What? Apple disclosing ProRes? Yes, it's true. As the world requires more interoperability and better visibility, the excellent folks at Apple have created a SMPTE Registered Disclosure Document describing the way that ProRes appears in files. One file format may not seem like a big deal, but the fact that SMPTE is the place where companies that are serious about working together write down the technical rules of engagement is exactly what makes SMPTE the perfect place to plot trajectories for the future.

To quote one of my intellectual heroes, Niels Bohr, "Prediction is difficult, especially if it's about the future." SMPTE won't tell you the future, but by participating, you're more likely to spot the trajectories that will hit and those that will miss. If any of these topics interest you, excite you or put you into an incandescent rage of "How could they!", then you can participate in three easy steps:

1. Join SMPTE
2. Add Standards membership from your My Account page on the SMPTE site
3. Register and turn up in Paris to the meetings on the 16th Sept 2015

Until then, you can always check out more visions of the future on our blog or find out all about IMF on the Dalet Academy Webinar Replay on YouTube. Now, where's my drone-mounted Mochaccino maker? Until next time…
Why the forecast is [still] looking Cloudy
I wrote a blog post a while ago about the future looking cloudy. To quote a famous person, Mark Twain I think, "Predictions are difficult – especially when it concerns the future!", and predictions about the link between the technological benefits of cloud technology and new business models are especially hard to get right. So why do I think that the future still looks cloudy? In my opinion, it comes down to the fact that media is undergoing several simultaneous transformations:

- Moving files over IP instead of tape has broken the link to real-time signals for (nearly all) scripted and offline content
- Moving content by IP instead of SDI is practical, time-accurate, reliable and can be cheap
- Viewer consumption via IP is increasing year-on-year
- We are seeing more formats at higher frame rates with strange pixel numbers every year
- SDI links the properties of the transport to the frame rate / pixel numbers, and requires work and standardisation for every ancillary data format carried
- IP keeps the properties of the transport and the frame rate / pixel numbers and ancillary payloads independent
- Traditionally, broadcasting and media have achieved reliability through over-provisioning of equipment – along with the associated cost of doing it
- Cloud is all about maximising the use of the physical infrastructure so that over-provisioning is shared between many end-users and saves money

Add all of these different elements together and I think that a cloudy future is more and more likely as time progresses. The death of over-provisioning is happening in many facilities, as the ability to run processing farms and data-links at near full capacity at all times becomes more operational practice and less theory. Innovative licensing tools that allow floating of licenses to where the processing is needed will increase the usage of hardware and therefore keep overall running costs down.

We still have a long way to go before every broadcast centre becomes a virtualised set of applications in a data centre. Many of the cost structures don't work yet. The cost of transporting content into and out of the cloud is still a lot more expensive than the cost of processing it. The cost of storing data in the cloud is not dropping as fast as processing costs. There are still question marks over security. Although papers have been published about secure clouds that allow processing of encrypted content without it ever being "in the clear" in the data centre, I still haven't seen that technology rolled out and deployed at a reasonable cost.

Whether cloud means in-house data centres or public shared facilities or some sort of hybrid model, the key is to get the best value from the infrastructure and to share the costs so that everyone is better off. The types of media that we use in our industry vary in volume and value, so the perfect business model for one broadcaster may not work for another. We have an interesting journey ahead of us where business models will be changed massively by technology, and in turn new technologies will be forced into existence when business conditions are right. The future is full of change and uncertainty. From where I'm sitting, it's still looking cloudy.
HPA: Mapping the Future, One Pixel at a Time
I love the HPA Tech Retreat. It is the most thought-provoking conference of the year, one where you're guaranteed to learn something new, meet interesting people and get a preview of the ideas that will shape the future of the industry. Here are the six most interesting things I learned this year.

Collaborating competitors can affect opinions

At this year's HPA Tech Retreat, I had the honour of presenting a paper with John Pallett from Telestream. Despite the fact that our products compete in the marketplace, we felt it important to collaborate and educate the world on the subject of fractional frame rates. 30 minutes of deep math on drop-frame timecode would have been a little dry, so we took some lessons from great comedy double acts and kept the audience laughing, while at the same time pointing out the hidden costs and pitfalls of fractional frame rates that most people miss. We also showed that there is a commercial inertia in the industry, which means the frame rate 29.97i will be with us for a very long time. In addition to formal presentations, HPA also features breakfast round tables, where each table discusses a single topic. I hosted two great round tables, with John as a guest host on one, where the groundswell of opinion seems to be that enforcing integer frame rates above 59.94fps is practical, and any resulting technical issues can be solved – as long as they are known.

I will never be smart enough to design a lens

Larry Thorpe of Canon gave an outstanding presentation of the design process for their latest zoom lens. The requirements at first seemed impossible: design a 4K lens with a long zoom range that is light, physically compact, and free from aberrations, to meet the high demands of 4K production. He showed pictures of lens groupings and then explained why they couldn't be used because of the size and weight constraints. He went on to show light-ray plots and the long list of lens defects that they were battling against. By the end of the process, most members of the audience were staring with awe at the finished lens, because the design process seemed to be magical. I think that I will stick to the relative simplicity of improving the world's file-based interoperability.

Solar flares affect your productions

We've all seen camera footage with stuck or lit pixels and, like most people, we probably assumed that they were a result of manufacturing defects or physical damage. Joel Ordesky of Court Five Productions presented a fascinating paper on the effects of gamma photons which, when passing through a camera's sensor, permanently impair individual pixels. This is something that cannot be protected against unless you do all of your shooting underground in a lead-lined bunker. Joel presented some interesting correlations between sunspot activity and lit pixels appearing in his hire stock, and then showed how careful black-balance procedures can reduce the visibility of the issue.

UHD is coming – honest

The HPA Tech Retreat saw a huge range of papers on Ultra High Definition (UHD) issues and their impacts. These ranged from sensors to color representation to display processing, compression, high frame rates and a slew of other issues. I think that everyone in the audience recognised the inevitability of UHD and that the initial offering will be UHDTV featuring resolution improvements.
This is largely driven by the fact that UHD screens seem to be profitable for manufacturers; soon enough they will be the only options available at your local tech store (that's just good business!). The displays are arriving before the rest of the ecosystem is ready (a bit like HDTV), but it also seems that most of the audience feels better colour and high dynamic range (HDR) are a more compelling offering than more pixels. For me, the best demonstration of this was the laser projector showing scenes in true BT2020 wide colour range. First we saw the well-known HDTV Rec.709 colour range and everything looked normal. Next up was the same scene in BT2020 – and it was stunning. Back to Rec.709, and the scene that looked just fine only seconds before now appeared washed out and unsatisfactory. I think HDR and rich colors will be addictive. Once you've seen well-shot, full-color scenes, you won't want to go back to Rec.709. The future is looking very colourful.

Women are making more of an impact in the industry (Hooray!)

There were three all-women panels at this year's HPA, none of which were on the subject of women in the industry. This was a stark contrast to the view of women in the industry as shown in a 1930s documentary of the SMPTE Conference, where men with cigars dominated the proceedings and women were reduced to participating in the chattering social scene. This contrast was beautifully and ironically highlighted by Barbara Lange (Executive Director of SMPTE) and Wendy Aylesworth (President of SMPTE 2005-2015), who hosted their panel in bathrobes with martini glasses while explaining the achievements of the society over the year. If you haven't yet contributed to the SMPTE documentary film project or the SMPTE centennial fund, it's time to do so now. These funds will help support the next, diverse generation of stars.

IMF and DPP are a symbiotic pair

One of the most interesting panels was on the Interoperable Master Format (IMF) and the Digital Production Partnership (DPP) interchange format (and yes, this was in fact one of my panels!). One format's purpose is to distribute a bundle of files representing several versions of one title. The other is designed to create a finished, single file with ingest-ready metadata, where the file can be moved to playout with virtually no changes. Both formats have a strong foothold in the life cycle of any title and are likely to form the strongest symbiotic relationship as we move into the future. One thing that I pointed out to the audience is that the DPP has done a huge amount of work educating UK production and post-production houses about the change management that is required for file-based delivery. They have written a wonderful FREE guide that you can download from their website.

All in all, the HPA Tech Retreat is a wonderful event with so much information flowing that it takes weeks to absorb it all. I must confess, though, that one of the highlights for me was being able to cycle up the mountain every morning before breakfast. It meant that I could go back for seconds of all the wonderful cake that was on offer. Happy days! Until next time – don't forget about our UHD webinar, happening today. If you didn't sign up in time, drop us a line at academy@dalet.com and ask for a re-run. The more people that ask, the more likely that we'll do it!
Practice your scales to make your enterprise workflow sing
An increasingly common approach to developing new media infrastructure is the "proof of concept". This could sound a bit negative, as if we needed to try something first in order to see if it really works. But I really do not think that is the motivation behind it: to meet the multi-platform, multi-format requirements of a media business today, we need complex, largely automated workflows, and it makes sense to try them out first, in one part of the organization. But this achieves more than one goal.

First, it obviously proves the concept: it shows that you have all the equipment and processes available to do what you need.

Second, it allows you to develop workflows on the concept system, so you fine-tune them to work precisely the way that you want to work. Some vendors will try to push you towards a big-bang approach where the workflows are baked into the architecture, which makes it difficult to make changes when you find you want something slightly different.

Third, and this is really important, it allows you to get a sub-set of users comfortable with the system, and to take ownership of the workflows. It means you get the processes right, because they are being designed by the people who actually need them, and it means you get a group of super-users who can ease the transition to the main system.

Which all sounds good. But it does depend upon something that we all talk about but rarely really understand. The proof-of-concept stage is only worthwhile if this small system performs in exactly the same way as the final enterprise-wide implementation.

Scalability

The word "scalable" is often used quite loosely, but this is what it really means: you can start with something small, and then, by adding capacity, make it cover the whole operation without changing any detail of how it works. For me, that means that the enterprise system has to be built the same way as the proof-of-concept system. If the first iteration consisted of a single workstation performing all the functionality – which in our case might be ingest, transcode, quality control and delivery – then the full system should be a stack of workstations that can perform all the functionality. And it also means that you don't need to blow the capital budget on a huge number of hardware boxes. That would not be efficient, because at any given time some of the boxes might be idle while others had a queue of processes backed up and delaying the output.

Flexible Licensing

It's better to ensure you have sufficient licenses for the software processes you require, with a smart licensing system that can switch jobs around. If server A is running a complex transcode on a two-hour movie, then its quality-control license could be transferred to server B, which can get on with clearing this week's batch of trailers and commercials. The AmberFin iCR platform is designed on this basis. You can buy one and run all the processes on it sequentially, or you can buy a network to share the load, under the management of an iCR Controller. This manages the queue of tasks, allocating licenses as required from the central pool. As well as making the best use of the hardware, it also collects statistics from each server and each job. Managers can see at a glance if jobs are being delayed, and if this is an overall problem for the business. More than that, they can also see why jobs are delayed. Can it be solved by additional software licenses, or do you need more servers?
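To illustrate the floating-license idea in the abstract, here is a minimal sketch in Python. It is not AmberFin's implementation or API – just a toy, with invented feature names and counts, showing how a bounded pool of feature licenses can be borrowed by whichever server needs them.

```python
import threading
from contextlib import contextmanager

class LicensePool:
    """Toy floating-license pool: a bounded count of licenses per
    feature that any server may borrow for the duration of one job."""

    def __init__(self, counts):
        # e.g. {"transcode": 4, "qc": 2} - the numbers are illustrative
        self._sems = {f: threading.BoundedSemaphore(n) for f, n in counts.items()}

    @contextmanager
    def checkout(self, feature):
        sem = self._sems[feature]
        sem.acquire()      # blocks until a license is free anywhere on the farm
        try:
            yield          # the borrowing server runs its job here
        finally:
            sem.release()  # license returns to the central pool

pool = LicensePool({"transcode": 4, "qc": 2})

def run_job(server, feature, clip):
    with pool.checkout(feature):
        print(f"{server}: running {feature} on {clip}")

run_job("server-B", "qc", "trailer_001.mxf")
```

The point of the design is that capacity planning moves from "how many boxes do I buy?" to "how many concurrent licenses do I need?", which is exactly the statistic a farm controller can report.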
Scalable systems are definitely the way to go, but only if you can understand how you need to scale them. If you want to find out more about enterprise-level file-based workflows, check out our new white paper. I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
Valentine’s Day News - Broadcast loves IP
Two days of presenting the SMPTE Seminars in the UK with the excellent Ben Davenport, and we can announce that it's conclusive. Broadcast loves IP. The SMPTE seminars in London and Manchester were well attended and generated a lively debate on the merits of using IP and IT technology in broadcast workflows. Much of the comment and debate centred around the replacement of SDI links with Internet Protocol (IP) links. It was no surprise that the topic of end-to-end synchronisation took up a lot of discussion, with the issues of delay and latency around audio links being high on the list of topics for which no-one had a perfect solution. IP gives great flexibility within a workflow compared to SDI, and today the economics look more and more attractive the further from the camera and microphone you are. We spent a lot of time talking about the visibility of errors within an IP network and how the extra layers of complexity, when compared to SDI, can hide problems within the system.

A lunchtime interlude today was filled by Alex Rawcliffe, who gave a compelling talk about BBC R&D's IP Studio project. It's encouraging experimentation and thinking about radically different ways in which IP technology allows entertainment to be created and distributed. Fascinating stuff, and it makes you wonder if the migration to IP in order to reduce capital and operational costs might, at some stage, result in new forms of television distribution that rely on the two-way communication with the public that end-to-end IP makes possible. A key takeaway from the talk, and a recurring theme throughout the two days, was the importance of accurate metadata. It's the key to making it all work.

After two days of lecturing, I was looking forward to a nice beer on the train on the way home. Unfortunately, a massive storm hit Manchester and all trains were cancelled, motorways closed and even the football match was cancelled! (That's how serious it was.) We found a hotel and were grateful that we weren't flying to Manchester airport. At least sleeping will be peaceful. All in all, a great couple of days and many reinforcements of the ideas put forward in our Enterprise white paper. No-one in the audience was able to predict the future of broadcasting, but everyone agreed that Internet, IP and IT technology are important parts of that future. Love it. Happy Valentine's Day to all.
Broadcast Workflows - Looking to the future
Today's the day for my webinar on the future. I must confess to being slightly nervous. Once you put your ideas on paper and tell the world, an invisible clock starts ticking and you're open to the scrutiny of the world. I know that many hundreds of people will tune into the webinar and nod wisely as I talk about UHDTV, IP transfer, and why the death of interlace can't come soon enough, and I wonder what my predictions will look like in 2016 and 2020. I know already that there is one topic I forgot to include – the issue of fractional frame rates.

We live in a software world where just about anything you can think of can be made for a price. There really are very few limits to creativity left nowadays, yet we not only live with the compromises of the past – SOME PEOPLE STILL THINK THEY'RE A GOOD IDEA! Sorry, I've calmed down now. Let's start with a little history.

In the early days of television we wanted to show a frame rate that was high enough to avoid flicker, but with enough vertical resolution to be sharp. One of the compromises was to invent INTERLACE – Aaaaagghh. Sorry. I get worked up when interlace is mentioned. But there were other compromises. A frame rate had to be chosen that did not cause beat frequencies with the electricity supply. If your electricity is at 50Hz and you show a picture at 60 fields/s on an old (1940s-1950s) television set, then you will see vertical lines on your screen that move up (or down) at the beat frequency of 60 - 50 = 10Hz. This is very annoying, so in the original television standards we tied the field/frame rates to the electricity frequencies to make set design easier and cheaper.

Before clicking on this map link, I'd like you to imagine what percentage of the world watches pictures at 50 fields/s (i.e. 25fps) and what percentage watches at 60 fields/s (i.e. 30fps). If you live in the USA, this is quite surprising – most of the eyeballs watching TV in the world are not using your frame rate. This makes frame-rate conversion (or temporal conversion) technology one of the key technologies for multi-platform distribution (luckily for me, AmberFin are global leaders in this). I digress.

When color TV was introduced by the NTSC committee in the USA in December 1953, a slight reduction of the frame rate was introduced to reduce the visibility of the chrominance subcarrier and the FM audio subcarrier. This meant that the frame rate was not 30fps, but (30 / 1.001) fps. This gave birth to 29.97fps television – a fractional frame rate that has had huge consequences throughout the industry. Because of fractional frame rates, timecode had to have a counting mode that allowed it to keep (roughly) in sync with the time of day – this is called drop frame. Mixing drop-frame and non-drop-frame timecode and getting it wrong wastes thousands if not millions of dollars around the world every year in content rework.

As I mentioned earlier, we live in a software world where just about anything you can think of can be made for a price. So why is it still a good idea to introduce brand new formats like 120fps video and insist on a 1.001 fractional offset? It makes no sense. Modern TV sets don't care. Companies like AmberFin (and a few others) can convert to and from the fractional rates with ease in software. Why burden the many to solve the problems of a very few? It's crazy. If you care about this – get involved in the SMPTE 10e group or the ITU group before it's too late.
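Since drop frame is where so much of that money gets wasted, here is a minimal sketch of the counting mode, assuming the standard 29.97fps rules (skip frame labels :00 and :01 at the start of every minute, except every tenth minute). It illustrates the arithmetic only; it is not production code.

```python
def dropframe_timecode(frame_number):
    """Convert a 29.97fps frame count to SMPTE drop-frame timecode.
    Only frame *labels* are skipped - no actual frames are dropped."""
    drop = 2                          # labels skipped per affected minute
    per_minute = 30 * 60 - drop       # 1798 real frames in a dropped minute
    per_10min = 30 * 600 - drop * 9   # 17982 real frames per ten minutes
    tens, rem = divmod(frame_number, per_10min)
    # Add back the skipped labels so we can format as plain 30fps
    frame_number += drop * 9 * tens
    if rem > drop:
        frame_number += drop * ((rem - drop) // per_minute)
    ff = frame_number % 30
    ss = (frame_number // 30) % 60
    mm = (frame_number // 1800) % 60
    hh = frame_number // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"   # ';' marks drop-frame

assert dropframe_timecode(1800) == "00:01:00;02"    # :00 and :01 skipped
assert dropframe_timecode(17982) == "00:10:00;00"   # tenth minute: no skip
```

The asymmetry in these rules is exactly why mixed drop-frame and non-drop-frame material drifts apart: over an hour, non-drop 30fps timecode runs roughly 3.6 seconds ahead of real time at 29.97fps.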
If it's all quite interesting and you'd like to know more, then tune into today's webinar – you can sign up right here – but be quick, space is limited! Register for our next webinar TODAY, Wednesday 29th January, at 1pm GMT / 2pm CET / 8am EST / 5am PST or 5pm GMT / 6pm CET / 12pm EST / 9am PST. 'Til next time. I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
SDI is dead – long live SDI (and can you help me ingest some?)
In the last two weeks, I have been asked to sit on panels in front of audiences as the leader of the fictitious "Campaign for the Continued Existence of SDI". I suspect, given my outspoken views that interlace and fractional frame rates should be killed off, that I am a surprising choice for the role and therefore good entertainment for the audience – especially given that I spend my day job helping customers get out of the SDI world and into the IP, file-based world as quickly as possible. Those of you who read the IABM journal will recognise some of my text below. I hope it causes you to stop and think, or maybe (if you are in the UK) turn up at the Wednesday meeting of the RTS and tell me that I'm wrong!

It's easy to look into the past and say Old is Bad, and look into the future and say New is Good. Getting rid of SDI is more than just history though; it's like flies. No-one really likes flies. I don't know anyone who goes into their garden and rejoices that flies are buzzing over the barbeque. I do know, however, that flies are a vital part of the food chain, and although they have many downsides, getting rid of them would break our existing way of growing, cultivating and managing food. SDI is similar. Getting rid of SDI will mean completely redefining the whole value chain of the media industry. Without SDI, most of what we know today as television would have to be redefined. At almost every point in the chain where we have to be real time, you will still find SDI.

SDI may seem old and lack the scalability of IP, but it is also comparatively simple and secure. I know that a live feed from a camera, going through a router, into a desk and then out to a transmitter is going to stay up. Internal and external redundant working practices and engineering policies have matured over decades to make it that way. The systems are engineered so that during the live match, staff can concentrate as much as possible on delivering entertaining content rather than requiring an IT super-hero to dynamically reconfigure the name server because it's under a DDoS attack from bad people.

In today's IT/IP-based world we are seeing new ways for data to be mined and hacked by criminals. Zero-day exploits are traded in the criminal underground internet, and many of those exploits can be used to attack an IP-based media value chain. After all, media is just data. OK, it's data that's a lot bigger than your name, address and credit card details, but at the end of the day, it's just data. In a big IT infrastructure, you often don't know that your data is being copied illicitly until it's too late. I am not aware of hackers getting into an SDI-based plant and siphoning off content, or denying access to the infrastructure by maliciously overloading it. SDI is a known and understood way of working.

With the advent of file-based practices for offline, batch-based workflows and non-linear editing, SDI has already been eliminated from certain parts of the media food chain. In the domain of linear television, where customers still seem to enjoy the experience of having a knowledgeable curator of content decide their viewing schedule, SDI is still going strong. The predicted death of the SDI "glue" and module business seems as far away today as it was 5 years ago. Should SDI die? I don't think it's a technical decision – it's a business risk decision.
There are few technical barriers left to killing off SDI, but like killing off all the flies – if we kill SDI before we have thought about the whole media food chain – there could be some media enterprises facing an unforeseen and prolonged famine. If you have lots of SDI and need to work in files (not flies), then why not check out our range of ingest products and read our white paper on enterprise transcoding, so that you can make all the files you need both economically and quickly. I'm off for a bike ride to risk those flies. See you at the RTS event. I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
What is MAM (Media Asset Management)?
Note to readers: this article was written in January 2015. For up-to-date information on Media Asset Management, visit: www.dalet.com/solutions/media-asset-management

Media Asset Management systems (or MAM, as they have become known) are an area that has generated substantial interest in recent times. But what exactly is MAM and what does it mean to broadcasters and media facilities? To attempt to answer this thorny question, we surveyed the major MAM vendors and asked them what a MAM was. It was very interesting to see the responses. All of them agreed on some of the functions of a MAM, but there was a very large spread of functionality, as you can see from the responses listed below. And no single solution vendor can claim all of these functions.

Vendor 1

What is MAM? Organizations want to manage their media assets from ingest to distribution. An organization has more and more media to ingest and more and more media to distribute and is looking for a solution. A good image used by our customers: a MAM is a kind of 'factory' of media where stuff gets in and other stuff (transformed) gets out.
MAM processes: Ingest, QC, Edit, Prepare, Log, Transcode and fulfil.
Non-MAM processes: Playout automation, scheduling and resource management.

Vendor 2

What is MAM? Very Big Question – it would require a whole slide deck.
MAM processes: This would require a whole slide deck – it seems that everything is in scope.
Non-MAM processes: See answer above.

Vendor 3

What is MAM? The management of an asset for the purpose of using the asset for publishing and broadcasting. Key factors include the file format, its stored location, whether the asset is the master or a copy, technical metadata required for publishing, and metadata and links about the content.
MAM processes: Generation of the initial metadata, capture asset (upload or digitisation), create browse, spot check & QC, prepare for publishing, send to archive, create promotional material, send to publishing or broadcast system.
Non-MAM processes: The following should be linked from MAM: DRM (inc. contracts and ownership), metadata about manufacture of media, scheduling and production planning, tracking talent, financials, newsroom systems.

Vendor 4

What is MAM? MAM is a business-focused workgroup or enterprise-class solution that:
- Ingests, manages and delivers media assets
- Manages & shares knowledge of assets
- Provides unified, universal access to assets
- Integrates for cross-system asset exchange
- Orchestrates business processes and workflows
- Enables controlled collaboration between users
MAM processes:
- Metadata synchronization to traffic / NRCS / ERP / DRM
- Metadata + media ingest, transform, QA & annotation, delivery, lifecycle management, genealogy, round-trip
- Search, assessment, collection and ordering
- Business process management: news, sports, reality etc.
Non-MAM processes:
- Program planning and scheduling
- Resource scheduling
- DRM
- Financial and business admin (ERP)
- Customer relationship management (CRM)
- Web content management

Vendor 5

What is MAM? MAM is a system to archive and manage time-based media and workflows in a distributed and collaborative environment.
MAM processes:
- Media processing and transformation
- Integration of all components in a facility
- Media publishing and delivery
- Business process management
- Cloud computing
Non-MAM processes:
- ERP (Enterprise Resource Planning)
- CRM (Customer Relationship Management)
- Playout automation and traffic systems
- Non-linear editing

Vendor 6

What is MAM?
MAM is a multi-format, multi-vendor, multi-workflow federated content repository which encourages innovation, traction, and operational efficiency over the complete life cycle of an asset.
MAM processes:
- Ingest + proxy + transcoding
- Population, update and retrieval of metadata
- Workflow + multi-format delivery + version control
- Translation services + ad sales + budget planning
- Metadata + asset import tools
Non-MAM processes:
- Human resources processes
- Construction planning
- Facility management and scheduling
- Financial planning

So, as you can see, it is difficult even to agree on what MAM constitutes and the business processes that it includes. It is important when looking for a MAM solution that you work out what YOU want and need for YOUR business, and then find a solution that matches it with the smallest amount of customization. Don't end up being Special. Make sure your site does not become bespoke. As with all software vendor choices, ask the vendor about general product enhancements and improvements, new device integrations and common developments to be made available to your business.

At Dalet AmberFin, we have not set out to create a designated MAM system as such, for this will, we believe, create restrictions and limits to your current and future business operations. Instead, through our iCR (intelligent Content Repurposing) platform, we have created an architecture with an overarching management layer, which provides a brain for your facility that enables users to make and implement facility-wide decisions quickly, easily and with the right blend of process automation and human intervention for any organization and application. The same architecture provides an engine room where all the workflow's ingest, transcode and QC operations are contained and managed.

AmberFin iCR (now Dalet AmberFin) is not a MAM. It's an engine that can be controlled from a MAM to provide the right files at the right time with the minimum of effort. We have worked very hard to be MAM-neutral, so whether you end up buying from a market leader like Dalet or you end up "rolling your own" from software off the net, iCR provides a set of APIs that allows you to integrate with any MAM. Why is this important? It comes down to the rate at which different functions in your organisation age. You will find that business rules (controlled by a MAM) and transcode profiles (controlled by the transcoder) will change at different times. If you're happy to lock the two functions together forever, then you may be happy with the MAM not only controlling but also performing your transcoding. If you are aiming for more business flexibility, then you may choose a best-of-breed solution. The choice, at the end of the day, is yours.

I'm always interested in feedback, so why not sign up to receive blog notifications and maybe drop us a line to suggest topics that you'd like me to talk about. Until next time…
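As a footnote to the MAM-neutral point above, here is a hypothetical sketch of what a loose MAM-to-transcoder integration might look like over HTTP. The endpoint, field names and callback are invented for illustration – this is not the iCR API – but the shape is the point: the MAM owns the business rule (what to transcode, which profile name), while the engine owns the profile's technical detail.

```python
import requests  # generic third-party HTTP client

# Hypothetical endpoints - not a real product API.
TRANSCODER_URL = "http://transcoder.example/api/jobs"
CALLBACK_URL = "http://mam.example/api/job-callbacks"

def submit_transcode(asset_id: str, profile_name: str) -> str:
    """The MAM's business layer decides *that* an asset needs, say,
    a 'web-proxy' rendition; the transcode engine decides *how*."""
    job = {
        "assetId": asset_id,
        "profile": profile_name,    # a name, not codec settings
        "notifyUrl": CALLBACK_URL,  # engine calls back on completion
    }
    resp = requests.post(TRANSCODER_URL, json=job, timeout=10)
    resp.raise_for_status()
    return resp.json()["jobId"]
```

Because the two sides share only asset identifiers and profile names, either one can be upgraded or replaced on its own schedule – which is the ageing-at-different-rates argument made above.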
What to look out for in Amsterdam – apart from the canals and fast-moving trams?
So here we are again, gearing up for another assault on all the senses at IBC. More than 1,350 exhibitors spread across 14 halls – what should you focus on to make the best use of your time in Amsterdam? In today's increasingly file-based media workflows, file transcode operations have assumed critical importance. Gone is the old router that joins processes together, and in comes the smart, scalable, reliable, hidden transcoder. I apologize in advance if some of this blog piece sounds a bit pro-AmberFin, but when we designed our new Transcode Farm Controller, we talked to many different customer types to find out what was needed, and they made some interesting observations that I will share with you.

Most importantly, they said that a Transcode Farm Controller must be scalable, must provide an appropriate level of redundancy to suit their application, and must enable a combination of high throughput and advanced system functionality. Well, that sounds easy enough. They also wanted it to be operationally easy, because the operational staff won't have the training to understand the low-level technology of every file format and the supervisors will be too busy to spend much time on the farm. (Note how I avoided trying to tell a joke there.)

What's the big deal about transcode farm control? The big deal is the reliability aspect. To make the problem simple, imagine that your MAM is sending jobs to iCR via web services and suddenly someone unplugs it from the rack. How do we build the system so that the MAM doesn't know or care that something went wrong? We ended up with four major components to the system:

1. The interface – the web service "listener" and watch-folder controller that responds to commands from the MAM, from the GUI(s), from the review stations or any other component of a facility using iCR
2. The transcode node – the engine that does the processing of the file(s)
3. The Farm Controller – the brain that decides which job goes to which iCR node, at what time and with what overrides
4. The Network License Manager – the control layer that allows an iCR node to be a transcoder and allocates the various options to the farm

Put those functions together on a server and call it "The new iCR Transcode Farm Controller", and you get a single, reliable, redundant interface to the iCR transcode capabilities. Jobs are sent to the iCR Controller, and behind the scenes the system architecture is sized to achieve the levels of redundancy and throughput required by that particular application. By combining the Farm Controller with AmberFin's Network Licensing Server, you satisfy another requirement we were asked for – to dynamically float cost options across the underlying server hardware and no longer have a fixed node-to-server relationship. In fact, one thing we have been able to achieve is to allow an international customer to move their transcode farm around the globe on a daily basis and "follow the moon". In other words, by utilizing the power of their international VPN, they can move jobs around the planet and process jobs while the people are sleeping. Neat!

Strangely, we were also asked by customers if we could make the transcode farm cheaper. The combination of the Transcode Farm Controller and Network License Server adds this third dimension in terms of enhanced network functionality. It provides floating software licenses for occasional functions, such as standards conversion, captioning, Dolby, audio processing and watermarking.
Network License Server makes European debut at IBC

We know how complicated some of these systems can get, so we were asked if we could make a single desktop system look and behave like a big system. We've managed to use the same technology in both a single standalone desktop PC running as the proof of concept and in a network of 100 or more servers / blades / VMs. Furthermore, each iCR node contains all the software required to implement the four main functions of media ingest, file transcode, playback and quality control. The license defines the functionality of a specific node at a specific time.

It goes without saying that we were asked to be sure that a user's full capacity is always online. For example, if one transcoder node encounters problems, the Farm Controller will seamlessly swap the required iCR operations to a back-up node, and the Network License Server will ensure it is able to carry on. It is easy to tie together job queues and the Network License Server so that the network administrator is able to gauge how many jobs were delayed based on the licensing options available within the group. This feedback gives key capacity information to administrators, allowing them to manage both costs and capacity.

For more information, why don't you download the AmberFin white paper "Enterprise Level File-Based Workflows: Merging Technology with Sound Business Sense". I'm pretty pleased that we've listened to our amazing customers and made something shiny and new that will help the monetisation of media and give a great ROI.
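To close with a sketch of the "follow the moon" idea mentioned above: the scheduling decision reduces to "which farm's local time is deepest into the night right now?". The farm names and the 3am target below are assumptions for illustration only, not anything from the product.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical regional farms reachable over the customer's VPN.
FARMS = {
    "eu-west": ZoneInfo("Europe/London"),
    "us-east": ZoneInfo("America/New_York"),
    "ap-southeast": ZoneInfo("Asia/Singapore"),
}

def quietest_farm(now_utc=None):
    """Pick the farm whose local clock is closest to 3am, i.e. where
    interactive users are least likely to be competing for capacity."""
    now_utc = now_utc or datetime.now(timezone.utc)

    def minutes_from_3am(tz):
        local = now_utc.astimezone(tz)
        m = local.hour * 60 + local.minute
        return min(abs(m - 180), 1440 - abs(m - 180))  # wrap past midnight

    return min(FARMS, key=lambda name: minutes_from_3am(FARMS[name]))

print(quietest_farm())  # e.g. 'ap-southeast' during a European afternoon
```

A real controller would fold in queue depth, license availability and transfer cost, but the timezone test alone captures why moving work, rather than over-provisioning every site, keeps the hardware busy around the clock.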