
Sep 23, 2014
Designing Enterprise Class Workflows – it’s all about the money

This Wednesday, 4th December, I will be hosting the seventh webinar within AmberFin’s Bruce’s Shorts program, and I wanted to give you a ‘heads up’ so that you can put it in your busy schedule.

The webinar will look at the fascinating issue of ‘Enterprise Class Workflows’ and in particular it will focus on Transcode Farm requirements. Like many radical concepts, these workflows can – and indeed must – start on a small scale; I will demonstrate how they can then grow to support your business development.

The webinar explains exactly what Enterprise Class Workflows are: a class of project that can be both bold and complex, undertaken for a business or company. Or, to put it another way, a solution to a difficult challenge that is approached in a financially oriented way. The webinar highlights the fact that these workflows are always financially driven – it’s all about the money, about costs versus benefits.

Enterprise Class Workflows - profit driven planning for business growth 

The key issue with Enterprise Class Workflows is ROI (Return on Investment): how these working methodologies can create more profit and revenue whilst reducing costs and increasing efficiency. When properly conceptualized, an Enterprise Class Workflow will be financially aware, sustainable, reliable, robust, efficient, scalable and flexible. And just as importantly, I will explain in the webinar that these workflows need to be neither massive nor expensive.

In my experience, most Enterprise Class Workflows that fail to meet their objectives do so because of a lack of thought and planning at the critical early stages. They fail to sketch out and capture the complete workflow requirements, including the all-important omissions. Normally, the designer does not understand how to measure success and progress in this critical developmental stage. I will highlight the importance of planning for the worst-case scenario whilst hoping for the best.

Putting Enterprise Class Workflows under the microscope

During the Webinar, I will put our AmberFin approach to Enterprise Class Workflows under the microscope. I will demonstrate why it is so important to build a proof of concept – a small scale version of your big idea. This approach enables you to mitigate and plan for problems whilst simultaneously putting most of your effort where the money is.

AmberFin iCR transcode farms are scalable from one node to many whilst always providing fault-tolerant operation. I will demonstrate how, with AmberFin iCR, you can scale up your proof of concept to whatever system capacity you require without needing any kind of external orchestration system.

AmberFin iCR employs a Network License Manager, which means that your key software licenses can float around the network and be used when and where they are needed in the most cost-efficient and effective way. Also, Quality Control of your media files is just another job within AmberFin iCR. Yes, QC is critical within any file-based workflow, but we don’t restrict the workflow in any way to address the need for QC – the transcode/QC unifier sits under the controller.

Think enterprise, think scalable 

Within AmberFin iCR, Enterprise Class Workflows start with Mini Transcode Farms: a single chassis with one transcode node, which is not redundant but is resilient. The smallest configuration still employs all the same technology and methodology as the largest capacity workflow, so your proof of concept remains true to the eventual large-scale workflow. We offer extra functionality options – such as standards conversion, captions, Dolby audio processing and watermarking – which float around the workflow in identical ways to the other software licenses.

Within this Webinar, I will prove that it is easy to take a really complex task and create a very simple, elegant workflow solution that supports your operating staff and also pleases your CFO by offering the best ROI possible. I will show that with the right planning and thought processes, coupled with AmberFin technology, you can create an Enterprise Class Workflow that is designed for business change and also designed for fault tolerance.

Yes, it is all about the money, but that does not mean that you need to jump in with a large scale system – with AmberFin iCR it is easy to scale your workflow to mirror your business needs.

If you would like to take part in the webinar you can choose your preferred time and sign up here:

Wed 4th December: 1pm GMT, 2pm CET, 8am ET, 5am PT

Wed 4th December: 5pm GMT, 6pm CET, 12pm ET, 9am PT



Why is Captioning so Difficult?
Captioning is a really hot topic right now, so I thought I’d share one of my short videos from the “Bruce’s Shorts” series of Broadcast-IT training videos for today’s blog post. Read the transcript below or click PLAY to watch the video. And if you would like to receive the entire series of videos with no obligation, click the "Bruce's Shorts" box to the right of this post and sign up now.

Hello and welcome to Bruce's Shorts. My name's Bruce Devlin, I'm the Chief Technology Officer at AmberFin. And today, we're going to ask: why is captioning so difficult?

Well, it's an interesting question. Mostly because in some places in the world you are not legally obliged to broadcast video, you are not legally obliged to broadcast audio, but you are legally obliged to put captions on it. This is a bit weird. It's OK to go black and silent as long as there are captions.

And actually, that's symptomatic of one of the things about captioning and subtitling across the world. Very often the captions and subtitles have been legislated, which means there's no direct revenue for the content owner or the broadcaster in putting the captions on. But you have to have them there. Also, around Europe, where there are many, many languages, lots of little cottage industries have grown up to do subtitling, so there's no real industrialization that's taken place in that area. Likewise, in the US, captioning has never really been industrialized.

But now, in this modern file-based world, we're looking at delivering content to more and more platforms: SD, HD, internet, iPad, Catch Up TV, On Demand, Blu-ray, UltraViolet – you name it, there are lots of different captioning formats required out there to service all of these different delivery devices. However, there isn't a lot more money for making the captions.
So if you want to get your captioning right, and you want to help with the difficulty of getting captions into your solution, then look for the one-stop shop that allows you to sort captions out within the heart of your ingest, transcode and QC solutions. Because if you can find a way of doing that, you stand a much better chance of controlling your captioning costs. My name's Bruce Devlin. That was one of Bruce's Shorts. I hope to see you for the rest of the series.
4 Steps to creating a Hyper-V Virtual Machine as an iCR host
Server virtualization has been around for quite some time now. The first time I came across it was about 10 years ago, in an attempt to reduce the roll-out time and overhead of setting up Microsoft technical training rooms. The existing process was quite involved, taking two dedicated staff members to ensure the training machines were rebuilt and ready for any upcoming courses; using virtual machines allowed a room to be readied from a central source, freeing the technical staff to focus on other aspects of their role.

Whatever the motivation for an organisation to move to virtualisation – be it to reduce power consumption, as part of a disaster recovery plan, or simply to make better use of current server platforms – this blog will demonstrate how to create a Hyper-V virtual machine as a host for an iCR transcoder. Why Hyper-V? Simply because it comes bundled in Windows 2008 Server and is therefore free…

Step 1: Configure your host machine. If Hyper-V is not already installed on your Windows 2008 Server, open Roles within Server Manager and add Hyper-V as a role; the virtual machine host software will be installed.

Step 2: Create a virtual network. Open Hyper-V Manager from the Start Menu, click Virtual Network Manager in the Actions panel on the right-hand side of the Manager window and add a new virtual network adaptor. This will allow our new virtual machine to communicate with the outside world via the host computer's NIC. Assign it a name (which will later be displayed during the virtual machine setup), select the host network interface the virtual interface should link to, and click OK.
Step 3: Create a new virtual machine. Again in the Actions panel, click New and choose Virtual Machine…

Step 4: This launches the New Virtual Machine Wizard. Click Next on the ‘Before you Begin’ window. In the next window you will be asked to specify a name and location for the virtual machine; complete with your desired settings and click Next. Set the amount of host RAM that should be reserved and allocated to your new virtual machine, then click Next and connect the virtual machine to the virtual network adaptor we created earlier. Next you will need to create a virtual HDD for the virtual machine; by default the VHD will be given the same name as the virtual machine, and its location will default to the location of the virtual machine. I plan to use shared storage for my VM, therefore I will keep the VHD reasonably small. Finally, decide how you will install the VM operating system, click Next, review your settings and click Finish.

Despite having configured most of the VM settings using the wizard, there is one other change required: the number of CPU cores available to the VM. To set this, right-click on your newly created virtual machine and click Settings, choose Processor in the left-hand panel and configure the settings on the right-hand side. The number of required processors depends on the workflow you wish to implement; information on the required resources for each workflow can be found in the iCR Hardware Recommendation document – email AmberFin Support and request this document if you need assistance. If you plan to run multiple VMs on one host machine you may need to configure the Virtual Machine Reserve and percentage resource settings; for now I will configure for one VM.

Once you are happy with your settings, click OK, ensure your install media is available – either the DVD or the ISO image (either Windows 2008 Server R2 or Windows 7) – then right-click on the VM and click Start.
Hyper-V will start the OS install on your virtual machine; follow the steps as you would with a standard Windows installation.

Taking a snapshot: If you plan to roll out further VMs within your environment, you may wish to make a backup of this clean VM so you can quickly and easily roll out new virtual machines using this install as your base “golden” template. With the VM running, take a snapshot of the current configuration by highlighting the VM in Hyper-V Manager and clicking Snapshot in the right-hand panel. Once the snapshot is complete the menu will change and an Export option will be available; shut down the VM, click this new Export option, specify the backup location and click Export.

Your new VM is now backed up and ready to have iCR installed. Copy the iCR installation file to the local drive of your VM, double-click and run through the install process. Let us know if you would like to test iCR within a virtual environment – we should be able to supply you with a 14-day trial licence so you can play…
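For those who prefer scripting to clicking, the same steps can be driven from the Hyper-V PowerShell module that ships with later Windows Server releases (the cmdlet names are real; the VM name, switch name, adapter name and sizes below are placeholders, not recommendations). A minimal sketch, wrapped in Python so it can sit alongside other automation:

```python
import subprocess

# Scripted equivalent of the GUI steps above, using Hyper-V PowerShell
# cmdlets. Sizes and names are illustrative placeholders only.
def hyperv_commands(vm_name, switch, host_nic, mem_gb, vhd_gb, cpus):
    """Build the PowerShell command strings for one iCR host VM."""
    return [
        # Step 2: virtual network switch bridged to the host NIC
        f'New-VMSwitch -Name "{switch}" -NetAdapterName "{host_nic}"',
        # Steps 3-4: create the VM with RAM, a new VHD and the network connection
        f'New-VM -Name "{vm_name}" -MemoryStartupBytes {mem_gb}GB '
        f'-NewVHDPath "{vm_name}.vhdx" -NewVHDSizeBytes {vhd_gb}GB '
        f'-SwitchName "{switch}"',
        # Post-wizard tweak: CPU core count per the iCR sizing guidance
        f'Set-VMProcessor -VMName "{vm_name}" -Count {cpus}',
        f'Start-VM -Name "{vm_name}"',
    ]

def run(commands):
    """Execute each command via PowerShell (requires a Hyper-V host)."""
    for cmd in commands:
        subprocess.run(["powershell", "-Command", cmd], check=True)

cmds = hyperv_commands("iCR-node1", "iCR-External", "Ethernet", 8, 60, 4)
```

Calling `run(cmds)` on an actual Hyper-V host would perform the provisioning; building the command list separately makes it easy to review before anything is executed.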
Rearrange these words: "Nail, coffin, Interlace, HEVC, last, for"
Sometimes I feel like a complete failure. Although I have been fortunate enough to win a lot of awards and have met a lot of fantastic people with whom I think I've done some good work, there is still an evil that lurks amongst us. This evil is being forgotten, and it often lurks silently and invisibly in our media, causing no problems at all UNTIL IT'S TOO LATE. To a young person starting out in our industry this evil is hard to understand, and the obvious solution is "JUST GET RID OF IT".

INTERLACE is EVIL

"Hello, young person", I say.

"Hello, old man", they reply. I look upset. I don't feel old, but I remember black and white TV broadcasts. In their eyes, I am a fossil.

"I see you're shooting a documentary."

"Yes, we like the Film Look™", they reply. "It gives our content a new and modern look. Not like that old-fashioned HD stuff from 2008."

"Progressive!", I say.

"Yes, very", they reply. Rolling my eyes skywards, I know this conversation is going to get difficult.

"No, not progressive style, but progressive scanning", I start. I am met with the blank stare of two teenagers looking at a kitten that has just started to speak Japanese. They have no idea what the word "progressive" means when followed by the word "scanning". I take a deep breath.

"When your electronic camera shoots in Film Look™ in Europe, it is actually taking a series of pictures that have 1920 pixels and 1080 lines, and it is doing it 25 times every second. Every pixel in the picture was shot at about the same time, and they are transmitted line by line starting at the top left corner and continuing to the bottom right corner. That's called progressive scanning."

"You can edit progressively, but when you come to broadcast the picture, you actually have to do it in a mode called interlace, where you pretend that all the even lines were shot 1/50 of a second before all of the odd lines in any given frame. That's called interlaced scanning."
"Then in the TV set, you have to figure out whether the original content was actually shot in Film Look™ or with an ordinary interlaced camera, so that the picture isn't shredded by the cheap deinterlacer in the screen. After all, flat screens are progressive, not interlaced."

"Woah, old guy", the spotty teen says. "You're telling me that even though I've shot this movie in super Film Look™ there is a chance that it will be destroyed in the transmission process? THAT'S NUTS!"

"Well, yes", I reply. "It is nuts, but that's how television works. Even more scary is that your great documentary work might be edited for conformance, censorship and duration by some stranger who had their copy of Final Cut Pro set to interlaced mode, and it might add a whole bunch of interlaced artefacts throughout the entire production."

"By that stage", I continue, "you have no control over your content. The only thing that can save your content in broadcast, on cable and on the web is a great deinterlacer like those from AmberFin. Without that kind of technology, your masterpiece is consigned to look jerky and crinkly and you won't know why!"

"But, but…", the young person stammers, "interlace must be evil. Who invented it, and why can't we get rid of it?"

And so I come to the point of this post. I have tried over the years to help rid the world of interlace. It made a lot of sense back in the 1940s, when it allowed us to fool the eye into seeing more lines in a picture despite the limited bandwidth available in the valves and electronic components of the day. Today, when all capture devices are progressive, all display devices are progressive and the black art of good interlace handling is being lost from the collective mind of the industry, interlace makes no sense. HEVC – High Efficiency Video Coding – will, I hope, be one of the last nails in the coffin of interlace. It has been published without interlace modes.
This is a GOOD THING and encourages everyone to start using more and more progressive techniques in their distribution. We are, however, still generating more 1080i material every year than progressive material. While this continues to be the case, good quality deinterlacing in the value chain will be vital for the success of HEVC in professional deployments. If you want to know how AmberFin's world-beating deinterlacer helps HEVC look great for ALL content, and not just movies, why not book an appointment to see us at IBC?
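The field mechanics described in the conversation above can be sketched in a few lines. This is a deliberately naive toy, assuming a frame is just a list of scan lines; real deinterlacers (AmberFin's included) use motion-adaptive or motion-compensated filtering, not simple line doubling:

```python
# Toy illustration of interlaced field structure. A "frame" is a list of
# scan lines; nothing here resembles a production deinterlacer.

def split_fields(frame):
    """Interlace: split a progressive frame into even and odd fields."""
    return frame[0::2], frame[1::2]

def weave(even, odd):
    """Weave two fields back into a full frame. Only perfect when both
    fields came from the same instant, i.e. Film Look material."""
    frame = []
    for e, o in zip(even, odd):
        frame.extend([e, o])
    return frame

def bob(field):
    """Naive 'bob' deinterlace: repeat each field line to restore height.
    Halves vertical resolution - hence the need for a good deinterlacer."""
    out = []
    for line in field:
        out.extend([line, line])
    return out

progressive = ["line%d" % i for i in range(6)]
even, odd = split_fields(progressive)
```

For Film Look material, `weave(even, odd)` reconstructs the original frame exactly; for true interlaced footage the two fields were captured at different instants, so weaving produces exactly the combing artefacts that shred the picture on a cheap deinterlacer.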
Pixels! Look at all the lovely pixels!
“3D is sooo last year. We’re totally over it. 4K is our new BFF. We’ve even seen it on a demonstration screen at NAB. It is so cool, and you don’t even need ugly glasses.”

Yes, my friends, it seems the doom-mongers were right and 3D television was just a passing fad. Consumers did not want to sit at home wearing glasses, and it transpired that they were not that keen on tigers appearing to sit on their coffee tables.

Out goes 3D, here comes 4K

But this is not the end of the world, because the R&D departments of the big consumer electronics companies have a new idea: 4K television. Actually, it is not quite 4K. We are talking about a screen size of 3840 x 2160, because that is convenient: double the resolution of HD in each direction. And four times the pixels of HD has got to be four times as good, right?

Well, up to a point. First, to see the extra resolution you need a bigger screen. That changes viewing habits: you are not looking at a picture, you are exploring it, moving your eyes around the screen. That may not be a bad thing, certainly for big programs like wildlife documentaries and sport. But I think there are some things we can do which will get us a much bigger improvement in picture before we just throw more pixels at the screen.

First – and if you are a regular reader of this column you will probably guess what I am about to say – we have to consign to history the evil that is interlace. We also need to consider how many of those lovely progressively scanned pictures we would like. If you have ever had the chance to look at high frame rate television – say 100 progressive frames a second – then you will have been astounded at how much sharper it looks, even though it is still in humble old HD. A higher frame rate, of course, means more data. But so does having four times as many pixels. Not to mention the NHK Super Hi-Vision 8K system, which has 16 times the pixel count of HD.
HEVC

The good news is that those very, very clever mathematicians who develop compression codecs for video have come up with a new solution: HEVC. It’s called High Efficiency Video Coding for a reason: it packs a lot of data into a small stream. So HEVC can make high frame rates possible. And 4K. And – this is the bit I really like – it does not allow interlace. When we are ready for 4K production, HEVC will be able to deliver it. It might be a broadcast stream, although I think it is more likely it will be online.

4K will not be practical in the very near future, and certainly not as fast as the consumer electronics companies would like us to replace our televisions. But it will come, as will other advances. When it does, it will be because creative program makers want to make use of the additional quality that more pixels, or more frames, or both, will deliver. In turn, that means keeping as much of the quality as is humanly possible, all the way to the final screen in the home.

And when that happens, I hope that my eyesight is still good enough to read the manufacturers’ logos on the pedals of the riders as they ride the Tour de France in 4K. If you happen to be at IBC next month then come and check out how it's done on stand 7.H39. Click here to make an appointment now.
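A quick back-of-the-envelope check of the pixel arithmetic above makes the data problem concrete. The 4:2:2 10-bit sampling and the frame rates here are my own illustrative assumptions, not figures from any standard deployment:

```python
# Back-of-the-envelope pixel counts and raw (uncompressed) data rates.
hd    = 1920 * 1080   # humble old HD
uhd4k = 3840 * 2160   # consumer "4K" (not quite DCI 4K)
shv8k = 7680 * 4320   # NHK Super Hi-Vision

assert uhd4k == 4 * hd    # double the resolution in each direction
assert shv8k == 16 * hd   # 8K carries 16x the pixels of HD

BITS_PER_PIXEL = 20       # assumed 4:2:2 sampling at 10 bits per sample

def raw_gbps(pixels, fps):
    """Uncompressed video data rate in gigabits per second."""
    return pixels * BITS_PER_PIXEL * fps / 1e9

print("HD  25p: %.2f Gbit/s" % raw_gbps(hd, 25))
print("4K 100p: %.2f Gbit/s" % raw_gbps(uhd4k, 100))
```

Going from HD at 25p to 4K at 100p multiplies the raw data by 16 (4x the pixels, 4x the frames), which is precisely why a codec with HEVC's efficiency matters.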
It's all about the API!
Once upon a time, in a land far, far away, software was sold that ran on a single computer. That computer was a lonely device. It didn't have many other computers to talk to, so all of the software was concerned with internal processes and internal calculations and making sure it didn't crash. Everything was under control, and the software QA team was small and happy.

Along came networks, and suddenly pieces of software could talk to each other over great distances. Suddenly it was not good enough for software to run on just one computer; it had to talk to many computers, at the right time and with the right computer vocabulary. The QA team had to get bigger because there were more things to test, but the team was medium-sized and happy. Very, very big computers lived at the far end of the network and were accessed by terminals that the users sat in front of. These were the good old days.

Time passed, and eventually some bad people discovered that there were bugs in the network software, and that they could get into any computer they liked by exploiting these bugs, stealing data and causing havoc. The QA team was not happy any more; it grew a security division, and also some IT sysadmins who put up firewalls and other security devices.

More time passed and the internet was born. Now data was moving freely between computers, the firewalls were very good, and people used their personal computers to run lots of powerful software that was so complicated it took a long time to learn.

Today we see that the "cloud" is arriving. It looks a little bit like a super-duper version of the mainframe and terminals from the good old days, but there is an important difference. The mainframe was all about dumb terminals talking to a central computer that was far away.
The cloud is all about smart applications talking to other smart applications with smart APIs, and users joining them together to do really neat things, even though the computers are still far away. At AmberFin we've known about the importance of the API since the first day we launched the company. Our APIs are stable and well used. In fact, about half of our customers don't use the GUI – they only use the APIs to control their transcoding, QC and ingest. This approach makes it very easy to turn our standard product into something that looks like a customized solution, by using a local programmer to call our web service APIs from within a browser, within an application, from a MAM or from a command line.
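As a hedged sketch of what driving a transcoder through a web-service API looks like in practice: the endpoint URL, JSON field names and profile name below are hypothetical examples invented for illustration, not AmberFin's actual API; the real schema lives in the product's API documentation.

```python
import json
from urllib import request

# Hypothetical REST-style job submission. Endpoint, fields and profile
# are placeholders - consult the vendor's API docs for the real schema.

def build_transcode_job(source, profile, destination):
    """Assemble a JSON job description for a transcode web service."""
    return json.dumps({
        "job": {
            "type": "transcode",
            "source": source,
            "profile": profile,
            "destination": destination,
        }
    })

def submit(api_url, job_json):
    """POST the job to the service (a live network call; not run here)."""
    req = request.Request(api_url, data=job_json.encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

payload = build_transcode_job("smb://media/master.mxf", "h264_web", "smb://out/")
# submit("http://transcoder.example.local/api/jobs", payload)  # on a live system
```

Separating job construction from submission is what lets the same payload be fired from a browser, a MAM integration or a command line, which is the point the post is making.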
The view from Brazil
This post is going out while I am at SET in Brazil, and it's interesting to see the difference in TV habits around the world. Brazil is an amazing country. It has a rapidly growing economy and a new generation of affluent middle class who are keen to have the latest television sets in their homes.

The companies and the people that I have talked to so far show a broad spectrum of sophistication when it comes to file-based workflow. Some are at the bleeding edge of technology, with implementations that are tape-free; others are struggling with migration plans to create an online infrastructure for an archive that is still on tape. This broad spectrum of users clearly highlights an effect described by Joe Zaller of Devoncroft. He called it the evolution continuum, and it refers to the fact that television, in many ways, resists change. Taking creative content and showing it to large audiences can still be done profitably today, particularly in countries with less sophisticated internet infrastructure.

Competition for eyeball contact time becomes more and more intense, and the market that television is trying to service comes into direct competition for attention with online activities. If someone had told me that I would see adverts for browsers and online video services on the television, and that within 20 minutes I would be online with that browser being served adverts for the television station I had been watching only a few minutes earlier, then I would not have believed them. Yet here we are, with everyone scrambling for viewers and attention.

In many ways, the countries that have more time to adopt the new television models are lucky. They can afford to watch what is happening in the more developed countries and see what models work and what models don't. Whichever model wins the day, one thing is clear: the downward pressure on prices for TV equipment will continue.
We're not quite at the stage where a fully service based software infrastructure in the cloud makes sense for the majority of stakeholders, but the signs are there. If you're interested in a little security against the uncertain future, how about downloading our enterprise white paper to see how AmberFin can help you industrialize your file handling.
How to get de-interlaced in Amsterdam
HEVC (High Efficiency Video Coding) is likely to be one of the most talked-about topics at IBC this year, not only because it promises to reduce the data rate needed for high quality video coding by 50% compared to the current state of the art, but also because the new coding standard simply does not support interlacing.

And that's a really good thing, because when it comes to achieving a high quality, clean encode of video – and in particular high frame rate, high-resolution video – interlacing is a huge distraction. Inevitably it adds noise and reduces quality, because the compromises inherent in it do not work well with the underlying algorithms. Moreover, all display devices and most capture devices inherently use progressive scanning. This is a good thing because it encourages everyone to start using more and more progressive techniques in their distribution operations.

While I wholeheartedly believe that HEVC will mark the end of interlacing, we still have some way to go before we truly rid the world of the evil that is interlace. In fact, the professional content creation industry is still generating more 1080i material every year than progressive material. While this continues to be the case, good quality de-interlacing in the value chain will be vital for the success of HEVC in professional deployments.

So if your content was created in an interlaced format, or restored from an archive in an interlaced format, don't give up! Passing it through a professional de-interlacer will at least ensure a clean progressive signal, free from artifacts, going into the HEVC encoder. And if you integrate high quality de-interlacing within a generic transcode platform, and tightly couple transcode to a variety of media QC tools to check quality before delivery, you've got a winner! If you come and see us at IBC (7.H39), we'll show you how a really good de-interlacer can help HEVC look great for all content.
And if you're not going to IBC but still want to find out more about HEVC and how to rid the world of interlace, check out our new HEVC white paper.
IBC 2013 - Did interlace die or was it just wounded?
I delivered the last paper of the show in 1988 in Brighton, UK, and it was raining. This year the rain was plentiful too, and I was up on stage again talking to young people as part of the Rising Stars program. It's refreshing to talk to people who are new to the industry; it gives you a good perspective on the future.

One of the most interesting topics of conversation was 4K and Ultra HD. A young producer, Tim Pool, remarked how easy it was to make Ultra HD with mobile technology. The latest Nokia phone has a 4K video camera in it, which enables you to shoot and upload to the internet in amazing resolutions with clean progressive pictures and get them displayed on 4K TV sets. So if it's easy for the amateur, why is it hard for the professional?

Ever wondered why we have progressive cameras that are forced to do interlace, feeding compression codecs which are less efficient with interlaced material, going into an interlaced transmission system that illuminates a progressive display that is forced to undo interlace? It seems a bit weird when you say it out loud. And that's what I did. Everyone I talked to at IBC thought it weird that we all just accept interlace and are not actively designing it out of our workflows.

Instead, the buzz at IBC (if there was one) was focussed on 4K and HEVC. Strangely, both of these technologies work best with progressive pictures (and there is no 4K interlaced standard that I know of yet). AmberFin's HEVC demo showed a lot of people the effects of interlace on compression, and all the 4K imagery at the show was progressive.

Spookily, one of the most interesting demos was in the new technology area, where frame rate changes were shown. Images at 60fps looked so much better than 30fps; 120fps looked even better, and 240fps had a lifelike quality that was awesome. For my money, super-high frame rate HD that is up-converted to 4K will make my man-cave the perfect place to hang out and watch sports.
Native 4k resolution movies in my man-cave will be awesome if I can have a Dolby Atmos object based sound system. Unfortunately, my man-cave is still a dream and until that day, AmberFin's new transcoder farm will have to continue removing interlace from 1080i pictures. Maybe I was a little premature in predicting the death of interlace. I think for now, it is just wounded.
Digital Production Partnership (DPP): Will it meet its deadline?
The Digital Production Partnership (DPP) is an organisation facing the Herculean challenge of helping the UK broadcast industry exploit maximum benefit from file-based digital production. The objective of the DPP’s Technical Standards group is to achieve the standardisation of technical requirements for the delivery of TV programmes to UK broadcasters, and to maintain and update these standards in line with current capabilities.

The agreement of the DPP’s file-based Technical Standards (released Jan 2012) was not intended to signal an immediate move to file-based delivery. Instead, the DPP has provided clarity around which file format, structure and wrapper will become the expected standard for file-based delivery as it is phased in. In 2012 the BBC, ITV and Channel 4 began to take delivery of programmes on file on a selective basis. The aim is for file-based delivery to be the preferred delivery format for these broadcasters from 1st Oct 2014.

DPP D-Day is looming

So, slightly less than 12 months ahead of DPP D-Day, how is it looking? Subscribers to Bruce's Shorts may recall what we call the interoperability dilemma. The premise is that if you take all the combinations of wrappers, video codecs, audio codecs, track layouts, timecode options and other ancillary data, and complete a "minimal" in/out test matrix, you end up with a test plan that will take at least 1800 years to complete. Even if you constrain this to "commonly used" combinations, by the time you factor in different versions of formats and specifications (we're now on the 3rd revision of the base MXF specification, for example), 1800 years is a little on the optimistic side. By tightly constraining the wrapper, video codecs, audio codecs and metadata schema, the DPP Technical Standards Group has created a format that has a much smaller test matrix and therefore a better chance of success.
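The arithmetic behind the interoperability dilemma is easy to reproduce. The per-category counts and the one-hour test time below are my own illustrative assumptions, chosen to land near the 1800-year figure; they are not the DPP's actual numbers:

```python
# Reproducing the scale of the interoperability dilemma.
# All counts below are illustrative assumptions, not DPP figures.
wrappers, video_codecs, audio_codecs, track_layouts, timecode_opts = 5, 8, 5, 5, 4

variants = wrappers * video_codecs * audio_codecs * track_layouts * timecode_opts
pairs = variants * variants     # a "minimal" in/out matrix tests every
                                # input variant against every output variant
hours_per_test = 1              # assumed
years = pairs * hours_per_test / (24 * 365)

print("format variants:", variants)
print("in/out test pairs:", pairs)
print("years of round-the-clock testing: %.0f" % years)
```

Because the matrix grows with the square of the variant count, halving every category divides the variants by 32 and the test pairs by over 1000, which is exactly why the DPP's tight constraints give the format a fighting chance.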
Everything in the DPP File Delivery Specification references a well defined, open standard; in theory, therefore, conformance to those standards and specifications should equate to complete interoperability between vendors, systems and facilities. However, theory and practice frequently bear little resemblance to each other. At AmberFin, we can see two key reasons why the theory and the reality don't quite match up.

Interoperability issues create the need for a DPP dress rehearsal before deployment

First, despite the best efforts of the people who actually write the standards and specifications, there are areas that are, and will always be, open to some interpretation by those implementing the standards, and it is unlikely that any two implementations will be exactly the same. This may lead to interoperability issues, and the only way to find out is "on-boarding" – actually testing real-life workflows. This highlights the importance of planning DPP deployments well in advance of the looming deadlines and allowing for "dress rehearsals" as early as possible.

Can you extend the DPP specification?

The second reason is less about files and more about workflow. The simple truth is that the more you constrain a specification, the fewer applications and workflows it can be used in. A year before DPP D-Day, we have already been asked where AmberFin can help "extend" the specification to meet the needs of those facilities already planning and/or implementing DPP file delivery. The flexibility of AmberFin’s iCR platform in creating additional metadata fields, and its ability to display and manage QC data while keeping a core media file that conforms to the DPP specification, has been a big bonus. But while this flexibility simplifies the implementation of DPP-based workflows, deviation from the specification only increases the need for testing and highlights the importance of planning DPP deployments sooner rather than later.
If you needed any more reasons to get going on DPP well ahead of the deadline, next summer is rumoured to be a good one - and if you would prefer to spend August enjoying the British summer sun, knowing that come 1st October everything will just work, then you need to plan ahead!

So, to answer the question posed in the title of this blog: an enormous amount of work is being done by a great many people within various organisations. These operations will be the big winners when D-Day arrives. Other organisations may have fallen into the trap of believing that the DPP specification is an 'off the shelf' solution - it is not. If you want to be a part of the DPP revolution, you need to start preparing now. If you want more information on how AmberFin can help you meet the DPP D-Day challenge, why not start by reading our White Paper?
Industry pros gather in the Big Apple at Content Communications World (CCW)
While CCW may not attract the insanely large crowds that NAB and IBC do, it provides a more intimate, more focused setting that many professionals in our industry find really valuable. Now in its 10th year, CCW is expecting in excess of 6,000 attendees, according to show officials.

Attendees not only get the opportunity to network with friends and other members of their respective communities, but can also meet directly with vendors and service providers to acquire a deep understanding of today's complex and ever-changing technical landscape. That's because the show highlights many different areas of expertise, both through the individual exhibit areas related to each discipline and through its panel discussions and technical sessions. CCW's educational program will feature over 200 speakers addressing the latest trends in content creation, management, distribution and delivery.

As part of this highly popular educational program, Bruce, together with one of our long-time customers, Jonathan Salomon from WWE, will be hosting a session on how to handle multi-format conversion for global content delivery. WWE is a busy, fast-paced operation: every week they distribute approximately 200 tapes internationally, 100 hours of programming via satellite, and about 45 hours of file-based content to some 30 international and domestic clients. All of that content is created in 1080i 29.97 NTSC and, with a large international presence, a lot of it has to be converted to PAL. WWE also delivers playout-ready content to all of their clients to match whatever playout server they have; they produce multiple language versions in-house, create custom versions of their shows for specific countries, and produce an ever-growing amount of web content. As you can imagine, they do a lot of file-based conversion!
Bruce and Jonathan will show how to avoid quality issues in international and internet distribution, and will demonstrate a novel, fast, software technique that corrects the problem, resulting in clean international and internet masters. Their presentation will take place in the Broadcast & Beyond Theater, located in the 600 aisle, on Thursday, November 14th from 11:45 AM to 12:15 PM.

And of course you can come and see us too, on booth #1256, where we will be showing a new workstation designed to help smaller facilities and post-production houses easily create and review J2K assets. Many media and entertainment organisations have chosen JPEG 2000, or J2K, as a high-end service master or service mezzanine, and the facilities that deliver content to them have to work with their strict format requirements. This new, affordable and easy-to-use version of iCR will enable them to ingest and transcode files to J2K, ensuring the highest quality video encoding in a scalable and tightly integrated solution.
Whatever you do, don’t become an accidental software programmer!
Whoever said that running a media facility was easy needs carting off to the madhouse! Remaining price- and quality-competitive in today's marketplace requires the physical agility of Usain Bolt, the insight of Garry Kasparov and the staying power of Red Rum, all rolled into one.

Those media facilities that were amongst the first to spot the tremendous technical, operational and business opportunities offered by file-based workflows have benefitted from this revolutionary environment for some time. But it's not as simple as making the decision to adopt file-based workflows and then retiring to your yacht in the Caribbean. No. As the whole area of file-based workflows evolves, media facilities need to stay at the forefront of this new technology. But willingness to embrace these new technologies is not sufficient on its own. For without insight, you can (quite accidentally) end up becoming a software programmer rather than a media facility.

Enterprise-class file-based workflows – make the right decisions early

What do I mean by this? Running your company, you buy products that you believe will make your life easier and your business more profitable. Your plan is that these new products will provide flexibility and versatility in your business operations. You've seen products that allow you to design all the logic for your custom workflow, and this seems perfect. But just hang on a second! What's your business about? Are you a software designer using software tools to deliver a workflow, or are you a media engineer using a workflow to deliver a result? All too often we see valuable media engineers plugging away with workflow design software, trying to get it to work for them. The alleged capital saving on software is often wiped out by the man-hours spent figuring out the logic of the workflow engine. So, what are the alternatives? You can buy the right tool for the job - one that has the right check boxes on its integrated GUI.
Or, alternatively, you can use an expert software design service to design the workflow with the sophisticated tools.

What's more costly – capex or opex?

So what's the moral of this story? Don't be an accidental software designer. Don't accidentally subsidise your capex budgets with opex pain and suffering. At the very least, get a quote for the software service to configure your workflow - this will give you the true cost of the workflow solution. When you are developing your file-based workflow strategy, don't just concentrate on your technical needs. Ask how good these products and software programmes are at accommodating your business needs today. And how good are they at performing cart-wheels and handstands to accommodate your business needs as they change in the future?

Bruce's Shorts Enterprise-Class Workflow webinar – sign up today

Enterprise-class software platforms can help transform your business operations - hopefully for the better, but sometimes for the worse. Time spent planning your strategy could be some of the most valuable time that you ever spend in the life of your media facility. AmberFin can help you through this planning process in three ways:

1 - Why not download and read our White Paper on Enterprise Level File-Based Operations?

2 - On December 4th, as part of AmberFin's Bruce's Shorts initiative, I will be hosting a webinar that will investigate Enterprise-Class Workflows. If you would like to take part in the webinar, or just to learn more about Bruce's Shorts, why not sign up to the program today?

3 - Pick up the telephone and give us a call. At AmberFin, we hate to see facilities making well-intentioned but flawed investment decisions. We are happy to help you plot a safe course through these new waters and would love to hear about what you are doing.

I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?
5 Secrets to make Metadata Easy
Transfer of content between companies remains a time-consuming and costly process. In many cases it is not the time taken to get the file from A to B that is the problem; it is the interpretation of exactly what has arrived when it gets there. We all know that metadata is the answer, but what is metadata? How should it be stored? And how can it be made easier? In this blog post I will look at 5 secrets to make metadata easy. And when I say easier, should it be easier for the content creator to generate the metadata, or easier for the content recipient to interpret the information it contains? Can it not be both?

One

An initial consideration is what data should be captured - in other words, the metadata schema. Should it be video codec, resolution and frame rate, along with the same information about the audio? The metadata schema defines all the information which should be included in the metadata, and should cover all aspects of video and audio so that downstream processing decisions can be made based solely on the metadata - no asset analysis should be required.

Two

Next, how should the metadata be carried? When creating the metadata for a particular asset, you may not know who is going to receive it. Is it going to a multinational broadcast facility, where it will be imported into a MAM and processed along with the hundreds of titles received on a daily basis? Or is it going to a small post-production house, where it is to be put back on tape so it can be supplied to meet a particular customer requirement? Either way, it is best to choose a language tailored to both. XML is a language designed to store and transfer data, with the benefit of being both human- and machine-readable. It therefore doesn't matter which of the above facilities receives the content: the metadata contains usable information.
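As a sketch of what such an XML sidecar might look like, the snippet below builds a minimal document with Python's standard library. The tag names and values here are a hypothetical illustration of a schema, not the DPP or AS-11 layout:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal schema: enough video and audio detail that a recipient
# can make downstream processing decisions without analysing the asset itself.
asset = ET.Element("Asset", id="EP-0042")

video = ET.SubElement(asset, "Video")
ET.SubElement(video, "Codec").text = "AVC-Intra 100"
ET.SubElement(video, "Resolution").text = "1920x1080"
ET.SubElement(video, "FrameRate").text = "25"

audio = ET.SubElement(asset, "Audio")
ET.SubElement(audio, "Codec").text = "PCM"
ET.SubElement(audio, "Channels").text = "4"
ET.SubElement(audio, "SampleRate").text = "48000"

ET.indent(asset)  # nested, human-readable layout (Python 3.9+)
print(ET.tostring(asset, encoding="unicode"))
```

The same tree is trivially machine-readable: a MAM can pull `Video/FrameRate` out programmatically, while an operator at a small facility can read the indented text directly.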
An additional benefit of using XML is the option of a style sheet, which formats like data in a nested structure and presents the metadata in a more readable way. While this addition would probably not assist the larger organisation above, it will help the individual who has to read and extract key information when making downstream processing decisions.

Three

What if the schema you decide upon uses a similar naming convention to someone else's? You supply the content to the broadcaster, who attempts to import it into their MAM; the MAM rejects the files based on the contents of the XML, due to a conflict with a previously used naming convention. Creating a namespace in your XML allows similar tags to be used within differing schemas, removing the likelihood of conflicts between differing XML metadata files.

Four

You have now decided on the metadata you plan to supply with your asset, you know how it is going to be carried and how it will be presented, and you have ensured there will be no conflict between the metadata carried in your XML and that of other suppliers. But what about confidence in your metadata? Can you be sure it is all present and correct? Depending on how the metadata is created, is there potential for incorrect values to be entered? Validating the metadata prior to submission is good practice: it ensures the information is present and complete, removing the potential for follow-ups on previously supplied content.

Five

Finally, the easiest way to make metadata easy is to employ one of the previously defined standards: AS-11, AS-12, DPP... While each of these standards specifies that the metadata will be carried in the MXF wrapper, the same information can be supplied as an XML sidecar, using a pre-defined schema that already has validation tools available. More information on embedded vs. sidecar metadata can be found in Ben Davenport's blog.
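Points Three and Four can be sketched together in a few lines: qualify your tags with a namespace so they cannot collide with another supplier's, then check that every required field is present before the file leaves the building. The namespace URI, field list and helper below are illustrative assumptions, not part of any published schema:

```python
import xml.etree.ElementTree as ET

NS = "urn:example:facility-metadata:v1"  # hypothetical namespace URI

# A namespaced sidecar: the m: prefix ties every tag to NS, so an identically
# named <Codec> from another supplier's schema cannot clash with ours.
sidecar = f"""<m:Asset xmlns:m="{NS}" id="EP-0042">
  <m:Video><m:Codec>AVC-Intra 100</m:Codec><m:FrameRate>25</m:FrameRate></m:Video>
  <m:Audio><m:Codec>PCM</m:Codec><m:Channels>4</m:Channels></m:Audio>
</m:Asset>"""

REQUIRED = ["m:Video/m:Codec", "m:Video/m:FrameRate",
            "m:Audio/m:Codec", "m:Audio/m:Channels"]

def validate(xml_text: str) -> list:
    """Return the required fields that are missing or empty."""
    root = ET.fromstring(xml_text)
    ns = {"m": NS}
    return [path for path in REQUIRED
            if (el := root.find(path, ns)) is None or not (el.text or "").strip()]

missing = validate(sidecar)
print("valid" if not missing else f"missing: {missing}")  # prints "valid"
```

A real deployment would validate against a published XSD with a dedicated tool, but even a presence check like this catches the empty-field errors that otherwise surface as follow-up emails weeks later.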
AmberFin iCR provides support for both embedded and sidecar metadata through a simple plug-in, allowing data capture during content creation, the generation of a sidecar XML, and the embedding of metadata in the file wrapper. More details can be found in a previous post - 3 Simple Steps to generating XML Metadata in AmberFin iCR.

The Real Secret

The real secret to making metadata easy is to not re-invent the wheel: use predefined standards and greatly reduce the cost of content transfer.

I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?