An increasingly common approach to developing new media infrastructure is the “proof of concept”. The term can sound a little negative, as if we needed to try something first to see whether it really works, but I do not think that is the motivation behind it:
To meet the multi-platform, multi-format requirements of a media business today, we need complex, largely automated workflows. And it makes sense to try them out first, in one part of the organization.
But this achieves more than one goal:
First, it obviously proves the concept: it shows that you have all the equipment and processes in place to do what you need.
Second, it allows you to develop workflows on the concept system, fine-tuning them to work precisely the way you want to work. Some vendors will try to push you towards a big-bang approach where the workflows are baked into the architecture, which makes it difficult to change things when you find you want something slightly different.
Third, and this is really important, it allows a sub-set of users to get comfortable with the system and take ownership of the workflows. That means you get the processes right, because they are designed by the people who actually need them, and it gives you a group of super-users who can ease the transition to the full system.
All of which sounds good. But it depends on something we all talk about yet rarely truly understand: the proof of concept stage is only worthwhile if the small system performs in exactly the same way as the final enterprise-wide implementation.
The word “scalable” is often used quite loosely, but this is what it really means. You can start with something small, and then by adding capacity, make it cover the whole operation, without changing any detail of how it works.
For me, that means that the enterprise system has to be built the same way as the proof of concept system. If the first iteration consisted of a single workstation performing all the functionality – which in our case might be ingest, transcode, quality control and delivery – then the full system should be a stack of workstations that can perform all the functionality.
And it also means that you don’t need to blow the capital budget on a huge number of hardware boxes. That would not be efficient, because at any given time some of the boxes might be idle while others had a queue of processes backed up and delaying the output.
It’s better to ensure you have sufficient licenses for the software processes you require, with a smart licensing system that can switch jobs around. If server A is running a complex transcode on a two-hour movie, then its quality control license could be transferred to server B, which can get on with clearing this week’s batch of trailers and commercials.
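To make the idea concrete, here is a minimal sketch of a floating-license pool. All the names here are hypothetical, invented for illustration; this is not the actual licensing API of any product. The point is simply that licenses belong to a shared pool and move to whichever server has work for them:

```python
class LicensePool:
    """Tracks which server currently holds each floating license."""

    def __init__(self, licenses):
        # e.g. {"transcode": 1, "qc": 1} -> free count per license type
        self.free = dict(licenses)
        self.assigned = []  # (license_type, server) pairs currently in use

    def acquire(self, license_type, server):
        """Give `server` a license of this type, if one is free."""
        if self.free.get(license_type, 0) > 0:
            self.free[license_type] -= 1
            self.assigned.append((license_type, server))
            return True
        return False

    def release(self, license_type, server):
        """Return a license to the pool so another server can use it."""
        self.assigned.remove((license_type, server))
        self.free[license_type] += 1


pool = LicensePool({"transcode": 1, "qc": 1})

# Server A starts a long transcode and initially holds the QC license too...
pool.acquire("transcode", "server-a")
pool.acquire("qc", "server-a")

# ...but server A will be busy for two hours, so the QC license is
# released and server B picks it up to clear the trailers and commercials.
pool.release("qc", "server-a")
pool.acquire("qc", "server-b")
```

The same software is installed everywhere; only the right to run a given process moves around, which is what lets a small pool of licenses keep a larger pool of hardware busy.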
The AmberFin iCR platform is designed on this basis. You can buy one and run all the processes on it sequentially, or you can buy a network to share the load, under the management of an iCR Controller. This manages the queue of tasks, allocating licenses as required from the central pool.
As well as making the best use of the hardware, the controller collects statistics from each server and each job. Managers can see at a glance whether jobs are being delayed, and whether that is a problem for the business overall. More than that, they can see why jobs are delayed: can it be solved by additional software licenses, or do you need more servers?
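The diagnostic value of those statistics can be sketched in a few lines. The job records and field names below are invented for illustration, not iCR’s real reporting format; the idea is that separating “time spent waiting for a license” from “time spent waiting for a free server” tells you which one to buy more of:

```python
# Per-job records: (job_id, wait_for_license_s, wait_for_server_s, run_time_s)
jobs = [
    ("movie-123",  340,  0, 7200),
    ("trailer-45", 600, 20,  300),
    ("promo-9",    550, 15,  240),
]

license_wait = sum(job[1] for job in jobs)  # total time queued for a license
server_wait = sum(job[2] for job in jobs)   # total time queued for hardware

if license_wait > server_wait:
    bottleneck = "licenses"   # jobs mostly wait for a license to free up
else:
    bottleneck = "servers"    # jobs mostly wait for hardware to free up

print(f"Main bottleneck: {bottleneck}")
```

With the sample numbers above, almost all of the waiting is for licenses, so extra licenses, not extra servers, would be the cheaper fix.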
Scalable systems are definitely the way to go, but only if you can understand how you need to scale them.
I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?