Virtualization – is it always best?
As global health concerns disrupt our traditional working models, companies need to consider how to fine-tune their infrastructure choices for business continuity. By Bruce Devlin | 03/17/20
I like to be told that I’m wrong. It usually means that I’ve made some broad, sweeping assumption that over-simplifies the world. My most recent blunder was assuming that the whole world will obviously move 100% of its media operations to the cloud. It seems to me that in the space of a few short years, the media industry has changed its mindset from “cloud is unsafe,” through a brief dalliance with “cloud is good,” and has now ended up with “everything cloud” as the way to go.
Considering current global events around mobility and remote working, this is a highly topical discussion.
One-size-fits-all solutions do not exist!
In the unused bit at the back of my brain, I know that there is no such thing as a one-size-fits-all solution, but at the same time I cling to the “everything cloud” marketing philosophy as some kind of justification for forward motion. Very often, it’s a mix of technologies that gives the best performance for a given price, and it’s the ability to choose the right technology at the right time, for the right job, at the right price that ensures a business continues to thrive.
Transcoding is a curious business. To select a service or a device, you first must be sure that it meets your needs for scaling, deinterlacing, frame rate conversion, image filtering, SDR and HDR conversion, range of codecs, compression efficiency and compression quality. In today’s time-pressed environment, choices are often made from a service rate card rather than by testing with real content and real people. This is a shame, because very often the option of a high-quality device with a Capex price tag is eliminated out of hand, even though the per-transcode cost of the alternative service can work out higher for lower quality.
Nothing is ever simple – what’s the real business problem?
So why all this heavy philosophy? Dalet asked me to look at a hardware accelerator for an offline transcoder. I initially thought that I had stepped into a time machine, because that sort of solution is just not fashionable now. Then I stopped and thought about it for a while in the context of today’s reduced operating margins, remote infrastructure requirements and ever-increasing platform support requirements.
If you have a fixed and stable volume of content that needs to be converted every day, week or month, then the costing of that core transcode is a key fixed cost of the business. If a hardware accelerator reduces that fixed cost with a one-off investment rather than a pay-as-you-go continuous commitment, then it is a no-brainer, provided you still have a local data center to house it and the ability to manage it remotely.
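That trade-off can be sketched as a simple break-even calculation. The figures below are purely illustrative assumptions (a hypothetical accelerator price, running cost and per-transcode rate), not quotes from any vendor:

```python
# Break-even sketch: one-off hardware accelerator (Capex + running costs)
# versus pay-as-you-go cloud transcoding. All numbers are hypothetical.

def break_even_months(capex: float, monthly_opex: float,
                      per_transcode_cost: float, jobs_per_month: int) -> float:
    """Months until the one-off Capex purchase becomes cheaper than
    paying per transcode, given a fixed and stable monthly volume."""
    monthly_cloud_cost = per_transcode_cost * jobs_per_month
    monthly_saving = monthly_cloud_cost - monthly_opex
    if monthly_saving <= 0:
        # At this volume the hardware never pays for itself.
        return float("inf")
    return capex / monthly_saving

# Example: a $40,000 accelerator with $500/month running costs,
# versus $2.50 per transcode at 2,000 transcodes a month.
months = break_even_months(40_000, 500, 2.50, 2_000)
print(f"Break-even after {months:.1f} months")  # about 8.9 months here
```

The point of the sketch is the shape of the curve, not the numbers: the higher and more stable your monthly volume, the faster the one-off investment overtakes the continuous commitment.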
There is a business sweet spot for accelerators!
So I found myself looking at an HEVC encoding accelerator, connected to a cloud-enabled Dalet AmberFin transcode farm, and realized that it was the right solution for many customers whose core need is a lot of transcoding at the minimum TCO (Total Cost of Ownership). Like many things in engineering, it might not be fashionable or glamorous, but for the right application it makes good business sense.
It also serves the needs of remote working and remote management, since you can build a hybrid architecture that works in the background yet can be accessed anytime, anywhere, assuming your data center has some solid business continuity in place (let’s face it, who doesn’t these days?).
As 2020 unfolds, with major issues at a global scale, it seems that there is a sweet spot for hardware accelerators: high throughput with less energy consumption than a raw software solution. It also seems that I should avoid jumping on today’s fashionable technology for everything, and keep my mind open to a wider range of practical solutions to real business problems!