The Importance of Interoperability as We Head to the Cloud – Part One

By Dave Van Hoy, President, Advanced Systems Group, LLC

As with any early phase of technology, manufacturer interoperability is always a big challenge. Companies innovate, engineering products before standards exist to knit those products into ecosystems. This has been true all the way back to the early days of film, which eventually produced the very first film sprocket standard and, with it, the organization today known as SMPTE – long before there was such a thing as television. And today we find ourselves facing the same challenge in a similar early development phase – building ecosystems of cloud products that “talk” to one another.

This is particularly challenging because, as we have discussed in previous columns, media requires very deterministic communication, meaning exact timing. And when we run applications in the public cloud, where everything is about virtualizing hardware and sharing resources, deterministic communication was never a design consideration.

We correct for this by using specialized protocols that carry deterministic timestamps from one part of the process to the next, so we can produce correct audio and video output. Therein lies our “failure to communicate”: if one product speaks one protocol and another doesn’t know that protocol, there is no way for them to communicate.
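To make the timestamp idea concrete, here is a minimal Python sketch of the underlying mechanism: packets stamped at the source are reordered by timestamp before playout, so the output stays deterministic even when a virtualized network delivers them out of order. This is purely illustrative – it is not any vendor’s protocol, and real media protocols carry far more than a timestamp and a payload.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    timestamp: int                        # presentation timestamp stamped at the source
    payload: bytes = field(compare=False)

class JitterBuffer:
    """Reorders packets by source timestamp so playout order is deterministic
    even when the network delivers packets out of order."""
    def __init__(self):
        self._heap = []

    def push(self, packet: Packet) -> None:
        heapq.heappush(self._heap, packet)

    def drain(self):
        """Yield buffered packets in timestamp order."""
        while self._heap:
            yield heapq.heappop(self._heap)

# Packets arriving out of order, as they might from a shared, virtualized network
buf = JitterBuffer()
for ts, data in [(3, b"C"), (1, b"A"), (2, b"B")]:
    buf.push(Packet(ts, data))

ordered = [p.timestamp for p in buf.drain()]
print(ordered)  # [1, 2, 3]
```

The key point is that correctness comes from the timestamps carried in the protocol, not from when the packets happen to arrive – which is exactly why two products must agree on the protocol that carries them.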

Some of the early cloud production infrastructure products are a good example of this. They used their own internal protocols to satisfy the need for deterministic communication, relying on external standard protocols only for ingest and playout. An open-source standard protocol like SRT (Secure Reliable Transport) can bring your signals in from a remote source. But once they enter an environment such as Grass Valley’s AMPP (Agile Media Processing Platform), they are converted to proprietary protocols created to allow deterministic interprocess communication. On output, those signals must be converted back to a transport standard. Protocol conversion is always tricky and can be error prone.
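A toy Python sketch shows where conversion trouble creeps in. All names here are hypothetical – this is not the SRT or AMPP API – but the clock-rate conversion it models (a 90 kHz transport clock versus an internal nanosecond clock) is a classic source of round-trip error at exactly the ingest and playout boundaries described above.

```python
from dataclasses import dataclass

# Hypothetical frame representations; real systems carry compressed video,
# audio, and metadata, not a bare timestamp and byte string.
@dataclass
class TransportFrame:        # stand-in for what a transport-protocol ingest delivers
    pts_90khz: int           # presentation timestamp on a 90 kHz clock
    data: bytes

@dataclass
class InternalFrame:         # stand-in for a platform's proprietary internal format
    pts_ns: int              # internal clock in nanoseconds
    data: bytes

def ingest_to_internal(f: TransportFrame) -> InternalFrame:
    # 90 kHz ticks -> nanoseconds; integer rounding here is one place errors creep in
    return InternalFrame(pts_ns=f.pts_90khz * 1_000_000_000 // 90_000, data=f.data)

def internal_to_output(f: InternalFrame) -> TransportFrame:
    # Convert back to the transport clock on playout
    return TransportFrame(pts_90khz=f.pts_ns * 90_000 // 1_000_000_000, data=f.data)

src = TransportFrame(pts_90khz=900_000, data=b"frame")   # 10 seconds in
out = internal_to_output(ingest_to_internal(src))
print(out.pts_90khz)  # 900000 – this value survives the round trip, but not all do
```

For timestamps that divide evenly the round trip is lossless, but for others the integer division truncates – a small illustration of why every conversion boundary in a real signal chain needs careful testing.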

We like to standardize protocols within a given ecosystem. Today, the most prevalent interface between different vendors’ products for use in the public cloud is Vizrt’s NDI (Network Device Interface). NDI was not initially designed to be an Internet protocol. However, because it was designed to work on point-to-point private networks, it turns out to be well suited to use within a hyperscaler’s virtualized environment.

This is how we have built out standard systems. We work closely with our vendor partners and ask them either to implement NDI communications or to help us do so. The goal is homogeneous systems built from multiple vendors’ products, in the same way we use SDI or SMPTE ST 2110 for on-premises installs today.

This will be one of the biggest considerations for integrators as they design systems for their clients: What protocols do I use to transport my signals to the cloud? What protocols do I use within my processing system in the cloud? Is it a simple distribution process or a complex switched production process?

On the output side, what protocols do I use to transport to my destinations? Am I going to a traditional terrestrial transmitter? Am I going to a CDN or am I going to a specific destination like a private venue? Each of these today requires a different protocol to get the optimal result.

The second challenge is creating control systems that work across multiple vendors. Again, this looks just like on-premises work. As a system integrator, part of your responsibility is to recommend products to your client that you know will work with each other. If you need a control system, for instance, you need to make sure that control system speaks whatever common protocol you’ve chosen for that purpose. And you must ensure that protocol is supported in the public cloud’s non-deterministic, non-multicast environment.
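One common way to tame multi-vendor control is an adapter layer: the control system speaks a single common interface, and per-vendor adapters translate into each native protocol. The Python sketch below is hypothetical (the class and method names are mine, not any real product’s API), but it shows the shape of the design.

```python
from abc import ABC, abstractmethod

class ControlProtocol(ABC):
    """Common control interface; each vendor adapter translates these calls
    into its own native protocol behind the scenes."""
    @abstractmethod
    def take(self, source: str) -> str: ...

class VendorASwitcher(ControlProtocol):
    def take(self, source: str) -> str:
        # Hypothetical: a real adapter would issue Vendor A's native command here
        return f"vendor-a: cut to {source}"

class VendorBSwitcher(ControlProtocol):
    def take(self, source: str) -> str:
        # Hypothetical: Vendor B's native equivalent of the same operation
        return f"vendor-b: transition to {source}"

def cut_everywhere(devices: list[ControlProtocol], source: str) -> list[str]:
    # The control system only ever speaks the common protocol
    return [d.take(source) for d in devices]

results = cut_everywhere([VendorASwitcher(), VendorBSwitcher()], "CAM 1")
print(results)
```

The control system never needs to know which vendor it is talking to – but every adapter, and the common protocol itself, still has to be proven to work over the cloud’s non-deterministic, non-multicast networking.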

I know all of this can sound quite daunting. But in truth, it’s no different from what you have been doing with your vendors, for your clients, all along. You look for ecosystems and products that work together to create the best experience for your client. Some standards are better developed than others. Who has not had an HDMI connection that should have worked between two devices but didn’t, and found themselves troubleshooting until the handshake finally succeeded?

The best thing you can do is work with products that are proven to work together already. Work with vendors to ensure that they have tested their products with other partner vendors that you’re using in your ecosystem. And if they have not, allow yourself the time and cost to facilitate that testing in your own environment. If you can do that, you are guaranteed a positive outcome for your customer now and in the future.