Pascal Menezes is chair of the International Multimedia Telecommunications Consortium (IMTC) unified communications (UC) software-defined networking (SDN) Task Group, vice-chair of the Open Networking Foundation (ONF) northbound interface group, and former vice-chair of the Wi-Fi Alliance’s (WFA) newly formed Wi-Fi Mobile Multimedia Task Group. SDxCentral’s Roy Chua recently caught up with Menezes, who is also principal program manager for the Skype for Business group at Microsoft and has co-authored standards in the IETF, MEF, and IP/MPLS Forum.
SDxCentral: When we last talked a little more than six months ago, IMTC was beginning some UC SDN POCs, demonstrations, and standardization efforts. How have things progressed?
Menezes: The central issue we’ve been working on is, how do we enable applications and networks to talk to each other in a meaningful and industry-standard way? We’ve been driving the industry towards standardization in this area and have made tremendous progress since our last interview.
On the standardization front, last fall the IMTC released Automating Unified Communications Quality of Experience (QoE), Version 2.0 of the quality of service (QoS) use-case specification. We renamed the original QoS specification to QoE in 2.0 because it goes beyond QoS to cover traffic engineering (TE) and admission control (AC). We are now working on a use-case specification for automating diagnostics using SDN.
IMTC has a strong relationship with ONF that’s allowed us to work with them to create the first standards-based, end-user application northbound interface (NBI). ONF’s real-time media NBI REST-based specification will enable SDN to enter a whole new paradigm of how end-user apps and networks effectively communicate machine-to-machine (M2M).
This approach lowers total cost of ownership (TCO) while improving the end-user experience. Together with improved visibility, control, automation, and agility, these are the guiding principles of our UC SDN standards initiative.
SDxCentral: The updated version of your QoS use case was released just a few months after the original. Why is QoS such an issue for real-time communication?
Menezes: The lack of consistent, standardized QoS policies is a problem for real-time communications because data and real-time media have very different network requirements. Most data applications run well over a “lossy” network and recover just fine thanks to TCP retransmission and recovery mechanisms. Intermittent lost or out-of-order packets are barely perceptible to the end user, aside from an occasional “buffering” icon.
Real-time media, on the other hand, involves two-way human interaction at each end. As end-to-end delay increases, it takes longer for the receiving party to hear or see what was said, perceive it, and respond, and that lag quickly disrupts natural conversation.
Delay, jitter, and packet loss all have a significant effect on real-time media like voice and video, especially with HD video running at 1.5+ Mbps per stream. Many modern codec techniques try to recover from packet loss using forward error correction (FEC) algorithms, packet loss concealment (PLC), etc., but packet loss that is bursty or back-to-back in nature is typically very difficult to recover from.
SDxCentral: IMTC specifically added traffic engineering and admission control in Version 2.0. Why are those important?
Menezes: In Version 1.0, we specified how to automate QoS tagging so the network can identify real-time media flows and give their packets the right treatment. In Version 2.0, dynamic TE and AC complete the end-to-end (E2E) solution. We have a diagram (below) that shows what I’m talking about.
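To make the Version 1.0 tagging piece concrete, here is a minimal sketch of an application marking its outgoing media packets with DSCP values so the network can classify the flows. The specific code points used (EF for voice, AF41 for interactive video) are common conventions, not values taken from the IMTC specification.

```python
import socket

# Illustrative DSCP code points commonly used for real-time media:
# EF (46) for voice, AF41 (34) for interactive video. These are common
# defaults, not values mandated by the IMTC specification.
DSCP_VOICE = 46
DSCP_VIDEO = 34

def open_marked_media_socket(dscp: int) -> socket.socket:
    """Open a UDP socket whose outgoing packets carry the given DSCP marking."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the IP TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

# Example: send a (placeholder) RTP payload as an EF-marked voice packet.
voice_sock = open_marked_media_socket(DSCP_VOICE)
voice_sock.sendto(b"rtp-payload", ("198.51.100.10", 50000))
```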
The AC module regulates access to shared network bandwidth resources across all the classes of service (CoS). It decides how to allocate these resources on a session-by-session basis. To prevent oversubscription on certain links or paths, the module may reject a new session and inform the UC&C application that resources are insufficient. Or, based on policy, it may notify the UC&C application to reduce or reallocate the requested bandwidth so the new session can be admitted to the network.
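A minimal sketch of that admission-control decision follows; the class names, capacity figures, and reduced-rate fallback are illustrative assumptions, not details from the specification.

```python
from dataclasses import dataclass

@dataclass
class SessionRequest:
    cos: str    # class of service, e.g. "voice" or "video"
    kbps: int   # bandwidth requested for the new session

# Illustrative per-class capacity (set by TE) and current usage, in kbps.
class_capacity = {"voice": 10_000, "video": 50_000}
class_in_use = {"voice": 8_500, "video": 49_000}

def admit(req: SessionRequest, min_acceptable_kbps: int) -> dict:
    """Admit, admit at a reduced rate, or reject a new media session."""
    free = class_capacity[req.cos] - class_in_use[req.cos]
    if req.kbps <= free:
        class_in_use[req.cos] += req.kbps
        return {"admitted": True, "kbps": req.kbps}
    if free >= min_acceptable_kbps:
        # Policy lets the UC&C app step the session down to fit the link.
        class_in_use[req.cos] += free
        return {"admitted": True, "kbps": free, "reduced": True}
    return {"admitted": False, "reason": "insufficient resources"}

# A 1.5 Mbps video request on a nearly full link comes back reduced to 1 Mbps.
print(admit(SessionRequest("video", 1_500), min_acceptable_kbps=600))
```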
The dynamic TE module governs the allocation of total available link capacity to the different CoS as well as dynamic path selection based on available link capacity. TE can automatically calculate how much bandwidth the network must reserve for each CoS based on policy, as well as dynamically change the assigned bandwidth for different CoS based on requested UC&C traffic loads.
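A correspondingly simple sketch of the TE calculation, splitting a link's capacity across classes by policy weight and re-running it as offered load changes; the weights and classes are again illustrative assumptions.

```python
# Illustrative per-CoS policy weights; a real TE module would derive these
# from operator policy and measured UC&C traffic loads.
policy_weights = {"voice": 0.2, "video": 0.5, "best_effort": 0.3}

def allocate_link(capacity_kbps: int, offered_kbps: dict) -> dict:
    """Split a link's capacity across classes, capped at each class's offered load."""
    alloc, spare = {}, 0
    for cos, weight in policy_weights.items():
        share = int(capacity_kbps * weight)
        used = min(share, offered_kbps.get(cos, 0))
        alloc[cos] = used
        spare += share - used
    # Hand unused headroom to best effort rather than stranding it.
    alloc["best_effort"] += spare
    return alloc

# Re-run whenever the requested UC&C traffic load changes.
print(allocate_link(100_000, {"voice": 12_000, "video": 60_000, "best_effort": 40_000}))
```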
SDxCentral: How about standardization efforts? When can we expect to see standardization hit the mainstream?
Menezes: The standardization efforts are going extremely well. Our strong partnership with ONF has allowed us to articulate the kinds of UC SDN NBI we want to see defined in standardization efforts. We worked with ONF to develop their real-time media NBI REST-based specification. The web service allows any real-time media app to dynamically program an SDN network without complex, up-front network policy configurations. ONF hopes to release the specification to the public by mid-2015.
IMTC also expects to see our use case for automating diagnostics defined and released by this summer.
Between IMTC’s UC SDN use-case specification and ONF’s real-time media NBI specification, a real-time media app will be able to programmatically communicate its intent to the network and let the network automatically orchestrate itself. That is real innovation!
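To illustrate that kind of programmatic intent, a media application might describe a flow to an SDN controller with a request along the following lines. The endpoint, resource path, and field names here are hypothetical and are not taken from the ONF specification.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical controller URL and payload schema, shown only to illustrate the
# idea of an app expressing its intent; the actual ONF real-time media NBI
# defines its own resources and field names.
CONTROLLER = "https://sdn-controller.example.com/realtime-media/v1/flows"

flow_intent = {
    "caller": {"ip": "10.0.0.21", "port": 50120, "protocol": "UDP"},
    "callee": {"ip": "10.0.8.45", "port": 50244, "protocol": "UDP"},
    "mediaType": "interactive-video",
    "bandwidthKbps": 1500,
    "requestedTreatment": "AF41",
}

resp = requests.post(CONTROLLER, json=flow_intent, timeout=5)
resp.raise_for_status()
print(resp.json())  # e.g. admitted, reduced, or rejected, per controller policy
```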
SDxCentral: What challenges are you seeing for deployment? We’ve heard encryption is making things difficult. How does encryption affect SDN?
Menezes: Encryption is a double-edged sword. End users want their communications to be secure, so everything is encrypted to meet security standards: TLS for signaling and secure real-time transport protocol (SRTP) for media. However, this makes classifying real-time media sessions very challenging, especially with SRTP, which uses ephemeral ports.
Some vendors implement deep packet inspection (DPI) heuristic technologies to attempt to overcome this, but since UC&C uses encryption extensively, having the network inspect packets to understand what is happening with real-time media sessions can become complicated, costly, and unreliable.
Most real-time media sessions are long-lived flows, so having an out-of-band API service that allows end-user apps and networks to communicate makes a ton of sense. This is especially true within an SDN architecture, where more of the intelligence and logic is centralized. With this simple approach, a lot of rich information can be communicated to the network for a multitude of use cases, such as diagnostics, QoS, and security, regardless of encryption.
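For example, because the media itself is SRTP-encrypted, an application could report a flow's five-tuple and quality metrics to the controller out of band, as in this hypothetical sketch (the endpoint and field names are assumptions, not a published API):

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical diagnostics endpoint and field names; the point is that the
# flow's identity and quality metrics travel out of band, so the network
# never needs to inspect the SRTP payload itself.
DIAG_URL = "https://sdn-controller.example.com/realtime-media/v1/diagnostics"

flow_report = {
    "flow": {"srcIp": "10.0.0.21", "srcPort": 50120,
             "dstIp": "10.0.8.45", "dstPort": 50244, "protocol": "UDP"},
    "mediaType": "voice",
    "metrics": {"packetLossPct": 1.8, "jitterMs": 22, "rttMs": 140, "mos": 3.6},
}

requests.post(DIAG_URL, json=flow_report, timeout=5).raise_for_status()
```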
SDxCentral: How do we build enough SDN infrastructure in the access and campus networks to make this vision a reality?
Menezes: When most enterprise IT folks think of SDN, they think they need a forklift upgrade to replace all their network elements before they have an SDN network. That couldn’t be further from the truth. We are seeing a gradual, hybrid evolution with SDN that takes the hardware already in place and optimizes it for a more centralized, intelligent architecture, much like what SS7 did for the PSTN.
Sliding in an SDN controller as a centralized, intelligent broker between the old and new worlds is a great start. With so many SDN innovations coming out from so many vendors, there are many great examples of phased migration. If you look at most of the SDN controllers coming out and what southbound interfaces they support, you’d be surprised how many options there are for communicating with the legacy world.
SDxCentral: How does wireless come into play?
Menezes: Most Wi-Fi APs will become OpenFlow-enabled, with the majority of Wi-Fi vendors delivering or about to deliver SDN Wi-Fi controllers. A rapidly growing percentage of enterprise users prefer Wi-Fi to Ethernet, so Wi-Fi is a great starting point for an SDN upgrade, since it needs a lot of visibility, control, automation, and agility. Getting enterprise Wi-Fi SDN-enabled would deliver significant value from the get-go, since most UC&C users prefer to use real-time media over Wi-Fi.
SDxCentral: Are there commercial UC SDN combinations on the market today?
Menezes: Many vendors are shipping UC SDN products, including Skype/Lync, Aruba, HP Networking, Meru Networks, Extreme Networks, Nectar, Arrow S3, and others. There are even more vendors I cannot disclose who are developing and about to ship UC SDN products; Cisco, for one, recently signaled its plans in a public blog post. It makes me so excited about this space! With the standardization efforts underway, UC SDN is in a very good spot.
SDxCentral: What do you predict for this space in the next six to 12 months?
Menezes: I don’t like making predictions because looking into a crystal ball is always tough, but one thing is for sure: the UC SDN market is starting to take off, with actual implementations that are accomplishing their goals. This is exciting because, as a pioneer in this space, I really believe that end-user apps communicating with the network provides a ton of value.
If you are an SDN vendor or an IT professional operating a UC&C environment, I recommend getting involved by joining IMTC or checking out the IMTC UC SDN Activity Group. We need all hands on deck to help us define even more advanced use cases that provide visibility, control, automation, and agility.
To learn more about QoS standardization and SDN advancement efforts, visit IMTC.