- Analysts are not employed by SDxCentral.
- Views and opinions expressed in analyst content belong solely to the author and do not reflect the views of SDxCentral, LLC.
- SDxCentral does not fact check analyst content. If you believe there is a factual error in analyst content, please notify [email protected]. Should we find factual irregularities, that article will be unpublished from the SDxCentral website.
Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
Walking against the flow of hordes of KubeCon/CloudNativeCon attendees, I felt like a salmon swimming upstream. After all, there were over 8,000 attendees at this year’s conference, a testament to the pull of containers and, more specifically, Kubernetes — the orchestration platform to manage containers at scale. KubeCon provided a venue for me to efficiently engage with vendors in the crowded exhibition space, chat with application developers, and have insightful conversations with some of the technical leaders in the container world. My conversations with Dan Williams of Red Hat, Matt Klein of Lyft, William Morgan of Buoyant, and others left me, a networking and security veteran, feeling like a salmon.
Networking Doesn’t Matter, or Does It?
Trying to sell the importance of networking to application developers — which I’ll shorten to AppDev even though that’s not a thing yet — can be as challenging as a salmon fighting the current. For many AppDevs, the network does not really exist. When I say that, I do not mean that AppDevs don’t understand that networking is critical in making their applications run well, but that they don’t want to care. Perhaps a picture here might be appropriate:
I first saw this on Matt Klein’s excellent post on Medium about the basics of load-balancing. As someone with deep roots in networking, I felt the reverberations in our L4-7 world.
From an AppDev and DevOps perspective, networking teams either handle tasks application developers couldn’t care less about, perform black magic, or, more likely, do both. Once DevOps pushes a deployment out, the expectation is that magic happens and the application just works. In today’s reality, and especially if application topologies or deployment locations have changed, networking and security will end up scrambling to ensure that all elements of an application are connected and secured and that the application can scale.
However, this last phase of deployment isn’t usually accounted for in most development plans, and so AppDevs and DevOps get frustrated with networking and security. In return, NetOps and SecOps complain that they are not given enough notice to make the changes. And yet, DevOps — and everyone else — hates the “submit a ticket to reconfigure the network and wait 2 weeks” rigamarole.
AppDev and DevOps envision a world of L3 reachability across multiple private and public clouds (all they want from NetOps), with every port locked down except those needed for service meshes and a few monitoring services (all they need from SecOps). Their contention is that if NetOps and SecOps just focused on giving them that, they could take care of the rest in an agile manner: service discovery, routing, load-balancing, scaling, encryption, visibility, and even application-level security.
And that’s where the service mesh steps in. The service mesh layer essentially delivers, natively for containerized applications, what networking and security teams strove to provide in a pre-agile, pre-container, pre-micro-services, pre-multi-cloud world.
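To make that hand-off concrete, here is a minimal Python sketch (purely illustrative; this is not Envoy or Istio code, and the registry and names are hypothetical) of the per-request chores a service mesh sidecar takes off the application's hands: service discovery, load-balancing, and retries.

```python
# Conceptual sketch of sidecar-proxy behavior: discover endpoints,
# pick one (load-balance), and retry on failure. In a real mesh the
# control plane (e.g., Istio) populates the registry and the data
# plane (e.g., Envoy) executes this logic transparently.
import random

# Hypothetical service registry a mesh control plane would keep current.
REGISTRY = {
    "payments": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"],
}

def discover(service):
    """Service discovery: return the live endpoints for a service."""
    return REGISTRY[service]

def pick(endpoints):
    """Load-balancing: choose one endpoint (trivial random policy)."""
    return random.choice(endpoints)

def call_with_retries(service, send, max_attempts=3):
    """Route a request through discovery + load-balancing, retrying on failure."""
    last_error = None
    for _ in range(max_attempts):
        endpoint = pick(discover(service))
        try:
            return send(endpoint)
        except ConnectionError as err:
            last_error = err  # try another endpoint on the next attempt
    raise last_error
```

The application itself just issues a plain request; everything above happens in the sidecar, which is exactly why AppDevs can stop caring about the network.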
Whither ADCs and SLBs?
With Google, Red Hat, and IBM backing Istio and the underlying Envoy proxy, it appears that application delivery controllers (ADCs) and their legacy forebears, server load balancers (SLBs), might have a more limited role in the future. F5, one of the well-established ADC companies, is trying to embrace the future with Aspen Mesh, its commercially supported service mesh offering based on Istio. Understanding the drag that legacy branding can have, Aspen Mesh’s branding doesn’t tout its F5 roots. And I’m certain that other ADC companies will want to do the same, either adding new logic on top of Envoy as an alternative to Istio or providing better manageability above Istio. There’s even a new category, the service mesh orchestrator, brought forth by Solo.io and its SuperGloo project.
In the meantime, we’ll see vendors try to stake out their territories, perhaps trying to segregate ingress north-south traffic use cases, where ADCs traditionally held sway, from east-west use cases, where service meshes have their roots. Software ADCs like HAProxy, NGINX, Pulse Secure, Snapt.io, and many others will try to distance themselves from hardware ADCs and claim that they were the original service mesh anyhow. Some of this will be a repeat of what we saw and continue to see on the SDN and NFV front, and in some cases, with overlapping players. We will hear again that software will eat hardware and that open source and white box will dominate the proprietary vendors, etc.
What I do see is that ADCs, API gateways, and service meshes will all tumble into the melting pot, and I can’t wait to see what emerges.
Network Service Mesh
While at KubeCon, I also caught up with Ed Warnicke of Cisco and Frederick Kautz of Red Hat. I’ve known Ed since our SDN days with OpenDaylight, and Ed’s embarked on a new project building on his past innovations. Ed and Frederick, along with many of their networking counterparts, are driving a new open-source initiative called the Network Service Mesh (NSM).
NSM provides L2/L3 networking constructs that mirror the application-layer service mesh within the Kubernetes environment. It lets applications ask the network for high-level services, for example, “secure internet connectivity” or “secured connectivity to AWS VPC #1”, and then has the network pull together the appropriate network service endpoints (network function software living in pods) that can provide that type of service. For instance, NSM could direct the traffic through a firewall first, then an IPsec device, followed by an encapsulation function to transport the payload to a remote network.
Sound familiar? Well, there are certainly flavors of NFV, service function chaining (SFC), and the network service header (NSH) rolled up into this, but in service of container workloads running on Kubernetes.
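To make the chaining idea concrete, here is a minimal conceptual Python sketch (not the actual NSM API; every name here is hypothetical) in which plain functions stand in for the firewall, IPsec, and encapsulation endpoints, and a “network service” is just an ordered chain of them:

```python
# Conceptual model of service function chaining: each "endpoint"
# transforms the traffic and hands it to the next one in the chain.
def firewall(payload):
    """Drop disallowed traffic; pass the rest through."""
    if payload.get("port") not in {443, 8443}:
        raise ValueError("blocked by firewall")
    return payload

def ipsec(payload):
    """Stand-in for IPsec encryption of the payload."""
    return {**payload, "encrypted": True}

def encapsulate(payload):
    """Wrap the payload for transport to the remote network."""
    return {"tunnel": "vpc-1", "inner": payload}

# The high-level service the application asks for, resolved by the
# "network" into an ordered chain of network service endpoints.
SERVICES = {
    "secured-connectivity-to-vpc-1": [firewall, ipsec, encapsulate],
}

def request_service(name, payload):
    """Push traffic through each endpoint of the requested chain in order."""
    for endpoint in SERVICES[name]:
        payload = endpoint(payload)
    return payload
```

The application only names the service it wants; which pods implement each hop, and in what order, is the network’s problem — which is precisely the division of labor NSM is after.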
Aligning AppDevs, DevOps, NetOps, and SecOps
Whether NSM will be needed in a fully L3-reachable, flat-network nirvana depends on your point of view. Some AppDev and DevOps folks might see little value, and yet magic still needs to happen at the network layer to support application connectivity and security. And I don’t believe CIOs will give up on layered security or network segmentation and open up their entire application farms to all internet traffic, not even on limited TCP/UDP ports. The interaction between the NSM team and the application service mesh proponents will be one to watch closely in the coming months.
With the momentum behind Kubernetes, service meshes, and container networking, the triumvirate of DevOps, NetOps, and SecOps will have to sort things out and come together in support of making the AppDevs more productive. This trio will have to ensure that corporate policies in security, networking, and application development are adhered to as we drive into container-native, large-scale, agile application development and deployment. I can’t wait to see how this evolves.
Disclosure statement: The CNCF/Linux Foundation covered airfare and lodging for AvidThink’s trip to KubeCon. However, AvidThink stands behind the independence of its insights and coverage from the conference. AvidThink would like to thank the analyst and public relations team at Linux Foundation for its assistance at the conference.