In the age of Web-scale networking and elastic, responsive, on-demand cloud environments, a few pieces of the enterprise network stubbornly remain stuck in the 1990s when it comes to architecture. That’s certainly the case for load balancing infrastructure—those bedrock appliances that sit in front of every single mission-critical application that an organization deploys.
But, a revolution is on the horizon.
According to Guru Chahal, vice president of products at Avi Networks, load-balancing approaches are ripe to be dragged into the modern era. After all, what organization doesn't want to supercharge its enterprise application performance? Why not apply next-gen approaches to lower TCO, improve application health, boost security, and incorporate real-time analytics for business insights?
In a recent DemoFriday with SDxCentral, Chahal discussed how to close the modernity gap between next-gen computing and the application infrastructure. Load balancing still has a legacy architecture: manually provisioned, very static, and built on proprietary hardware, while the entire world around it has changed.
“Virtualization came around, we’re applying Web-scale principles and elasticity to networks, and cloud technologies allow us to get access to compute on-demand,” Chahal explained. “You can do all of these amazing things using some pretty mature technologies. It’s time to apply the same Web-scale principles to the load-balancing infrastructure.”
To close the gap, Chahal noted that it’s important to first abstract the software from the underlying hardware so that it can run on standard servers and be virtualized. But perhaps most importantly, the software should be architected with software-defined principles in mind. That means centralizing control of the load-balancing infrastructure so that it’s possible to scale apps and lines of business without increasing complexity.
“You want a high-performance load balancer that can run on x86,” Chahal explained. “That gives you a very high-rate, high-performance footprint that can be spun up in minutes—perhaps there are hundreds of x86s. What you also want is a central control, and scaling without complexity. [Just using] virtual appliances doesn’t solve this problem.”
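The "central control, scaling without complexity" idea can be sketched in a few lines. The sketch below is purely illustrative (the class names, hosts, and config fields are invented for this example, not Avi's actual API): a single controller holds the configuration, and adding more x86 service engines does not add management surface, because one config push reaches them all.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEngine:
    """A data-plane load balancer running on a commodity x86 host."""
    host: str
    config: dict = field(default_factory=dict)

class Controller:
    """Single point of configuration for every registered service engine."""
    def __init__(self):
        self.engines: list[ServiceEngine] = []

    def register(self, host: str) -> ServiceEngine:
        engine = ServiceEngine(host)
        self.engines.append(engine)
        return engine

    def push_config(self, config: dict) -> None:
        # One change here propagates everywhere: operational effort
        # stays constant no matter how many engines are deployed.
        for engine in self.engines:
            engine.config = dict(config)

controller = Controller()
for i in range(3):
    controller.register(f"x86-node-{i}")
controller.push_config({"vip": "10.0.0.1", "pool": ["10.0.1.5", "10.0.1.6"]})
```

Contrast this with standalone virtual appliances, where every added instance is another device to configure and monitor individually.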
Chahal outlined how the Avi Vantage Platform uses a single point of service orchestration and other software-defined approaches to deliver application agility, scaling without complexity, and simplified operations.
In the live demo, he showed how the platform can be implemented on bare metal servers, extended into Amazon Web Services (AWS) or other virtualized infrastructures, run within containers and container environments, and tied together with OpenStack. And thanks to the service orchestration piece, he demonstrated how these services can be consolidated into one platform that runs across all of these different types of environments, with one centralized, single point of management and automation. Highlights of the demo included:
- How to scale not just load balancers but also applications, elastically and predictively, based on real-time traffic patterns
- How to take advantage of x86 servers, VMs, or containers to deliver application services close to individual applications
- How to visualize application topology and performance with central management of all your load balancing resources
- Ways to troubleshoot applications in minutes with the ability to record and replay traffic events, security and client data
- How to accelerate application services for SDN environments such as Cisco ACI, private clouds such as OpenStack, and container-based microservices applications
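The elastic, predictive scaling in the first bullet can be sketched as a simple sizing decision. The function below is a hedged illustration, not how any vendor actually does it: it projects the next traffic level from the most recent trend and sizes the engine pool against an assumed per-engine capacity (the 10,000 requests-per-second figure is invented for the example).

```python
import math

def engines_needed(rps_samples: list[float], per_engine_rps: float = 10_000.0) -> int:
    """Project the next traffic level from a naive linear trend and
    return how many service engines the projection calls for.
    (Illustrative only; real predictive scaling would use richer models.)
    """
    if len(rps_samples) < 2:
        projected = rps_samples[-1] if rps_samples else 0.0
    else:
        slope = rps_samples[-1] - rps_samples[-2]  # last observed delta
        projected = rps_samples[-1] + slope        # one step ahead
    return max(1, math.ceil(projected / per_engine_rps))

# Traffic ramping 8k -> 12k -> 18k rps; the projection (~24k) calls for 3 engines
print(engines_needed([8_000, 12_000, 18_000]))  # 3
```

The point of scaling predictively rather than reactively is that capacity is already in place when the traffic spike arrives, instead of being spun up after users have felt it.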
To the latter point, Chahal noted that centralized orchestration for load balancers can significantly reduce operational complexity. Applications don't exist in their own bubbles: they're often made up of a collection of microservices, all of which talk to other microservices and application clusters across the enterprise. Thus, a change to one application can ripple across the entire topology, a state of affairs that's ripe for complexity. The Avi platform automatically maps which services are talking to each other and cascades any change made to the infrastructure across the affected services automatically, eliminating costly manual overhead (and IT frustration) when it comes to tweaking apps.
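The mapping idea above boils down to maintaining a service dependency graph and walking it when something changes. This is a minimal sketch of that concept (the service names and class are invented for illustration, not the platform's real data model): given who calls whom, a change to one service surfaces every transitive dependent that may need an updated configuration.

```python
from collections import defaultdict, deque

class ServiceMap:
    """Tracks which services call which, so a change to one service
    surfaces every dependent that may need a config update."""
    def __init__(self):
        self.callers = defaultdict(set)  # service -> services that call it

    def add_call(self, caller: str, callee: str) -> None:
        self.callers[callee].add(caller)

    def impacted_by(self, service: str) -> set:
        """All services that transitively depend on `service` (BFS)."""
        seen, queue = set(), deque([service])
        while queue:
            for caller in self.callers[queue.popleft()]:
                if caller not in seen:
                    seen.add(caller)
                    queue.append(caller)
        return seen

smap = ServiceMap()
smap.add_call("web", "cart")
smap.add_call("cart", "inventory")
smap.add_call("cart", "payments")
print(sorted(smap.impacted_by("inventory")))  # ['cart', 'web']
```

With such a map, a change to the inventory service immediately identifies the cart and web tiers as the places where load-balancing policy may need to follow, rather than leaving operators to trace the dependencies by hand.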
Analytics are also a key benefit of a software-defined load-balancing approach, he said.
“Load balancers sit at such a strategic place in the network,” he explained. “They see every single packet from every single location. Why put on monitoring fabric and mirror all that traffic and send it to an offline processing engine just to figure out what’s going on in your network? Our service engines collect all of that and send it to the controller, where we process billions of insights in real time around app performance.”
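Because the data plane already sees every request, analytics becomes a matter of streaming samples to the controller and aggregating them there, rather than mirroring traffic to an offline engine. The sketch below illustrates that pipeline in miniature (the class, app name, and latency figures are all invented for the example): engines ingest per-request latencies, and the controller serves percentile summaries per application.

```python
class AnalyticsController:
    """Aggregates per-request latency samples streamed in by the
    data-plane engines, instead of mirroring traffic for offline analysis."""
    def __init__(self):
        self.latencies_ms: dict[str, list[float]] = {}

    def ingest(self, app: str, latency_ms: float) -> None:
        self.latencies_ms.setdefault(app, []).append(latency_ms)

    def health(self, app: str) -> dict:
        samples = sorted(self.latencies_ms.get(app, []))
        if not samples:
            return {"p50": None, "p95": None}
        return {
            "p50": samples[len(samples) // 2],
            "p95": samples[min(len(samples) - 1, int(len(samples) * 0.95))],
        }

ctl = AnalyticsController()
for ms in [12, 15, 11, 14, 250, 13, 16, 12, 14, 13]:
    ctl.ingest("checkout", ms)
print(ctl.health("checkout"))  # the single 250 ms outlier shows up at p95
```

A tail-latency view like this is exactly the kind of signal that surfaces a sick backend long before average latency moves.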
This includes being able to flag security issues. The Avi Vantage Platform’s real-time analytics engine captures the application infrastructure’s overall security posture, how users are accessing those applications and monitors for DDoS activity.
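One simple form of the DDoS monitoring described above is per-client rate flagging over a sliding time window. The sketch below is a deliberately simplistic stand-in for real anomaly detection (the thresholds and client address are made up): a client is flagged once its request count inside the window exceeds a limit.

```python
from collections import defaultdict, deque

class DDoSMonitor:
    """Flags clients whose request rate exceeds a threshold inside a
    sliding time window (a toy stand-in for real anomaly detection)."""
    def __init__(self, window_s: float = 1.0, max_requests: int = 100):
        self.window_s = window_s
        self.max_requests = max_requests
        self.hits = defaultdict(deque)  # client -> recent request timestamps

    def record(self, client: str, ts: float) -> bool:
        """Record a request; return True if the client looks abusive."""
        q = self.hits[client]
        q.append(ts)
        while q and ts - q[0] > self.window_s:  # drop requests outside window
            q.popleft()
        return len(q) > self.max_requests

mon = DDoSMonitor(window_s=1.0, max_requests=5)
flags = [mon.record("203.0.113.9", t / 10) for t in range(10)]
print(flags)  # flips to True once the rate crosses the limit
```

Because the load balancer already terminates every client connection, this kind of check costs nothing extra to deploy there.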
For many enterprises, including those in key verticals like telcos, government, media, and financial services, new software-defined load balancers are significantly improving the way application services are delivered and scaled, while lowering TCO and freeing IT from repetitive tasks through intelligent automation. Chahal's demo illustrates this in very relatable ways.