If cloud computing represents the promise of an always-on, responsive application network, then software-defined networking (SDN) provides the tools and technology to manage and optimize the traffic along that network. The potential of cloud computing rests on an architecture that is extremely heterogeneous, from machine to task to cloud vendor. To take full advantage of the cloud, companies need to consider how network traffic can be routed across heterogeneous cloud environments, where spikes in demand cannot be planned or scheduled and task-processing requirements vary significantly.

Many application networks still rely on traditional load-balancing techniques, such as Round Robin or Weighted Least Connections, that were designed for routing traffic in more homogeneous environments, where task and machine characteristics are better known.
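
To make the contrast concrete, here is a minimal Python sketch of both strategies; the backend names and weights are hypothetical, and production load balancers implement far more elaborate versions of the same ideas. Notice that neither consults the back end at runtime: Round Robin rotates blindly, and Weighted Least Connections relies only on its own connection counts plus weights assigned in advance.

```python
from itertools import cycle

# Minimal sketches of the two strategies named above; backend names and
# weights are hypothetical, for illustration only.

class RoundRobin:
    """Hand out backends in a fixed rotation, ignoring their actual load."""
    def __init__(self, backends):
        self._cycle = cycle(backends)

    def pick(self):
        return next(self._cycle)


class WeightedLeastConnections:
    """Prefer the backend with the fewest active connections per unit of weight."""
    def __init__(self, weights):
        self.weights = dict(weights)           # backend -> static capacity weight
        self.active = {b: 0 for b in weights}  # backend -> open connections

    def pick(self):
        return min(self.active, key=lambda b: self.active[b] / self.weights[b])

    def on_connect(self, backend):
        self.active[backend] += 1

    def on_close(self, backend):
        self.active[backend] -= 1


rr = RoundRobin(["app-1", "app-2", "app-3"])
print([rr.pick() for _ in range(4)])   # ['app-1', 'app-2', 'app-3', 'app-1']

wlc = WeightedLeastConnections({"app-1": 1, "app-2": 2, "app-3": 1})
busy = wlc.pick()
wlc.on_connect(busy)                   # connection counts change; real utilization is never consulted
```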

Knowing in advance which types of tasks and machines are available translates into a better understanding of how to process and complete the workload. Today’s load-balancing algorithms perform well in these environments because knowledge of back-end server utilization is not critical to achieving efficient performance. What’s needed, however, is a more intelligent approach for managing unpredictable demand in a heterogeneous environment.

Harnessing Cloud Potential Requires Intelligence

Cloud computing, supported by SDN, is helping many businesses take advantage of the operational efficiencies and scaling flexibility inherent in cloud-based applications in order to meet ever-increasing customer expectations. Of course, every solution brings both benefits and unexpected challenges. One of those challenges is how traditional load-balancing algorithms perform in a dynamic, heterogeneous environment.

For example, an e-commerce platform must manage requests ranging from serving JPEGs to executing resource-intensive PHP. There are also unpredictable events outside the control of the computing system, such as a product “going viral.” And if the virtual machine handling a complex request is shared, tasks performed by its neighbors can further lengthen that request’s completion time.

The point to keep in mind is that each task has different requirements for memory, CPU, and network bandwidth, and these variables expose the inherent weakness of common traffic-management techniques. Location, along with task and machine heterogeneity, can compromise the effectiveness of many simple load-balancing algorithms because the load balancer itself has no visibility into back-end systems. Without direct back-end utilization information, a load balancer may keep sending traffic to failed or overloaded application servers. Failing to account for these significant variances in both tasks and machines means the capabilities of cloud computing cannot be fully realized.
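
As a rough illustration of what back-end visibility buys, the sketch below selects a backend only from servers that have recently reported healthy, under-threshold utilization, so a failed or overloaded server is simply skipped. The thresholds and the shape of the utilization reports are assumptions made for this example, not any particular vendor's interface.

```python
import random
import time

# Hypothetical sketch of a back-end-aware picker: it only routes to servers
# that have recently reported themselves healthy and under a utilization
# ceiling. Thresholds, field names, and the report format are assumptions
# for this example, not a specific product's API.

STALE_AFTER = 10.0   # seconds without a report before a backend is distrusted
CPU_CEILING = 0.85   # skip backends reporting CPU utilization above this level

def pick_backend(reports, now=None):
    """reports: {backend: {"cpu": float 0..1, "healthy": bool, "ts": float}}"""
    now = time.time() if now is None else now
    eligible = [
        (name, r) for name, r in reports.items()
        if r["healthy"] and (now - r["ts"]) < STALE_AFTER and r["cpu"] < CPU_CEILING
    ]
    if not eligible:
        # Degraded mode: nothing looks healthy, so fall back to a random
        # choice rather than dropping the request outright.
        return random.choice(list(reports))
    # Otherwise route to the least-utilized eligible backend.
    return min(eligible, key=lambda item: item[1]["cpu"])[0]

now = time.time()
reports = {
    "app-1": {"cpu": 0.45, "healthy": True,  "ts": now},
    "app-2": {"cpu": 0.92, "healthy": True,  "ts": now},   # overloaded
    "app-3": {"cpu": 0.10, "healthy": False, "ts": now},   # failed health check
}
print(pick_backend(reports))  # app-1
```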

Load Balancing Evolves

Of course, new traffic-management methods are emerging, such as content-based routing. But many of these alternatives still fail to account for machine differences in a virtual environment. And relying on dedicated clusters of machines for specific content requests may compromise the robustness of your system if that cluster suffers an outage. To optimize the routing of traffic, including unexpected demand, within a heterogeneous environment, the routing methodology has to incorporate accurate, real-time back-end utilization.
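
As a rough sketch of that combination, the example below routes requests by content prefix but falls back on real-time utilization when the preferred cluster has no live backends; the paths, cluster names, and utilization numbers are assumptions for illustration only.

```python
# Hypothetical sketch combining content-based routing with a utilization-aware
# fallback. The path prefixes, cluster names, and utilization figures are
# illustrative assumptions only.

CONTENT_RULES = [              # (path prefix, preferred cluster)
    ("/static/", "cache-cluster"),
    ("/checkout", "app-cluster"),
]

def route(path, clusters):
    """clusters: {cluster name: {backend: CPU utilization 0..1}}"""
    preferred = next(
        (cluster for prefix, cluster in CONTENT_RULES if path.startswith(prefix)),
        "app-cluster",
    )
    # If the preferred cluster has no live backends, spill over to another
    # cluster instead of failing the request along with it.
    for cluster in [preferred] + [c for c in clusters if c != preferred]:
        backends = clusters.get(cluster) or {}
        if backends:
            # Pick the least-utilized live backend in the chosen cluster.
            return cluster, min(backends, key=backends.get)
    raise RuntimeError("no live backends in any cluster")

clusters = {
    "cache-cluster": {},                              # outage: no live backends
    "app-cluster": {"app-1": 0.40, "app-2": 0.15},
}
print(route("/static/logo.jpg", clusters))  # ('app-cluster', 'app-2')
```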

As more businesses move complex workloads to the cloud and begin to leverage the capabilities enabled by SDN technology, such as distributed networks, virtual machines, and cloudbursting, trusted methods for managing traffic may no longer be adequate defaults. Fundamental changes in how applications are architected and networked influence, by extension, the effectiveness of traditional load-balancing algorithms. If you respond to growing, unpredictable customer demand by taking advantage of the operational efficiencies of the cloud, then how those requests are managed and completed becomes a critical consideration for the success of your networked application.