2014 saw steady maturation in the hottest new trend of the “everything is software” market. Definitions of software “x” crystallized around network virtualization (NV), network function virtualization (NFV) and software-defined networking (SDN) (see: NFV and SDN: What’s the Difference?). However, the productization of these ideas remains elusive.
With so many different solutions from so many different companies on the market, buyers can't easily map their needs to the market's offerings, and they are hesitant to commit to any particular deployment path. These new ideas also add complexity to the discussion because they rest on ecosystems in which some control functions now sit outside the device that actually drives network behavior. Whether the device is a switch or a server, you may need to combine several pieces to build a complete solution.
2015 will see the rise of solid use cases for this new software-driven thinking, and these emerging use cases will encourage more companies to adopt these ideas in their networks. In this article, we will look at five emerging use cases for SDN.
SDN: Adding Policy to Existing Networks
SDN is not a wholesale reinvention of the network, and it doesn't require anybody to rip out and replace working switching and routing technology. Rather, it's an additive technology that lets network administrators solve problems that are difficult to solve with switching and routing alone. After all, traditional networking works – we get e-mail, browse the Web, and transfer files every day. It provides any-to-any connectivity and scales fairly well.
What we're trying to do with SDN is address inefficiencies in existing networks. SDN can make networks more automated, more dynamic, and easier to change, and it can tackle problems that are difficult to solve with traditional switching and routing protocols. For example, SDN can drastically simplify policy-based networking by letting you apply a specific policy to a specific person or application – something that is hard to manage with switching and routing alone.
Meter, Match and Act
Meter, match, and act are the three steps SDN follows to do something in a policy-driven network. SDN allows you to meter traffic conditions and application and user behavior; to match those conditions against a set of pre-defined criteria; and then to act on the match according to a policy. Part of a policy framework is pre-setting the conditions you meter against.
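The three steps above can be sketched as a minimal loop in code. This is an illustrative sketch only – the function names, the policy fields, and the counter values are all hypothetical, not part of any real SDN controller API:

```python
# Minimal sketch of the meter/match/act loop. All names (meter, match, act,
# the policy dict) are illustrative, not a real SDN controller API.

def meter(counters, flow):
    """Meter: read the current condition for a flow (e.g., bytes/sec)."""
    return counters.get(flow, 0)

def match(observed, threshold):
    """Match: compare the metered value against a pre-set condition."""
    return observed > threshold

def act(flow, action):
    """Act: apply the policy's action when the condition matches."""
    return f"{action} applied to {flow}"

# A policy pre-sets the condition we meter against.
policy = {"flow": "video", "threshold": 1_000_000, "action": "rate-limit"}
counters = {"video": 2_500_000, "email": 40_000}   # hypothetical meter readings

observed = meter(counters, policy["flow"])
if match(observed, policy["threshold"]):
    result = act(policy["flow"], policy["action"])   # "rate-limit applied to video"
```

Everything that follows in this article is a variation on this one loop, with different meters and different actions.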
Now let’s get more grounded and look at five such use case examples for SDN. Figure 1 illustrates a service delivery ecosystem, with the enterprise pulling services and data from a hosted or private data center. The metro network or WAN is the highway used to deliver these assets between the users and servers.
SDN Within the Data Center
The data center is the beachhead for many SDN discussions, primarily because data centers remain one of the top three strategic initiatives in IT today (Figure 2). Let's say you want to move a virtual machine from rack 2 to rack 4 to scale up an application hosted on that VM. Here SDN advocates suggest a tunneling scheme using the emerging VXLAN encapsulation technology. We meter the traffic by leveraging OpenFlow, the Open Networking Foundation (ONF) API for multi-vendor external programmability; OpenFlow is a control protocol that can also gather performance information from the data plane. When the metered conditions match our criteria, we act: we move the VMs through a VXLAN tunnel from rack 2 to rack 4, and then tear the tunnel down once the move is complete. Using VXLAN tunneling did not force us to re-engineer the switching or routing environment, which is why many people refer to this as an overlay strategy.
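A sketch of that overlay workflow might look like the following. The function names, the utilization numbers, and the VNI are all made up for illustration – they are not OpenFlow or VXLAN API calls:

```python
# Illustrative sketch of the overlay workflow: meter rack utilization,
# build a VXLAN tunnel between racks, move the VM, then tear the tunnel
# down. Names and values are hypothetical, not real VXLAN/OpenFlow APIs.

def meter_utilization(stats, rack):
    """Meter: read load for a rack (gathered via the controller)."""
    return stats[rack]

def build_vxlan_tunnel(src, dst, vni):
    """In a real deployment, the controller would program VTEPs here."""
    return {"src": src, "dst": dst, "vni": vni, "up": True}

def move_vm(vm, tunnel):
    return f"{vm} moved {tunnel['src']}->{tunnel['dst']} over VNI {tunnel['vni']}"

def tear_down(tunnel):
    tunnel["up"] = False
    return tunnel

stats = {"rack2": 0.92, "rack4": 0.35}   # hypothetical metered load per rack
events = []
if meter_utilization(stats, "rack2") > 0.8:      # match: rack 2 is overloaded
    t = build_vxlan_tunnel("rack2", "rack4", vni=5001)
    events.append(move_vm("app-vm-1", t))        # act: migrate the VM
    t = tear_down(t)                             # act: remove the overlay
```

The key point the sketch captures is that the tunnel is temporary state layered on top of the network – nothing in the underlying switching or routing changes.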
The main takeaway from this use case is that there are a lot of competing means to achieve orchestration and help drive provisioning functions in an open (read: vendor agnostic) manner. OpenFlow brings unique value in that it has inherent metering capabilities, which are a key piece of knowing when to act on a condition.
Data Center to Data Center Networking
Let's suppose we're talking about a hybrid cloud, or simply moving assets between two data centers (Figure 3). MPLS is a great way to build a private circuit; the bad news is that once you provision it, it's hard to modify, so you don't want to mess with it. But if you are trying to move specialized traffic between data centers – say you're cloud bursting, or you have SLA or security governance requirements between the two sites – you need a way to do that.
Remember, SDN is metering, matching, and acting, so let's see how this works for data-center-to-data-center traffic. Imagine you're metering AT&T and NTT traffic between two data centers. You might be paying different rates for each carrier, and you're trying to balance cost against traffic behavior for the best result. OpenFlow lets you do this.
OpenFlow lets you split traffic between different pipes based on a combination of SLA performance or cost, and then you use a policy to reprogram MPLS headers, which drives the traffic down either path A or path B.
Starting with the idea that OpenFlow enables metering, you meter the traffic based on your selection parameters. Once you match a condition, you use OpenFlow to act by pushing or popping an MPLS label, which enables the LSR (label switched router) to forward the packets down the correct (matched) path. This kind of on-the-fly reprogramming is also something Google pioneered and spoke about at the Open Networking Summit in 2012.
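The path-selection logic can be sketched as follows. The carrier figures, SLA threshold, and label values are invented for illustration, and the rule format is not actual OpenFlow syntax:

```python
# Sketch of meter/match/act for inter-data-center traffic: meter latency
# and cost per carrier path, match against the SLA, then act by pushing
# the MPLS label for the chosen path. All numbers are hypothetical.

paths = {
    "A": {"carrier": "AT&T", "latency_ms": 18, "cost_per_gb": 0.09, "label": 100},
    "B": {"carrier": "NTT",  "latency_ms": 25, "cost_per_gb": 0.05, "label": 200},
}

def choose_path(paths, sla_ms):
    """Match: among paths meeting the SLA, pick the cheapest."""
    ok = [p for p in paths.values() if p["latency_ms"] <= sla_ms]
    return min(ok, key=lambda p: p["cost_per_gb"])

def push_label(packet, label):
    """Act: push an MPLS label so LSRs forward down the matched path."""
    packet["labels"].insert(0, label)
    return packet

best = choose_path(paths, sla_ms=30)           # both paths meet a 30 ms SLA
pkt = push_label({"labels": []}, best["label"])  # cheaper NTT path wins
```

Tightening the SLA (say, to 20 ms) would flip the decision to path A – that is the "combining rate and traffic behavior" trade-off the policy encodes.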
SDN at the Routed Edge
Since the edge router is a key piece of either the provider's or the customer's "service edge," how SDN can change the game here is a hot topic (Figure 4). Two threads run through the service provider community's discussions: ripping and replacing edge routers outright, and injecting some SDN thinking into existing ones. First, looking at the hardware (switches and servers), you can rebuild an edge router at a much lower capex using white-box servers and switches. Google helped the industry learn to distribute control across the network and, in the process, to rethink reliability. Google was one of the pioneers of treating servers as light bulbs: instead of trying to nail down spot failures, let things "burn out" and periodically send kids down the data center rows on skateboards to rip and replace servers.
On the switching side, instead of thinking of a chassis as the building block, think of 1-RU fixed-configuration switches as the building blocks, and build what we now commonly call a Clos or spine/leaf topology from these devices. This model scales by adding rows of these "pizza boxes," and in the event of a failure, traffic flows around the failed device because a Clos fabric provides redundant paths.
With an edge router, even though it's traditionally a single chassis, the idea is: why not build it out of Lego-like building blocks? Stack ten top-of-rack switches, split the traffic across them, and use external servers for the control processing. Servers now ship with 32-core processors and plenty of memory, so the full routing table doesn't need to live in the top-of-rack switch – it can sit in the server. Nor do the top-of-rack switches need chassis-like failover mechanisms, because stateful resiliency can be maintained in the server.
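The split described above – the server holds the full routing table, each switch gets only the entries it needs – can be sketched like this. The prefixes and next hops are made up, and `program_fib` stands in for whatever mechanism actually pushes routes to the hardware:

```python
# Sketch of the control split: the server-side control plane keeps the
# full routing table (RIB), and each top-of-rack switch is programmed
# with only the subset it needs (its FIB). All entries are hypothetical.

full_rib = {
    "10.0.0.0/8":     "core-1",
    "172.16.0.0/12":  "core-2",
    "192.168.0.0/16": "core-1",
}

def program_fib(rib, needed_prefixes):
    """Push only the relevant subset of routes down to one ToR switch."""
    return {p: nh for p, nh in rib.items() if p in needed_prefixes}

# This switch only carries traffic for two prefixes, so it only gets two
# entries; the rest of the table stays in server memory.
tor1_fib = program_fib(full_rib, {"10.0.0.0/8", "192.168.0.0/16"})
```

This is the design choice that lets cheap fixed-configuration switches stand in for a chassis: the expensive state lives in commodity server memory, not in the switch.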
It's desirable to commoditize edge routers by building them out of these different devices, bringing Google-style Web 2.0 resiliency thinking to a classically closed device. And the edge router matters: sitting between the service provider edge and the customer edge, it's where service is injected into the customer network. Granular control, real-time visibility for in-service SLAs, and insight into types of traffic, users, and specific applications are all important there.
Bringing our Meter, Match and Act thinking into this model, we can now think of a new, more granular, and centralized software means for driving service chaining, providing more granular SLAs and enabling the concept of giving end customers a user portal whereby they self-manage certain aspects of the routed edge. Perhaps it is part of the end-to-end control of traffic between data centers and adding another injection point for controlling SLAs and helping us build the tools for a true hybrid cloud environment.
SDN in the Campus
The enterprise is traditionally a laggard in networking progress, but there are some important challenges on the enterprise side that SDN can really address (Figure 5). Malware, spyware, security and compliance, data protection, VBOS, enterprise mobility, and BYOD are examples of difficult challenges being addressed in campus networking.
SDN can be a powerful savior here. Consider User A connecting to the network. We can set a policy that meters User A's physical location, time of day, and endpoint device (smartphone, tablet, or laptop); if these conditions match in a certain way, the policy changes what User A is allowed to access (the act).
In this way, network administrators and CIOs can enforce user access policies in an elegant way. They can shut down network access should there be any suspicious traffic, and protect critical data before any security breach occurs.
The SDN Tap
Network administrators get excited whenever we talk about the SDN tap as an alternative to the physical tap. A physical tap requires one tap per switch, and each tap consumes a production port. Each tap is also tuned for a specific application: filtering, mirroring, aggregating, sniffing, or analysis, for example. Every IT manager needs taps (Figure 6).
With SDN taps, you’re not changing anything – you’re simply grabbing data from the network, but instead of doing it on a device-by-device basis, you can easily aggregate the data.
One example is mirroring: taking traffic from port 1 and sending it to ports 2, 3, and 4 – perhaps separating voice, data, and video traffic. A second example is filtering: splitting traffic on port 25 and sending it to two different analysis tools in order to compare it. A third is aggregation: combining three ports into port 44 – merging voice, data, and video traffic to see how it behaves together, and to determine whether there are jitter or SLA problems when the traffic shares the same link.
In the old world, this would require three separate taps; in the new world, you can use one port, or a collection of physical ports, programmed to perform these different functions. You no longer need hundreds of tap devices, and you can change how you use the tap at will.
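The three tap functions above can be expressed as forwarding rules on one programmable device, sketched below. The rule format is purely illustrative – it is not actual OpenFlow syntax, and the port numbers are taken from the examples above plus some invented ingress ports for the aggregation case:

```python
# Sketch of the three tap functions as port-forwarding rules on one
# programmable device. Rule format and extra port numbers are hypothetical.

tap_rules = [
    {"name": "mirror",    "in_ports": [1],          "out_ports": [2, 3, 4]},
    {"name": "filter",    "in_ports": [25],         "out_ports": [30, 31]},
    {"name": "aggregate", "in_ports": [10, 11, 12], "out_ports": [44]},
]

def apply_rules(packet_port, rules):
    """Return every output port a packet arriving on packet_port goes to."""
    outs = []
    for rule in rules:
        if packet_port in rule["in_ports"]:
            outs.extend(rule["out_ports"])
    return outs

mirrored = apply_rules(1, tap_rules)     # one ingress fanned out to three tools
aggregated = apply_rules(11, tap_rules)  # one of three ports funneled to port 44
```

Repurposing the tap is then just a matter of rewriting entries in `tap_rules` – no physical tap hardware to swap.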
These five emerging use cases are just examples of how SDN solves significant networking challenges without disrupting 30 years’ worth of good network engineering. The Internet and networking work. SDN supplements them by fixing problems intelligently for use cases that would otherwise drive topology engineers nuts.