Introduction

MPLS has taken the WAN networking world by storm. In its first five years, MPLS dramatically transformed Service Provider networks in two very different ways:

  1. MPLS eased the migration of the infrastructure underlying IP networks from ATM to Packet-over-SONET (with Traffic Engineering).
  2. MPLS made possible a new service: “Provider Provisioned Virtual Private Networks” that shifted the burden of managing a VPN service from enterprises to SPs and thus created a new revenue stream for SPs.

MPLS also offered a way to “cap-and-grow” legacy network technology, in particular, Frame Relay, ATM and SONET. This allowed some SPs to upsell an enterprise offering of point-to-point Frame Relay connections to a superior offering of any-to-any IP VPN connectivity with multiple Classes of Service. MPLS also greatly helped with the transition from TDM interfaces to Ethernet for much improved equipment economics. Thus, the driver for the deployment of MPLS was financial as much as technical: the opportunity for new revenue, the potential for cost savings, and the ability to seamlessly cap-and-grow.

Sixteen years on, MPLS is still an active field of research. The past few years have seen significant innovations such as “Entropy Labels” (for improved load balancing), Ethernet VPNs (E-VPNs), multicast and “Seamless MPLS”. MPLS has also addressed the problems of scale, availability, and fast recovery from network faults. All Tier 1, and most Tier 2, Tier 3 and Tier 4 SPs use MPLS in their backbone, and also use MPLS for VPNs, pseudowires, mobile backhaul and other revenue opportunities.

MPLS in the Scale-out, Virtualized Data Center

With such success in Wide Area Networks, one might expect that MPLS in the data center would be a shoo-in. First, the virtualization capabilities that MPLS brings with IP VPNs and E-VPNs are much needed in modern data centers. Second, virtualization inside a data center needs to be extended outside the data center, either to another data center, or to a VPN customer in a branch office; both of these use MPLS. Finally, traffic engineering is often important between data centers, and MPLS helps there too. Yet, so far, MPLS hasn’t penetrated the DC to any great extent. The question then is, is there a fundamental reason why not, and can MPLS be used profitably in the data center? This blog will try to answer these questions.

What is MPLS?

MPLS consists of a very simple data plane paradigm: label switching with efficient hierarchy, accompanied by a powerful and extensible control plane. MPLS owes its success to both of these, and to its easy integration with IP in both the control and data planes. Label hierarchy (or stacking) allows MPLS to cleanly separate its wide variety of uses. For example, MPLS can simultaneously virtualize networks and create overlays. To do this, MPLS uses a label stack with two labels, one for the virtual network (VN) and one for an overlay tunnel, keeping these two functions distinct.

Figure 1 shows a simple DC with three servers connected via three MPLS switches. There are three Virtual Networks (VNs): red, with 3 VMs; green, with 2 VMs; and purple, with 3 VMs. Two overlay tunnels are shown: from server 1 to server 2 as a solid black line, and from server 1 to server 3 as a dotted black line.

If the red VM on server 1 sends a packet to the red VM on server 2 (1→2), two labels are pushed on the packet: a black (overlay) label to put the packet on tunnel 1 to server 2 (since that’s where the destination VM lives), and a red (VN) label to indicate that the packet is in the red virtual network. When the packet gets to server 2, the red label tells server 2 to deliver the packet to the red VM, and not to any other VM.

In Figure 2, the green VM on server 1 sends a packet to the green VM on server 2. This packet also gets a black overlay label (again, to server 2); additionally, it gets a green label (because it belongs to the green virtual network). Note that the red 1→2 packet is very different from the green 1→2 packet, and the two must be kept separate; the red and green labels do exactly that. Note further that the red and green labels are “inner” labels, not seen by the infrastructure (i.e., the MPLS switches), but only by the end systems (servers 1, 2 and 3). This is how the physical network is shielded from the virtual network, and the two are decoupled. Meanwhile, the purple VM on server 1 sends a packet to the purple VM on server 3. This time, the packet gets a grey overlay label (to server 3) and a purple label (because it belongs to the purple VN).
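
Here is a minimal Python sketch of the two-label forwarding just described. It is a toy model rather than any particular implementation: the server names, label numbers and table layout are made up for illustration.

```python
# Toy model of the two-label scheme above. Label values are hypothetical.
TUNNEL_LABEL = {"server2": 1001, "server3": 1002}         # overlay label per destination server
VN_LABEL     = {"red": 201, "green": 202, "purple": 203}  # inner label per virtual network

def send(vn, dest_server, payload):
    """Sending server pushes two labels: outer tunnel label, inner VN label."""
    return {"labels": [TUNNEL_LABEL[dest_server], VN_LABEL[vn]], "payload": payload}

def receive(packet):
    """Receiving server pops both labels; the inner label picks the VN (and hence the VM)."""
    _tunnel_label, vn_label = packet["labels"]            # the switches only ever saw the outer label
    vn = {v: k for k, v in VN_LABEL.items()}[vn_label]
    return vn, packet["payload"]

print(receive(send("red", "server2", b"red 1->2")))       # ('red', b'red 1->2')
print(receive(send("green", "server2", b"green 1->2")))   # ('green', b'green 1->2')
```

The red and green 1→2 packets share the same overlay tunnel but carry different inner labels, which is exactly what keeps them separate.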

Simple, huh?

A few more facts about MPLS labels: a label is 20 bits long, so it can distinguish 2^20 (just over a million) different things. That means there can be a million tunnels in the DC network; it also means there can be a million virtual networks on a single server. These numbers will hold us for a long, long time – and, if we ever do come close to a million VNs per server, there are simple techniques to get to a trillion VNs :-)
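
To see where these numbers come from, here is a sketch of the 32-bit label stack entry defined in RFC 3032 (20-bit label, 3-bit traffic class, bottom-of-stack bit, 8-bit TTL). The helper name and defaults are just for illustration, and the “trillion” figure assumes one simple technique: stacking a second label.

```python
import struct

def label_entry(label, tc=0, bottom=False, ttl=64):
    """Pack one 32-bit MPLS label stack entry: label(20) | TC(3) | S(1) | TTL(8)."""
    assert 0 <= label < 2**20                    # 20 bits of label space
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

print(2**20)          # 1,048,576 -> "a million" tunnels, or a million VNs per server
print(2**20 * 2**20)  # ~1.1 trillion, if a second label is stacked for the VN
print(label_entry(201, bottom=True).hex())       # '000c9140': label 201 at the bottom of the stack
```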

So, really, why not MPLS in the data center? Read on ….

MPLS is (Supposedly) Complex

As we just saw, the MPLS data plane is simple yet powerful. However, we didn’t go into how the labels were assigned, that is, how server 1 and server 2 both understood that the “red” label was for the red VN, how tunnel 1 got set up, and how server 1 knew that tunnel 1 went to server 2, etc. All of these are the purview of control plane protocols. And that’s where the complexity starts.

Implementing protocols is generally quite straightforward. The difficulty lies in making the implementation robust, defensive, scalable, highly available and easily extensible. Protocols are by nature distributed, which brings with it another order of complexity. Protocols are multi-vendor, which means dealing with many quite different implementations. Finally, there are a myriad of knobs and whistles that are needed to tweak protocol behavior to suit different network architectures and different design points.

The above is implementation complexity, which a network admin can quite justifiably delegate to a vendor. However, MPLS is also thought to be hard to configure, manage, debug and troubleshoot because of the flexibility of the underlying protocols and the number of knobs and whistles. Isolating issues can be difficult. Mediating between dueling vendors (It’s her broken code! No, it’s his misinterpretation of the standard!) adds a non-technical dimension to the problem.

Having said all that, control plane protocols are the key to managing large-scale networks. The right control plane should simplify management, provisioning, and troubleshooting. The fact that control planes appear to have the opposite effect is a sad testament to something having gone pear-shaped somewhere along the line.

MPLS is (Supposedly) Expensive

Another gripe about MPLS is that, since MPLS is complex, MPLS must be expensive. However, as we’ve seen, the MPLS data plane is quite simple. A typical MPLS “core” operation is to swap a label. At the “edge”, that is, the boundary between IP and MPLS, there is more work to do per packet: one has to recognize the virtual network to which a packet belongs, look up IP addresses, deal with Class of Service, queuing and load balancing, apply Access Control Lists and push and/or pop labels. “Edge MPLS” clearly requires more work per packet than “core MPLS”. “Edge MPLS” is primarily about virtualization, and is comparable to “edge VXLAN”, while “core” MPLS is about tunneling. Note that commercial chips have had MPLS features far longer than they have had VXLAN features.
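
To make the “core versus edge” distinction concrete, here is a rough sketch of the per-packet work in each case. It is purely conceptual, not a model of any chip or forwarding pipeline; the tables, labels and function names are hypothetical, and CoS, queuing and load balancing are omitted.

```python
# Core MPLS: one lookup on the outer label, swap it, send it on (hypothetical table).
SWAP_TABLE = {1001: (2001, "port-to-switch-2")}

def core_forward(packet):
    out_label, out_port = SWAP_TABLE[packet["labels"][0]]
    packet["labels"][0] = out_label              # swap the top label
    return out_port, packet

# Edge MPLS: classify the VN, apply policy, do a per-VN IP lookup, then push two labels.
def edge_forward(vn, ip_dst, payload, per_vn_fib, vn_labels, tunnel_labels, permit):
    if not permit(vn, ip_dst):                   # ACL check
        return None                              # dropped by policy
    dest_server = per_vn_fib[vn][ip_dst]         # IP lookup in the VN's own table
    return {"labels": [tunnel_labels[dest_server], vn_labels[vn]], "payload": payload}

print(core_forward({"labels": [1001, 201], "payload": b"x"}))
```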

So, why is MPLS hardware more expensive than plain Ethernet switches? In part, because of the flexibility that MPLS demands; in part, because of the robust control plane that is expected with MPLS hardware; in part, because of the (lack of) economy of scale. More on that later.

Enter SDN

SDN separates the control and data planes. Perfect! SDN gives a high-level view of network functions – no more twiddling knobs. Yay! The “SDN as a compiler” approach completely changes the management paradigm. Touchdown!

Yes, that is somewhat simplistic. Let’s break it down:

  1. Managing MPLS is hard. So, hide the complexity.
  2. The MPLS control plane is expensive. So, separate it from the hardware.
  3. “Edge MPLS” is more expensive than “core MPLS”. So, move it to the server.
  4. “Core MPLS” is not yet commodity. So, use a different tunneling technology.

Networks as Cars

To understand this better, let’s take the analogy of a car.

Network equipment companies take a lot of (justifiable) pride in their control plane implementation. This is what makes a network go. The corollary seems to be to flaunt it. The control plane has to be in your face: in the CLI, in provisioning, in management, in troubleshooting.

Car manufacturers take just as much pride in their engine – it’s what makes the car go. However, they don’t push it in your face; they hide it under the hood, and give you two (or three) pedals, a gearshift and a steering wheel to control it. This is because cars have matured; gone are the days when you had to mess with the fuel-air ratio (remember “chokes”?) and pump the brakes to stop safely. (BTW, anti-lock brakes are a reminder that you do want complex technology, but with a simple UI – and, hard as it may seem, this is achievable.) As a driver, you care about what’s under the hood – is it reliable, is it efficient, is it safe? But you don’t dwell on it while driving; you focus on the traffic, road conditions and navigation. You have a set of dials that tell you enough of what the car is doing to help with driving, but no more. You can access a whole lot more information if needed, but that’s typically used only when troubleshooting.

1. Under the hood

For the various reasons mentioned above, you want MPLS under the hood … but not in your face. It absolutely has to be possible to have a scalable, robust, highly available technology without making it near impossible to manage – OpEx economics demand that, and some company will figure it out. We believe that an SDN system can offer a complex technology with a simple-to-use UI. Moreover, the SDN system can present operational data needed to run the network efficiently in a way that is easy to consume, or even let you build your own dashboard.

2. Transmission and differential

Decoupling the engine from the wheels (via the transmission) and left from right (via the differential) is crucial to building an efficient and fast car. It is just as crucial to decouple the control plane from the data plane. There are technical reasons for doing so, such as upgradability, availability and such. Equally important are the economic reasons: decoupling the pricing of the control and data planes, and enabling pay-as-you-go and other pricing models which a tightly coupled system inhibits. An SDN system gives you this decoupling.

3. Mechanical or Electronic Timing?

Cams that control valve timing are a marvel of mechanical engineering. However, the advent of electronic timing made the timing more precise over a wider range of speeds, and also simplified manufacturing, and thus reduced cost. Moving “edge MPLS” to the server is analogous: by using a quite different technology to perform the same task, one may see dramatic improvements in functionality and/or price.

4. Whitewall tires or black?

What type of overlay tunnel should you use? Does it matter that much? To some, yes. To others, no. MPLS is actually quite amenable to running over IP tunnels, simplifying infrastructure switches. Alternatively, one can use VXLAN or NVGRE tunnels with an MPLS control plane. While there are advantages to each tunnel type, including MPLS tunnels, this is really a relatively minor detail.
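
As one illustration of running MPLS over an IP tunnel, here is a sketch of an MPLS-in-GRE encapsulation in the spirit of RFC 4023: the GRE protocol type 0x8847 marks an MPLS unicast payload, and the VN label rides inside. The label value is made up, and the outer IP header (protocol 47) is left to the host stack or tunnel driver.

```python
import struct

def gre_header(proto=0x8847):                    # 0x8847 = MPLS unicast
    """Basic GRE header: flags/version all zero, then the payload protocol type."""
    return struct.pack("!HH", 0, proto)

def mpls_entry(label, tc=0, bottom=True, ttl=64):
    """One 32-bit MPLS label stack entry."""
    return struct.pack("!I", (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl)

inner_packet = b"IP packet from the red VM"      # placeholder payload
tunneled = gre_header() + mpls_entry(201) + inner_packet
print(tunneled.hex())
```

Swapping the outer encapsulation for VXLAN or NVGRE changes only this wrapper; the VN label (or its equivalent identifier) and the control plane that distributes it are what matter.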

Note that there is another, simpler solution to (3) and (4) – providing MPLS features in commodity chips. If chip vendors took the same approach with MPLS that they appear to have taken with VXLAN and NVGRE (i.e., whole-heartedly embracing virtualization technologies), economies of scale would address the cost of MPLS hardware. Till then, though, the above solutions will serve.

Conclusion

MPLS has several compelling arguments in favor of its use in virtualized data centers. The counter arguments are primarily economics, both CapEx and OpEx. A purpose-built SDN system can address these arguments, by presenting a very different management and monitoring interface to the underlying control plane, by decoupling control and data planes and by moving functionality around. Many find this idea intriguing, and eagerly await the outcome – as do I.

Editor’s Note: Kireeti Kompella is CTO of Contrail Systems.