New Take on SDN: Does MPLS Make Sense in Cloud Data Centers?

MPLS has taken the WAN networking world by storm. In its first five years, MPLS dramatically transformed Service Provider networks in two very different ways:

  1. MPLS eased the migration of the infrastructure underlying IP networks from ATM to Packet-over-SONET (with Traffic Engineering).
  2. MPLS made possible a new service: “Provider Provisioned Virtual Private Networks” that shifted the burden of managing a VPN service from enterprises to SPs and thus created a new revenue stream for SPs.

MPLS also offered a way to “cap-and-grow” legacy network technology, in particular, Frame Relay, ATM and SONET. This allowed some SPs to upsell an enterprise offering of point-to-point Frame Relay connections to a superior offering of any-to-any IP VPN connectivity with multiple Classes of Service. MPLS also greatly helped with the transition from TDM interfaces to Ethernet for much improved equipment economics. Thus, the driver for the deployment of MPLS was financial as much as technical: the opportunity for new revenue, the potential for cost savings, and the ability to seamlessly cap-and-grow.

Sixteen years on, MPLS is still an active field of research. The past few years have seen significant innovations such as “Entropy Labels” (for improved load balancing), Ethernet VPNs (E-VPNs), multicast and “Seamless MPLS”. MPLS has also addressed the problems of scale, availability, and fast recovery from network faults. All Tier 1, and most Tier 2, Tier 3 and Tier 4 SPs use MPLS in their backbone, and also use MPLS for VPNs, pseudowires, mobile backhaul and other revenue opportunities.

MPLS in the Scale-out, Virtualized Data Center

With such success in Wide Area Networks, one might expect that MPLS in the data center would be a shoo-in. First, the virtualization capabilities that MPLS brings with IP VPNs and E-VPNs are much needed in modern data centers. Second, virtualization inside a data center needs to be extended outside the data center, either to another data center, or to a VPN customer in a branch office; both of these use MPLS. Finally, traffic engineering is often important between data centers, and MPLS helps there too. Yet, so far, MPLS hasn’t penetrated the DC to any great extent. The question then is, is there a fundamental reason why not, and can MPLS be used profitably in the data center? This blog will try to answer these questions.

What is MPLS?

MPLS consists of a very simple data plane paradigm: label switching with efficient hierarchy, accompanied by a powerful and extensible control plane. MPLS owes its success to both of these, and to its easy integration with IP in both the control and data planes. Label hierarchy (or stacking) allows MPLS to cleanly separate its wide variety of uses. For example, MPLS can simultaneously virtualize networks and create overlays. To do this, MPLS uses a label stack with two labels, one for the virtual network (VN) and one for an overlay tunnel, keeping these two functions distinct.

Figure 1 shows a simple DC with three servers connected via three MPLS switches. There are three Virtual Networks (VNs): red (3 VMs), green (2 VMs) and purple (3 VMs). Two overlay tunnels are shown: from server 1 to server 2 as a solid black line, and from server 1 to server 3 as a dotted black line.

MPLS Figure 1

If the red VM on server 1 sends a packet to the red VM on server 2 (1→2), two labels are pushed onto the packet: a black (overlay) label to put the packet on tunnel 1 to server 2 (since that’s where the destination VM lives), and a red (VN) label to indicate that the packet is in the red virtual network. When the packet gets to server 2, the red label tells server 2 to deliver the packet to the red VM, not to any other VM.

In Figure 2, the green VM on server 1 sends a packet to the green VM on server 2. This packet also gets a black overlay label (again, to server 2); additionally, it gets a green label (belongs to the green virtual network). Note that the red 1→2 is very different from the green 1→2, and these packets must be kept separate. The red and green labels do that. Note further that the red and green labels are “inner” labels, not seen by the infrastructure (i.e., the MPLS switches), but only by the end systems (servers 1, 2 and 3). This is how the physical network is shielded from the virtual, and the two are decoupled. Meanwhile, the purple VM sends a packet to the purple VM on server 3. This time, the packet gets a grey overlay label (to server 3), and a purple label (belongs to the purple VN).

MPLS Figure 2

Simple, huh?
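The forwarding walk-through above can be sketched in a few lines of Python. This is a toy model, not real MPLS: the label values and table contents are made up for illustration, whereas in practice a control plane assigns them.

```python
# Toy model of the two-label encapsulation in Figures 1 and 2.
# All label values here are illustrative, not real 20-bit MPLS labels.

VN_LABELS = {"red": 101, "green": 102, "purple": 103}   # inner (VN) labels
TUNNEL_LABELS = {"server2": 201, "server3": 202}        # outer (overlay) labels

def encapsulate(vn, dest_server, payload):
    """Ingress server: push the inner VN label, then the outer tunnel label."""
    return [TUNNEL_LABELS[dest_server], VN_LABELS[vn], payload]

# Egress server: the outer label got the packet here via the tunnel;
# the inner label picks which virtual network to deliver into.
VN_BY_LABEL = {v: k for k, v in VN_LABELS.items()}

def deliver(stack):
    outer, inner, payload = stack
    return VN_BY_LABEL[inner], payload

pkt = encapsulate("red", "server2", "hello")   # red 1->2 from Figure 1
print(deliver(pkt))                            # ('red', 'hello')
```

Note how the infrastructure would only ever look at the first (outer) element of the stack; the inner VN label is examined only by the end systems, which is exactly the decoupling described above.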

A few more facts about MPLS labels: a label is 20 bits, so it can distinguish a million different things. That means that there can be a million tunnels in the DC network; that also means that there can be a million virtual networks on a single server. These numbers will hold us for a long, long time – and, if we do come close to a million VMs per server, there are simple techniques to get to a trillion VNs :-)
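The arithmetic behind those numbers is straightforward; the "trillion" figure assumes stacking a second 20-bit label, which multiplies the two label spaces:

```python
# A single MPLS label is 20 bits wide.
LABEL_BITS = 20
single_label = 2 ** LABEL_BITS
print(single_label)   # 1048576 -- "a million different things"

# Stacking a second label multiplies the space: 2^20 * 2^20 = 2^40.
two_labels = single_label ** 2
print(two_labels)     # 1099511627776 -- on the order of a trillion
```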

So, really, why not MPLS in the data center? Read on ….

MPLS is (Supposedly) Complex

As we just saw, the MPLS data plane is simple yet powerful. However, we didn’t go into how the labels were assigned, that is, how server 1 and server 2 both understood that the “red” label was for the red VN, how tunnel 1 got set up, and how server 1 knew that tunnel 1 went to server 2, etc. All of these are the purview of control plane protocols. And that’s where the complexity starts.

Implementing protocols is generally quite straightforward. The difficulty lies in making the implementation robust, defensive, scalable, highly available and easily extensible. Protocols are by nature distributed, which brings with it another order of complexity. Protocols are multi-vendor, which means dealing with many quite different implementations. Finally, there are a myriad of knobs and whistles that are needed to tweak protocol behavior to suit different network architectures and different design points.

The above is implementational complexity, which a network admin can quite justifiably delegate to a vendor. However, MPLS is also thought to be hard to configure, manage, debug and troubleshoot because of the flexibility of the underlying protocols and the number of knobs and whistles. Isolating issues can be difficult. Mediating between dueling vendors (It’s her broken code! No, it’s his misinterpretation of the standard!) adds a non-technical dimension to the problem.

Having said all that, control plane protocols are the key to managing large-scale networks. The right control plane should simplify management, provisioning, and troubleshooting. The fact that control planes appear to have the opposite effect is a sad testament to something having gone pear-shaped somewhere along the line.

MPLS is (Supposedly) Expensive

Another gripe about MPLS is that, since MPLS is complex, MPLS must be expensive. However, as we’ve seen, the MPLS data plane is quite simple. A typical MPLS “core” operation is to swap a label. At the “edge”, that is, the boundary between IP and MPLS, there is more work to do per packet: one has to recognize the virtual network to which a packet belongs, look up IP addresses, deal with Class of Service, queuing and load balancing, apply Access Control Lists and push and/or pop labels. “Edge MPLS” clearly requires more work per packet than “core MPLS”. “Edge MPLS” is primarily about virtualization, and is comparable to “edge VXLAN”, while “core” MPLS is about tunneling. Note that commercial chips have had MPLS features far longer than they have had VXLAN features.
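To make the core/edge contrast concrete, here is a minimal, purely illustrative sketch: the core's per-packet work is a single exact-match lookup that swaps the top label, while the edge must classify the packet into a VN, look up the destination, and build the whole stack. Table contents and names are invented for the example.

```python
# Illustrative contrast between "core MPLS" and "edge MPLS" per-packet work.
# Tables are hard-coded here; in reality a control plane populates them.

# Core: one lookup on the incoming top label, then swap and forward.
SWAP_TABLE = {201: (202, "port3")}   # in-label -> (out-label, out-port)

def core_forward(stack):
    out_label, out_port = SWAP_TABLE[stack[0]]
    return [out_label] + stack[1:], out_port

# Edge: classify into a VN, look up the destination in that VN's table,
# and push the full label stack -- several lookups per packet.
VN_OF_PORT = {"vif-red-1": ("red", 101)}          # attachment port -> (VN, VN label)
ROUTES = {("red", "10.0.2.5"): ("server2", 201)}  # (VN, dst IP) -> (server, tunnel label)

def edge_forward(in_port, dst_ip, payload):
    vn, vn_label = VN_OF_PORT[in_port]
    _, tunnel_label = ROUTES[(vn, dst_ip)]
    return [tunnel_label, vn_label, payload]

stack = edge_forward("vif-red-1", "10.0.2.5", "data")
print(core_forward(stack))   # ([202, 101, 'data'], 'port3')
```

Even in this toy form, the edge path touches several tables (and, in real hardware, also CoS, queuing and ACLs), while the core path is one lookup; that asymmetry is what makes the "move the edge to the server" argument below attractive.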

So, why is MPLS hardware more expensive than plain Ethernet switches? In part, because of the flexibility that MPLS demands; in part, because of the robust control plane that is expected with MPLS hardware; in part, because of the (lack of) economy of scale. More on that later.

Enter SDN

SDN separates the control and data planes. Perfect! SDN gives a high-level view of network functions – no more twiddling knobs. Yay! The “SDN as a compiler” approach completely changes the management paradigm. Touchdown!

Yes, that is somewhat simplistic. Let’s break it down:

  1. Managing MPLS is hard. So, hide the complexity.
  2. The MPLS control plane is expensive. So, separate it from the hardware.
  3. “Edge MPLS” is more expensive than “core MPLS”. So, move it to the server.
  4. “Core MPLS” is not yet commodity. So, use a different tunneling technology.

Networks as Cars

To understand this better, let’s take the analogy of a car.

Network equipment companies take a lot of (justifiable) pride in their control plane implementation. This is what makes a network go. The corollary seems to be to flaunt it. The control plane has to be in your face: in the CLI, in provisioning, in management, in troubleshooting.

Car manufacturers take just as much pride in their engine – it’s what makes the car go. However, they don’t push it in your face; they hide it under the hood, and give you two (or three) pedals, a gearshift and a steering wheel to control it. This is because cars have matured; gone are the days when you mess with the fuel-air ratios (remember “chokes”?) and pump the brakes to stop safely. (BTW, anti-lock brakes are a reminder that you do want complex technology, but with a simple UI – and, hard as it may seem, this is achievable.) As a driver, you care about what’s under the hood – is it reliable, is it efficient, is it safe? But you don’t dwell on it while driving; you focus on the traffic, road conditions and navigation. You have a set of dials that tell you enough of what the car is doing to help with driving, but no more. You can access a whole lot more information if needed, but that’s typically used only when troubleshooting.

1. Under the hood

For the various reasons mentioned above, you want MPLS under the hood … but not in your face. It absolutely has to be possible to have a scalable, robust, highly available technology without making it near impossible to manage – OpEx economics demand that, and some company will figure it out. We believe that an SDN system can offer a complex technology with a simple-to-use UI. Moreover, the SDN system can present operational data needed to run the network efficiently in a way that is easy to consume, or even let you build your own dashboard.

2. Transmission and differential

Decoupling the engine from the wheels (via the transmission) and left from right (via the differential) is crucial to building an efficient and fast car. It is just as crucial to decouple the control plane from the data plane. There are technical reasons for doing so, such as upgradability, availability and such. Equally important are the economic reasons: decoupling the pricing of the control and data planes, and enabling pay-as-you-go and other pricing models which a tightly coupled system inhibits. An SDN system gives you this decoupling.

3. Mechanical or Electronic Timing?

Cams that control valve timing are a marvel of mechanical engineering. However, the advent of electronic timing made the timing more precise over a wider range of speeds, and also simplified manufacturing, and thus reduced cost. Moving “edge MPLS” to the server is analogous: use a quite different technology to perform a task, and one may see dramatic improvements in functionality and/or price.

4. Whitewall tires or black?

What type of overlay tunnel should you use? Does it matter that much? To some, yes. To others, no. MPLS is actually quite amenable to running over IP tunnels, simplifying infrastructure switches. Alternatively, one can use VXLAN or NVGRE tunnels with an MPLS control plane. While there are advantages to each tunnel type, including MPLS tunnels, this is really a relatively minor detail.

Note that there is another, simpler solution to (3) and (4) – providing MPLS features in commodity chips. If chip vendors took the same approach with MPLS that they appear to have taken with VXLAN and NVGRE (i.e., whole-heartedly embracing virtualization technologies), economies of scale would address the cost of MPLS hardware. Till then, though, the above solutions will serve.


MPLS has several compelling arguments in favor of its use in virtualized data centers. The counter arguments are primarily economics, both CapEx and OpEx. A purpose-built SDN system can address these arguments, by presenting a very different management and monitoring interface to the underlying control plane, by decoupling control and data planes and by moving functionality around. Many find this idea intriguing, and eagerly await the outcome – as do I.

Editor’s Note: Kireeti Kompella is CTO of Contrail Systems

Contributed Article Disclaimer:

Statements and opinions expressed in articles, reviews and other materials herein are those of the authors, not of the editors and publishers.



  1. HPTOGA says

    So, why would one want to use MPLS at all with SDN?

    If all MPLS provides is tunneling, then there are better-suited tunneling protocols for data centers (e.g. VXLAN).

    With new possibilities to create optimized control schemes, there is no reason to adopt and virtualize legacy, and sometimes patched, protocols.

    Moreover, proper use of SDN (IMO) would require minimal packet processing at the compute layer, so virtualizing the MPLS edge on a server does not seem to make much sense.

    • Kireeti Kompella says


      Thanks for your comment!

      I would characterize MPLS and VXLAN as both being virtualization technologies. Picking one over the other is not the point of this blog.

      MPLS also provides a native tunneling technology. Until this is economically feasible, IP tunnels can be used with both MPLS and VXLAN virtualization. There are cases where MPLS tunnels are better — TE, fast reroute, etc., but one may argue how valuable TE is in a structured DC topology.

      The last (and perhaps most important) thing MPLS has is a control plane. But this is also where some difficulties arise: cost and complexity of management. And that is where I see SDN playing a really valuable role — taking us to a higher level of abstraction, decoupling control and data, simplifying management, and improving visibility into what is happening in the DC network. That was the main point; thanks for the opportunity to clarify!


    • Bertrand Duvivier says


      If you want to deploy the tunnel technology end-to-end, it has to be deployed over the WAN, with high transport cost. The tax to pay to transport VXLAN is 30% of bandwidth made unusable for your data (56 bytes of overhead for VXLAN vs. 8 bytes for MPLS), and to me that is enough to avoid VXLAN as an end-to-end transport technology.


  2. says

    Thanks, Kireeti, for articulating the content very nicely. True, SDN definitely plays a predominant role in any distributed protocol by decoupling the control plane from the data plane. By analogy, in the OSPF RFC most of the content describes the control plane (how to manage the state, and so on), while the heart and soul of the protocol (i.e., the algorithm) is described in just a few pages. It may not be a surprise if SDN eliminates the need for separate protocol-specific distributed intelligence in future protocol designs.

    Regards, –Bala.

  3. Romain Jacotin says

    Very big datacenters (thousands of physical servers and, as a consequence, thousands of Ethernet ports) are mostly for service providers and carriers who generally offer MPLS-VPN WAN connectivity services to customers (Orange, AT&T, …).

    The first Juniper innovation was a proprietary SDN datacenter switching solution (QFabric, which decouples a central control plane from the switches’ data plane).

    And I suspect that the second Juniper innovation will be a network virtualization solution based on a proprietary Juniper virtual switch that also uses MPLS and MPLS-VPN. For Juniper, hypervisors are MPLS PE routers. But because it is quite impossible to handle thousands of PEs (each hypervisor is a PE), I suspect they will use an SDN approach for that, with a central control plane: I suspect Contrail Systems to be this SDN controller, and I suspect Juniper will integrate it into their existing QFabric controller, so that the same controller handles the physical switches of QFabric and the virtual switches of Contrail Systems.

    If they use MPLS and MPLS-VPN inside the datacenter, I bet they will propose another enhancement: for carriers and service providers, it will be easier to interconnect their multi-tenant MPLS-VPN WAN with their multi-tenant MPLS-VPN datacenter!

    It would be very intelligent to propose that, even if it’s a totally proprietary Juniper solution, which is of course not so good for pricing and lock-in. But I will appreciate this global vision if it’s what Juniper has in mind.

    Disclaimer: “The views and opinions expressed on this web site are solely mine and not those of my employer.”

  4. Amit Bhagat says

    Hi Kireeti,

    This is an excellent post. My question is: without a Route Reflector, you will need a full mesh of tunnels from a server (vSwitch) to every other server. Isn’t that inefficient? Or would something like P2MP LSPs come in handy?

    Also, what seems to be stopping the vendors from making server hardware that supports MPLS? OpenFlow 1.2 seems to support MPLS.

