Does Network Lifecycle Management Make Sense?

Recently, we met with a friend who has done an amazing job of managing the lifecycle of virtual machines (VMs). As the CTO of a very large cloud provider, he explained in detail how he takes advantage of Moore's Law, doubling the number of VMs in each rack every year while holding or shrinking the cost per rack. As a result, he has doubled the earning potential of each data center while driving costs down, even though his staff rips out servers long before the end of their traditional three- to four-year lifecycle and purchases new ones. Over a three-year period he buys servers at a 3-to-1 ratio compared with a typical server lifecycle, yet his cost to operate the data center keeps falling and his productivity doubles every year. Amazing!
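The economics above can be sketched with a few lines of arithmetic. This is an illustrative model only; the rack cost and starting VM density are hypothetical numbers, not figures from the conversation.

```python
def cost_per_vm(rack_cost, initial_vms, year):
    """Cost per VM in a given year, assuming VM density doubles annually
    (the Moore's Law assumption) while the cost to run the rack stays flat."""
    return rack_cost / (initial_vms * 2 ** year)

# Hypothetical example: a $100k/year rack that starts at 40 VMs.
for year in range(4):
    print(f"Year {year}: ${cost_per_vm(100_000, 40, year):,.2f} per VM")
```

Because density doubles while rack cost is flat, the cost per VM halves every year, which is why the aggressive refresh cycle still pays off.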

While we enjoyed learning of his success, stories like this make us ask, "Could the same kind of impact be achieved somewhere in the network?" It also got us wondering why customers traditionally hang on to their top-of-rack switches for four or five years, and sometimes longer.

What is different about the network versus servers?

Obviously, the development cycles of server processors differ significantly from those of network switching ASICs. While a server may double its processing power every year, it can take five years to see a tenfold jump in switch port speed, and even longer for the prices of those ports and NICs to approach commodity levels.
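Those two improvement curves can be put on a common footing by annualizing them. A minimal sketch, using the rough figures from the paragraph above (2x per year for servers, 10x per five years for switch ports):

```python
def annual_growth(factor, years):
    """Compound annual growth rate implied by a total improvement factor
    achieved over the given number of years."""
    return factor ** (1 / years) - 1

server_rate = annual_growth(2, 1)    # processing power doubles yearly -> 100%/year
switch_rate = annual_growth(10, 5)   # 10x port speed over five years -> ~58%/year
print(f"Servers: {server_rate:.0%}/year, switch ports: {switch_rate:.0%}/year")
```

Even before price is considered, the switch improvement rate compounds at roughly half the server rate, which helps explain the longer refresh cycles.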

Secondly, the server market is far more commoditized and its ecosystem far more open. Servers can easily be customized with processors, RAM, disk, and NICs. With Linux, the operating system (OS) is open and not tied to either the hardware or the applications. Further, the tools do not change when the server changes.

In the data center network, by contrast, switches are mostly fixed-configuration devices. The OS is not open and is tied to the switch. When the hardware changes, and especially if you are bold enough to change vendors, many of the tools and applications must be painstakingly modified to accommodate new APIs, MIBs, and other proprietary interfaces. Because this environment is so closed, proprietary, and inflexible, changing any layer of the network ecosystem is a costly exercise. If you want to change your management tools, it impacts the OS below. If you want to modify the OS, and the vendor allows it, you may be forced to change the switch hardware or the tools used to deploy and manage the switch.

What if the OS were open and truly separated from the hardware? What if you could swap out the hardware without having to change the OS, or lose any of the customizations you made to that open OS? What if you could change the management tools or provisioning system without worrying about the complexity of proprietary or closed programming interfaces?

It's time to see which vendors are going to take the needed steps to simply ignore conventional thinking and bring forward a new type of solution that enables their customers to realize the benefits of open networking and enjoy greater efficiencies in their data centers.