Data communication networks are complex environments, where operations teams are challenged to maintain stability while deploying new services or implementing changes. Network engineering and design teams need to consider not only the demands of new services, but also how to ensure that existing services remain unaffected while changes are underway. This drives the need to design, test, and deploy quickly and efficiently.
The “DevOps” approach has brought best practices from software engineering to systems administration, with virtualization enabling test and development environments where new automation tools and scripts can be staged prior to deployment into production. This approach, and the tools that have grown up around it, are key to providing reliability and agility at scale.
While DevOps concepts are being used in software development and deployment, bringing these concepts to the network is a challenge: Network equipment is expensive, requires space, power, and cooling, and there’s never enough equipment for everyone who wants to use it.
For DevOps to be applied to the network, engineers need cost-effective solutions that can provide test and staging environments on demand – not only when lab equipment eventually becomes available.
For many people, their first exposure to networking was at college, where they were taught networking concepts and principles. Such theoretical knowledge needs to be balanced with hands-on experience to really understand how things work. Yet the cost of commercial-grade networking equipment often leads to high student-to-equipment ratios, reducing the amount of practical exposure a student can get.
In some parts of the world, the cost of equipment that many of us may consider reasonably inexpensive is prohibitive. In some places, the average wage for a junior network engineer can be as low as $500 per year. Alternative ways to provide access to commercial-grade technologies are essential.
Virtualization is now a commonplace technique. Many of us will have used virtualization services like those from Amazon or Microsoft, or run virtual machines on hypervisors like VMware ESXi or KVM. Spinning up or tearing down a VM is simple – just push a button, issue a REST call, or run a CLI command.
Network operating systems can now also be run as virtual machines. These network virtual machines don’t offer the data-plane forwarding performance or ASIC-level fidelity of production network hardware. Such network VMs are not intended to emulate the hardware characteristics of physical network equipment, nor are they intended to validate aspects such as convergence time. What network VMs do provide are highly cost-effective devices that have many of the operational characteristics of their physical counterparts, such as control-plane and management-plane behavior, complete with the nuances of standards interpretations and vendor-specific implementations.
Deploying a virtual network simulation with the same simplicity as deploying a VM in a cloud means being able to coordinate the operation of the VMs and the connections among them. Rather than operating VMs as individual instances, we need to be able to operate that set of VMs as a coordinated network.
If you bring up a set of network VMs, you also need to configure the VMs in order for them to be of use. Configuring three nodes manually? No problem. Five nodes? Ten nodes? As the topology grows, so does the time required to configure all of the network elements. And many of us will have experienced the frustration of trying to find a mismatched subnet mask when setting up network equipment! The challenges that engineers face in the physical world apply in the virtual world as well.
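One way to tame that per-node configuration effort is to generate device configurations from a single topology description, so both ends of every link derive their addresses from the same subnet. The sketch below is a minimal, hypothetical illustration – the node names, subnets, and config syntax are invented for the example, not taken from any particular platform:

```python
import ipaddress

# Hypothetical three-node topology: each link is (node_a, node_b, subnet).
links = [
    ("r1", "r2", "10.0.12.0/30"),
    ("r2", "r3", "10.0.23.0/30"),
    ("r1", "r3", "10.0.13.0/30"),
]

def generate_configs(links):
    """Build a minimal interface config stanza per node from the link list.

    Both endpoints of a link take their host addresses from the same
    subnet object, so mismatched masks are impossible by construction.
    """
    configs = {}
    for index, (node_a, node_b, subnet) in enumerate(links):
        network = ipaddress.ip_network(subnet)
        addr_a, addr_b = list(network.hosts())[:2]
        for node, addr in ((node_a, addr_a), (node_b, addr_b)):
            configs.setdefault(node, []).append(
                f"interface Ethernet{index}\n"
                f" ip address {addr} {network.netmask}"
            )
    return {node: "\n".join(stanzas) for node, stanzas in configs.items()}

configs = generate_configs(links)
print(configs["r1"])
```

Because every interface address is derived from one shared subnet definition, adding a tenth or hundredth node is a matter of extending the link list, not of retyping masks by hand.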
As a result of this complexity, people use the same simulation topologies and associated configurations again and again, trading modeling accuracy for time, since the effort required to configure each potential network simulation can be significant. We therefore lose the ability to experiment with different inputs and to explore the range of “what if” scenarios.
DevOps teams have been able to create environments where they can build and validate their scripts and applications. The challenge is now to provide the NetOps teams with a similar capability to set up, tear down, and start over once more. Network VMs are now being used to validate and refine scripts, tools, and applications prior to deployment into production environments, but they are still being operated as individual virtual machines rather than as a network.
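The kind of pre-deployment check such scripts perform can be very simple. As a hedged sketch – the node names, interfaces, and addressing below are invented for illustration – this validates that both endpoints of each point-to-point link agree on the subnet, catching exactly the mismatched-mask errors mentioned earlier:

```python
import ipaddress

# Hypothetical intended addressing: (node, interface) -> "address/prefix".
intended = {
    ("r1", "Ethernet0"): "10.0.12.1/30",
    ("r2", "Ethernet0"): "10.0.12.2/29",  # deliberate mask mismatch
    ("r2", "Ethernet1"): "10.0.23.1/30",
    ("r3", "Ethernet0"): "10.0.23.2/30",
}
links = [
    (("r1", "Ethernet0"), ("r2", "Ethernet0")),
    (("r2", "Ethernet1"), ("r3", "Ethernet0")),
]

def find_mismatched_links(intended, links):
    """Return the links whose two endpoints compute different networks."""
    bad = []
    for end_a, end_b in links:
        net_a = ipaddress.ip_interface(intended[end_a]).network
        net_b = ipaddress.ip_interface(intended[end_b]).network
        if net_a != net_b:
            bad.append((end_a, end_b))
    return bad

print(find_mismatched_links(intended, links))
```

Running a check like this against a virtual topology before touching production is cheap; running it for the first time during a change window is not.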
Modeling a real-world network using the same network operating systems as used in production helps to build confidence that the next set of configuration changes you intend to deploy will not result in a service-affecting outage. Network virtualization platforms enable users to build working models of networks, in some cases many months before the physical networks are deployed. The simulation enables operations staff to work with the network, to understand the impact of proposed changes, and to prepare themselves for the actual rollout of the equipment – without needing to take a ticket and queue up for access to the lab.
Network virtualization technologies increase agility while maintaining high service availability. They enable the creation of functional simulation environments providing speed and flexibility for designers, developers, network engineers, educators, and students. Those simulations can be used to model parts of a production network, to create a training lab, or perhaps to investigate designs for a future network.