There is some debate about the best way to implement network functions virtualization (NFV) and where to host the constituent virtualized network functions (VNFs). The options are described in the original ETSI NFV white paper:
> NFV aims to transform the way that network operators architect networks by evolving standard IT virtualization technology to consolidate many network equipment types onto industry-standard high-volume servers, switches and storage, which could be located in data centers, network nodes, and in the end-user premises.
The question is whether to centralize hosting or to distribute it out to a central office or even the edge of the network. My view is that all these options are valid.
Centralized NFV Topology – Lowest Cost
In this topology model, all of the VNFs are hosted at a centralized location in a metro area. Here’s an example of how that might look:
In this discussion, the centralized model could be implemented in either a large data center or in a smaller central office, with the only difference being scale. In either case, the centralized model provides a shared pool of large servers, enabling the maximum statistical multiplexing of compute loads for multiple consumers. This configuration is ideal for deploying services to small and medium-sized businesses. It also enables the creation of service overlays using existing Carrier Ethernet connections to the service edge.
Here are some key considerations of the centralized topology:
- First Cost: The one drawback to this approach is that it requires a substantial initial investment to build out the infrastructure in the areas where services are needed.
- Per-Unit Cost: Having a large pool of inexpensive, general-purpose, shared servers means that the cost for each virtual service is low, and re-use of resources is maximized. This is a huge advantage.
- In-service scalability: Another advantage of this approach is that compute and storage resources can easily be added to the centralized location, and then be made available for any services that need them.
- Overlay services with no upfit of edge: The centralized virtualization can be used to create services that overlay existing Carrier Ethernet 2.0 (CE 2.0) facilities. The operator can create CE 2.0 connections from the service edge to a VNF hosted in a centralized location, maximizing the use of the existing edge network. The one caveat is that some services may require functionality beyond that of CE 2.0. Examples include prioritization at layer 3 and security functions.
- Edge services, e.g. routing and security: Another disadvantage of centralized virtualization is that it cannot support services that require particular functionality at the service edge. For example, a security-critical service cannot be delivered unless security functions can be placed at the service edge. Likewise, latency-sensitive applications are best served by collocated network functions.
- External service chaining needed: With a centralized approach there is no need for service chains to extend beyond the serving data center, which simplifies orchestration.
- Disruption to PMO: Perhaps the biggest drawback of centralization is that it requires a major change to the present mode of operation (PMO). In short, it requires a fundamental change to how services are constructed.
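The statistical-multiplexing argument behind the low per-unit cost can be sketched numerically. The following simulation uses purely illustrative parameters (customer count, peak and average demand are my assumptions, not figures from this article): because individual customers rarely peak at the same time, a shared pool sized for aggregate demand needs far less capacity than dedicating peak capacity to every site.

```python
import math
import random

random.seed(42)

# Illustrative assumptions, not figures from the article:
N_CUSTOMERS = 100
PEAK_VCPUS = 8   # per-customer peak demand
MEAN_VCPUS = 2   # per-customer average demand

# Edge-style provisioning: every site must carry its own peak.
dedicated = N_CUSTOMERS * PEAK_VCPUS

# Centralized pool: simulate aggregate demand; peaks rarely align,
# so the pool can be sized for the worst observed aggregate instead.
samples = [
    sum(min(PEAK_VCPUS, random.expovariate(1 / MEAN_VCPUS))
        for _ in range(N_CUSTOMERS))
    for _ in range(10_000)
]
pooled = math.ceil(max(samples))

print(f"dedicated: {dedicated} vCPUs, pooled: {pooled} vCPUs")
```

The pooled figure lands well below the dedicated one, which is the per-unit cost advantage the centralized model captures.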
Service Edge NFV Topology – Scalable and Service Rich
In this topology model, the virtualization resources and hosting of VNFs are pushed to the service edge, as shown in the following figure.
The pros and cons of this topology include:
- First Cost: A huge advantage of putting the virtualization resources at the service edge is that the first cost is minimized. There is no need to build out centralized infrastructure before deploying virtualized services; the resources are deployed when each customer is added.
- Per-Unit Cost: The other side of this equation is that there is very little opportunity to achieve lower costs through scale. Because the virtualization resources are dedicated to each customer site, the per-unit cost remains constant no matter how many customers are served.
- In-service scalability: Another drawback to this approach is that the ability to add additional services or scale is limited by the installed resources. This limitation can be overcome using the chained approach described in the next section.
- Overlay services with no upfit of edge: This consideration is not applicable because the service edge model assumes an upfit deployment at the service edge.
- Edge services, e.g. routing and security: The ability to deploy functions at the service edge enables a class of advanced services that are not possible in a centralized model. Some examples include dual homed routing, WAN optimization and security functions.
- External service chaining needed: As with the centralized approach, placing resources in one location means that no advanced service chaining is needed.
- Disruption to PMO: The service edge model replaces a stack of appliances with a server running VNFs. However, the deployment location is not changed, and no changes are needed back in the network. As such, this approach requires only an incremental change to the PMO.
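The trade-off between the centralized model's high first cost and the edge model's constant per-unit cost can be made concrete with a break-even calculation. All dollar figures below are hypothetical, chosen only to illustrate the shape of the trade-off:

```python
# Hypothetical cost model (illustrative numbers, not from the article):
# centralized pays a large build-out up front but a low marginal cost
# per customer; the service-edge model pays nothing up front but a
# fixed equipment cost at every customer site.
CENTRAL_BUILDOUT = 500_000
CENTRAL_PER_CUSTOMER = 1_000
EDGE_PER_CUSTOMER = 4_000

def centralized_cost(n: int) -> int:
    return CENTRAL_BUILDOUT + n * CENTRAL_PER_CUSTOMER

def edge_cost(n: int) -> int:
    return n * EDGE_PER_CUSTOMER

# Find the customer count at which the shared pool starts to win.
breakeven = next(n for n in range(1, 10**6)
                 if centralized_cost(n) <= edge_cost(n))
print(breakeven)
```

Below the break-even point the edge model's pay-as-you-grow economics dominate; above it, the centralized pool's low marginal cost takes over, which is why neither model wins outright.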
Chained NFV Topology – The Best of Both Worlds
With chained NFV, services are composed using resources that are distributed throughout the network and then chained together, as shown here:
Here’s how the chained topology model measures up:
- First Cost: As with the centralized model, the chained model requires an initial investment to start deploying services.
- Per-Unit Cost: The chained model utilizes shared resources for a low per-unit cost, though it will be somewhat higher than in a purely centralized approach because some resources are located at the service edge.
- In-service scalability: One advantage of the chained model over the service edge model is the ability to add virtualization resources without having to upgrade the service edge.
- Overlay services with no upfit of edge: Not applicable.
- Edge services, e.g. routing and security: The chained model can support all of the advanced applications that are available with the service edge model.
- External service chaining needed: The biggest complexity of the chained model is that it requires advanced orchestration to combine physical and virtual resources across multiple physical locations.
- Disruption to PMO: This is the most powerful virtualization model and, as a result, the most complex. Like the centralized model, it requires significant changes to the PMO.
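To make the orchestration burden concrete, here is a minimal sketch of a service chain whose VNFs span the service edge and a centralized data center. The data model and function names are hypothetical, not a real orchestrator API; the point is that every site boundary in the chain is an inter-site connection the orchestrator must provision:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VNF:
    name: str
    site: str  # where this function is hosted

def inter_site_links(chain: list[VNF]) -> int:
    """Count the site boundaries a chain crosses; each one needs an
    operator-provisioned connection (e.g. a CE 2.0 circuit)."""
    return sum(1 for a, b in zip(chain, chain[1:]) if a.site != b.site)

# Hypothetical chain: edge functions at the customer, the rest centralized.
chain = [
    VNF("firewall", "service-edge"),
    VNF("wan-optimizer", "service-edge"),
    VNF("router", "central-dc"),
    VNF("nat", "central-dc"),
]

print(inter_site_links(chain))
```

A chain hosted entirely in one location (the centralized or service edge models) crosses zero boundaries, which is why those topologies need no external service chaining.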
The Power of NFV – Put the Functions Where Needed
I have described some of the tradeoffs and synergies between the various topology models. These characteristics are summarized in the following table:
| Consideration | Centralized | Service Edge | Chained |
|---|---|---|---|
| First Cost | High | Low | High |
| Per-Unit Cost | Low | High | Low |
| In-service scalability | High | Limited | High |
| Overlay services with no upfit of edge | Yes | N/A | N/A |
| Edge services, e.g. routing and security | No | Yes | Yes |
| External service chaining needed | No | No | Yes |
| Disruption to PMO | High | Low | High |
By using the appropriate NFV topology for a given service, operators can match the requirements of the customers to constraints such as bandwidth and latency, as well as available resources. Doing so ensures that the power of virtualization is optimally applied to unleash new services.