Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
As computing becomes more distributed, more application code will move to the network edge. This trend makes it imperative that service providers accurately estimate the virtual customer premises equipment (vCPE) processing power needed to handle customer demand.
In some cases, applications will be managed by internal IT organizations. However, given the complexity of managing distributed computing at scale, it’s likely that many organizations will be counting on various types of service providers to manage applications at the edge of the network on their behalf.
Most of those applications are going to rely on some form of network functions virtualization (NFV) software running on a gateway or local server that will be delivered via vCPE, installed and managed by a service provider. What many service providers are trying to determine is how much processing power will be required at the edge of the network to reliably deliver application, security, and network services.
Avoiding Rip and Replace
Most carriers have been testing the vCPE waters using “white boxes” they built themselves. But as the number of sensors and devices at the edge of the network increases, so too will the number of application services they invoke. This presents a significant future-proofing challenge for service providers, which could easily find themselves ripping and replacing the hardware they use to deliver a vCPE service because they underestimated the processing power needed to handle customer demand.
There’s a natural tendency to try to minimize financial risk by provisioning the least amount of hardware required to meet application requirements. But in the case of vCPE deployments, it’s difficult to accurately forecast what the hardware requirements at the edge of the network are likely to be. It is certain, however, that if those services depend solely on white boxes based on low-cost, industry-standard processors, the chances of application performance issues are exceedingly high. After all, it’s not just the application running at the edge of the network; other security and network functions are trying to share that same hardware.
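The sizing dilemma described above can be made concrete with a back-of-the-envelope calculation: add up the compute demands of every function that must share the box, apply headroom for growth, and compare the total to what the hardware offers. The sketch below is purely illustrative; the function names, per-VNF core counts, and 30 percent headroom figure are assumptions, not measured requirements.

```python
# Illustrative vCPE sizing sketch. All per-function vCPU figures and the
# headroom percentage are hypothetical placeholders, not vendor data.

def required_cores(vnfs, headroom=0.3):
    """Total vCPUs needed for a service chain, inflated by fractional headroom."""
    base = sum(cores for _, cores in vnfs)
    return base * (1 + headroom)

# The application does not run alone: security and network functions
# share the same white box, as noted above.
chain = [
    ("virtual router", 2),
    ("firewall", 2),
    ("SD-WAN agent", 1),
    ("customer application", 4),
]

box_cores = 8  # cores on a hypothetical low-cost white box
needed = required_cores(chain)
print(f"needed: {needed:.1f} vCPUs, available: {box_cores}")
if needed > box_cores:
    print("under-provisioned: rip-and-replace risk")
```

Even this toy example shows how quickly a minimally provisioned box is exhausted once the application has to coexist with the supporting network and security functions.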
But it’s clear that competition across the vCPE space will be fierce. Service providers will be measured not only on the cost of delivering their services but also on the quality of those services. Providers of distributed applications are especially sensitive to latency. Those developers are not going to tolerate performance hiccups at the edge of the network caused by equipment that can’t scale to meet demand.
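One way to reason about the latency sensitivity mentioned above is to treat the service chain as a budget: each function traversed adds delay, and the sum must stay under what the application tolerates. The per-function latencies and the 10 ms budget below are invented for illustration only.

```python
# Hypothetical latency-budget check for a vCPE service chain.
# Per-function latencies (in milliseconds) are illustrative, not measurements.

def chain_latency_ms(latencies):
    """Total one-way delay added by traversing every function in the chain."""
    return sum(latencies)

budget_ms = 10.0  # assumed application tolerance
per_vnf_ms = {"router": 1.5, "firewall": 2.0, "dpi": 4.0, "app": 3.5}

total = chain_latency_ms(per_vnf_ms.values())
print(f"chain latency: {total:.1f} ms vs budget {budget_ms:.1f} ms")
if total > budget_ms:
    print("latency SLA at risk under load")
```

The point of the exercise is that the budget is consumed by the whole chain, not the application alone, which is why undersized edge hardware shows up to customers as latency rather than as an obvious capacity alarm.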
Whether the application is a digital business initiative involving thousands of mobile computing devices or an Internet of Things (IoT) deployment that encompasses millions of sensors, service providers will have to plan carefully. After all, the only thing more costly than deploying a vCPE environment the first time is having to do it all again a few months later.