The growth of data generation and data processing has had a strong impact on the needs and evolution of the underlying infrastructure network. 5G networks are often looked upon as a harbinger of ubiquitous high-speed connectivity that will cater to the data demand. That being said, 5G networks face many important challenges to deliver on that promise.
ETSI proposed multi-access edge computing (MEC) to address some of these challenges. This article discusses those challenges and how adding network programmability improves the overall MEC solution.
To support the growth of IoT devices, 5G wireless infrastructure will need to support a large number of connected devices and complex applications. This will require the network to support high scalability, ultra-low latency, high throughput, and reliable transmission of data. These critical latency and bandwidth requirements led ETSI to propose multi-access edge computing as a viable solution, thus pushing the services to the edge and closer to the end-user. But this approach comes with its own set of challenges:
- MEC servers have limited computational resources as compared to high-end servers in the cloud.
- The system must handle heterogeneous requests from end devices. Some require high computational capacity, while others have stricter latency requirements.
- Load balancing between the MEC servers might be required. Exhausting the computational capacity of a few MEC servers while others sit idle can increase service time.
One possible solution to address some of the challenges mentioned above is to introduce an MEC coordinator in the system. This MEC coordinator is made aware of the existence of available MEC servers and their capabilities (in terms of resources and services they offer). In other words, the MEC coordinator acts as a broker of sorts that connects the application to the right MEC server.
The MEC coordinator shields the operator from network decisions such as:
- MEC servers might have varying hardware capabilities. Where can you execute a particular instance of an MEC application?
- MEC servers might have varying computational capabilities and varying MEC applications running. Which MEC server should be used for a particular client/device request?
The MEC coordinator can make these decisions based on factors like each server's capabilities, its current load, and its location.
Further, the MEC coordinator can exploit SDN concepts for the service discovery (tracking the instances of a service) and orchestration (service instantiation and management) needs of MEC services.
A centralized SDN controller has a global view of the network. MEC coordinators can be integrated with the SDN controller to gather information from the network.
The MEC coordinator can internally maintain a database of services available on the MEC servers. It can use the SDN controller to connect an application request to the right MEC server. For instance, an end-user might request a facial recognition service simply by naming it. The MEC coordinator will identify the MEC server that provides this service and add flows in the network accordingly.
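The registry-and-lookup idea above can be sketched as follows. This is a minimal illustration, not part of the ETSI specification; the service names, server names, and capability fields are assumptions made for the example.

```python
# Hypothetical service registry the MEC coordinator maintains:
# service name -> list of servers hosting it, with their capabilities.
SERVICE_REGISTRY = {
    "face-recognition": [
        {"server": "edge-1", "gpu": True,  "free_cpu_cores": 2},
        {"server": "edge-2", "gpu": False, "free_cpu_cores": 8},
    ],
}

def resolve(service, needs_gpu=False, min_cores=1):
    """Return the first registered server that meets the request's needs."""
    for entry in SERVICE_REGISTRY.get(service, []):
        if (not needs_gpu or entry["gpu"]) and entry["free_cpu_cores"] >= min_cores:
            return entry["server"]
    return None  # no suitable MEC server: caller may fall back to the cloud

print(resolve("face-recognition", needs_gpu=True))  # edge-1
print(resolve("face-recognition", min_cores=4))     # edge-2
```

In a real deployment the coordinator would then ask the SDN controller to install flows toward the resolved server; here the lookup alone is shown.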
Addressing Latency Considerations
Two major motivations behind the concept of an MEC system are computational offloading and latency reduction. Latency to centralized data centers or to public clouds is quite high, which is why MEC servers are deployed so close to the edge. The MEC coordinator has to make an intelligent decision based on the latency, energy, and bandwidth requirements of a client request before deciding where the request is handled.
There are two primary considerations when trying to reduce latency:
- The distance between the client and the MEC server capable of handling this client request needs to be considered. The proximity between the client and the MEC server is an important deciding factor.
- The cost of transmission versus the cost of local computation needs to be compared. This helps in deciding if a computation should move to an MEC server or should be handled locally at the client.
The MEC coordinator needs a solution that minimizes energy consumption at the device while meeting the execution-delay constraint (latency requirement) of the client request or application.
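The offload-or-local trade-off can be illustrated with a textbook-style cost model. All constants and the simple energy/delay formulas below are illustrative assumptions, not something the article or the ETSI specifications prescribe.

```python
def should_offload(data_bits, cycles, bw_bps, f_local_hz, f_edge_hz,
                   tx_energy_per_bit, local_energy_per_cycle, deadline_s):
    """Decide between local execution and offloading to an MEC server.

    Simple model (illustrative, not from any specification):
      local:   time = cycles / f_local,       energy = cycles * e_cycle
      offload: time = data/bw + cycles/f_edge, energy = data * e_bit
    Prefer offloading when it meets the deadline and saves device energy.
    """
    t_local = cycles / f_local_hz
    e_local = cycles * local_energy_per_cycle
    t_off = data_bits / bw_bps + cycles / f_edge_hz
    e_off = data_bits * tx_energy_per_bit
    if t_off <= deadline_s and e_off < e_local:
        return "offload"
    if t_local <= deadline_s:
        return "local"
    return "offload" if t_off < t_local else "local"

# A compute-heavy task over a fast link is worth offloading...
print(should_offload(8e6, 1e9, 50e6, 1e9, 10e9, 1e-7, 1e-9, 0.5))  # offload
# ...while a light task over a slow link stays on the device.
print(should_offload(8e6, 1e8, 1e6, 1e9, 10e9, 1e-7, 1e-9, 0.5))   # local
```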
Further, the nature of the application — whether it is latency sensitive (high responsiveness requirement) or computationally intensive — needs to be considered. Latency-sensitive requests should move to MEC servers closer to the client (even if they have fewer resources), whereas computationally intensive requests will be targeted toward more powerful MEC servers (even if they are farther from the client device). Thus, different network slices from the same device might have different paths and might be handled by different MEC servers.
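The request-type-aware placement described above amounts to a different selection criterion per request class. A minimal sketch, with made-up server names and metrics:

```python
def choose_server(request, servers):
    """Pick an MEC server for a request.

    Latency-sensitive requests favor proximity (low round-trip time);
    computationally intensive ones favor spare capacity. The field
    names and weights are illustrative assumptions.
    """
    def cost(server):
        if request["latency_sensitive"]:
            return server["rtt_ms"]      # closer server wins
        return -server["free_cpu"]      # beefier server wins
    return min(servers, key=cost)

servers = [
    {"name": "near-edge", "rtt_ms": 5,  "free_cpu": 2},
    {"name": "far-edge",  "rtt_ms": 30, "free_cpu": 16},
]
print(choose_server({"latency_sensitive": True},  servers)["name"])  # near-edge
print(choose_server({"latency_sensitive": False}, servers)["name"])  # far-edge
```

Because the criterion differs per request, two slices from the same device can indeed be steered along different paths, as the article notes.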
SDN Controller Integration With MEC
The first release of the ETSI MEC specifications seems inclined toward offering MEC services as ‘network services’ on a virtualized platform. These services are basically a combination of VNFs running software related to the functionality of a network middle-box. By architecting the solution on an NFV platform, one can manage the complete lifecycle (instantiation, termination, scaling, etc.) of these MEC services. The NFV platform also supports VNF forwarding graphs to implement service chaining of the VNFs composing an MEC service.
Adding SDN to the platform can enable more flexibility and dynamism in the network. SDN allows for a global view of the underlying network and, accordingly, can apply traffic steering rules to implement complex service chaining scenarios. It can be used to manage a network that is interconnecting distributed MEC servers.
The SDN controller can host the ‘MEC coordinator northbound application’ which can be programmed to handle various scenarios:
- Monitoring the service instances running on the MEC servers, to decide which MEC server can service a request from a client application on the end device — in terms of computational power, storage, or a specific service type.
- Monitoring the capacity and utilization of the MEC servers, to decide where to instantiate a new instance of a service.
- When multiple MEC servers run instances of the same service, selecting which one handles an end-device request. Requests should ideally be steered toward less-loaded servers.
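The last scenario — steering among replicas of the same service — reduces to a least-loaded selection over the utilization figures the northbound application collects. A minimal sketch with illustrative server names and load values:

```python
# Hypothetical per-server load snapshot the coordinator app keeps,
# refreshed from periodic statistics (values are illustrative).
replica_load = {"edge-1": 0.85, "edge-2": 0.20, "edge-3": 0.55}

def pick_replica(load_by_server):
    """Steer the next request to the least-loaded replica of a service."""
    return min(load_by_server, key=load_by_server.get)

print(pick_replica(replica_load))  # edge-2
```

A production policy would also damp oscillation (e.g., via thresholds or weighted randomness) so all new requests do not pile onto one server between statistics updates; this sketch shows only the core selection.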
The MEC coordinator can thus reuse the SDN architecture where customized northbound applications define the behavior of the network. The SDN controller offers northbound APIs to these applications for triggering commands. The controller also has a southbound interface (usually, OpenFlow based) which communicates with the managed devices (using OpenFlow switches in the network).
The commands from the MEC coordinator northbound application can be converted by the SDN controller into low-level, OpenFlow-based traffic steering rules and sent to the OpenFlow devices — either connected to the MEC servers or part of the MEC server — in the network. These OpenFlow rules can be integrated with the rules of the “traffic offload service” running on the MEC servers. The “traffic offload service” is the MEC platform service responsible for routing traffic to and from the MEC applications.
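Schematically, such a steering rule is a match on the client's traffic plus actions that rewrite the destination and forward it toward the chosen server. The sketch below builds the rule as a plain dictionary rather than using any real controller's API; the field names mirror OpenFlow match/action concepts, and all addresses and port numbers are made up.

```python
def steering_rule(client_ip, service_port, server_ip, out_port, priority=100):
    """Build an OpenFlow-style match/action rule (schematic dict, not a
    real controller API) that steers a client's service traffic to the
    MEC server chosen by the coordinator."""
    return {
        "priority": priority,
        "match": {
            "eth_type": 0x0800,          # IPv4
            "ipv4_src": client_ip,
            "tcp_dst": service_port,
        },
        "actions": [
            {"type": "SET_FIELD", "field": "ipv4_dst", "value": server_ip},
            {"type": "OUTPUT", "port": out_port},
        ],
    }

rule = steering_rule("10.0.0.7", 8080, "192.168.1.20", 3)
print(rule["actions"][0]["value"])  # 192.168.1.20
```

In practice the controller would serialize this intent into FLOW_MOD messages on its southbound interface; the dictionary stands in for that message here.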
Finally, the status and statistics from the OpenFlow devices can be communicated back to the MEC coordinator application by the SDN controller. This provides the MEC coordinator with a global view of the network as well as utilization/load on the links/MEC servers.
When looking at SDN and MEC in one frame, there are certain issues to resolve.
ETSI introduced the concept of MEAO (mobile edge application orchestrator), which is responsible for triggering the lifecycle management of MEC services. The SDN controller will need to be integrated with MEAO to trigger MEC services on an MEC server based on its custom decisions/algorithms.
The SDN controller needs to integrate with an agent (running on the MEC servers) for the following functions: reporting the capabilities of the MEC servers; and reporting client application requests, to arrive at the optimal MEC server.
Handling UE mobility adds more complexity to the design. The SDN controller will have to change the routing/flow information between the user and the MEC servers when the device's location changes. The location services running on the MEC servers might be used for active device location tracking, conveying device locations to trigger flow updates from the SDN controller.
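The flow update triggered by a location change can be sketched as a swap in the controller's per-device flow state. This is a schematic model (device IDs, switch names, and ports are invented); a real controller would issue FLOW_MOD delete/add messages to the affected switches.

```python
def handle_mobility(flows, ue_id, new_switch, out_port):
    """On a UE location update, replace its steering flow: remember the
    old entry (so the rule on the previous switch can be deleted) and
    record the new switch/port now serving the UE."""
    old = flows.pop(ue_id, None)
    flows[ue_id] = {"switch": new_switch, "out_port": out_port}
    return old, flows[ue_id]

flows = {"ue-42": {"switch": "s1", "out_port": 2}}
old, new = handle_mobility(flows, "ue-42", "s3", 4)
print(old["switch"], "->", new["switch"])  # s1 -> s3
```

The harder part, which this sketch omits, is the point the article raises next: performing the swap without disrupting the device's running traffic.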
In handling varying workloads on MEC servers, you must determine whether the SDN controller should re-compute the best path while preserving service continuity for the device. The SDN controller might re-evaluate MEC server load periodically and update the flows accordingly. But the device is oblivious to the existence of MEC servers, so any impact on its running traffic — due to its computation being moved to a new MEC server — is not acceptable.
To summarize, SDN can help MEC-based infrastructure in multiple ways:
- Computational load balancing: Using a periodic collection of OpenFlow-based statistics on the southbound interface with the OpenFlow compliant servers.
- Simpler end devices: By supporting service-centric access instead of host-centric access, all service instances can register with the SDN controller.
- Easy plug-ability of edge devices in the network: SDN relies heavily on OpenFlow — new devices are easily detected using LLDP/OFDP and traffic rules can be easily updated.
- Decision making for computational offloading: A centralized SDN controller can provide the device with information about channel conditions, server load, etc.
SDN concepts can thus be used in MEC to provide a unified control plane interface, retrieve network context and device information, and use that information for intelligent traffic steering across the network.