In telecommunications, hot potato routing is defined as passing traffic from one network to another as quickly as possible, so as to achieve the shortest end-to-end network path. To visualize the metaphor, imagine people standing in line, each quickly passing a hot potato to the next person (for obvious reasons) so it reaches the last person as fast as possible. The ultimate aim of hot potato routing is to ensure that application traffic takes the shortest network path between users and application servers.
The problem with this approach is that the shortest end-to-end network path does not always result in improved application performance. That’s because it’s calculated by routing protocols leveraging various algorithms, rather than application-relevant metrics. For example, the Border Gateway Protocol (BGP) uses a best-path selection algorithm that weighs numerous factors, the most common being the number of autonomous systems in the path (think of it as network “hops”) and the preference for prefixes advertised by a neighboring autonomous system.
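To make this concrete, here is a minimal sketch of just those two steps of BGP best-path selection: prefer the highest local preference, then the shortest AS path. The peer names, AS numbers and local-preference values are hypothetical, and real BGP evaluates many more tie-breakers (origin, MED, router ID and so on).

```python
def best_path(candidates):
    # Each candidate is a dict with 'local_pref' and 'as_path'.
    # Sort key: highest local preference first, then shortest AS path.
    return min(candidates, key=lambda p: (-p["local_pref"], len(p["as_path"])))

routes = [
    {"peer": "ISP-A", "local_pref": 100, "as_path": [64500, 64510, 64520]},
    {"peer": "ISP-B", "local_pref": 100, "as_path": [64600, 64520]},
    {"peer": "ISP-C", "local_pref": 90,  "as_path": [64700]},
]

print(best_path(routes)["peer"])  # ISP-B: equal local_pref, shorter AS path
```

Note that ISP-C wins on AS-path length but loses on local preference, which is evaluated first; this is exactly why the "shortest" path BGP picks is a policy decision, not a performance measurement.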
OSPF, on the other hand, leverages Dijkstra’s shortest-path-first algorithm, which takes into account a link cost derived from the configured interface bandwidth or an administratively assigned value. Overall, the end-to-end shortest path between a user and a cloud application provider is a combination of the shortest path inside the user’s network (calculated by OSPF) and the shortest path between the user’s network and the cloud application provider’s network across the Internet (calculated by BGP).
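The OSPF half of that calculation can be sketched as Dijkstra’s algorithm over bandwidth-derived costs. The sketch below assumes the common convention of cost = reference bandwidth / interface bandwidth (with a 100 Mbps reference and a minimum cost of 1); the router names and link speeds are made up for illustration.

```python
import heapq

REF_BW_MBPS = 100  # assumed reference bandwidth

def ospf_cost(bw_mbps):
    # Cost is inversely proportional to bandwidth, never below 1.
    return max(1, REF_BW_MBPS // bw_mbps)

# Adjacency list: node -> list of (neighbor, interface bandwidth in Mbps)
topology = {
    "R1": [("R2", 100), ("R3", 10)],
    "R2": [("R1", 100), ("R4", 100)],
    "R3": [("R1", 10), ("R4", 1000)],
    "R4": [("R2", 100), ("R3", 1000)],
}

def shortest_costs(source):
    # Standard Dijkstra with a priority queue.
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, bw in topology[node]:
            nd = d + ospf_cost(bw)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

print(shortest_costs("R1"))  # R3 is cheaper via R2 and R4 than via the slow direct link
```

Notice that R1 reaches R3 at cost 3 via the fast R2–R4–R3 links rather than cost 10 over the direct 10 Mbps link; cost reflects configured bandwidth, not what the links are actually experiencing.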
Unfortunately, none of the traditional routing protocols’ algorithms factor in the changing nature of network conditions, such as latency, jitter, packet loss and path MTU, which can vary at any given time. This is where the principles of software-defined networking (SDN) can be applied to the wide area network to achieve better results. This methodology is also known as software-defined wide-area networking (SD-WAN).
With SD-WAN, application traffic is forwarded across a virtualized wide-area network infrastructure that abstracts the underlying physical transports, such as MPLS, broadband Internet, 4G/LTE, metro Ethernet and satellite. This abstraction allows the use of more advanced mechanisms than were previously available to improve application performance and quality of experience.
Let’s consider three sample scenarios. In the first, SD-WAN uses the principles of regionalized access to cloud applications. Here, cloud application provider networks are advertised across the Internet using BGP, eventually reaching the edges of the SD-WAN fabric at regional locations. Each of these regional locations performs a BGP best-path calculation to determine the shortest network path to the cloud application provider networks. So far, this is no different from the traditional approach; however, this is where things diverge.
The SD-WAN fabric can continuously monitor the network conditions of all the underlying physical transports, so nodes can make intelligent decisions about forwarding application traffic in compliance with desired service level agreements (SLAs). This completely redefines the notion of “shortest path” described earlier in the context of hot potato routing. Instead of choosing the shortest path calculated solely by the routing protocols, the best end-to-end path between users and cloud applications is determined using a combination of the best-performing paths for the application (likely more than one, with an active/active approach). Of course, the use of BGP is not mandatory, and some regional Internet exchange facilities offer other means of exchanging reachability information, but for the sake of our discussion let’s assume the more generic case of using BGP.
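The idea of SLA-compliant, active/active forwarding can be sketched as a simple filter over measured transport conditions. The SLA thresholds and measurements below are illustrative only, not taken from any real SD-WAN product.

```python
# Hypothetical SLA for a latency-sensitive application (e.g. voice).
voice_sla = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

# Hypothetical live measurements per underlying physical transport.
transports = {
    "mpls":      {"latency_ms": 40,  "jitter_ms": 2,  "loss_pct": 0.0},
    "broadband": {"latency_ms": 90,  "jitter_ms": 12, "loss_pct": 0.5},
    "lte":       {"latency_ms": 180, "jitter_ms": 45, "loss_pct": 2.0},
}

def sla_compliant(measured, sla):
    # A transport qualifies only if every metric is within its threshold.
    return all(measured[k] <= sla[k] for k in sla)

# Active/active: every compliant transport may carry traffic simultaneously.
eligible = [name for name, m in transports.items() if sla_compliant(m, voice_sla)]
print(eligible)  # mpls and broadband meet the voice SLA; lte does not
```

The key contrast with hot potato routing is that the LTE path would still be a perfectly valid "shortest path" to a routing protocol; only the live measurements disqualify it.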
In the second scenario, the SD-WAN fabric stretches all the way into the cloud application provider networks to enable end-to-end, SLA-driven path selection, completely replacing the principles of hot potato routing. All user traffic destined for cloud applications is forwarded in compliance with desired SLAs, rather than along the shortest path calculated by the routing protocols. There is also no need to create regional exit points, since the entire end-to-end path between users and cloud applications is natively established across the SD-WAN fabric. This model fits well with infrastructure-as-a-service (IaaS) offerings.
This scenario provides the most comprehensive way of determining the best-performing path; however, it is predicated on the ability to deploy SD-WAN appliances in the cloud application provider networks. This may or may not be possible, especially where software-as-a-service (SaaS) applications are concerned.
In the final scenario, the SD-WAN fabric extends path quality measurements all the way to cloud applications across traditionally routed networks, without the need to provision SD-WAN appliances in those networks. These cloud path quality measurements may not be as comprehensive as native SD-WAN capabilities; however, they are generally sufficient for web-based cloud applications.
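To give a flavor of what such measurements involve, here is a minimal sketch of deriving path-quality metrics from a series of probe round-trip times, where a lost probe is recorded as None. The sample values are illustrative; jitter is computed here as the mean absolute difference between consecutive RTTs, a simplification of the RFC 3550 style interarrival jitter, and real SD-WAN probes gather far richer data.

```python
def path_quality(samples):
    # samples: probe RTTs in milliseconds; None marks a lost probe.
    rtts = [s for s in samples if s is not None]
    loss_pct = 100.0 * (len(samples) - len(rtts)) / len(samples)
    avg = sum(rtts) / len(rtts)
    # Mean absolute difference between consecutive RTTs as a jitter estimate.
    jitter = sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / max(1, len(rtts) - 1)
    return {"avg_ms": avg, "jitter_ms": jitter, "loss_pct": loss_pct}

probes = [42.0, 44.0, None, 41.0, 43.0]  # one probe of five was lost
print(path_quality(probes))
```

Metrics like these, computed per candidate path toward each cloud application, are what feed the SLA comparisons described in the earlier scenarios.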
There’s no denying that hot potato routing and shortest network path have served enterprise networks well for many years. However, the advent of cloud computing architectures where applications no longer reside in corporate data centers requires an SLA-driven approach to application delivery, which hot potato routing was not designed to support.