Scaled Down Hyperscale for Enterprise
In our SDxCentral look at the telco cloud, I’ve pointed out that telco clouds won’t necessarily be exactly like webscale or hyperscale clouds – given a range of specialized requirements. It’s also worth pointing out that as enterprise clouds evolve, there is a place for a wide range of cloud architectures fitting various niches and market verticals. Think of this as scaled down hyperscale or specialized hyperscale.
If you take away the nuts and bolts of what the hyperscale players did for cloud, all forms of cloud environments are poised to benefit from the cloud model – even in a scaled down hyperscale model. The hyperscale players have driven economies of scale into the industry, creating affordable commercial-off-the-shelf (COTS) hardware, better performing commodity chips, and lower prices for key components such as optics. We’ve also gained the disaggregated software model that has spawned software-defined networking (SDN) and network functions virtualization (NFV) – what we call software-defined anything (SDx).
On the hardware side, disaggregation has created a robust market for third-party components, including high-speed interconnects (NICs, switches, cables, and optics) that can boost the efficiency of all distributed applications in the cloud built on modern software architectures such as microservices. It is well known that hyperscale giants such as Google and Microsoft were among the initiators of the 25G, 50G, and 100G Ethernet standards, and Facebook is deploying 100G networks in its data centers. This can simplify the design of applications, compute, and storage subsystems for enterprises and cloud providers of all sizes, and storage locality will become a thing of the past.
Walmart Goes to Microservices
Let’s look at an example in which a large enterprise adopted the hyperscale model to build its own flexible, scalable cloud platform. This is the story of Walmart’s development of a microservices-based architecture for its web platform – its own flavor of scaled down hyperscale.
Prior to moving to microservices, Walmart had a legacy system designed around desktops, laptops, and monolithic servers. In 2012, its website failed when it was unable to scale to 6 million page views per minute. The company built a new system, based on microservices in the cloud, that it believes can scale to a world with 4 billion people connected, 25+ million apps available, and 5,200 GB of data for each person on Earth. You can see a description of this evolution here.
Many other enterprises are making a similar shift to microservices, which will change the needs of data center architectures over time. Enterprises of all sizes – some smaller than Walmart – will test, build, and deploy applications in the cloud, whether it’s on a public or private cloud platform.
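To make the monolith-to-microservices shift concrete, here is a minimal sketch of a single microservice that owns one narrow capability and its own data, exposed over HTTP. The "inventory" service, its routes, SKUs, and quantities are all illustrative assumptions for this sketch, not details of Walmart's actual platform; a real deployment would add service discovery, health checks, and horizontal scaling behind a load balancer.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stock data standing in for the service's own datastore.
# In a microservices design, each service owns its data rather than
# sharing the monolith's central database. (Values are illustrative.)
STOCK = {"sku-123": 42, "sku-456": 7}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /inventory/<sku> returns the stock level as JSON.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "inventory" and parts[1] in STOCK:
            body = json.dumps({"sku": parts[1], "qty": STOCK[parts[1]]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence the default per-request logging for this sketch.
        pass

def serve(port=0):
    # Port 0 lets the OS pick a free port, which is handy when many
    # small services run side by side on the same host in testing.
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because each service is this small and independently deployable, teams can scale or replace one capability (inventory, cart, checkout) without redeploying the whole site – the property Walmart needed when the monolith could not absorb peak traffic.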
Microservices Hardware Demands for the Cloud
This new wave of microservices applications changes the dynamics of traffic and hardware performance, creating more “East-West” traffic and putting more pressure on data center interconnects and NICs to deliver better performance. In addition, servers will need greater packet-processing capability to keep pace with faster networking hardware.
“The high-performance cloud will have to evolve on several fronts to handle these new demands – whether it’s in a hyperscale model or a scaled down hyperscale model. This includes a combination of high-performance compute, intelligent networking, and the need for offload and acceleration technologies. These hardware and software improvements will be needed to boost the efficiency of distributed applications based on microservices, eliminate the need to engineer around the network, and simplify the design of applications, compute, and storage subsystems,” according to Mellanox Senior Director for Cloud Market Development, Chloe Ma.
Organizations of all sizes – from hyperscale down to enterprise scale – need to move faster and build IT flexibility. All of this is possible by using the cloud’s economies of scale to boost IT productivity as a whole.