The concepts behind serverless computing have been around for more than 20 years, but the technology began to gain significant attention over the past year as organizations continue to look for ways to squeeze more efficiency from their cloud deployments.
A serverless computing architecture is similar to containers in that it is designed to reduce the overhead of offering services in the cloud. This includes allowing a cloud provider to dynamically manage server resources.
Amazon Web Services (AWS) kicked off the current serverless craze with the launch of its Lambda platform in 2014. While AWS Lambda ended the year as the dominant serverless computing platform, other cloud providers have been jumping into the sandbox.
Microsoft last year launched its Azure Functions platform, which gained new capabilities this year. More recently, Pivotal Cloud Foundry and Oracle bolstered support for serverless computing within their broader cloud portfolios.
Who Wants to Save Money?
Serverless computing operates as a function-as-a-service (FaaS) model that allows an organization to pay only for the compute time consumed; there is no charge when the code is not running. That cost component is an important part of why serverless computing has been gaining new attention.
In a recent report, 451 Research noted serverless compute pricing is typically based on three parameters: script duration, or how long the code is used; the number of requests; and the memory required for the function.
The report favorably compared the total cost of ownership (TCO) of serverless computing to that of virtual machines (VMs). Owen Rogers, research director at the analyst firm, explained VMs needed to be up and running before a function request was placed, thus an enterprise would “need to pay for that, and there is an element of waste when capacity is not being used.”
“With serverless, we don’t have this problem,” Rogers said. “It scales instantly with a request. You just need to configure the request with the function and no other concerns.”
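The pricing model 451 Research describes can be sketched in a few lines. The rates and workload figures below are illustrative placeholders, not any provider's actual price list, and the always-on VM rate is likewise a hypothetical comparison point; the point is only to show why idle capacity makes the two models diverge.

```python
def faas_cost(requests, avg_duration_s, memory_gb,
              per_million_requests=0.20, per_gb_second=0.0000167):
    """Estimate monthly serverless cost from the three pricing
    parameters 451 Research cites: request count, script duration,
    and memory. Default rates are illustrative, not a real tariff."""
    request_cost = (requests / 1_000_000) * per_million_requests
    compute_cost = requests * avg_duration_s * memory_gb * per_gb_second
    return request_cost + compute_cost

# A lightly used service: 2 million requests/month, 200 ms each, 256 MB.
serverless = faas_cost(2_000_000, 0.2, 0.25)

# An always-on VM sized for the same workload, at a hypothetical
# $0.05/hour, is billed around the clock (~730 hours/month)
# whether or not any requests arrive.
vm = 0.05 * 730

print(f"serverless: ${serverless:.2f}/month")
print(f"vm:         ${vm:.2f}/month")
```

At this utilization the serverless estimate comes to a couple of dollars against roughly $36 for the mostly idle VM; the gap narrows, and can invert, as sustained load grows, which is why the comparison hinges on how bursty the workload is.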
Analysts have noted that these potential cost savings have garnered attention from enterprises.
“There is a lot of interest in serverless as it’s in tune with consuming and paying for exactly what I need and not any more,” said Clifford Grossner, senior research director and advisor at IHS Markit.
One of the big challenges for serverless has been differentiating itself from containers. From a 30,000-foot view, the two may seem similar. But those who work in the weeds highlight differences and unique use cases where serverless stakes its own path.
Charlie Li, chief cloud officer at Capgemini, said one potential serverless computing use case was in setting up a way to offload capacity during high-volume, short-lived events. Li said this could be something like Black Friday, “where you need to make sure your ordering systems are able to handle the extremely high demand, but for only a short period of time.”
“There is no need to set up an extensive back office platform that really only needs to scale a couple of days of the year,” Li said.
He also highlighted mobile telecom operator T-Mobile as a great example of an established organization that is taking advantage of its cloud infrastructure. He noted the carrier’s recent move to include free Netflix access for new customers showed impressive agility in being able to roll out a deep, systemwide billing change.
“Imagine the other telcos trying to do that,” Li said, half-jokingly adding, “Their IT guys would say it would need up to 10 months and would have to go through 13,000 business systems. It’s just not practical.”
Like many new technologies, continued adoption of serverless computing will be dependent on segment maturation and enterprise familiarity.
Nate Taggart, CEO and co-founder of Stackery, acknowledged that serverless computing architectures remain nascent in the enterprise space. While the company is seeing traction and use of its Serverless Operations Console by IT operations and DevOps engineers, some unique challenges remain to be solved ahead of broader adoption.
Taggart noted that one of those challenges was ongoing uncertainty about how the architecture can be integrated into current cloud and container operations. This includes a basic lack of familiarity with managing and controlling deployments.
“Serverless is still in its early stages,” Taggart said. “This is a new skillset that needs to be mastered, and by putting in place some familiarity it allows enterprises to be more comfortable, quicker.”
Some are latching onto the recent push behind Kubernetes to lower the education barrier to serverless computing. Companies like Platform9 and LunchBadger have launched platforms that allow serverless pods to run inside a Kubernetes-managed container cluster.
“The value is that the infrastructure play is table stakes with Kubernetes,” explained Al Tsang, CEO of LunchBadger. “We have had two-plus years working with Kubernetes and can now offer enterprises a single pane of glass to manage all of their microservices, whether they are in containers or using serverless.”
As for the future of serverless computing, Li said adoption was running at least a year behind that of containers.
“We will see in a year,” Li said. “It may even be two years before there is mass adoption of serverless. All of our clients are expressing interest in trying serverless, but outside of a few new applications being launched and some PoCs [proof of concepts], mass adoption is still some time away.”